How Algorithms Influence Our Voting Behaviour: Transparency in AI as a driver of democracy

The WWTF-funded project Interpretability and Explainability as Drivers to Democracy, led by our board member Sebastian Tschiatschek and based within the Research Network Data Science @ Uni Vienna, was recently featured in the Austrian daily Kurier. It examines how algorithms influence decision-making in modern society, from political strategies to credit approvals, and how greater transparency can strengthen democratic understanding.

Making machine learning understandable

The project explores how machine learning can be made more transparent and democratically accountable. It also studies how explanations can be tailored to different audiences, for example to help people understand why a loan application was declined.

Guidelines for transparent AI decisions

The team is developing practical guidelines that help public authorities, companies, and institutions communicate AI-based decisions clearly. These guidelines address questions of transparency, trust, and fairness, highlighting how explainable AI supports participation in a digital democracy.

This project has been funded by the Vienna Science and Technology Fund (WWTF) and the City of Vienna through project ICT20-065.