Making machine learning understandable
The project explores how machine learning can be made more transparent and democratically accountable. It also studies how explanations can be tailored to different audiences – for example, to help people understand why a loan application was declined.
Guidelines for transparent AI decisions
The team develops practical guidelines for public authorities, companies, and institutions on how to communicate AI-based decisions clearly. The guidelines address questions of transparency, trust, and fairness, and highlight how explainable AI supports participation in a digital democracy.
This project is funded by the Vienna Science and Technology Fund (WWTF) and the City of Vienna through project ICT20-065.
