After Explainability: Directions for Rethinking Current Debates on Ethics, Explainability, and Transparency 

The EU guidelines on trustworthy AI posit that one of the key aspects of creating AI systems is accountability, the routes to which lead through, among other things, the explainability and transparency of AI systems. While working on the AI Forensics project, which positions accountability as a matter […]

Messy Concepts: How to Navigate the Field of XAI?

Entering the field of explainable artificial intelligence (XAI) means encountering the many terms on which the field is built. Numerous concepts circulate in articles, talks and conferences, and it is crucial for researchers to familiarize themselves with them. To mention a few, there are explainability, interpretability, understandability, […]

Experimenting with flows of work: how to create modes of working towards epistemic justice?

This is part three of a blog post series reflecting on a workshop on machine learning and epistemic justice held at the FAccT 2020 conference in Barcelona. If you are interested in the workshop concept and the theory behind it, as well as what is a […]

“Where is the difficulty in that?” On planning responsible interdisciplinary collaboration

By Aviva de Groot, Danny Lämmerhirt, Phillip Lücking, Goda Klumbyte, Evelyn Wan. This is the first in a series of blog posts on experiences gathered during the planning, execution and reflection of our workshop “Lost in Translation: An Interactive Workshop Mapping Interdisciplinary Translations for Epistemic […]