Tangible Embodied Interactions and AI – reflections from TEI 2025

Tangible and embodied interactions – can they offer something for the way users understand and engage with large language models (LLMs)? This is the question we explored in a studio (a.k.a. hands-on workshop) titled “Tangible LLMs: Tangible Sense-Making for Trustworthy Large Language Models” that PIT, namely […]

After Explainability: Directions for Rethinking Current Debates on Ethics, Explainability, and Transparency 

The EU guidelines on trustworthy AI posit that one of the key aspects of creating AI systems is accountability, the routes to which lead through, among other things, the explainability and transparency of AI systems. While working on the AI Forensics project, which positions accountability as a matter […]

Messy Concepts: How to Navigate the Field of XAI?

Entering the field of explainable artificial intelligence (XAI) means encountering the many terms the field is built on. Numerous concepts come up in articles, talks, and conferences, and it is crucial for researchers to familiarize themselves with them. To mention a few, there is explainability, interpretability, understandability, […]

Impressions from “Shaping AI”: Controversies and Closure in Media, Policy, and Research

On January 29th and 30th, 2024, a small but international group of scholars, civil society representatives, and practitioners came together at the Berlin Social Science Center to discuss the controversies, imaginaries, developments, and policy solutions surrounding AI under the name “Shifting AI Controversies – Prompts, […]

Bayesian Knowledge: Situated and Pluriversal Perspectives

November 9 & 10, 2023 09:00–12:30 BST / 10:00–13:30 CET / 04:00–07:30 ET / 20:00–23:30 AEDT Hybrid workshop (online + at Goldsmiths, London, UK) This workshop examines potential conceptual and practical correlations between Bayesian approaches, in statistics, data science, mathematics and other fields, and feminist […]