The EU guidelines on trustworthy AI posit that one of the key aspects of creating AI systems is accountability, the routes to which lead through, among other things, the explainability and transparency of AI systems. While working on the AI Forensics project, which positions accountability as a matter […]
Author: Goda Klumbytė
Bayesian Knowledge: Situated and Pluriversal Perspectives
November 9 & 10, 2023, 09:00–12:30 BST / 10:00–13:30 CET / 04:00–07:30 ET / 20:00–23:30 AEDT. Hybrid workshop (online + at Goldsmiths, London, UK). This workshop examines potential conceptual and practical correlations between Bayesian approaches in statistics, data science, mathematics, and other fields, and feminist […]
Feminist XAI: From centering “the human” to centering marginalized communities
Explainable artificial intelligence (XAI) as a design perspective can benefit from feminist perspectives. This post explores some dimensions of feminist approaches to explainability and human-centred explainable AI.
critML: Critical Tools for Machine Learning that Bring Together Intersectional Feminist Scholarship and Systems Design
Critical Tools for Machine Learning, or CritML, is a project that brings together critical intersectional feminist theory and machine learning systems design. The goal of the project is to provide ways of working with critical theoretical concepts rooted in intersectional feminist, anti-racist, post/de-colonial […]
Call for Participation: Critical Tools for Machine Learning
Join the workshop “Critical Tools for Machine Learning” as part of the CHItaly conference on July 11, 2021.
Join us in reading Jackson’s “Becoming Human: Matter and Meaning in an Antiblack World”
In what ways have animality, humanity, and race been co-constitutive of each other? How do our understandings of being and materiality normalize humanity as white, and where does that leave the humanity of people of color? How can alternative conceptualizations of being human be found […]
Epistemic justice # under (co)construction #
This is the final part of a blog post series reflecting on a workshop, held at the FAccT conference 2020 in Barcelona, about machine learning and epistemic justice. If you are interested in the workshop concept and the theory behind it, as well as what is a […]
Experimenting with flows of work: how to create modes of working towards epistemic justice?
This is part three of a blog post series reflecting on a workshop, held at the FAccT conference 2020 in Barcelona, about machine learning and epistemic justice. If you are interested in the workshop concept and the theory behind it, as well as what is a […]
Finding common ground: charting workflows
This is part two of a blog post series reflecting on a workshop, held at the FAccT conference 2020 in Barcelona, about machine learning and epistemic justice. If you are interested in the workshop concept and the theory behind it, read our first article here. This post […]
Reading group Spring session: Ruha Benjamin’s “Race After Technology”
Are robots racist? Is visibility a trap? Are technofixes viable? What is social justice in the high-tech world? These questions are raised in Ruha Benjamin’s book Race After Technology (2019, Polity Press), which we will read during our TBD reading group spring session 2020. Race […]