This is the final part of a blog post series reflecting on a workshop about machine learning and epistemic justice, held at the FAccT 2020 conference in Barcelona. If you are interested in the workshop concept and the theory behind it, as well as what workflows are and why we worked with them, read the other three posts here, here and here.
In this last post we describe our individual takeaways from the workshop, staying true to disciplinary pluralism, without trying to come up with one unified perspective. Rather than synthesizing, we are interested to see whether our differences have something to offer in themselves. We each focus on what we thought were the most significant points, whether in reflecting on the process of negotiating epistemic justice in an interdisciplinary setting, or in imagining what we would take into account if we ran this kind of workshop again.
“To demonstrably aim for”
I’d be interested to experiment with a different order of things in a second edition. Not: an introduction of theory and method, followed by the activity, but: the activity first, after which we discuss together how theory, method, and action combine in terms of epistemic justice. As it happened, we described the epistemic values we aimed for our participants to explore, illustrated by the wrongs that easily ensue when these values are left implicit. But by emphasizing the tough work of deconstructing unfair theory and the need to be perpetually on guard, we may have made people feel as though the goal of epistemic justice lay beyond their direct reach: justice as a label that can only be applied in retrospect. This may encourage participants to rely on their expertise more, rather than less; to apply only those lessons they feel they’ve responsibly learned. Maybe that is not entirely fair when the aim is to explore together the force of ‘having been trained’: to try to realize the hold our methods have on us, and the effort it takes to release them just enough so that we may try on another discipline’s suit and feel the difference.
A reverse procedure would arguably work better with a bit more time on our hands. Not a bad idea, as it was challenging to reconcile the pressure-cooker functionalities of a speed-trip workshop with the purposes of abstraction and reflection. I can see how conducting an experiment could be followed by a theory-informed second half in which we deconstruct our experiences, chew on how these may translate into intentions (and abundantly available food) and write that up. The point, to speak with Lorraine Code, is not to claim social epistemic values, but to demonstrably aim for them (Code 1987).
Can the can?
Can workshops like ours sustain the need for ‘supra-disciplinary’ quality control? As it is, quality control within disciplines happens alongside assertions of authority vis-à-vis other disciplines. Think of disputes between qualitative and quantitative fields. Where, and to whom, does this leave the assessment of academic knowledge creation as a thing in itself? This needs to be addressed when aiming to work interdisciplinarily, and responsibly. One article that helped me think about these issues is by Katri Huutoniemi, who explores how we may extract value from interdisciplinary accountability practices. The justificatory demands of disciplines’ particular epistemic goals and procedures are presented as “black boxes of disciplinary culture”, and she explains how efforts to open these up vis-à-vis each other also challenge academia’s foundational authoritative structures as a whole. This then provides a meta-narrative: interdisciplinarity rendered as “another mode of accountability, which promises to overcome some of the problems of disciplinary self-referentiality.” To encourage this potential, efforts to tame the beast (“make legitimate appraisals on the basis of moving parameters”) should avoid over-formalization. Obsessing over what interdisciplinary knowledge is (“disciplining interdisciplinarity”) can stand in the way of understanding disciplinary interdependencies (“interdisciplining knowledge”).
Deconstruction, unpacking, untangling and other buzzwords: is it time to construct?
I think one of the (possibly productive) tensions that emerged for me was that between the need to generate better understandings, to unpack, untangle (why such an urge to “sort things out”?) and complexify, and the more generative, productive and pragmatic need to build new (epistemological, technological, techno-epistemological) tools that can be used. I am so used to the social sciences and humanities (SSH) always unpacking, disentangling, deconstructing that I yearn for more constructive approaches, more pro-active engagement in what can be built, changed, done differently. As a colleague recently noted, we in the SSH seem to only provide commentary on what is already there, while all the discoveries, innovations and inventions are made without us. I would like to see questions added that challenge researchers outside of informatics to feel that they can and should engage with tech creation, and would motivate them to generate workflows that make that possible.
Working with the unknowns
Evelyn (co-facilitator) said in one of the feedback sessions that she is interested in working with the unknowns. So much tech production seems to be based on filling in the unknowns, turning them into knowable, manageable substances that can then be modulated in technological research and production. I wonder whether this, like the above tension between production and deconstruction, is bridgeable, whether there can be generative ways of working with this tension. Can production mode leave space for the unknowns without trying to fill them in? If “world models” are needed for technology generation, how do we make sure these world models leave some space for the unexpected?
Speculation as a method of inquiry
We know that certain existing AI practices are problematic. Critiques have been repeatedly rehearsed. Change is yet to come. A key issue in designing a workshop that tackles a large and serious theme like ‘justice’ is the need for activities and exercises that help elicit meta-questions and reflections without directly asking them. It is also important not to ‘waste time’ rehashing concerns that we are all already aware of. What I would like to explore further in future workshops is how we could activate our imagination through speculative inquiry that allows us to rethink the present, without getting too caught up in the intricacies of present obstacles. Nocek, in Propositions in the Making: Experiments in a Whiteheadian Laboratory (2020), writes that “the speculative imagination is not a mere fanciful flight where anything goes, but it is a demanding operation of abstraction whereby the imagination leaves the place from which it originated to find connections beyond itself” (71).
What could help us find new entry points into the current predicament? In a workshop setting, how do we enable people to find points of connection and productive frictions to tackle ‘justice’ in myriad creative ways, rather than rediscovering old arguments and points of difference and disagreement? How do we move forward and produce something together in a matter of hours? Borrowing from research and practice in the humanities on speculative and science fiction, fictional scenarios might help us create a new speculative plane of reflection.
Imagine: The year is 2050. You are working together to set up an advisory committee to revise strategies around AI deployment, after a robot crisis has happened. Work together to draw up new principles for AI usage.
Imagine: Facial recognition has finally been banned after years of activism and advocacy, but a new threat to civil liberties is on the horizon with the introduction of yet another piece of AI surveillant technology. Work together to create a new campaign against this new tech.
Participants are still who they are (coder, SSH researcher, lawyer, policymaker, what have you), but having a speculative scenario to land on might bring out other sides of their disciplinary trainings and assumptions and activate their collective imaginations, which the group could reflect on afterwards. And who knows, we might discover new tactics to tackle issues of the present, and it is definitely more fun than reading another exegesis on the problems of AI!
Moving from ML as focus towards the “methods” and sensibilities of engaging with ML
Before and during our workshop, the word “workflow” was used to describe how different people engage with ML. We did so to use the ML workflow as a blueprint for various disciplines to engage with. The term workflow, however, has a specific history (it comes from organisational, rational planning) and politics (it frames and orders our actions in a specific way).
Instead of clinging to topics emerging from information science and data science, like data quality, ML, or the “workflows” that are imagined, I think there is value in collecting and testing different disciplinary engagement “methods” and exploring their sensibilities. What is often problematised around ML, or engineered away, such as notions of “bias”, is considered a basic fact and an actual resource to study in other disciplines, such as the sociology of critique. Ethnographic inquiries from anthropology have developed their own sensibilities, including attention to the researcher’s positionality, or the awareness that scholarly accounts are co-produced with the research subjects. Foregrounding such disciplinary methods and sensibilities is something many discussions overlook, as they are immediately locked into framings used by data scientists and others. I think we did some good work to make these sensibilities visible by reworking the ML workflow, and we could do more work to think about how we deal with fundamentally different sensibilities – are translations between them actually possible, or needed? Is perhaps a “concertation” or choreography a more useful analogy to pull in different ways of seeing? This remains to be seen.
Caught in contradiction
As an interdisciplinary team who first met in person on-site, for me there was always a slight feeling of discontinuity in understanding among us, something that was difficult for me to articulate. Probably due to my computer science background, talking, reflecting and discussing extensively about epistemology or justice has its limits for me if it is not translated into practice. I am used to the practice of coding and its immediate results. I tend to get lost in – sometimes interesting, sometimes tiring – spiraling thoughts, free-floating above material, graspable, personal, emotional attachment. Our first minuscule steps towards working interdisciplinarily against epistemic violence in machine learning provided a good example of how difficult it is to translate theory into practice, or to convey knowledge, or to convey knowledge about how to convey knowledge across disciplinary boundaries. There are definitely some relatively “cheap” take-aways: providing concrete case studies that participants can identify with; eliciting commitment for the full duration of the workshop (a conference, for some, is a distracting place full of potential contacts, opportunities for optimizing the CV and mediocre free snacks); offering translations (as one participant remarked a little timidly, as far as they understood it, “epistemic justice” could be called “responsible knowledge creation”). A more fundamental problem appears to be the lack of explicit pedagogical education that I, and I figure most academic workers, have. From my experience as someone coming from computer science and surrounded by social scientists, I was more or less trained to be an expert in my very specific field of computer science, and only by switching to a more mixed group of scientists did it become obvious to me how different the ways of approaching questions and problems are, and that crossing disciplines entails learning really strange new languages and meanings.
To work collaboratively, to communicate and understand each other across disciplines, and to create a shared responsibility for the knowledge created, I figure it is not enough to rely on some mystically connecting “scientific language” that every scientist supposedly speaks automatically upon entering an academic career. Rather, we need a new set of pedagogical methodologies to further understanding: driven by imperfect speculations, by – and here I agree explicitly with Goda – creating stuff together, being productive together, having a common ground of material attachment for the object of our interest, guided by critical pedagogical methods like exemplary learning.
To conclude, there is no clear conclusion, not least because epistemic justice is a process, a procedural value, instead of a state at which we can once and for all arrive.
To explore further what this process could look like, we are planning more activities in the future. If you are interested in the topic, get in touch by sending an email to email@example.com or firstname.lastname@example.org. We have also prepared a short reading list as well as a glossary.
Image credits: “Construction signs” by jphilipg on Flickr, https://www.flickr.com/photos/15708236@N07/2754478731, Attribution 2.0 Generic (CC BY 2.0)