Feeling AI: ways of knowing beyond the rationalist perspective

Felt AI is a collaborative project between myself (Goda Klumbytė), design researcher Daniela K. Rosner and artist Mika Satomi. Over the last year and a half, the three of us have been meeting regularly to probe, discuss and research together how one can feel AI. Our conversations began in 2020 as a series of explorations of questions around AI, its meanings and operations, from our respective disciplinary perspectives, before we started working concretely on the notion of feeling AI. For me, this notion points to knowledge-making practices and resources that are parallel to, askew from and contiguous with academic practices that primarily appeal to logic and reason as distinct from something as ephemeral as “feeling”.* I was interested in exploring what knowledge feeling, as an embodied, situational and affective modality, might give rise to, and what this knowledge can teach us about understanding and explaining algorithms.

One of the motivations for such an exploration was the observation that the effects of AI and machine learning systems might not always be rationally or consciously graspable due to the opacity of their operations, yet that does not preclude them from being felt. I started by asking myself, how might I feel AI in my own life, which resulted in this brief preliminary list:


  • through vision (of language), through my fingers on the keyboard
  • through the disembodied voices of assistants
  • through interfaces that feature humanoid figures (chatbots and the like)
  • through anxieties around passing, crossing and accessing (because of being evaluated, say, by face recognition at the airport, or by the German SCHUFA credit score when accessing housing) that then emerge as various feelings in my body
  • through the perceptual feeling of an ephemeral entity, as I interact with systems like ChatGPT, which require me to cleverly phrase and rephrase my prompts to get the response that I want

The above is an excerpt from early research notes, 24 April 2024.

This preliminary foray into my own experience already signals that there are many sensory and perceptual channels for feeling AI, as well as many domains and operations through which various AI systems make themselves felt and present in daily life.

During our explorations, we engaged in several experiments, including creative conversational writing with LLMs (large language models), explorations of LLM self-representation, material-sculptural storytelling, and collective reading. One form of exploration that contributed to our conversation was Mika Satomi’s artistic research, which led to the creation of a machine learning-powered “instrument”: a body composed of stuffed animals, threaded with conductive wool and thread, with a machine learning algorithm synthesizing specific sounds based on touch.

Here’s a quick demonstration of the instrument “in action”:

stuffed animal ML synth : voder

What we learned through this experience and research, and through interacting with Mika Satomi’s work, is that affective and haptic engagement is an important dimension of knowledge making and of knowability itself, and so is interaction. This connects well to a relational approach to knowledge and to 4E cognition (which sees cognition as embedded, embodied, enactive and extended). Additionally, through interaction we observed how AI multiplies and slips between different roles and positions: sometimes it emerges as an interlocutor or agent, at other times it appears more as a tool, and in yet other moments it reveals itself as an infrastructure or system. We read this as an example of onto-ethico-epistemology (Barad), whereby the knowledge object emerges through and with knowledge practice and ethics.

This shows even more clearly the importance of different approaches, concepts and material metaphors, such as those based on material, tangible, haptic interactions and affective encounters, for generating different kinds of sociotechnical systems. Our research on feeling AI therefore offers one such alternative avenue: divesting from vision-based rationalist understandings and metaphors of explainability and knowability, and instead exploring more situated, material, affective and embodied forms of knowing in and with AI.

* This is not to say that such a neat separation can or should be supported.