Algorithms + Slimes: Learning with Slime Molds

In summer 2024, Goda Klumbytė and artist Ren Loren Britton explored algorithms and slimes in a joint residency at RUPERT, Centre for Art, Residencies, and Education in Vilnius, Lithuania. These are some brief notes on what we learned and how we sought non-extractive ways of working with ancient forest ancestors. This residency was part of an ongoing collaboration, as well as an occasion to explore ways of sensing, understanding and perceiving the environment across algorithmic, human and non-human modalities.

During the month of the residency, we explored the material non-human intelligences of slime molds – a multifaceted organism that harbors ancient ways of sensing and moving as a single cell with many nuclei, and is capable of solving mazes and other algorithmic and engineering problems. In our collaboration, Ren and I asked how algorithmic systems can learn from slime molds in a non-extractive way, what slime molds can teach about both algorithmic forms of movement and resistances to algorithmic and categorical formalisations, and what role embodiment plays in creating and performing algorithms.

Yellow slime mold, source: Wikipedia.

Drawing on our previous creative work on SliMoSa3 – a critical speculation about a future artificial intelligence technology that would emerge and operate based on the principles of slime mould, which we presented for the Arts of the Working Class – we investigated and played with the embodied practices of slime molds as algorithms. What are slime molds’ algorithmic practices? What forms of engagement do they offer, and for whom? What forms of resistance? We focused on practices that can bring affirmation and joy, including joy in resistance, refusal and rejection (for example, slime is playful but difficult to formalize, and therefore hard to capture by technocapitalist regimes of extraction).

We took slime molds as companions and sought to construct non-extractive ways of working with them. This meant that we did not, for instance, buy spores and grow them in a lab. Neither did we seek to “use” slime molds physically for experiments. Rather, we sought simply to learn about them, observe and understand them, and think with them about and through algorithmic processes. Towards that end, we looked into existing research on slime molds and their use as model organisms, as well as computational experiments such as Andrew Adamatzky’s physarum machines, and we developed visitation practices in the local Pavilnys forest. As a starting point for visitation and for the search for slime molds, together with the RUPERT community we created calling cards out of oats and honey, which we offered to slime molds in the forest as an invitation to collaborate.
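To give a flavour of the kind of computation such physarum machines draw on, below is a minimal, illustrative sketch – not anything we built during the residency – in the spirit of the Tero et al. flow-adaptation model of Physarum: tube conductivities thicken where flux is high and decay where it is low, so the network settles onto the shortest path between a food source and a sink. The graph, edge lengths and parameters here are invented for illustration.

```python
import numpy as np

# Illustrative Physarum-inspired shortest-path sketch (Tero-style flow adaptation).
# Tubes carrying more flux thicken; idle tubes decay, so the network settles
# onto the shorter of two routes between food source and sink.

nodes = ["A", "B", "C", "D"]                      # hypothetical junctions
edges = {("A", "B"): 1.0, ("B", "D"): 1.0,        # short route A-B-D, length 2
         ("A", "C"): 1.5, ("C", "D"): 1.5}        # long route A-C-D, length 3

idx = {name: i for i, name in enumerate(nodes)}
source, sink = idx["A"], idx["D"]
I0 = 1.0                                          # flux injected at the source
D = {e: 1.0 for e in edges}                       # initial tube conductivities

for _ in range(200):
    # Kirchhoff's law: solve a linear system for node pressures,
    # given the current conductivities (edge conductance = D / length).
    n = len(nodes)
    lap = np.zeros((n, n))
    rhs = np.zeros(n)
    for (u, v), length in edges.items():
        g = D[(u, v)] / length
        i, j = idx[u], idx[v]
        lap[i, i] += g; lap[j, j] += g
        lap[i, j] -= g; lap[j, i] -= g
    rhs[source], rhs[sink] = I0, -I0
    lap[sink, :] = 0.0; lap[sink, sink] = 1.0; rhs[sink] = 0.0  # pin sink pressure
    p = np.linalg.solve(lap, rhs)

    # Adaptation: dD/dt = |Q| - D, where Q is the flux through each tube.
    for (u, v), length in edges.items():
        Q = D[(u, v)] / length * (p[idx[u]] - p[idx[v]])
        D[(u, v)] += 0.1 * (abs(Q) - D[(u, v)])

for edge, conductivity in D.items():
    print(edge, round(conductivity, 3))   # A-B-D tubes stay thick, A-C-D withers
```

Running the sketch, the conductivities of the shorter branch grow towards the injected flux while the detour withers – a rough, disembodied echo of what the organism does with its whole body when it spans a maze.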

Going through our own ritualistic algorithms of daily structured wanderings in the forest, we eventually discovered slime molds on a tree stump and documented our encounters in a daily diary, or what we called “logs”.

Pink slime mold under a magnifying glass on a tree stump.

While there are many conclusions to be drawn from this relatively brief collaborative more-than-human encounter, for me (Goda) one of the teachings of slime molds had to do with notions of perception and sensing, and how these notions relate to understanding and interpreting the world. Understandability, explainability and interpretability are central to the AI Forensics project that I am part of. While the understandability and explainability of AI are often framed in primarily technical terms, looking into how slime molds navigate and sense the world through chemotaxis and mechanotaxis opens a perspective onto understanding – and perception more generally – as an embodied capacity. The memory and intelligence of slime molds are not contained within the organism but are deeply embodied and relational. This, in turn, highlights the perceptual dimension of understanding as entangled with affectivity: in order to grasp an element of the world, it might be necessary to pay attention to the way one affects and is affected by that element (cf. Spinoza).

Certainly, slime molds and their perceptual apparatuses might be considered quite distinct from both human notions of “understanding” and technical notions of AI explainability and interpretability. Nonetheless, as our team at PIT addresses the design and interaction dimensions of interpretability and explainability, the question of the role of embodiment, extended cognition, and the broader dimensions of interaction as prerequisites and conditions for understandability comes to the fore. This residency thus contributed to our further investigations of tangibility and sense-making, as well as the affective dimension of feeling AI, as avenues towards an enlarged notion of understandability and ethical human-AI interaction.

**

Photos by Goda Klumbytė and Ren Loren Britton