Nushin Yazdani on Visionary Fiction, Intersectional Feminism, and Alternative Frameworks for AI Development

The meteoric rise of machine learning in the past year threatens to upend long-held assumptions about developing technology products, as well as the values and goals that guide such development. Nushin Yazdani, an interdisciplinary designer, artist, and artificial intelligence researcher, envisions a technological ecosystem that centers voices and bodies historically left out of, or harmed by, sociotechnical shifts. For Signals, she dives into some of the projects, interventions, and collaborations she has initiated to bring such a future about.


Hannah Scott: The C/Change project is all about creating space through digital means – space for people to share ideas they may not be able to speak about freely in their geopolitical contexts, space for people to connect and collaborate with others they may never encounter in their physical location, space for people to work together to create better futures for both our planet and our digital landscapes. What motivated your development of Dreaming Beyond AI as a space for organizing community? Why is it important to build spaces that enable people to discuss and collaborate around technosocial problems and possibilities outside of traditional technological research and development contexts?

Nushin Yazdani: Many theoretical and practical approaches for a better future are unfortunately too short-sighted: they tend to exclude people who are already marginalized, make decisions on their behalf, or treat technology as the only solution to complex problems.

Machine learning systems are sold with the promise of a more efficient and better future, but they often reproduce the past and thus help maintain an unjust present. And many people who create AI tools neglect their responsibility to consider the consequences of their work for people affected by racism, sexism, classism, and ableism: groups that have historically found it harder to contest important technology and policy decisions, or have been excluded from them altogether.

With Dreaming Beyond AI we specifically wanted to create a space that is BIPOC- and queer-centered and based on our own visions for dealing with tech. We don’t believe in tech solutionism; we want to discuss technology critically and celebrate queer and BIPOC pleasure and joy in a world with technology. That wasn’t so difficult for us, because we come from these positions ourselves, but of course it doesn’t mean that we always get everything right. And of course we are still dependent on institutions.

The idea for Dreaming Beyond AI came about because there was very little content on AI in Germany at the time that looked intersectionally at the effects of these systems. There was, and still is, a lot of silo thinking around AI, i.e. structurally separated areas that do not communicate and interact enough with each other — for example between academia, policymaking, and civil society. In particular, there was little overlap between anti-racism work and tech critique. For this reason, together with the illustrator José Rojas, I created an explanatory video on facial recognition that is available under a Creative Commons license and can be used, for example, by teachers and workshop leaders to address this topic in schools. After the great response to the video, I wanted to continue, and thanks to the Landecker Democracy Fellowship I was able to develop the first iteration of Dreaming Beyond AI. With my colleagues Buse Çetin, Iyo Bisseck, and Sarah Diedro Jordão and our collaboration partner ifa (Institut für Auslandsbeziehungen), the project really took off and, with their inspiration, evolved into what it is now. Our wish with Dreaming Beyond AI is to create a space for others to dream, think, discuss, and create together in a critical and multidisciplinary way about technologies like AI.

Dreaming Beyond AI’s Explorative Mode

HS: Tell us more about the design of Dreaming Beyond AI. How do alternative modes of interacting with knowledge (such as the 3D world in the “explorative” mode) yield different experiences of connecting with other community members and with the DBAI space when compared to traditional platforms?

NY: Our creative technologist Iyo Bisseck is responsible for our design. Our idea of the 3D world — the Pluriverse, as we call it with reference to Arturo Escobar — came from the image of a vast sea of knowledge around AI and tech, from which we want to elevate certain drops of knowledge that we consider important. We also want our design to visualize what we see as a connection between nature and technology; after all, mycelium is the original internet (see also the Dreaming Beyond AI contribution by Neema Githere and Petja Ivanova). And of course a playful approach to knowledge sharing is important to us. We hope that our design will appeal to people like us — people who may not feel at home in classic tech bubbles or academia.

We have noticed time and again that it is not easy to make cutting-edge, interactive design accessible, which is why we decided to offer two different ways of accessing the platform’s content. As far as digital accessibility is concerned, we are still learning every day.

There is one type of platform contribution I particularly like — our “carrier bags.” Inspired by Ursula Le Guin’s 1986 essay “The Carrier Bag Theory of Fiction,” they contain — like a small gift bag — personally compiled resources, thoughts, inspirations, and memories from our artists on a particular topic. In times of endless algorithmically compiled feeds and the desire for ever-new content, we want to create a counterbalance: the artists draw on existing work by themselves and others, and one has to take one’s time to explore the carrier bag. (See the carrier bag I designed together with Ulla Heinrich here.)


HS: One of C/Change’s focus areas this year is feminist technologies, a theme that also runs through your work, from the Feminist AI Ethics Toolkit to the Digital Identities Feminist Futures zine. What does it mean to consider and create technological tools and experiences from a feminist perspective?

NY: That is a big question! And I can’t answer it conclusively, only from my own perspective. For me, my work means combining social criticism and technologies. It means making myself aware that I stand on the shoulders of giants who have long fought to change things. A feminist perspective also includes sharing and giving space, centering marginalized voices, and elevating their/our ideas of a more just world. This basically means orienting myself towards values such as justice, accountability, softness, care, self-reflection, transparent communication, joy, non-binarity, slowness, trust, maintenance and awareness of resources, and gratitude — and always trying to think about possible consequences of consequences. So my approaches do not refer exclusively to technological aspects, because in my opinion these cannot be separated anyway.

At the SUPERRR Lab I was able to work together with researchers, activists and artists on the Feminist Tech Principles. We wrote about how technology development would have to change to be less hurtful and unjust and what principles come out of that. The contributors give examples of where it works well, and also imagine what it could be like if some of these principles were implemented. It is a great luxury and privilege for me to work with and learn from such great and inspiring artists, for which I am very grateful.

HS: When so much of technological research and development takes place behind the proprietary walls of large corporations or the ivory towers of academia, how do you think artists and independent creative technologists can retain a sense of agency when exploring and working with technological tools?

NY: This is a very important and complex question. I know this dichotomy from community and collective work as well. Big companies have more money for accessibility options, for bug fixing, and for providing easy-to-use software across a variety of devices. And many people already know how to use their services. While there are quite a few open and secure software alternatives, they unfortunately are often still buggy or complicated to use. When it comes to organizing, the choice of tools is frequently a question of accessibility vs. security — not only for the people in the collective, but also for the people we work with beyond it.

I believe that besides accessibility, a balance of pragmatism, transparency, and communication is important when deciding which tool to use. In general, there are actually many possibilities for establishing a more just way of doing things — for example, designing terms of service as something other than a take-it-or-leave-it model. But the current approach to browser cookies shows that this doesn’t work without solid regulation. Cookie banners suggest choice, but through problematic design they coerce us into giving up all our data.

With AI tools it is even more complicated, because the data sets are often compiled in questionable ways, the code is mostly opaque, and it is often hard to see how one’s own input is further used by the companies.

HS: Staying with the theme of feminist technologies, how is your deep engagement with the ethics of (pre-ChatGPT) artificial intelligence through a feminist lens shaping your perspective on the current AI hype cycle?

NY: To be honest, I find the outputs quite homogeneous; by now they personally bore me a bit. I think ChatGPT and image generators will soon become common tools for many. This wouldn’t be dramatic if we didn’t live in the system we live in — one that is all about efficiency, resource exploitation, growth, and profit maximization.

A good recent example is the organized resistance to AI use in the film industry in Hollywood. The works and performances of screenwriters and actors could be recycled forever through AI and compulsory rights transfers. Only the studios would then earn from it, not the people who did the actual work.

Machine learning systems are being integrated into more and more products and services, but often this is neither necessary nor transparent. The current AI hype cycle has still not changed anything about the incredibly high resource consumption of data centers, for example, or about the fact that nature as well as precariously employed tech workers are being exploited. I’m thinking of the clickworkers in Kenya, for example, who had to review traumatizing texts for less than $2 per hour so that ChatGPT wouldn’t spit out hate messages.

HS: Much of contemporary art and experimental media (as well as the C/Change initiative) is imbued with a posture toward the future, a call to collectively imagine and enact better and more just futures than the ones imagined by Big Tech and authoritarian governments. Can you speak to your personal approach to envisioning speculative futures? What strategies have you developed for creating work that effects change in the world?

NY: When problems are acute and urgent, it is often not so easy to find time to think about what exactly the future we are working towards should look like. Many people around me are exhausted. Sometimes I also get very tired, because it feels like we can only work in small ways in our communities and bring about much less change than, say, big tech. But envisioning works like a guiding star, and that’s why I love reading visionary fiction — for example by Marge Piercy, adrienne maree brown, or Octavia Butler, or this essay by my colleague Buse Çetin.

In the residency I just organized with Dreaming Beyond AI, we built ourselves a microcosm in which we could experience what it feels like to live a kind of future that we aspire to. I believe that such sensory bodily perceptions of community, safety, healing and liberation in a space for BIPOC and queers are very important in order to not only approach this work in a purely intellectual way. Remembering such moments and spaces gives me personally strength not to get discouraged.

I also find it important that we encourage each other to let go of ingrained capitalist behavior patterns, such as “I have to work a lot to be worth something,” and that we support each other in solidarity — for example, through financial redistribution or passing on jobs, to name a few examples.