What does decolonising AI really mean? An interview with artist Ameera Kawash

Illustration by Jafar Safatli for UntoldMag. Used with permission.

This post by Donatella Della Ratta was first published by UntoldMag on August 14, 2024. This edited version is republished on Global Voices as part of a content-sharing agreement. 

“Decolonizing AI” has become a mantra echoed across institutions worldwide, from academia to cultural venues. As AI hype shapes global public debate, oscillating between excessive praise and absolute terror, concerns have emerged about the technology's tendency to reproduce colonial dynamics of exploitation and extraction.

However, while the mechanisms through which data-powered technologies reactivate a new form of digital colonialism have been widely exposed and denounced, the strategies for counteracting it remain far less clear.

Ameera Kawash, a Palestinian–Iraqi–American artist and researcher whose interdisciplinary projects powerfully situate her artistic practice within critical AI studies, is challenging the discriminatory and repressive tech sector. 

Untold Mag (UM): What does decolonizing AI really mean and how can we implement it as a practice?

Ameera Kawash (AK): Decolonizing AI is a multilayered endeavor, requiring a reaction against the philosophy of “universal computing” — an approach that is broad, universalistic, and often overrides the local. We must counteract this with varied and localized approaches, focusing on labor, ecological impact, bodies and embodiment, feminist frameworks of consent, and the inherent violence of the digital divide.

This holistic thinking should connect the military use of AI-powered technologies with their seemingly innocent, everyday applications in apps and platforms. By exploring and unveiling the inner bond between these uses, we can understand how the normalization of day-to-day AI applications sometimes legitimizes more extreme and military employment of these technologies.

There are normalized pathways and routines of violence embedded in the very infrastructure of AI, such as the way prompts (the written instructions or queries we give AI tools) are rendered into actual imagery. This process can contribute to dehumanizing people, rendering them invisible and thereby making them legitimate targets.

Take Palestine as an example: when I experimented with simple prompts like “Palestinian child in a city” or “Palestinian woman walking”, the AI-generated imagery often depicted scenarios that normalize violence against Palestinians. The child is shown running from a collapsing building, with utter urban devastation in the background. Destruction is ubiquitous, yet the perpetrator of this violence, Israel, is never visually held accountable.

These AI-generated images help shape a default narrative in which, without context or reason, Palestinians are portrayed as living in perpetual devastation. This kind of imagery perpetuates a biased and harmful narrative, deepening dehumanization and further entrenching the normalization of violence against them.

What I call the “futuricide” of the Palestinian people stems from a complex interplay between how training data is gathered — by scraping the internet at scale and absorbing all the stereotypical representations already circulating on the web — and how that data is then generalized, made sort of “universal.” As AI generates patterns and models, it crystallizes categories.

The Palestinian city resulting from my prompts risks becoming “the” Palestinian city — a quintessential, solidified entity where suffering is turned into a purely visual item that gets infinitely commodified through generative AI in all its forms and aspects. These traumatic aftereffects occur without a visible perpetrator, resulting in an occupation without an occupier. It mirrors a horror film: pure devastation without cause or reason, just senseless violence and trauma.

UM: If we were to dismantle the colonial foundations embedded in the creation and default structure of AI as conceived today, where should we start?

AK: I believe we should start from very small, local instances. For example, I am working to involve real-world cultural institutions in the creation of highly curated and customized datasets, so that AI models can be trained without scraping the internet. This approach helps resist the exploitation that typically underpins the making and training of these technologies, which is also where most biases are introduced.

Decolonizing AI means eliminating this exploitative aspect and turning towards more curated, artisanal labor and practices of care.

Of course, this approach is not scalable, and perhaps that is part of the problem. Conceiving the digital as quintessentially scalable makes it colonial, commercial, and commodified by default. It might be that decolonizing AI, as a project, is inherently unworkable — machine learning, in its current structure and conception, offers little room for decolonial practices.

However, by collaborating with real-world institutions such as universities and cultural centers to create training datasets, we can address at least one layer of the problem: data collection. There are many layers involved in making AI work, all of which should be considered when attempting to “decolonize” it.

Starting with data collection is a meaningful first step, but we need to acknowledge that a comprehensive approach will require addressing each layer of the process. For example, even if the data is collected fairly, curated meticulously, and with full consent, the training model might be exploitative in itself. The act of turning data into labels and categories and universalizing them is inherently problematic and very much part of the colonial legacy. It can perpetuate biases and reinforce harmful structures, regardless of the fairness of the initial data collection.

For me, it would be useful to think about AI within the framework of critical archival practices. Data is a precious resource from the past upon which future knowledge is built. Understanding AI as an extension of archival practice allows us to critically assess how we collect, categorize, and utilize data, ensuring that we approach it with the same care, consent, and contextual awareness that we would with any other archival material. There are always selection criteria and an organizing principle driven by choice.

To create a decolonial or anti-colonial archive, we must adopt feminist perspectives and include other forms of knowledge beyond the traditional, language-based ones. As an artist, this is integral to my daily practice — I engage with non-traditional forms of knowing and learning that are embodied and ephemeral, thus less likely to be datafied and commodified. And yet, if we were to truly decolonize AI, would it remain the same object, or would it be something entirely different?

The AI-generated “All Eyes on Rafah” image that went viral on Instagram in May 2024. Image from Wikimedia Commons. Public domain.

UM: What about the role of generative AI in spreading awareness about the genocide in Gaza? Why did the “All Eyes on Rafah” synthetic picture go viral, while so many evidence-based images offering proof of the massacre have faded from public attention?

AK: Many elements contributed to the virality of this AI-generated image. Firstly, the readable text embedded within the image allowed it to bypass contemporary platform censorship, facilitating exponential sharing. Secondly, people likely perceived it as a “safe” image — it is sanitized and free from explicit violence, making it more palatable for widespread dissemination. 

The visuals inhabit a safe space, which is the space of AI, not Palestine. Removing the specific context creates a comfortable distance for viewers. From a Palestinian perspective, this is highly problematic as it contributes to the colonial process of dehumanizing and erasing the local population. Palestinians are redacted from the image, as if their lived experiences are not credible or do not count at all.

The messaging is also problematic: “All Eyes on Rafah” — what does it really mean? It doesn’t suggest actions or call personal agency into question. It doesn’t urge you to protest, contact your MP, or demand sanctions on Israel. It doesn’t push you to do anything concrete; it’s very passive. The whole world is looking, witnessing genocide in real-time, which might be a more sophisticated form of clicktivism. Doing the absolute minimum — just sharing an image — gives a false sense of having contributed, of having “done something.”

Of course, the positive aspect is that 50 million people have shared it across platforms. However, Palestinians do not want to go viral and be invisible at the same time. We need virality to work for us, to bring an end to the violence.

What would happen if these AI-powered technologies were used to affirm Palestinian futures instead of contributing to their annihilation? This question guides my practice. Technology is integral to the discourse on the future, and we Palestinians need to be part of the future. We must be involved in shaping it, not cut out from it.
