August 2023

Left side: A photo of Earth from NASA's DSCOVR satellite's Earth Polychromatic Imaging Camera (EPIC) on February 12, 2025. Image free to use. Right side: an AI-generated image of Earth made with OpenAI's DALL-E 3 platform. Image composed using Canva.

When ChatGPT, a chatbot that uses large language model (LLM) technology to generate human-like text, was first released to the public in November 2022, it quickly went viral and became the fastest-growing consumer application in history. Its rise stoked international concern about the power of what has been called Artificial Intelligence (AI), its potential benefits, and its unintended consequences, such as job loss, a rise in misinformation, data breaches, copyright infringement, and more.

The terminology itself is problematic. “Artificial Intelligence” is a term derived from science fiction, where it is typically used to refer to self-aware, independently thinking machines. The current technologies are decidedly not that, and the popularization of AI as an umbrella term to cover a wide range of different technologies and applications, all far from the phrase's original meaning, adds to the confusion in any discussion. And the discussion is essential.

Over two years after the release of ChatGPT, a number of similar AI-powered tools have emerged, including generative chatbots such as Google’s Gemini, Microsoft’s Copilot, and China’s DeepSeek, which recently overtook ChatGPT as the most downloaded app in the Apple and Google app stores in the US and China. Image and video generators have also become widely popular, as tools like DeepAI, Synthesia, DALL-E, and Midjourney have evolved from producing cartoonishly disproportionate depictions of humanoids with six fingers to startlingly lifelike images that make it challenging to tell fiction from reality.

This technology has rapidly been incorporated into many aspects of online life, from food delivery and photo editing to Zoom meetings and messaging apps (WhatsApp, iMessage, and Meta’s Messenger all claim “AI-enhanced” features). Even Google Docs, where this article is being written, has a generative AI feature, which will not be used for this article.

Much of the research and discussion around these tools has centered on how they are being used in Western countries and China. As the “AI war” rages on between the US and China, how is this technology playing out in other parts of the world? There is no single correct position or approach to this issue. For instance, some Indigenous leaders have formed a coalition called Indigenous AI, which aims to create an ethical AI tool that centers Indigenous concerns and needs, while other Indigenous activists are speaking out against AI altogether over concerns about misinformation, land displacement, and environmental degradation. Still others are seeking to use AI to find solutions to the climate crisis and other worsening environmental problems.

The journalism sector is just as conflicted: some newsrooms are banning any form of AI, others are seeking a middle ground, and still others aim to make AI tools a cornerstone of their operations, claiming they are a saving grace against news deserts and strained budgets.

In this rapidly evolving landscape, it’s no surprise that we at Global Voices are still solidifying our own position on this technology. We are committed to republishing only stories written by humans, but we are also aware that, given the geographic, cultural, and linguistic diversity of our community, many of our community members take different approaches to AI. And that's okay.

This special coverage aims to explore the realities of this technological moment we find ourselves in, discover how AI is being used in Global Majority countries, and offer some insight into why this matters. 

Stories about AI in the Global Majority from August 2023