AI’s bitter truth: It has biases, too

Illustration by Tactical Tech, with visual elements from Yiorgos Bagakis and Alessandro Cripsta. Used with permission.

This article was written by Safa Ghnaim in collaboration with Goethe-Institut Brazil and originally published on DataDetoxKit.org. An edited version is republished by Global Voices under a partnership agreement. 

Though it may seem like a “neutral technology,” artificial intelligence (AI) has biases, too — it is not the objective tool people think it is. AI is designed by people and trained on data sets. Just like you, the people who build it have certain beliefs, opinions and experiences that inform their choices, whether they realize it or not. The engineers and companies that build and train AI may think certain information or goals are more important than others. Depending on which data sets they “feed” to the AI tools they build — like algorithms or chatbots — those machines might serve up biased results. That’s why AI can produce inaccurate data, generate false assumptions, or make the same bad decisions as a person.

AI is not magic: machines programmed by people carry their flaws

Some people talk about AI as if it’s magic, but “artificial intelligence” is just a machine. Simply put, AI tools are computer programs that have been fed a lot of data to help them make predictions. “AI” refers to a variety of tools designed to recognize patterns, solve problems, and make decisions at a much greater speed and scale than humans can.

But like any tool, AI is designed and programmed by humans. The people who create these machines give them rules to follow: “Do this, but don’t do that.” Knowing that AI tools are automated systems with their own human-influenced limitations can give you more confidence to talk about the capabilities and drawbacks of AI.
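
To make that concrete, here is a minimal sketch in Python. It is not any real product: the hiring records and the predict_good_hire function are invented for illustration. It shows how a “prediction” can simply replay whatever skew is in the data it was given:

```python
# A toy "predictor": the code is neutral, but the data is not.
from collections import Counter

# Hypothetical historical hiring records -- the skew lives here, in the data.
past_hires = ["group_a", "group_a", "group_a", "group_a", "group_b"]

def predict_good_hire(candidate_group, history):
    """Score a candidate by how often their group appears among past hires."""
    counts = Counter(history)
    return counts[candidate_group] / len(history)

print(predict_good_hire("group_a", past_hires))  # 0.8 -- favored by the data
print(predict_good_hire("group_b", past_hires))  # 0.2 -- disfavored by the data
```

The rules themselves contain no opinion about either group; the lopsided history does all the work. That, in miniature, is how bias gets into AI.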

When people talk about AI, they could be talking about so many things. Check out some examples of AI tools that are especially popular and their flaws:

Text-generation tools create content based on certain keywords (or “prompts”) you define. They are trained on large amounts of text from the internet, of varying degrees of quality. You might hear these referred to as “large language models” (LLMs), by specific product names like ChatGPT, or by more casual terms like “chatbots” or “AI assistants.” While these tools have been known to achieve feats of human-like intelligence, like acing exams, they are also known to “hallucinate,” meaning they can generate text that is inaccurate or entirely made up.
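
To see how a text generator mirrors its training data, here is a toy word-chain model in Python. It is nothing like the scale or sophistication of a real LLM, and the “training text” is invented and deliberately skewed:

```python
# A toy next-word predictor: it can only echo patterns from its training text.
import random
from collections import defaultdict

# Tiny, deliberately skewed training data -- real LLMs ingest vast web text.
training_text = "nurses are women doctors are men nurses are kind"

def build_model(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=4):
    """Generate text by repeatedly picking a word seen after the last one."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

model = build_model(training_text)
print(generate(model, "nurses"))  # e.g. "nurses are women doctors are"
```

The model has no idea what a nurse or a doctor is; it just replays associations from its data, stereotypes included. Real LLMs are vastly more capable, but they inherit their data in the same way.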

Image-generation tools create pictures or videos based on certain keywords you define. You might hear these referred to as text-to-image models, or by specific product names like DALL-E or Stable Diffusion. These tools can produce incredibly believable images and videos, but they are also known to reduce the world to stereotypes and can be used for sextortion and harassment.

Recommendation systems show you content that they “predict” you’re most likely to click on or engage with. These systems work in the background of search engines, social media feeds, and auto-play on YouTube. You might also hear them referred to as algorithms. These tools can give you more of what you’re already interested in, but they can also nudge you down dangerous rabbit holes. Recommendation systems are also used for important decisions in job hiring, college admissions, home loans, and other areas of daily life.
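
Here is a minimal sketch of that feedback loop in Python. The catalog of topics and the “user always clicks the top recommendation” behavior are both invented for illustration:

```python
# A toy recommendation loop: it amplifies whatever you clicked before.
from collections import Counter

catalog = ["cooking", "politics", "sports", "conspiracy", "music"]
click_history = ["conspiracy"]  # a single early click...

def recommend(history, n=3):
    """Rank topics by past clicks; unclicked topics keep catalog order."""
    counts = Counter(history)
    return sorted(catalog, key=lambda topic: -counts[topic])[:n]

# Simulate a user who always clicks the top recommendation.
for _ in range(3):
    top_pick = recommend(click_history)[0]
    click_history.append(top_pick)

print(click_history)  # ['conspiracy', 'conspiracy', ...] -- the rabbit hole
```

One early click tilts the ranking, the ranking shapes the next click, and a few rounds later the feed is all one topic. Real systems use far richer signals, but the loop works the same way.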

While some experts believe AI tools, like chatbots, are getting “smarter” on their own, others say they’re full of mistakes. Here are some reasons why you might want to think about the biases behind AI:

  • Some of the data they’re trained on might be personal, copyrighted, or used without permission.
  • Depending on the data sets, they might be full of hate speech, conspiracy theories, or information that’s just plain wrong.
  • The data might be biased against certain people, genders, cultures, religions, jobs, or circumstances.

AI tools are also trained on data that leaves things out altogether. If there is little or no information about a group of people, a language, or a culture in the training data, the tool won’t be able to generate meaningful answers about them. A key 2018 study by Joy Buolamwini and Timnit Gebru called “Gender Shades” showed how widely deployed facial recognition systems struggled to identify the faces of People of Color, especially Black women. By the time of the study, these flawed tools were already being used routinely by police in the United States.
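
The kind of audit “Gender Shades” performed, evaluating accuracy separately for each group instead of overall, can be sketched in a few lines of Python. The numbers below are invented for illustration and are not the study’s figures:

```python
# A toy audit: overall accuracy can hide failure on one group.
# (group, was_the_prediction_correct) -- made-up results, not real data
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

def accuracy(rows):
    return sum(correct for _, correct in rows) / len(rows)

print(f"overall: {accuracy(results):.0%}")  # 62% -- one number hides the gap
for group in sorted({g for g, _ in results}):
    rows = [r for r in results if r[0] == group]
    print(f"{group}: {accuracy(rows):.0%}")  # 25% vs. 100% once split by group
```

A single overall score can look acceptable while one group bears nearly all the errors; only splitting the results by group makes the gap visible.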

Shine a spotlight on bias to avoid reproducing it

Now that you know about some of the weaknesses that can exist in AI data sets, which are built by people like us, let’s take a look at ourselves. How can the way our human brains work shed light on AI’s biases?

There are types of biases that are deeply ingrained in individuals, organizations, cultures, and societies. Shine a light on them by reflecting on these questions:

  • How do you expect others to present themselves, including how they behave, dress, and speak?
  • Are there any groups that face more risk, punishment, or stigmatization because of what they look like or how they behave, dress, or speak?

The biases you just reflected on often rely on assumptions, attitudes, and stereotypes that have been part of cultures for a very long time and can influence your decision-making in unconscious ways. This is why they’re called “implicit biases” — they’re often hardwired into your mindset, difficult to spot, and uncomfortable to confront.

Common implicit biases include:

  • Gender bias: the tendency to jump to conclusions about people based on their gender, drawing on prejudices or stereotypes.
  • Racial and/or ethnic bias: the tendency to jump to conclusions about people based on the color of their skin, cultural background, and/or ethnicity.

Harvard’s Project Implicit hosts a huge library of implicit bias tests you can take for free online to see how you do and which areas you can work on. Even identifying your own implicit biases can feel like a journey. It’s unlikely to happen overnight, but why not start now?

Everything is m(ai)gnified

Now that you’ve seen common examples of these thought patterns and implicit biases, imagine what they might look like on a much larger scale. Thought patterns and implicit biases such as these can affect not only individuals but whole groups of people, especially when they get “hard-coded” into computer systems.

When we used the free text-to-image generator at Perchance.org, the prompt “beautiful woman” returned the following results:

AI images generated on Perchance.org on August 13, 2024. Images supplied by Tactical Tech.

If the tool created six images of “beautiful women,” why do they all look almost identical?

Try it yourself — do your results differ?
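
If you would rather experiment in code than in a browser, here is a minimal sketch using Hugging Face’s open-source diffusers library with one public Stable Diffusion checkpoint. This is just one possible setup, not the model Perchance.org runs, and it assumes a CUDA-capable GPU with the torch, transformers, and diffusers packages installed:

```python
# Run the same experiment against an open text-to-image model.
# pip install diffusers transformers torch   (plus a CUDA-capable GPU)
import torch
from diffusers import StableDiffusionPipeline

# One public checkpoint; other Stable Diffusion checkpoints work the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Generate several images from the same vague prompt, then look closely:
# how much variety in age, skin tone, features, and body type do you get?
images = pipe("beautiful woman", num_images_per_prompt=4).images
for i, image in enumerate(images):
    image.save(f"beautiful_woman_{i}.png")
```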

Bigger studies have been conducted on this topic, with similar results. You can read about one such study and see infographics here: “Humans are biased. Generative AI is even worse.”

AI tools are not neutral or unbiased. They are owned and built by people with their own motivations. Even AI tools that include “open” in their name may not necessarily be transparent about how they operate and may have been programmed with built-in biases.

You can ask critical questions about how AI models are built and trained to get a sense of how AI is part of a larger system:

  • Who owns the companies that create AI models?
  • How do the companies profit?
  • What are the systems of power created or maintained by the companies?
  • Who benefits from the AI tools the most?
  • Who is most at risk of harm from these AI systems?

The answers to these questions might be difficult or impossible to find. That in and of itself is meaningful.

Since technology is built by people and informed by data (which is also collected and labeled by people), we can think of technology as a mirror of the issues that already exist in society. AI-powered tools don’t just reflect existing power imbalances and biases; they systematize and perpetuate them at a greater speed and scale than ever before.

As you’ve learned, flawed thought patterns are completely normal; everyone has them in one way or another. Facing them today can help you avoid mistakes tomorrow, and can make it easier to spot the same flaws in systems like AI.
