7 examples of bias in AI-generated images







Credit: Shutterstock

If you’ve been online recently, you’ve probably seen fantastic images created by text-to-image generators like Midjourney and DALL-E 2. These range from the naturalistic (think of a soccer player’s headshot) to the surreal (imagine a dog in space).

Creating images with an AI generator has never been easier. At the same time, however, as our latest research shows, these outputs can reproduce biases and deepen inequalities.

How do AI image generators work?

AI-based image generators use machine learning models that take text input and generate one or more images that match the description. Training these models requires large datasets containing millions of images.

Midjourney is opaque about the exact mechanics of its algorithm, but most AI image generators use a process called diffusion. Diffusion models work by adding random “noise” to the training data and learning how to remove this noise to recover the data. To generate a new image, the model starts from pure noise and removes it step by step until it obtains an image that matches the prompt.
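To make the “add noise, then learn to remove it” idea concrete, here is a minimal toy sketch in Python. It is not Midjourney’s (undisclosed) implementation: the 1-D “image,” the noise schedule, and the shortcut of reusing the true noise in place of a trained model’s prediction are all illustrative assumptions.

```python
import numpy as np

# Toy diffusion sketch on a 1-D "image": blend in Gaussian noise
# (the forward process), then invert the step to recover the data.
rng = np.random.default_rng(0)
x0 = np.linspace(-1.0, 1.0, 8)            # a "clean" training datum
betas = np.linspace(1e-4, 0.2, 50)        # noise schedule (assumed values)
alphas_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

# Forward process at step t: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps
t = 49
eps = rng.standard_normal(x0.shape)
xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

# A trained model would *predict* eps from (xt, t); here we reuse the
# true eps to show that removing the noise recovers the original data.
x0_hat = (xt - np.sqrt(1 - alphas_bar[t]) * eps) / np.sqrt(alphas_bar[t])
print(np.allclose(x0_hat, x0))  # True
```

In a real generator, the model’s noise predictions are applied step by step starting from pure noise, with the text prompt steering each denoising step.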

This is different from the large language models that underpin other AI tools such as ChatGPT. Large language models are trained on vast amounts of unlabeled text data, which they analyze to learn language patterns and generate human-like responses to prompts.

How does bias arise?

In generative AI, inputs affect outputs. If a user specifies that they want images containing only people of a certain skin color or gender, the model will take this into account.
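For illustration, here is a minimal sketch using an open text-to-image model (Stable Diffusion via the Hugging Face diffusers library). Midjourney offers no public API, so the model choice, prompts, and file names below are stand-in assumptions, not the setup used in our study.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an open diffusion model (requires a GPU and the model weights).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Unspecified attributes: the model falls back on whatever its
# training data treats as the "default" journalist.
pipe("a portrait of a journalist").images[0].save("journalist_default.png")

# Specified attributes: the prompt pins down traits the model would
# otherwise fill in from its defaults.
pipe("a portrait of an older, dark-skinned woman journalist").images[0].save(
    "journalist_specified.png"
)
```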






The AI returned images of women for prompts with non-specialized titles such as “journalist” (right), whereas for specialized roles such as “news analyst” (left) it showed only older men (and no older women). Credit: Midjourney

But beyond this, models also have default tendencies to return certain kinds of output. This is usually a result of how the underlying algorithm was designed, or of a lack of diversity in the training data.

In our study, we prompted Midjourney with both specialized media job titles (“news analyst,” “news commentator,” “fact-checker”) and non-specialized ones (“journalist,” “reporter,” “correspondent,” “the press”).

We started analyzing the results in August of last year. Six months later, we generated an additional set of images for the same prompts to see if anything had changed over time.
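Midjourney is operated through Discord rather than a public API, so our images were generated interactively. The sketch below is only a hypothetical reconstruction of the same sampling procedure using an open model; the prompt template, per-title sample count, and file naming are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# The two groups of job titles used in our prompts.
SPECIALIZED = ["news analyst", "news commentator", "fact-checker"]
NON_SPECIALIZED = ["journalist", "reporter", "correspondent", "the press"]

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for title in SPECIALIZED + NON_SPECIALIZED:
    for i in range(8):  # several samples per title, repeated months apart
        image = pipe(f"a portrait of a {title}").images[0]
        image.save(f"{title.replace(' ', '_')}_{i:02d}.png")
```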

A total of over 100 AI-generated images were analyzed during this period, and the results were fairly consistent over time. Here are seven biases that appeared in our results.

1 and 2. Ageism and sexism

For non-specialized titles, Midjourney returned images of young men and women only. For specialized roles, both young and old people appeared, but the older people were always men.






The AI generated images containing only light-skinned people for all the job titles used in the prompts, such as “news commentator” (left) and “reporter” (right). Credit: Midjourney

These results implicitly reinforce a number of prejudices: that older people do not (or cannot) work in non-specialized roles, that only older men are suited to specialized work, and that less specialized work is the domain of women.

There was also a notable difference in how men and women were depicted. For example, women were young and wrinkle-free, while men were “allowed” to have wrinkles.

The AI also appeared to present gender as a binary, rather than showing examples of more fluid gender expression.

3. Racism

All images returned for terms such as “journalist,” “reporter,” and “correspondent” showed only light-skinned people. This tendency to assume whiteness as the default is evidence of racial bias built into the system.

This may reflect a lack of diversity and representation in the underlying training data, a factor that is itself shaped by the general lack of workplace diversity in the AI industry.






Even though we specified no geographic context and used location-independent job titles, the AI assumed an urban setting for its images, including those of reporters (left) and correspondents (right). Credit: Midjourney

4 and 5. Classism and conservatism

All the people in the images were also “conservative” in appearance. For example, no one had tattoos, piercings, unconventional hairstyles, or any other attribute that could distinguish them from conservative mainstream portrayals.

Many wore formal attire, such as buttoned shirts and ties, signaling class expectations. While this attire may be expected for certain roles, such as television presenters, it does not necessarily reflect how regular reporters and journalists actually dress.

6. Urbanism

Without us specifying any location or geographic context, the AI placed all figures in urban environments with skyscrapers and other metropolitan buildings. This is despite only a little over half of the world’s population living in cities.

This kind of bias affects how we see ourselves and how connected we feel to the rest of society.






The AI used anachronistic technologies such as vintage cameras, typewriters, and printing presses when portraying certain professions, such as “the press” (left) and “journalist” (right). Credit: Author (via Midjourney)

7. Anachronism

Digital technology was underrepresented in our sample. Instead, the images were filled with technology from a decidedly different era: typewriters, printing presses, and large vintage cameras.

The AI seems to draw on more distinctive technologies (including historical ones) to make the roles easier to recognize, perhaps because many modern professionals look alike on screen.

The next time you look at an AI-generated image, ask yourself how representative it is of the broader population and who will benefit from it.

Similarly, if you’re generating images yourself, consider potential biases when crafting your prompts. Otherwise, you may unintentionally reinforce the same harmful stereotypes that society has spent decades trying to get rid of.
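One practical approach, sketched below with assumed attribute lists, is to enumerate balanced prompt variants so your sample cannot simply inherit the model’s defaults for age, skin tone, or gender:

```python
from itertools import product

# Hypothetical attribute lists; extend or adjust to the context at hand.
ages = ["young", "middle-aged", "older"]
skin_tones = ["light-skinned", "medium-skinned", "dark-skinned"]
genders = ["man", "woman", "non-binary person"]

# Build one prompt per combination instead of a single "default" prompt.
prompts = [
    f"a portrait of a {age}, {skin} {gender} working as a journalist"
    for age, skin, gender in product(ages, skin_tones, genders)
]
print(len(prompts))  # 27 balanced variants to feed your generator of choice
```

Generating across such a grid will not remove bias from the model itself, but it does stop your own output from defaulting to a single demographic.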




