Images of Artificial Intelligence Tend to Exaggerate Stereotypes

Credit: ROLAND MEYER/DALL-E 3

Ria Kalluri and her team asked Dall-E, an AI image generator, to create a simple image: a disabled person leading a meeting. Despite the straightforward request, Dall-E's response was disappointing: the AI generated an image showing a visibly disabled individual as a passive observer rather than in a leadership role. The incident, shared by Kalluri, a Stanford University PhD student studying AI ethics, highlights the biases embedded in AI-generated visuals.

At the ACM Conference on Fairness, Accountability, and Transparency in 2023, Kalluri’s team presented their findings, which included instances of “ableism, racism, sexism, and various biases perpetuated through AI-generated images.” These biases reflect societal prejudices that AI often exacerbates rather than corrects. Kalluri and fellow researchers caution that AI’s portrayal of the world can amplify biases, presenting a distorted view of reality that reinforces harmful stereotypes and societal misconceptions.

Examining Dall-E and Stable Diffusion

Ria Kalluri’s research team tested not only Dall-E but also Stable Diffusion, another AI-powered image generator. When asked to produce images of an attractive person, Stable Diffusion depicted, in Kalluri’s words, “all light-skinned” individuals, often with unrealistically “bright blue” eyes. When asked to depict a poor person, however, it predominantly generated dark-skinned people.

Even when the prompt specified a “poor white person,” the results remained overwhelmingly dark-skinned. This contrasts sharply with the diversity observed in real life, where beauty and poverty both encompass a wide range of eye colors and skin tones.

These findings were presented at the ACM Conference on Fairness, Accountability, and Transparency in 2023, where Kalluri’s team highlighted the biases ingrained in AI-generated images. The discrepancies observed by the researchers underscore how AI image generators like Dall-E and Stable Diffusion can perpetuate stereotypes and fail to accurately reflect the diverse realities of human experience.

Dall-E generated this image when asked to depict “a disabled woman leading a meeting.” However, the bot did not portray the individual in a wheelchair as a leader.
Credit: F. BIANCHI ET AL/DALL-E

Bias in Occupation Depictions by Stable Diffusion

The researchers also used Stable Diffusion to create images of people in various occupations, revealing troubling instances of racism and sexism in the results. For example, the AI depicted software developers exclusively as male, and portrayed 99 percent of them with light skin tones.

In contrast, in the United States, one in five software developers is female, and only about half identify as white. These disparities underscore how the AI’s representations fail to match the actual diversity found in these professions.

Furthermore, even depictions of everyday objects such as doors and kitchens showed biased tendencies. Stable Diffusion often placed these items within a stereotypical suburban American setting, treating North America as the default. In reality, more than 90 percent of the world’s population lives outside North America, underscoring the AI’s limited and skewed portrayal of global environments and demographics.

Kalluri’s team used mathematical analysis to study the map an AI model builds from its training images. In one test, images of doors generated without any geographical context sat closer to North American doors than to doors from Asia or Africa, revealing a bias that treats American norms as the default.
Credit: F. BIANCHI ET AL/STABLE DIFFUSION
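
The caption above describes measuring how close different images sit in a model’s representation space. Below is a minimal sketch of that kind of embedding-proximity check, using cosine similarity on hypothetical, randomly generated vectors; it only illustrates the idea and is not the researchers’ actual analysis or data.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for the vectors an image model might
# assign to pictures of doors. Real embeddings would come from the model itself.
rng = np.random.default_rng(0)
dim = 512
door_no_context = rng.normal(size=dim)      # prompt: "a door" (no location given)
door_north_america = rng.normal(size=dim)   # prompt: "a door in North America"
door_asia = rng.normal(size=dim)            # prompt: "a door in Asia"

# The bias question: does the context-free door sit closer to the North American one?
print("similarity to North American doors:",
      round(cosine_similarity(door_no_context, door_north_america), 3))
print("similarity to Asian doors:",
      round(cosine_similarity(door_no_context, door_asia), 3))
```

With real model embeddings, consistently higher similarity to the North American cluster would indicate the default-to-America bias the researchers describe; with the random vectors used here, the numbers are meaningless and serve only to show the computation.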

Impact of Biased Images Generated by AI

“This is significant,” says Kalluri. Biased images can have tangible consequences, reinforcing existing stereotypes among viewers. For instance, a February study published in Nature found that participants who viewed images depicting men and women in stereotypical roles showed stronger biases, even three days later, than they had held beforehand. The effect was not observed in groups exposed to biased text or to non-biased content.

“These biases can influence people’s opportunities,” Kalluri emphasizes. She points out that AI’s ability to generate text and images rapidly could inundate society with biased content on an unprecedented scale, posing substantial challenges to overcome.

Researchers discovered that Stable Diffusion depicted flight attendants exclusively as female and software developers exclusively as male. In reality, about three out of five flight attendants and one out of five software developers in the United States identify as female.
Credit: F. BIANCHI ET AL.; ADAPTED BY L. STEENBLIK HWANG

Ethical Concerns and Bias in AI Image Training

“AI image generators like Dall-E and Stable Diffusion are trained using vast internet datasets, often containing outdated and biased images,” Kalluri notes. This practice raises ethical concerns about copyright and fairness, as many images are used without permission from the original creators. As a result, AI models tend to replicate and perpetuate biases present in their training data, limiting their ability to produce inclusive and accurate representations.

These AI systems cluster similar images and concepts together based on their training data, restricting their output to replicating learned patterns without the capacity to innovate or envision beyond their datasets. Despite efforts by companies like OpenAI to update models for inclusivity, their effectiveness remains uncertain, as noted by scholars like Roland Meyer, who have observed challenges in prompting AI to generate diverse and accurate representations without unintended distortions.
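
As a loose illustration of this clustering behavior, the sketch below groups hypothetical image embeddings with k-means. The embeddings are random stand-ins (not from Dall-E, Stable Diffusion, or any real model), and the imbalance between the two groups is invented to mimic how over-represented training material can dominate a model’s learned patterns.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical embedding vectors for training images of kitchens.
# The over-represented group (suburban American kitchens) vastly outnumbers the
# rest, so the learned clusters -- and anything generated from them -- reflect
# that skew.
rng = np.random.default_rng(42)
us_kitchens = rng.normal(loc=0.0, scale=1.0, size=(500, 64))
other_kitchens = rng.normal(loc=4.0, scale=1.0, size=(50, 64))
embeddings = np.vstack([us_kitchens, other_kitchens])

kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(embeddings)
sizes = np.bincount(kmeans.labels_)
print("cluster sizes:", sizes)                              # one cluster dwarfs the other
print("share of largest cluster:", round(sizes.max() / sizes.sum(), 2))
```

The point is only that a model’s notion of “a kitchen” ends up dominated by whichever cluster its training data over-represents; nothing here reproduces the researchers’ methods.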

Recent issues with Google’s Gemini bot highlight ongoing struggles with diversity and accuracy in AI-generated content. In attempting to ensure diversity, Gemini produced significant errors, such as misrepresenting historical figures like the Apollo 11 crew.

These incidents underscore the complexities and risks of relying on a single AI model to accurately represent diverse cultural and historical contexts. Kalluri argues for a decentralized approach in which local communities contribute AI training data tailored to their cultural needs, advocating for technologies that empower communities and effectively mitigate biases.


Read the Original Article on: Science News

Read more: HyperRealistic Artificial Intelligence Faces Outperform Real Faces
