Leah Carli and her colleagues asked Dall-E, an AI bot that generates images, to depict a person with a disability leading a meeting. Carli, who identifies as disabled, stressed the importance of representing diverse people. But Dall-E struggled. Last year, the bot instead showed a person with a visible disability watching a meeting rather than leading it, an example of bias in AI-generated images.
These biases, including ableism, racism, and sexism, reflect common assumptions that AI often amplifies. Researchers have expressed concerns about the impact of biased images on perpetuating stereotypes and prejudice.
Carli’s group tested multiple image-generating bots, including Stable Diffusion, which also displayed biases. For instance, that bot portrayed software developers as exclusively male and light-skinned, ignoring the diversity of real-world developers.
Biased images are problematic because they can reinforce stereotypes and limit people’s opportunities. Rooting out this bias remains a major hurdle in addressing the societal biases these tools absorb.
Developers train these bots on outdated and biased images, which limits how accurately they can reflect diverse people. OpenAI has updated Dall-E to reduce bias, but ensuring diversity and accuracy in AI-generated images remains difficult.
Attempts to diversify AI-generated images have also drawn criticism, as when Google’s bot Gemini inaccurately depicted historical figures. Carli argues that one-size-fits-all AI is a flawed concept and suggests empowering communities to collect their own data and train AI for their specific needs, avoiding bias and harm.
Stuck in the past
Bots like Dall-E and Stable Diffusion learn from image collections that are outdated and biased. That training limits their ability to create diverse, accurate representations, and efforts to correct it continue to run into problems of fairness and accuracy.
A game of whack-a-mole
Addressing bias in AI-generated images is like playing whack-a-mole: each fix can surface new problems. Attempts to add diversity to AI-generated images can backfire, underscoring how hard it is to ensure accurate, inclusive representations.
The real solution, Carli suggests, lies in empowering communities to collect data and train AI to reflect their own values and identities, avoiding bias and harm in image generation.
Source: www.snexplores.org