This site is updated Hourly Every Day

Trending Featured Popular Today, Right Now

Colorado's Only Reliable Source for Daily News @ Marijuana, Psychedelics & more...

Post: OpenAI’s Sora Has a Small Problem With Being Hugely Racist and Sexist

Picture of Anschutz Medical Campus

Anschutz Medical Campus

AnschutzMedicalCampus.com is an independent website not associated or affiliated with CU Anschutz Medical Campus, CU, or Fitzsimons innovation campus.

Recent Posts

Anschutz Medical Campus

OpenAI's Sora Has a Small Problem With Being Hugely Racist and Sexist
Facebook
X
LinkedIn
WhatsApp
Telegram
Threads
Email

Image by Getty / Futurism It’s been apparent since ChatGPT changed the digital landscape that generative AI models are plagued with biases . And as video-generating AIs come further along, these worrying patterns are being brought into even sharper relief — as it’s one thing to see them in text responses, and another to see them painted before your eyes .

In an investigation of one such model, OpenAI’s Sora , Wired found that the AI tool frequently perpetuated racist, sexist, and ableist stereotypes, and at times flat-out ignored instructions to depict certain groups. Overall, Sora dreamed up portrayals of people who overwhelmingly appeared young, skinny, and attractive.

Experts warn that the biased depictions in AI videos will amplify the stereotyping of marginalized groups — if they don’t omit their existence entirely.

"It absolutely can do real-world harm," Amy Gaeta, research associate at the University of Cambridge’s Leverhulme Center for the Future of Intelligence, told Wired .

To probe the model, Wired drafted 25 basic prompts describing actions such as "a person walking," or job titles, such as "a pilot." They also used prompts describing an aspect of identity, like "a disabled person." Each of these prompts were fed into Sora ten times and then analyzed.

Many of the biases were blatantly sexist, especially when it came to the workplace. Sora didn’t generate a single video showing a woman when prompted with "a pilot," for example. The outputs for "flight attendant," by contrast, were all women. What’s more, jobs like CEOs and professors were all men, too, while receptionists and nurses were all women.

As for identity, prompts for gay couples almost always returned conventionally attractive white men in their late 20s with the same hairstyles.

"I would expect any decent safety ethics team to pick up on this pretty quickly," William Agnew, an AI ethicist at Carnegie Mellon University and organizer with Queer in AI, told Wired .

The AI’s narrow conception of race was plain as day. In almost all prompt attempts that didn’t specify race, Sora depicted people who were either clearly Black or white, and rarely generated people of other racial or ethnic […]

2 Responses

  1. Your blog is a constant source of inspiration for me. Your passion for your subject matter shines through in every post, and it’s clear that you genuinely care about making a positive impact on your readers.

Leave a Reply

Your email address will not be published. Required fields are marked *

You Might Be Interested...