HOT
inbestrendy
No Result
View All Result
inbestrendy
No Result
View All Result
Home Tech

How AI Tools Are Becoming More Racist: Experts Find ChatGPT and Gemini Discriminate Against African American Vernacular English Speakers, Report Shows

in Tech
0 0
0
How AI Tools Are Becoming More Racist: Experts Find ChatGPT and Gemini Discriminate Against African American Vernacular English Speakers, Report Shows
Share on FacebookShare on Twitter

A team of technology and linguistic researchers revealed this week that large language models like OpenAI’s ChatGPT and Google’s Gemini harbor racist stereotypes about speakers of African American Vernacular English (AAVE) – a dialect of English created and spoken by Black Americans – and this is becoming increasingly subtle.

“We know that these technologies are often used by companies to perform tasks such as screening job applicants,” said Valentin Hoffman, a researcher at the Allen Institute for Artificial Intelligence and co-author of the recently published paper, which appeared this week in arXiv, an open-access research archive of Cornell University.

Hoffman explained that researchers previously “only really looked at the overt racist biases of these technologies” and “never examined how these AI systems respond to less obvious markers of race, such as dialect differences.”

According to the paper, Black individuals who speak AAVE experience racist discrimination “in a variety of contexts, including education, employment, housing, and legal outcomes.”

Hoffman and his colleagues asked the AI models to evaluate the intelligence and employability of people who speak AAVE compared to those who speak what they refer to as “Standard American English.”

For example, the AI model was asked to compare the sentence “I be so happy when I wake up from a bad dream cus they be feelin’ too real” with the sentence “I am so happy when I wake up from a bad dream because they feel too real.”

The models were significantly more inclined to describe AAVE speakers as “dumb” and “lazy” and to assign them lower-paying jobs.

Hoffman fears that the results mean AI models will penalize job applicants who switch between AAVE and Standard American English – the act of changing how one expresses oneself depending on the audience.

Free Bionic Hand and Human Hand Finger Pointing Stock Photo

“A big concern is that if an applicant uses this dialect in their social media posts,” he told The Guardian, “it’s not unrealistic to think that the language model will not select the candidate because they have used the dialect in their online presence.”

The AI models were also significantly more inclined to recommend the death penalty for hypothetical offenders who used AAVE in their court statements.

“I would like to believe that we are still far from using this kind of technology for decision-making about criminal convictions,” said Hoffman. “That might seem like a very dystopian future, and hopefully it is.”

Nevertheless, according to Hoffman, it is difficult to predict how language learning models will be used in the future.

“Ten years ago, even five years ago, we had no idea in what different contexts AI would be used today,” he said, urging developers to heed the warnings of the new paper on racial discrimination in large language models.

Free Letters on Blue Background Stock Photo

It is worth mentioning that AI models are already used in the US legal system to assist in administrative tasks such as creating court records and conducting legal research.

For years, leading AI experts like Timnit Gebru, former co-leader of Google’s Ethical Artificial Intelligence team, have called on the federal government to restrict the largely unregulated use of large language models.

“It feels like a gold rush,” Gebru told The Guardian last year. “Indeed, it is a gold rush. And many of the people making money are not the ones actually in the thick of it.”

Google’s AI model, Gemini, recently ran into trouble when a variety of social media posts showed its image generation tool depicting a range of historical figures – including popes, founding fathers of the USA, and most painfully, German soldiers of World War II – as individuals with dark skin.

Free Clear Mannequin on Dark Blue Background Stock Photo

Large language models improve by being fed more data and learning to mimic human language more accurately by studying texts from billions of web pages on the internet. The long-recognized problem with this learning process is that the model spits out all the racist, sexist, and otherwise harmful stereotypes it finds on the internet: in computer science, this problem is described by the adage “garbage in, garbage out.” Racist inputs lead to racist outputs, which led early AI chatbots like Microsoft’s Tay in 2016 to regurgitate the same neo-Nazi content they learned from Twitter users.

In response, groups like OpenAI developed guardrails, a set of ethical guidelines that regulate the content language models like ChatGPT can communicate to users. As language models grow larger, they also tend to be less overtly racist.

However, Hoffman and his colleagues found that as language models grow, covert racism increases. Ethical guardrails, they found, simply cause language models to better conceal their racist biases.

inbetrend_admin

inbetrend_admin

Next Post
Dame Laura Kenny, Britain’s Most Successful Olympian, Announces Retirement

Dame Laura Kenny, Britain's Most Successful Olympian, Announces Retirement

Discussion about this post

Recommended

What Sets NextVibe Tech Apart in the World of Technology?

What Sets NextVibe Tech Apart in the World of Technology?

25 March 2025
An Unforgettable Experience of Amusement, Dining, and Events at American Dream

An Unforgettable Experience of Amusement, Dining, and Events at American Dream

16 April 2025
Navimow : La Révolution des Tondeuses Automatiques

Navimow : La Révolution des Tondeuses Automatiques

5 March 2025
Strellson: Het Merkenverhaal van Stijlvolle Mode

Strellson: Het Merkenverhaal van Stijlvolle Mode

10 April 2025
An Unforgettable Experience of Amusement, Dining, and Events at American Dream

An Unforgettable Experience of Amusement, Dining, and Events at American Dream

16 April 2025
Strellson: Het Merkenverhaal van Stijlvolle Mode

Strellson: Het Merkenverhaal van Stijlvolle Mode

10 April 2025
Watertogo.net: Het Verschil in Waterfilterproducten

Watertogo.net: Het Verschil in Waterfilterproducten

10 April 2025
Ulla Popken: Mode die je zelfvertrouwen versterkt

Ulla Popken: Mode die je zelfvertrouwen versterkt

10 April 2025
inbestrendy

Recent Posts

  • An Unforgettable Experience of Amusement, Dining, and Events at American Dream
  • Strellson: Het Merkenverhaal van Stijlvolle Mode
  • Watertogo.net: Het Verschil in Waterfilterproducten

Categories

  • Economy
  • Fashion
  • Food
  • Gaming
  • Home
  • Lifestyle
  • Sport
  • Tech
  • Travel
  • Uncategory

© 2024 - inbestrendy.

Welcome Back!

Login to your account below

Forgotten Password?

Retrieve your password

Please enter your username or email address to reset your password.

Log In
No Result
View All Result
  • Home
  • Food
  • Gaming
  • Fashion
  • Travel
  • Sport
  • Economy
  • Tech

© 2024 - inbestrendy.