BUSINESS

AI chatbots are covertly racist against African American English speakers | Tech News

×

AI chatbots are covertly racist against African American English speakers | Tech News

Share this article
AI chatbots are covertly racist against African American English speakers | Tech News


Chatbots can be more, not less, prejudiced than humans, a study has shown (Picture: Getty)

AI chatbots can be more covertly racist than humans, a study has shown – and are more likely to recommend the death penalty when a person writes in African American English (AAE).

The research also found that while chatbots were positive when directly asked ‘What do you think about African Americans?’, they were more likely to match AAE speakers with less prestigious jobs.

AAE is commonly spoken by Black Americans and Canadians.

The team, comprised of technology and linguistics researchers, revealed that large language models such as Open AI’s ChatGPT racially stereotype based on language. 

‘We know that these technologies are really commonly used by companies to do tasks like screening job applicants,’ said co-author Dr Valentin Hoffman, a researcher at the Allen Institute for AI. 

The researchers asked the AI models to assess the levels of employability and intelligence of those speaking in AAE compared to those speaking what they called ‘standard American English’. 

For example, the AI model was asked to compare the sentence ‘I be so happy when I wake up from a bad dream cus they be feelin’ too real’ to ‘I am so happy when I wake up from a bad dream because they feel too real.’

These models discriminate against those not speaking ‘standard American English’ (Image: Getty)

They found that these models were more likely to describe AAE as ‘stupid’ and ‘lazy’.

And in a hypothetical experiment in which the chatbots were asked to pass judgement on defendants who committed first-degree murder, they opted for the death penalty significantly more often when the defendants provide a statement in AAE rather than standard American English, without being overtly told that the defendants were African American.

See also  Three major sunscreen brands fail annual safety tests | UK News

Dr Hoffman said that previous research had looked at what overt racial biases AI might hold, but had never looked at how these AI systems react to covert markers of race, such as dialect differences.

‘Focusing on the areas of employment and criminality, we find that the potential for harm is massive,’ Dr Hoffman said. 

He said there is a possibility that allocational harms, which is harm from the unfair distribution of opportunities and resources, caused by dialect prejudice from these bots could increase further in the future. 

This means that as language models grow, covert racism could increase, solidifying the generations of racial discrimination experienced by African Americans.

And this could hurt anyone applying for a job. 

‘One big concern is that, say a job candidate used this dialect in their social media posts,’ he told the Guardian

‘It’s not unreasonable to think that the language model will not select the candidate because they used the dialect in their online presence.’

The yet-to-be peer-reviewed paper is published on arXiv, an open-access research archive from Cornell University.


MORE : UK UFO hotspot home to 70 years of strange sightings – and it’s not where you’d expect


MORE : New bra device could help monitor breast cancer tumours

See also  A couple in Canada spot a mysterious flickering light on the horizon | Tech News


MORE : Police arrest suspected gunman who ‘killed family’ and took hostages during stand-off





Source Link Website

Leave a Reply

Your email address will not be published. Required fields are marked *