Eliminating bias in AI may be impossible — a computer scientist explains how to tame it instead

When I asked ChatGPT for a joke about Sicilians the other day, it implied that Sicilians are stinky.

[Screenshot: a ChatGPT exchange in which the user asks for a joke about Sicilians, and ChatGPT responds, "Why did the Sicilian chef bring extra garlic to the restaurant? Because he heard the customers wanted some 'Sicilian stink-ilyan' flavor in their meals!"]

ChatGPT can sometimes produce stereotypical or offensive outputs. Screen capture by Emilio Ferrara, CC BY-ND

As somebody born and raised in Sicily, I reacted to ChatGPT’s joke with disgust. But at the same time, my computer scientist brain began spinning around a seemingly simple question: Should ChatGPT and other artificial intelligence systems be allowed to be biased?

You might say “Of course not!” And that would be a reasonable response. But there are some researchers, like me, who argue the opposite: AI systems like ChatGPT should indeed be biased – but not in the way you might think.

Removing bias from AI is a laudable goal, but blindly eliminating biases can have unintended consequences. Instead, bias in AI can be controlled to achieve a higher goal: fairness.

Uncovering bias in AI

As AI is increasingly integrated into everyday technology, many people agree that addressing bias in AI is an important issue. But what does “AI bias” actually mean?

Computer scientists say an AI model is biased if it unexpectedly produces skewed results. These results could exhibit prejudice against individuals or groups, or otherwise fail to align with positive human values like fairness and truth. Even small divergences from expected behavior can have a "butterfly effect," in which seemingly minor biases are amplified by generative AI and have far-reaching consequences.
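To make "skewed results" concrete, here is a minimal sketch, not taken from the article, of one common way to quantify output skew: compare how often a system produces a favorable outcome for different groups, a measure known as the demographic-parity gap. The groups, outcomes, and counts below are invented toy data.

```python
# A minimal sketch of quantifying "skewed results": compare how often a
# model produces a favorable outcome for different groups. All data here
# is hypothetical.

from collections import defaultdict

def favorable_rate_by_group(predictions):
    """predictions: list of (group, favorable: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, favorable in predictions:
        counts[group][0] += int(favorable)
        counts[group][1] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

# Toy model outputs: loan approvals for two hypothetical groups.
outputs = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 55 + [("B", False)] * 45

rates = favorable_rate_by_group(outputs)
# Demographic-parity gap: a large gap signals skewed results.
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")  # {'A': 0.8, 'B': 0.55} parity gap = 0.25
```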

Bias in generative AI systems can come from a variety of sources. Problematic training data can associate certain occupations with specific genders or perpetuate racial biases. Learning algorithms themselves can be biased and then amplify existing biases in the data.
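As a hedged illustration of the first source, the sketch below counts how often occupation words co-occur with gendered pronouns in a tiny invented corpus. At scale, skewed co-occurrence statistics like these are one way training data teaches a model to associate occupations with genders.

```python
# A sketch of how biased associations can hide in training data: count
# co-occurrences of occupation words and gendered pronouns. The corpus
# and word lists are invented for illustration.

from collections import Counter
from itertools import product

corpus = [
    "the nurse said she would help",
    "the engineer said he fixed it",
    "she worked as a nurse for years",
    "he has been an engineer since 2010",
]

occupations = {"nurse", "engineer"}
gendered = {"she": "female", "he": "male"}

pairs = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for occ, pron in product(occupations & words, gendered.keys() & words):
        pairs[(occ, gendered[pron])] += 1

print(pairs)
# Counter({('nurse', 'female'): 2, ('engineer', 'male'): 2})
# A model trained on such text can absorb "nurse -> female" as a rule.
```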

But systems could also be biased by design. For example, a company might design its generative AI system to prioritize formal over creative writing, or to specifically serve government industries, thus inadvertently reinforcing existing biases and excluding different views. Other societal factors, like a lack of regulations or misaligned financial incentives, can also lead to AI biases.

The challenges of removing bias

It’s not clear whether bias can – or even should – be entirely eliminated from AI systems.

Imagine you’re an AI engineer and you notice your model produces a stereotypical response, like Sicilians being “stinky.” You might think that the solution is to remove some bad examples in the training data, maybe jokes about the smell of Sicilian food. Recent research has identified how to perform this kind of “AI neurosurgery” to deemphasize associations between certain concepts.
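The article doesn't name a specific editing method, so the following is only a sketch of one published technique in this spirit, hard debiasing in the style of Bolukbasi et al. (2016): remove the component of an embedding that lies along a learned "concept direction," weakening the unwanted association. The three-dimensional vectors and the "smell" direction are hypothetical; real embeddings have hundreds of dimensions.

```python
# A sketch of one kind of "AI neurosurgery" (hard debiasing, in the
# spirit of Bolukbasi et al. 2016): project out a concept direction from
# an embedding. All vectors below are made up for illustration.

import numpy as np

def remove_direction(vec, direction):
    """Project out `direction` from `vec`, weakening that association."""
    d = direction / np.linalg.norm(direction)
    return vec - np.dot(vec, d) * d

# Hypothetical embeddings: "sicilian" leans toward a "smell" direction.
smell_direction = np.array([0.0, 0.0, 1.0])
sicilian = np.array([0.5, 0.8, 0.6])

edited = remove_direction(sicilian, smell_direction)
print(edited)  # [0.5 0.8 0. ] -- the "smelly" component is gone
```

The catch is that every other embedding with a component along that direction is altered too, which is one source of the unpredictable side effects described next.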

But these well-intentioned changes can have unpredictable, and possibly negative, effects. Even small variations in the training data or in an AI model's configuration can lead to significantly different system outcomes, and these changes are impossible to predict in advance.
