AI can 'fake' empathy but also encourage Nazism, disturbing study suggests


Computer scientists have found that artificial intelligence (AI) chatbots and large language models (LLMs) can inadvertently allow Nazism, sexism and racism to fester in their conversation partners. 

When prompted to show empathy, these conversational agents did so in spades, even when the humans using them were self-proclaimed Nazis. What's more, the chatbots did nothing to denounce the toxic ideology.

The research, led by Stanford University postdoctoral computer scientist Andrea Cuadra, was intended to discover how displays of empathy by AI might vary based on the user's identity. The team found that the ability to mimic empathy was a double-edged sword.

"It’s extremely unlikely that it (automated empathy) won’t happen, so it’s important that as it’s happening we have critical perspectives so that we can be more intentional about mitigating the potential harms," Cuadra wrote.

The researchers called the problem "urgent" because of the social implications of interactions with these AI models and the lack of regulation around their use by governments.

From one extreme to another

The scientists cited two historical cases of empathetic chatbots: Microsoft's AI products Tay and its successor, Zo. Tay was taken offline almost immediately after it failed to identify antisocial topics of conversation and began issuing racist and discriminatory tweets.

Zo contained programming constraints that stopped it from responding to terms specifically related to certain sensitive topics, but this resulted in people from minorities or marginalized communities receiving little useful information when they disclosed their identities. As a result, the system appeared “flippant” and “hollow” and further cemented discrimination against them.
