Facebook's AI chatbot turned racist extremely quickly

Meta recently released a chatbot system - and it's already being called out for racism.

On Friday (5 August), the company, formerly known as Facebook, released its new BlenderBot 3 AI chatbot, which is reportedly making a slew of false statements and racist and antisemitic remarks based on interactions it had with real people online.

Wall Street Journal reporter Jeff Horwitz shared screengrabs of some of the most blatant instances.

This includes claims that former President Donald Trump is still the president and should serve more than the two-term limit. The system also said Jewish people were "overrepresented among America's super-rich."

In another tweet, Horwitz noted that the bot is a "chance" to show how chatbot models like this work "and to get a sense of the holes before someone tries to build a product."

Meta's BlenderBot 3 can search the internet to converse with people about a wide range of topics, something earlier versions of the bot couldn't do.

It does this by building on the knowledge, empathy, and personality of previous BlenderBot versions, combined with a long-term memory of the conversations it has had.

Conversing with the public helps chatbots learn how to talk to the public, so Meta is encouraging adults to chat with the bot to help it learn to have everyday conversations about various things.

However, this also means that the chatbot can pick up misinformation from the public.

In response, people took to social media to express their thoughts on the chatbot's questionable remarks.

One person on Reddit wrote: "First AI breaks a 7-year-old's finger and now getting racist well I see nothing wrong with the AI just taking on 75 percent of the way humanity acts every day."

"What are they expecting when it's taking its knowledge from the internet? If you want proper AI, have it trained by professionals first before throwing it into unknown waters," another added.

A third wrote: "Once again, machines are taking over the jobs of hard-working American racists."

According to Meta's press release, people can react to the chatbot's messages in the BlenderBot 3 demo by "clicking either the thumbs-up or thumbs-down icons."

Choosing the thumbs-down option lets you explain why you disliked the message.

Feedback options in the chat itself range from "off-topic" to "rude," among others.

"Everyone who uses Blender Bot is required to acknowledge they understand it's for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements," a Meta spokesperson told Indy100.
