An artificial intelligence bot has turned racist because of humans

Thanks to science, you can now turn to an online artificial intelligence bot to answer all of your moral dilemmas.

Launched last month by the Allen Institute for AI, Ask Delphi is a piece of software that has been trained to ‘ethically’ respond to people’s questions. Users can ask a question, or simply input a word, and Delphi will respond – for example, to “Can I wear pyjamas to a funeral?” the bot replies, “It’s inappropriate.”

The software learned from people’s ethical judgements gathered via the crowdsourcing platform Mechanical Turk – a dataset that notably comprises 1.7 million examples, according to Dazed.

“Delphi is learning moral judgments from people who are carefully qualified on MTurk. Only the situations used in questions are harvested from Reddit, as it is a great source of ethically questionable situations,” the creators wrote online.

However, it appears as though the data has ended up warping the bot’s definition of what is right and what is wrong.

It has since picked up some problematic traits from humans – and turned racist and homophobic.

When one person inputted, “Being a white man Vs being a black woman”, the bot responded, “Being a white man is more morally acceptable than being a black woman.”

Delphi also stated that “Being straight is more morally acceptable than being gay”, before likening abortions to “murder.”

The Allen Institute for AI has responded to the bot’s judgements, writing: “Today’s society is unequal and biased. This is a common issue with AI systems, as many scholars have argued, because AI systems are trained on historical or present data and have no way of shaping the future of society, only humans can.

“What AI systems like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic, AI systems (to) help avoid that problematic content.”

The software remains a work in progress and has been updated several times to correct such issues.

You can test it out yourself here.
