(Graphic: Cornell University)

"It's too dangerous. You can't have a person driving a two-ton death machine," Elon Musk, one of the pioneers of the driverless car, said back in March.

Some experts predict that driverless technology will eventually become so advanced, and so safe, that you may not even be able to get insurance for a manually driven car.

While driverless technology continues to advance, one problem that Musk and other technologists have yet to solve is an ethical dilemma: in the case of an unavoidable accident, humans might have to programme robots to kill.

The Trolley Problem

There is a runaway trolley barrelling down the railway tracks. Ahead, on the tracks, five people are tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

1. Do nothing, and the trolley kills the five people on the main track.

2. Pull the lever, diverting the trolley onto the side track, where it will kill one person.

Which is the correct choice?
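The dilemma can be reduced to a toy decision rule. The sketch below is purely illustrative (the function name and inputs are hypothetical, not taken from any real autonomous-vehicle software): it shows that a naive casualty-minimising policy always pulls the lever, which is precisely the kind of pre-programmed choice that troubles ethicists.

```python
# Illustrative sketch only: a naive utilitarian rule for the trolley dilemma.
# No real driverless-car system reduces the decision to a casualty count
# like this; the point is to show what "programming the choice" would mean.

def choose_action(casualties_if_nothing: int, casualties_if_pull: int) -> str:
    """Return the action that minimises the number of people killed."""
    if casualties_if_pull < casualties_if_nothing:
        return "pull lever"
    return "do nothing"

# The classic setup: five people on the main track, one on the side track.
print(choose_action(casualties_if_nothing=5, casualties_if_pull=1))
```

Run with the classic numbers, the rule diverts the trolley every time; the controversy is over whether counting lives is the right criterion at all, not over the arithmetic.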
