The Ethical Dilemma of Self-Driving Cars

Researchers examine the murky decision-making that autonomous vehicles will be asked to perform.

This animated short film unpacks the difficult questions that must be addressed as self-driving cars inch ever closer to becoming a reality on our highways.

Recent advances by companies such as Elon Musk’s Tesla, Toyota and even Google suggest that autonomous cars may soon become a regular part of modern traffic.

The benefits of self-driving cars are clear. By removing human error and emotion from the equation, they would probably make our roads safer and less congested, while freeing passengers to do other tasks in transit.

But accidents happen. The question is, how would we expect a self-driving car to respond when our lives or the lives of other commuters are at stake? For instance, if evading an obstacle would save its user’s life but put the lives of other commuters at greater risk, what would we programme the AI to do?

TED-Ed researcher Patrick Lin (with help from animator Yukai Du) puts forth a number of scenarios that test the ethical decision-making these cars will have to perform. It becomes clear that the car’s judgement will rest on programming implemented by an engineer years in advance – how will they instruct the car to act in a moment of impending collision? As the narrator of the film comments, “That sounds more like premeditated murder”.

The film also raises a question that consumers will have to confront when purchasing a self-driving car – would you buy one that is programmed to protect you (the passenger) at all costs, or one that attempts to save as many lives as possible in the situation? Since the difference between these two theoretical cars would result in different collision outcomes, it is a valuable (and potentially life-altering) thought experiment to consider.