The recent news of a death caused by an autonomous Uber vehicle brings home the ‘Trolley Problem’, an ethical thought experiment that can be applied to the recent rise of the autonomous vehicle market.
The thought experiment asks you to imagine a runaway trolley hurtling down the tracks toward a group of people. You stand at a railway switch with the power to divert the trolley onto another track, where just one person stands. What do you do?
The problem is cast in a whole new light when applied to autonomous vehicles. In a similar scenario, does a robotic car (programmed by engineers) decide to kill a group of pedestrians, or to potentially kill its own passengers? Giving machines the ability to decide who to kill is something I just can’t get my head around.
Of course, we haven't heard the whole story yet, and Uber have done absolutely the right thing by grounding all test activity until a full investigation has been completed. Still, this will certainly be another blow to public confidence in the autonomous vehicle industry.
Uber said it is suspending self-driving car tests in all North American cities after a fatal accident. A 49-year-old woman was hit and killed by one of the company's self-driving cars as she crossed the street in Tempe, Arizona. While self-driving cars have been involved in multiple accidents, this is thought to be the first time an autonomous car has been involved in a fatal collision. Uber said that its "hearts go out to the victim's family".
Read the original article here