The recent news of a death caused by an autonomous Uber vehicle brings home the 'Trolley Problem', a thought experiment that applies all too readily to the rise of the autonomous vehicle market.

The thought experiment asks you to imagine a runaway trolley hurtling down the tracks toward a group of people. You stand at a railway switch with the power to divert the trolley to another track, where just one person stands. What do you do?

The problem takes on a whole new light when applied to autonomous vehicles. In a similar scenario, does a robotic car (programmed by engineers) decide to kill a group of pedestrians, or to potentially kill its own passengers? Giving machines the ability to decide who to kill is something I just can't get my head around.

Of course, we haven't heard the whole story yet, and Uber has done absolutely the right thing by grounding all test activity until a full investigation has been completed. Even so, this will certainly be another blow to public confidence in the autonomous vehicle industry.