A car is a high-speed, heavy object with the power to kill its users and the people around them. A compromise in the software that allows an attacker to take over the brakes, accelerator and steering (such as last summer’s exploit against Chrysler’s Jeeps, which triggered a 1.4m-vehicle recall) is a nightmare scenario. The only thing worse would be such an exploit against a car designed to have no user override – designed, in fact, to treat any attempt by the vehicle’s user to redirect its programming as a selfish bid to dodge the Trolley Problem’s cold equations.
Whatever problems we will have with self-driving cars, they will be worsened by designing them to treat their passengers as adversaries.
The hard part of making a self-driving car is writing the AI. These days, the hard part of writing the AI is deciding which decisions the AI should be biased towards making.