An autonomous car still has a finite number of actions it can take. Using the information it gathers on the fly, it traverses a pre-programmed decision tree to arrive at the action(s) it should take. Being a computer, it will of course do this faster, and with far more information, than a human could. That is what's going to produce "driving safely."
I'm not saying we should be able to rewrite the code. But a single user option like "Should I sacrifice your life to save another? [Yes/No]" just means the decision tree has one more branch point that still leads to the same set of possible actions. The car can still make better driving decisions than a human driver, because it still has access to more information and more precise controls. Deciding to crash me into a tunnel wall to save a child isn't really about "driving safely."
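To make that concrete, here's a toy sketch of the point. Everything in it is hypothetical (the sensor keys, the action names, the function itself are made up for illustration): the user's [Yes/No] choice is just one more branch point, and every branch still terminates in the same fixed action set.

```python
# Toy sketch (all names hypothetical): a user preference is just another
# branch in a decision tree whose leaves are the same finite action set.

ACTIONS = {"brake", "swerve_left", "swerve_right", "stay_course"}

def choose_action(sensors: dict, sacrifice_self: bool) -> str:
    """Pick from the same finite action set; the [Yes/No] option only
    changes which branch is taken, not which actions exist."""
    if sensors.get("obstacle_ahead"):
        if sensors.get("pedestrian_in_escape_path"):
            # This is the only branch the user option decides.
            action = "swerve_left" if sacrifice_self else "brake"
        else:
            action = "swerve_left"
    else:
        action = "stay_course"
    assert action in ACTIONS  # every branch ends in the same set
    return action
```

Flipping `sacrifice_self` changes one branch's outcome; it doesn't give the car any new capabilities or take any away.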
Plus, the law has already made that choice for you. Intentionally sacrificing another life to save your own is second-degree murder (first-degree, if answering this question means you planned it in advance).
If you answer yes, then aside from being a scumbag, you could face execution if the car actually takes that action (the worst possible case, granted, but there's a minimum prison sentence too).
Plus, this would hardly be the first way a car could kill you. The brakes can fail, the steering column can fail, the gas tank can rupture. All of these are rare, but certainly not impossible, and quite capable of killing or at least injuring you.