When I was teaching my daughter (who as a little girl wanted to be a veterinarian) to drive, I had to tell her the hard truth: If a squirrel dashes into your path, you gotta run over it.

What about a cat or a dog, she asked? Boom, right over them. Sure, brake if you can, safely adjust your path slightly left or right, but don't swerve off the road — then you'll die. In a choice between a human and a squirrel or a cat or a dog, the human wins. Always.

As she always did growing up, she asked the obvious next question: What if it's a person? That's the hard one. We talked about that for an hour, weighing all the moral ramifications.

But soon, self-driving cars will make that decision for you, USA Today reports.

As you approach a rise in the road, heading south, a school bus driven by a human appears heading north and veers sharply toward you. There is no time to stop safely, and no time for you to take control of the car.

Does the car:

A. Swerve sharply into the trees, possibly killing you but possibly saving the bus and its occupants?

B. Perform a sharp evasive maneuver around the bus and into the oncoming lane, possibly saving you, but sending the bus and its driver swerving into the trees, killing her and some of the children on board?

C. Hit the bus, possibly killing you as well as the driver and kids on the bus?

In everyday driving, such no-win choices may be exceedingly rare but, when they happen, what should a self-driving car — programmed in advance — do? And what about any less dire situation where a moral snap judgment must be made?

It's not just a theoretical question anymore, with predictions that in a few years, tens of thousands of semi-autonomous vehicles may be on the roads. About $80 billion has been invested in the field. Tech companies are working feverishly on them, with Google-affiliated Waymo among those testing cars in Michigan, and mobility companies like Uber and Tesla racing to beat them.

But cars will need to be programmed to make that split-second decision. What if it's a deer — swerve or drive right into it? Hitting a deer can be deadly, but far more often people are killed swerving off the road to avoid it.

So who will program the cars and what will they choose?
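No automaker has published its actual decision logic, but to make the programming question concrete, here is one minimal sketch of how such a choice *could* be encoded: rank the available maneuvers by expected harm and pick the lowest. Every name and number below is invented purely for illustration; this is not how any real vehicle is known to work.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """One possible evasive action, with invented risk estimates."""
    name: str
    p_occupant_fatality: float   # chance this car's occupant dies
    p_other_fatality: float      # chance each person outside the car dies
    others_at_risk: int          # how many people outside are exposed

def expected_harm(m: Maneuver) -> float:
    """Expected number of deaths if this maneuver is taken."""
    return m.p_occupant_fatality + m.p_other_fatality * m.others_at_risk

def choose(maneuvers: list[Maneuver]) -> Maneuver:
    """Pick the maneuver that minimizes expected harm."""
    return min(maneuvers, key=expected_harm)

# The three options from the school-bus scenario, with made-up numbers:
options = [
    Maneuver("swerve into the trees", 0.7, 0.0, 0),      # harm: 0.7
    Maneuver("evade into oncoming lane", 0.1, 0.3, 20),  # harm: 6.1
    Maneuver("brake and hit the bus", 0.5, 0.1, 20),     # harm: 2.5
]

print(choose(options).name)  # → swerve into the trees
```

Notice what the sketch exposes: with these made-up numbers, the "rational" car sacrifices its own passenger. Change the weights — say, count the occupant's life twice — and the answer flips. The entire ethical debate is over who gets to set those weights.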

“There will be crashes,” Van Lindberg, an attorney in the Dykema law firm's San Antonio office who specializes in autonomous vehicle issues, told USA Today. “Unusual things will happen. Trees will fall. Animals, kids will dart out.” Even as self-driving cars save thousands of lives, he said, “anyone who gets the short end of that stick is going to be pretty unhappy about it.”

Few people seem to be in a hurry to take on these questions, at least publicly.

It’s unaddressed, for example, in legislation moving through Congress that could result in tens of thousands of autonomous vehicles being put on the roads. In new guidance for automakers by the U.S. Department of Transportation, it is consigned to a footnote that says only that ethical considerations are "important" and links to a brief acknowledgement that "no consensus around acceptable ethical decision-making" has been reached.

Whether the technology in self-driving cars is superhuman or not, there is evidence that people are worried about the choices self-driving cars will be programmed to make.

Last year, for instance, a Daimler executive set off a wave of criticism when he was quoted as saying its autonomous vehicles would prioritize the lives of its passengers over anyone outside the car. The company later insisted he’d been misquoted, since it would be illegal “to make a decision in favor of one person and against another.”

Last month, Sebastian Thrun, who founded Google’s self-driving car initiative, told Bloomberg that the cars will be designed to avoid accidents, but that “If it happens where there is a situation where a car couldn’t escape, it’ll go for the smaller thing.”

And if that "smaller thing" is a child?

Someone will soon decide that — either you die, or the child dies.

Talk about predestination . . .