
What (or who) does the self-driving car of the future choose to hit when crash is unavoidable?

FILE - In this May 2014 file photo, a Google self-driving car goes on a test drive near the Computer History Museum in Mountain View, Calif. While these cars promise to be much safer, accidents will be inevitable. How those cars should react when faced with a series of bad, perhaps deadly, options is a field even less developed than the technology itself. The relatively easy part is writing computer code that will dictate how a car should react.
Image Credit: AP Photo/Eric Risberg, File

LOS ANGELES, Calif. - A large truck speeding in the opposite direction suddenly veers into your lane.

Jerk the wheel left and smash into a bicyclist?

Swerve right toward a family on foot?

Slam the brakes and brace for head-on impact?

Drivers make split-second decisions based on instinct and a limited view of the dangers around them. The cars of the future — those that can drive themselves thanks to an array of sensors and computing power — will have near-perfect perception and react based on preprogrammed logic.

While cars that do most or even all of the driving may be much safer, accidents will still happen.

It's relatively easy to write computer code that directs the car how to respond to a sudden dilemma. The hard part is deciding what that response should be.

"The problem is, who's determining what we want?" asks Jeffrey Miller, a University of Southern California professor who develops driverless vehicle software. "You're not going to have 100 per cent buy-in that says, 'Hit the guy on the right.'"

The companies now testing driverless cars are not focusing on these moral questions.

The company most aggressively developing self-driving cars isn't a carmaker at all. Google has invested heavily in the technology, driving hundreds of thousands of miles on roads and highways in tricked-out Priuses and Lexus SUVs. Leaders at the Silicon Valley giant have said they want to get the technology to the public by 2017.

For now, Google is focused on mastering the most common driving scenarios, programming the cars to drive defensively in hopes of avoiding the rare instances when an accident is truly unavoidable.

"People are philosophizing about it, but the question about real-world capability and real-world events that can affect us, we really haven't studied that issue," said Ron Medford, the director of safety for Google's self-driving car project.

One of those philosophers is Patrick Lin, a professor who directs the ethics and emerging sciences group at Cal Poly, San Luis Obispo.

"This is one of the most profoundly serious decisions we can make. Program a machine that can foreseeably lead to someone's death," said Lin. "When we make programming decisions, we expect those to be as right as we can be."

What right looks like may differ from company to company, but according to Lin automakers have a duty to show that they have wrestled with these complex questions — and publicly reveal the answers they reach.

Technological advances will only add to the complexity, especially when in-car sensors become acute enough to differentiate between, say, a motorcyclist wearing a helmet and a companion riding without one. If a collision is inevitable, should the car hit the person with a helmet because the injury risk might be less? That would penalize the person who took the extra precaution.
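
A hypothetical sketch of that dilemma, with invented injury probabilities: a naive rule that simply minimizes expected injury would steer toward the rider who wore the helmet.

```python
# Hypothetical illustration of the helmet dilemma; probabilities invented.
riders = {
    "helmeted rider": 0.4,    # assumed chance of serious injury
    "unhelmeted rider": 0.9,
}

# Naive policy: aim for the lowest expected injury...
target = min(riders, key=riders.get)
print(target)  # -> "helmeted rider": the precaution is penalized
```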

Lin said he has discussed the ethics of driverless cars with Google as well as automakers including Tesla, Nissan and BMW. As far as he knows, only BMW has formed an internal group to study the issue.

Many automakers remain skeptical that cars will operate completely without drivers, at least within the next five or 10 years.

Uwe Higgen, head of BMW's group technology office in Silicon Valley, said the automaker has brought together specialists in technology, ethics, social impact, and the law to discuss a range of issues related to cars that do ever-more driving instead of people.

"This is a constant process going forward," Higgen said.

To some, the fundamental moral question isn't about rare, catastrophic accidents but about how to weigh appropriate caution in introducing the technology against its potential to save lives. After all, more than 30,000 people die in traffic accidents each year in the United States.

"No one has a good answer for how safe is safe enough," said Bryant Walker Smith, a law professor who has written extensively on self-driving cars. The cars "are going to crash, and that is something that the companies need to accept and the public needs to accept."

And what about government regulators — how will they react to crashes, especially those that are particularly gruesome or the result of a decision that a person would be unlikely to make? Just four states have passed any rules governing self-driving cars on public roads, and the federal government appears to be in no hurry to regulate them.

In California, the Department of Motor Vehicles is discussing ethical questions with companies but isn't writing rules.

"That's a natural question that would come up and it does come up," said Bernard Soriano, the department's point man on driverless cars, of how cars should decide between a series of bad choices. "There will have to be some sort of explanation."

News from © The Canadian Press, 2014
