Check out this link: the first reported accident involving Google’s self-driving car.
It’s clear that the Google car rear-ended another Prius. It’s not clear whether the other car stopped short or the Google car failed to stop. Either way, it led me to what I think is an interesting thought experiment.
Imagine you’re driving down a city street at 35 mph. On your right is a family with a baby carriage and a toddler, walking on the sidewalk. Coming in the opposite direction, on your left, is a large commercial truck.
As you near the walking family, the toddler suddenly breaks from the group and runs right in front of you. There’s no way you can stop in time to avoid hitting him; you’re going to have to swerve.
If you steer right, you wipe out Mom, Dad, and baby. If you steer left, you hit the big truck head on. If you do neither, you wipe out the little kid. You have 1 second to decide.
I hope that my brain would steer left. Save the pedestrians. Chances are that at 35 mph and slowing, the driver of the big truck will be OK. And maybe he’ll see the child too, brake, and steer to his right, reducing the impact somewhat. That would be nice.
The conundrum arises because this is a moral decision, not a pragmatic one. It’s a self-sacrifice, choosing not the path of least resistance but the path of the greater good.
Some people might choose to go straight ahead, or turn right. I’m not saying that’s wrong. There are arguments to make that justify that course. We’ll skip them here.
The question is this: Would a car driven by a computer be able to make a decision like that? I doubt it, don’t you? So I think self-driving cars are cool and very interesting, but they can’t replace the instinctive judgment of a human being behind the wheel.
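For what it’s worth, here is a minimal sketch of what “a computer deciding” would actually mean. Everything in it is invented for illustration (the action names, the events, and especially the penalty numbers): an autonomous car can only “choose” by minimizing a cost function someone wrote in advance, and picking those numbers is exactly the moral judgment in question.

```python
# Hypothetical sketch, not anyone's actual self-driving software.
# Each action maps to the events it would cause in the scenario above.
OUTCOMES = {
    "straight": {"child_hit": 1},          # run down the toddler
    "right":    {"pedestrian_hit": 3},     # wipe out Mom, Dad, and baby
    "left":     {"head_on_collision": 1},  # hit the truck, risk yourself
}

# Invented penalties. An engineer would have to pick these numbers
# ahead of time -- which is the moral decision, made in an office.
PENALTY = {"child_hit": 100, "pedestrian_hit": 100, "head_on_collision": 40}

def choose_action(outcomes):
    """Return the action whose predicted events carry the lowest total penalty."""
    def total_cost(action):
        return sum(PENALTY[event] * count
                   for event, count in outcomes[action].items())
    return min(outcomes, key=total_cost)

print(choose_action(OUTCOMES))  # -> "left", but only because of the weights chosen
```

With these weights the car steers left, matching my instinct, but change one constant and it mows down the family instead. The machine isn’t judging; it’s evaluating an arithmetic expression.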
PCWorld: One of Google’s self-driving cars got into an accident earlier this week. But Google is claiming the auto-pilot-equipped Prius was actually flipped into manual mode when the accident happened, making this a case of user error.