Should your robot car kill you to save others?

Chuckanut

Here's an interesting technology ethics problem:

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there's too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.
Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn't be simpler.


 
Interesting dilemma.

My initial thought is...

Leave the moral decisions to me as the owner, not the machine. I want the machine to protect me first. If, during whatever scenario presents itself, I'm able to take control and veer away over the cliff (or whatever escape presents itself that might save others), in the heat of the moment, then I will.

But I don't want my machine, the one I paid for, the one I own, to make that decision on my behalf.
 
I think it'll be a while before autonomous cars are that smart (size of other car, number of occupants). And I gather you're assuming you and the two oncoming drivers would all survive said collision, which is not a given: it could be one to three fatalities or permanently disabling injuries. Going right would have been the right call more often than not, since roadside cliffs aren't that common. The robot still looks pretty smart.

And I doubt there will ever come a time (for many generations at least) where a driver can't override the "robot."

An answer looking for a (low probability) question?
 
The scenario should become a rare anomaly with robotics?

You enter the vehicle to depart for your destination; if it fails its self-checks, end of trip. Underway, vehicle spacing and speed are maintained within the response time for conditions. When a vehicle encounters a problem in transit, every vehicle in proximity is networked, and microseconds later the overarching controller sends commands reacting to vector/velocity information.

Yada, yada, or you know the evil robot is just waiting to pounce.
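The networked scheme described above could be sketched in a few lines. This is purely a toy illustration: the function names, the fixed 100 m radius, and the "slow everything nearby to half speed" rule are all invented for the example, not any real V2V protocol.

```python
# Toy sketch of an "overarching controller": vehicles report position and
# speed, and when one reports an anomaly, the controller pushes simple
# avoidance commands to everything in proximity. All names and thresholds
# here are invented for illustration.

def react_to_anomaly(anomaly_pos, vehicles, radius_m=100.0):
    """Return 'brake' commands for every vehicle near the anomaly.

    vehicles maps an id to (x, y, speed) in meters and m/s.
    """
    commands = {}
    ax, ay = anomaly_pos
    for vid, (x, y, speed) in vehicles.items():
        distance = ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5
        if distance <= radius_m:
            # Crude rule: everyone in range brakes to half speed.
            commands[vid] = {"action": "brake", "target_speed": speed * 0.5}
    return commands

vehicles = {
    "A": (0.0, 0.0, 30.0),    # right next to the blowout
    "B": (50.0, 0.0, 28.0),   # also in range
    "C": (500.0, 0.0, 25.0),  # far away, unaffected
}
print(react_to_anomaly((10.0, 0.0), vehicles))
```

In this sketch only vehicles A and B get a command; C is outside the radius and drives on. A real system would obviously need to reason about trajectories, not just distance.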
 
Might kind of sway your car purchase decision if a car maker wanted to advertise us versus them safety functions.
 
Here's an interesting technology ethics problem...


First, don't anthropomorphize robots. They really hate that...:cool:

As far as ethics go, the automation system is not deciding to kill you. It is deciding, based on its current inputs and system state, to take a predefined series of branches in a decision tree that was set down by an engineering team before you ever purchased the vehicle (unless you just installed that new software update...).

Exactly what's in that decision tree will likely be determined by a combination of mechanical engineering, the vehicle's physics, the motor vehicle codes for wherever it operates, and a set of nebulous requirements from Corporate Legal.
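In the simplest form, that predefined decision tree boils down to ranking the available maneuvers by a harm score the engineers baked in ahead of time. Here is a minimal sketch of that idea; the maneuver names and harm numbers are invented for illustration and stand in for whatever the engineering (and legal) teams actually encode.

```python
# Toy sketch of a predefined collision-response decision: the "choice"
# is just a lookup over pre-ranked options, fixed long before the crash.
# All maneuvers and expected_harm scores below are invented examples.

def choose_response(options):
    """Pick the maneuver with the lowest pre-assigned expected-harm score."""
    return min(options, key=lambda o: o["expected_harm"])

options = [
    {"maneuver": "swerve_left",  "expected_harm": 2.0},  # head-on, two people in the other car
    {"maneuver": "swerve_right", "expected_harm": 1.0},  # over the cliff, one occupant at risk
    {"maneuver": "brake_only",   "expected_harm": 1.5},  # partial collision
]

print(choose_response(options)["maneuver"])  # swerve_right
```

The point of the sketch is that nothing is "decided" at run time in any moral sense: change the numbers in the table, and the car drives off the cliff or doesn't.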

I eagerly await the first few hundred cases of accident litigation following the introduction of these vehicles. :nonono:
 
Good post ^^^


But... I doubt that the robot would even know there was a cliff there to fall off... and tell me... how many cliffs are out there without a rail? :confused:
 
I think that the robot would definitely know that there was a cliff there and all of the other parameters of the road it was traveling on. I suspect that the First Law of Robotics would have it taking the action which would result in the highest probability of its vehicle's occupant(s) surviving. This would be if it were truly a robot rather than just a computer.
 
After the singularity, auto arbitration will uphold the logical course of action. No need for popcorn; the lawsuit gets quashed at filing.
 
I would think a robot would be smart enough to know the proper procedure for recovering from a tire blowout without needing to choose a drastic swerve left or right. Most accidents from blowouts are caused by drivers not doing the right thing (like slamming on the brakes). I think in this case I would have more trust in a robot being able to make the right decision than in most drivers.
 
Good post ^^^


But... I doubt that the robot even knew there was a cliff there to fall off... and tell me... how many cliffs are out there without a rail:confused:
The Blue Ridge Parkway has a LOT of cliffs, and only some guardrails, mostly on curves. In the 13 mile stretch I drive a lot, I remember 3 or 4 cars that have gone over the edge for one reason or another-- could be a flat tire, or started by an animal darting out in the road, or just inattentiveness. I think all came to a stop 50-200 feet down, hung up in trees and such, and no fatalities. I also have a friend that lives out in Telluride and says there's a place where come the spring thaw they often find a car or two that went over the cliff, and they discover what happened to that person that went missing over the winter.
 
No, my robot car should kill OTHERS to save ME !! The Narcissistmobile 2.0 !!

Additionally, if the robot car knows there are 2 other people in the other car, who are not having to focus on driving, does it know what they are doing together to.. umhh.. occupy the time ?! If so, I present the Voyeurmobile 2.0
 
Interesting, and I don't think it's as 'far out' as some are saying. After all, who could have predicted the technology we have today from 40 years ago?

Our car already knows something about the size/weight of a front seat passenger and sets the airbag accordingly.

It's really not hard to imagine that each car on the road could be broadcasting information about itself - GPS co-ordinates (or some more advanced location service?), its current speed and direction, number of occupants, maybe a 'crash profile' of the vehicle (the force you will experience on impact on each angle of the vehicle?), or at least its size/weight, the amount/type of fuel on board, or other hazardous material. Our current technology is pretty close to being able to do that.

Based on that, it seems it would be possible to make a 'least damage' decision if a crash was imminent.
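That 'least damage' decision over broadcast data could be sketched roughly like this. To be clear, the message fields and the cost formula are made up for the example; a real system would need far richer crash profiles than occupant count and mass.

```python
# Hypothetical sketch of a 'least damage' choice based on the data each
# nearby vehicle might broadcast (occupants, size/weight, etc.).
# The Broadcast fields and expected_cost formula are invented here.

from dataclasses import dataclass

@dataclass
class Broadcast:
    vehicle_id: str
    occupants: int
    mass_kg: float

def expected_cost(target: Broadcast, own_speed_ms: float) -> float:
    # Crude proxy: more occupants and higher speed mean a worse outcome;
    # a heavier target vehicle absorbs the impact better.
    return target.occupants * own_speed_ms ** 2 / target.mass_kg

def least_damage_target(nearby, own_speed_ms):
    """Of the unavoidable collision options, pick the lowest-cost one."""
    return min(nearby, key=lambda b: expected_cost(b, own_speed_ms))

nearby = [
    Broadcast("compact", occupants=2, mass_kg=1200.0),
    Broadcast("truck",   occupants=1, mass_kg=3000.0),
]
print(least_damage_target(nearby, 30.0).vehicle_id)  # truck
```

Here the sketch steers into the single-occupant truck rather than the compact carrying two people, which is exactly the kind of pre-programmed trade-off the liability point below is about.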

But I can think of a few reasons why they would not include this level of decision making:

1) Despite our reasonable fears of software glitches causing problems, I would expect that autonomous vehicles could be far, far safer than the human glitches we experience practically every outing. So this type of software would not be needed often, the crashes would largely be avoided. And in the case above, if this vehicle broadcast to the other cars that it had a blow-out, all those cars would brake hard, and swerve to avoid it, which would lessen the damage. They could even communicate with each other to provide an open area along its most likely path. Try to get a bunch of drivers to co-ordinate that in an instant!

2) Liability (probably the biggest) - what company wants to explain in court why their vehicle chose to wipe out Car A and kill those passengers instead of the passengers in Car B? Maybe the decision is based on the number of Facebook friends that could show up in court (the computers would have access to everything! ;) )

Think of the ramifications. The car could have a complete profile of the expected damages and potential injuries, with expected costs. Recall the old 'joke' that it's cheaper to run over someone and kill them than to injure them for life? Would your car make that decision - kill the driver alone in a car, rather than injure a mini-van full of girl scouts? Then there is that old saw, 'Not to decide is to decide.' Something's going to happen, either way.

I recall reading recently about pacemakers - I was wondering why they don't use a simple rechargeable battery with an inductive coil implanted with it. You could recharge it by putting the charger plate in close proximity for maybe an hour a week? The current method is long life lithiums, which require surgical replacement after X years - sounds risky to me.

Well, it turns out they did use rechargeable systems many years ago, but there were liability concerns - what if the patient forgot to recharge it? I'd like to have that choice. Heck, today the charger could be programmed to 'phone home' and report to the doctor/manufacturer/care-giver that it was into a yellow zone and was over-due (but not yet critical) for a recharge.

-ERD50
 
No, my robot car should kill OTHERS to save ME !! The Narcissistmobile 2.0 !! ...

:LOL: Only available as an expensive upgrade!

Now imagine that one of these SW glitches happens, and it is found that there was a race condition caused when the SW was processing a subroutine to handle a phone call that the occupants were making from the built in communication system.

Would we need to pass a law that says robots can't phone and drive at the same time? :cool:

-ERD50
 
FWIW, here is some news on how Volvo plans to handle the liability issues surrounding self driving cars:

When the car is being manually driven, the driver will be at fault in an accident. But, if the car is in autonomous mode and causes a crash, Volvo will take responsibility.

Presumably, who was driving just before and at the moment of an accident will be easily known via the car's computer records. I wonder how Volvo's policy will reconcile with state liability laws.
 
Sounds to me like someone is a big fan of I, Robot, the movie starring Will Smith.

How well did the robots' decision-making process work out there? ;)
 
I saw I, Robot and, alas, it was a poor depiction of the book, in my opinion.

Movies never tell us anything about reality, except for the odd reality that the entertainment industry lives in.
 
Knowing how many problems a lot of the car makers are having, I imagine they would use Internet Explorer as a staple, and a lot of us would be dead before we even got home..........as you would always be "crashing".........:)
 
Asimov's I, Robot tells us that this and many more situations will arise. I would think that as long as an insurance company, or car company, will insure the car, things will work out.
 