Self-Driving Cars -- Needs of the Many vs Numero Uno


Or maybe it is? From that article:

Tesla said Thursday the death was "the first known fatality in just over 130 million miles where Autopilot was activated," while a fatality happens once every 60 million miles worldwide.

This is one of the problems - self-driving may be safer on average, but every accident will get attention. I still think the tech should be approached very carefully, and any driver who just relies on it at this point is plain stupid. If I were to use it, it would be as a test, and I'd remain 100% in control the whole time - at least until there is far more experience with the tech.


I am not convinced self-driving cars will be that good. Try driving in a snowstorm with 40 mph crosswinds and shelter belts occasionally beside the road. How will the software react to the sudden disappearance and reappearance of the crosswind as you pass the tree line?

I would expect a properly designed CPU/system to respond to a crosswind faster and better than any human. My little $16 quadcopter does a good job of staying level in some crosswinds (within the limits of its tiny motors).

-ERD50
 
My understanding is that 100% self-driving cars rely very heavily on very detailed digitized photos of the area they are in, plus very accurate maps. Detours, combined with changes in buildings or other structures, can cause these cars to literally not know where they are. Not so good.
Paving and milling operations creating uneven lanes, repainting of lines, traffic cones, flaggers - all exciting problems for autonomous vehicles.

There is a LOT to work out.

I'm thinking the ultimate V2V, V2I fully autonomous timeframe is about 50 years from now. We're going to have some serious transition times before that.

I can see it starting with dedicated lanes which are fully autonomous. Perhaps this would start in about 10 years. Consider the current switchable express lanes you see in areas like Washington, DC. These lanes are boxed in. I could imagine those starting as "autonomous only".

Over time, more lanes or roads would be added that are autonomous only. No manual cars on these.

On other roads, tolls may be added to manually drive. Incentives will be required to give up the old cars, and the best incentive is a penalty (toll).

There will be a long period where both coexist on most roads - I'm guessing starting about 25 years from now and lasting until 50 years from now. The autonomous cars will start out very conservatively. They will only run on sunny days; impending weather will have them go to a safe place to switch to manual. If you are that stroke victim who cannot drive, you'll have to get an Uber or Lyft from there.

Etc., etc.

I love to drive. I don't like this future, but it is coming. I also think it is both sooner and later than people think. Sooner for basic operation (some would say it is happening now with Tesla autopilot), later for the real vision of running on a snowy day with 40 mph winds.
 
Or maybe it is? From that article:

Tesla said Thursday the death was "the first known fatality in just over 130 million miles where Autopilot was activated," while a fatality happens once every 60 million miles worldwide.

I beg to differ. Again, as I said earlier, there are all kinds of videos on YouTube showing Tesla's autopilot screwing up so that the driver had to override it, or disconnecting unexpectedly so that the driver had to take over.

Does Tesla have data showing, out of those 130 million miles, how many hiccups like the above occurred? Any of those hiccups could have become a fatal accident for the driver or a bystander if the driver was not attentive.

That's a big difference from the public expectation that they can go to the back seat and take a nap or read a novel.

I would expect a properly designed CPU/system to respond to a crosswind faster and better than any human. My little $16 quadcopter does a good job of staying level in some crosswinds (within the limits of its tiny motors).

Yes. Computers are a lot faster and more accurate than a human being (I worked on R&D autopilot projects for manned and unmanned aircraft). But a computer brings other problems that need to be solved. Namely, a computer's decisions can only be as good as the information it gets. What sensors does the computer need to do the job? A car-driving computer needs a lot more info than a quad-rotor, which does not swerve around obstacles or worry about running into other quad-rotors.

In this case, Tesla discovered that its computer vision literally could not see the broadside of an 18-wheeler!
 
...
There is a LOT to work out.

I'm thinking the ultimate V2V, V2I fully autonomous timeframe is about 50 years from now. We're going to have some serious transition times before that.

I can see it starting with dedicated lanes which are fully autonomous. Perhaps this would start in about 10 years. Consider the current switchable express lanes you see in areas like Washington, DC. These lanes are boxed in. I could imagine those starting as "autonomous only". ...

And I could see where we start with only the big semi-trucks in those lanes. That makes so much more sense to me. Put the tech on the vehicles that are very expensive, and drive many miles a year, and spend less time in complicated situations like pedestrians, cross traffic, etc. Work out the bugs there, while the tech keeps advancing.

As to your 50-year timeframe comments, I think back to what people thought of as the biggest problems as they approached the year 1900. It was horses in the streets, their manure, and dead horses rotting alongside the road. No one envisioned that the auto would replace most horses in a mere 20-30 years. Fifty years from now, the idea of a private automobile might seem very quaint, and no 'solutions' will be needed?

I beg to differ. Again, as I said earlier, there are all kinds of videos on YouTube showing Tesla's autopilot screwing up so that the driver had to override it, or disconnecting unexpectedly so that the driver had to take over.

Does Tesla have data showing, out of those 130 million miles, how many hiccups like the above occurred? Any of those hiccups could have become a fatal accident for the driver or a bystander if the driver was not attentive. ...

True, but how many close calls occur in normal driving that don't result in a death or accident (and no YouTube video, because it's just a plain-Jane 2010 Civic)? Comparisons can always be flawed, and I'm not sure Tesla's is so good here - they compare to worldwide deaths per million miles? Maybe a better measure would be the country where this death occurred. It might be better or worse, but if deaths + injuries + accidents are lower, that says something.
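For what it's worth, the raw numbers in the quoted statement are easy to put side by side. A quick back-of-the-envelope sketch, using only the figures from the article (it does nothing to address the sampling caveats above, like what kinds of roads Autopilot miles are driven on):

```python
# Figures from the quoted article: one Autopilot fatality in ~130 million
# miles, versus one fatality per ~60 million vehicle miles worldwide.
autopilot_rate = 1 / 130e6   # deaths per mile with Autopilot engaged
worldwide_rate = 1 / 60e6    # deaths per mile, all driving worldwide

# Express both as deaths per 100 million miles for readability.
per_100m_autopilot = autopilot_rate * 100e6
per_100m_worldwide = worldwide_rate * 100e6

print(f"Autopilot: {per_100m_autopilot:.2f} deaths per 100M miles")  # 0.77
print(f"Worldwide: {per_100m_worldwide:.2f} deaths per 100M miles")  # 1.67
```

So the quoted rate is roughly half the worldwide rate, but as noted, the baselines are not comparable apples to apples.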


That's a big difference from the public expectation that they can go to the back seat and take a nap or read a novel.

But Tesla is instructing drivers to maintain full control. I think maybe they need to do more to reinforce that (back to the 'driver awareness' monitoring I mentioned)?

In this case, Tesla discovered that its computer vision literally could not see a broadside of an 18-wheeler!

And that seems like a big gap between the stage of alpha or beta testing with a select group of drivers under controlled conditions, versus letting consumer-level drivers use the feature. I think Tesla (and others) need a LOT more evaluation under controlled conditions before letting this out into the wild.


Though it seems they are using these drivers to collect data to improve the system, and figure they get more data than other test modes could provide. Maybe that's unethical, but if it is actually safer on average, maybe it is unethical to withhold it? Tough questions, similar to some pharmaceuticals in the testing phase.

-ERD50
 
I'm not a Tesla driver, nor have I been inside one, but watching these YouTube videos shows me that people treat this feature more like a toy. Those drivers are the ones who would respond to a hiccup and live. The ones who trust it (like the driver in that fatal accident) will be the ones we read about in the paper.

So, if I cannot trust something like that yet, how is it going to help me relax behind the wheel? Being on my toes all the time, wondering whether the autopilot sees the same things I am seeing, and second-guessing it constantly is too damn stressful for me.

If you watch Google's presentations on this technology, you might agree with me that their approach makes a lot more sense than Tesla's, which I think is more about grabbing headlines to sell more cars (and stock).
 
As to your 50-year timeframe comments, I think back to what people thought of as the biggest problems as they approached the year 1900. It was horses in the streets, their manure, and dead horses rotting alongside the road. No one envisioned that the auto would replace most horses in a mere 20-30 years. Fifty years from now, the idea of a private automobile might seem very quaint, and no 'solutions' will be needed?

-ERD50

Very well, perhaps! If it is a "Star Trek" kind of device, sign me up.

If it is some sort of hyper-fast-bus or train, don't sign me up. Don't care to get the lice and fart gas from my fellow passengers. Don't care to wait at the hyper-fast-bus stop and then have to transfer to whatever.

We'll see, if we live long enough.

And, hey, I'm saying 50 years is about what it will take for the real vision of this to be realized. Some of it will be partially real much sooner.

I use 50 mostly because of the analogy of airbags. Airbags were envisioned in the '50s. The first real, crude ones came out in the '70s. Ten years later, they finally became much more mainstream. Constant improvements have occurred over the last 30 years, to the point that we can probably finally say the technology is fully matured -- once we get the jugular-slicing Takatas out of the way.
 
We purchased an Acura MDX about a month ago. This car has Lane Keeping Assist (LKA) and Adaptive Cruise Control (ACC). Now that we have driven it for about four weeks, some observations.

I love the two systems on the highway! I wondered how it determined that my hands were on the wheel; I think I figured out that it looks for minute steering pressure. I can hold on with my thumb and forefinger, and as the wheel moves, a very slight resistance to the turn signals to the computer that you are steering. If you take your hand off the wheel, there is no resistance, and it will give you a warning in about 46 seconds.

This is not a system where you can kick back and read a newspaper. You can, however, look around more, or take your hand off the wheel to get something out of your pocket or the glove box. On a divided highway it feels OK. On a two-lane back road, not so much.

The ACC is really nice. Set it for the speed limit, and if the car in front is going slower, it does not run up on it. With regular cruise control, you have to kick it off, and the urge to pass a slower car sets in; often we would sit behind a car going a couple of miles under the speed limit for a while before we noticed. It is a more relaxed way to drive.

I think you do lose a little 'situational awareness,' as we say in the flying game. Because you are not quite as concerned with staying in your lane, and you monitor the system to make sure it is still on, I don't seem to pay as much attention to what is beside or behind me. This has nothing to do with the assisted driving, but the Acura loses cars in the rear-view mirror when the nose of the other car reaches your back bumper. The side mirrors will pick it up and show it until it is even with the driver/passenger side window. It takes a little getting used to if you don't have a habit of monitoring the side mirrors. Acura offers a system that monitors cars around you, but our model does not have it. The next one will.

ACC: This works well on the open road with cars in front. No problem with normal traffic cutting in front going faster or about the same speed. However, it is not so good if the guy in front slows suddenly. It will slow also, but it is more aggressive than it needs to be. It does get your attention. Once more, if you are paying attention, you can usually anticipate this and kick the ACC off.

No intention of hijacking the thread. Just some thoughts on semi-autonomous driving.
 
I am not convinced self-driving cars will be that good. Try driving in a snowstorm with 40 mph crosswinds and shelter belts occasionally beside the road. How will the software react to the sudden disappearance and reappearance of the crosswind as you pass the tree line?

My understanding is that the current self-drive vehicles will not operate in snow conditions; I'm not sure they ever will.
 
Though it seems they are using these drivers to collect data to improve the system, and figure they get more data than other test modes could provide. Maybe that's unethical, but if it is actually safer on average, maybe it is unethical to withhold it? Tough questions, similar to some pharmaceuticals in the testing phase.

I'm not in the medical field, but I happened to read about the difficulties pharmaceutical companies face in testing drugs. Basically, it's an ethical problem, and they follow the "First, do no harm" maxim absolutely.

Let's say there's an existing drug A that treats 70% of the people. They now have a drug B which they want to test. Maybe it could be successful for 80% of the people. Or it could be good for only 25%. How do they proceed?

As they do not know, they cannot withhold treatment with drug A which already works, albeit not perfectly. And so, they have to start with patients who already do not respond to drug A. Those have nothing left to lose. The problem is that those patients of course are the harder cases, and perhaps many of them would not respond to anything. So, drug B may be a better replacement for drug A, but it takes a lot of work before they can establish that.

In the case of the autopilot, unless it is proven to be safe, I am not going to use it and trust it. Driving is not a mandatory thing, something I need to do to survive, like taking a cancer drug. My driving skills are not perfect, and also there are external risks. So, I compensate by driving less, or driving slower, or avoiding rush hour where there are more idiots on the road. I do not drive when I am drunk, or sleepy, etc... I do not drive at night if I can help it. If I am not in a car, my risk of a traffic accident is zero (unless someone drives through my living room window).

So, given the precautionary measures that I already take, I will trust an autopilot only when it is proven to be safer than I am. In the case of the Tesla, I do not see how anyone can now trust something that failed to see the broadside of an 18-wheeler.

Anyway, back to the thread topic, being a pragmatist I will say that the ethical problem of killing the occupant of a car to save bystanders will remain academic for a while.

Let's see them being able to avoid inanimate objects first before talking about higher ethical dilemmas.
 
We purchased an Acura MDX about a month ago. This car has Lane Keeping Assist (LKA) and Adaptive Cruise Control (ACC). Now that we have driven it for about four weeks, some observations...

I do not have a new car with features like yours, but I believe that some driver assistance features are useful.

I don't know if, when a feature is inoperative because of an equipment failure, it would let you know. If you rely on something and it fails silently, that is bad. As a former pilot, you know about safety features on an aircraft. The autopilots we designed all had BIT (built-in test), where all the equipment was automatically exercised and verified by the computers to be operational before the pilot could take off. A broken wire or a bad connector pin among several thousand, and the system detected it immediately.
 
My understanding is that the current self-drive vehicles will not operate in snow conditions; I'm not sure they ever will.

Actually, the lane-departure systems rely on a white line (dashed or solid) being on both sides of the lane. On Hondas, at least, there are cameras on the rear-view mirror that look forward, and the computer looks for the lines in the images.

Further, the systems sound a chime and/or display a message if they are not working. (One example: the lane departure warning/road departure warning on Hondas turns off if the windshield wipers are on in continuous mode.) Since such systems depend on seeing lines on the pavement to navigate, snow would shut them down, as the manual on the current systems warns.
 
My understanding is that the current self-drive vehicles will not operate in snow conditions; I'm not sure they ever will.

If self-driving cars can operate the vast majority of the time, when driving is relatively straightforward, but not in the most sticky situations, the result will be a lot of inexperienced drivers having to take over at the wheel at the exact time when the most skillful driving is needed.
 
My understanding is that the current self-drive vehicles will not operate in snow conditions; I'm not sure they ever will.

Back to my 50 year rule.

When all this self driving stuff was first envisioned over 50 years ago, the thought was some kind of wire in the pavement. This wire would be something that the electro-mechanical devices of the day could figuratively latch onto, and allow the car to stay in the lane.

Today, we have computers that can use machine vision to watch the lines of the road. This is far better than the mechanical systems of years ago. But clearly, it fails in snow.

Think ahead to a possible future:
- Improved GPS knows the car's location down to the centimeter
- Local information is available from every streetlamp
- Cars ahead are constantly providing feedback to other cars

The local information could be weather data, road construction, potholes, etc.

The cars ahead will notify other cars of yaw anomalies that the occupants don't even feel, but that are sure signs of slippery conditions. Cars behind adjust accordingly.

Etc., etc., etc. Don't think "one car". Think "system".

But it is a long way off.
 
I'm surprised that more people aren't concerned with the government "creep" and excessive regulations that would be sure to accompany driverless cars. You can guarantee that driverless cars will have some government control and data collection, and the visions of a driverless utopia could easily become lost in a maze of bureaucratic control, red tape, and restrictions.

Kind of brings us back to the OP's post. If there is government "creep," do we really want driving decisions dictated that way? One can accept the government dictating rules of the road and speed limits, but not what maneuvers our cars make.
 
I wonder how these discussions compare to the advent and slow but complete adoption of the automobile from horses? Or when commercial airliners became a norm? Funny how we can't see the future...
 
It will happen, but it takes time. It likely will take less than 50 years, but one thing is for sure: we are not there yet. Certainly not with Tesla's technology.

It is hard to predict future technology, but if we believed the most optimistic predictions in the past, we should be living on the Moon by now. Or deep under the ocean, or high on mountain tops, or having a huge bubble over our cities to provide ideal climate year round, etc... Or nobody dies of cancer anymore...
 
Driving to Lake Tahoe in a Tesla last Monday, DH and I noticed that a big rig hauling a trailer was pulling up alongside us in the right-hand lane, but the car's sensors were not picking it up. "Too tall for the sensors," DH opined. Eventually, as the big rig continued passing ahead of us, a silhouette of the truck appeared on our dashboard display. I asked, "Do you think that if I asked autopilot to change lanes, it would drive right into the side of a big-rig trailer?" "Yup!" DH replied.

Think I'll leave the driving to us. The adaptive cruise control works pretty great though.

The fact that the car kept driving after the collision was disturbing, however.
 
It will happen, but it takes time. It likely will take less than 50 years, but one thing is for sure: we are not there yet. Certainly not with Tesla's technology.
I know I'm the parrot saying "50 years." What I mean by that is the full system, with full vehicle-to-vehicle and vehicle-to-infrastructure communication. Until we get that, the promise of self-driving cars will be muted, with only part-time availability in force on certain roads or in certain situations. That means the stroke victim or person with poor eyesight won't be able to participate, since a manual driving component will be required for some portion of the trip.

Of course, the part-time capability (long trips on interstates, for instance) will be welcomed by many people. This will happen in less than 50 years - probably a decade. I think we see Tesla is not ready, but it is all getting closer.

With cars lasting 15 to 20 years, the switch-over is going to be brutal. It is going to take tough laws and significant incentives to switch it over faster.
 
Driving to Lake Tahoe in a Tesla last Monday, DH and I noticed that a big rig hauling a trailer was pulling up alongside us in the right-hand lane, but the car's sensors were not picking it up. "Too tall for the sensors," DH opined. Eventually, as the big rig continued passing ahead of us, a silhouette of the truck appeared on our dashboard display. I asked, "Do you think that if I asked autopilot to change lanes, it would drive right into the side of a big-rig trailer?" "Yup!" DH replied.

Think I'll leave the driving to us. The adaptive cruise control works pretty great though.

The fact that the car kept driving after the collision was disturbing, however.

It's still a very new technology and I would hope that folks that are using it are VERY aware of this and are paying attention and are ready to take over. The technology will improve and many of us will see it adopted by the masses. I do think the days of truly sitting back and doing nothing won't happen for a very long time. We have had pretty good autopilots on airplanes for a while now, but it's used (or should be used) with a healthy dose of human supervision and will continue to for many years.
 
A project that I worked on was an autopilot with Autoland capability for a commercial jetliner.

The Autoland was the most useful feature to pilots during foggy weather, when they could not see the runway. The system used redundant sensors throughout to allow this, such as dual radar altimeters, dual ILS receivers, inertial sensors, etc... (this was 35 years ago, and GPS was still experimental and not functional). If the dual sensors disagreed, the system would drop out and the pilots had to punch "Go Around".

In order to sustain a sensor hiccup and still be able to land, the system would need a 3rd set of sensors, as it needed sensor duality all the time. And of course, the computers also needed to be triple, not dual.
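The duality-versus-triplex point can be illustrated with a toy mid-value-select voter, a common pattern in redundant flight controls (this is an illustrative sketch, not the actual avionics logic):

```python
# A minimal sketch of why three sensors can tolerate one fault while
# two cannot: with three readings, mid-value selection simply outvotes
# a single wild value. With only two, you can detect a disagreement
# but cannot tell which reading to trust, so the system must drop out.

def mid_value_select(a: float, b: float, c: float) -> float:
    """Return the median of three redundant sensor readings."""
    return sorted([a, b, c])[1]

# Two healthy radar altimeters read about 100 ft; one has failed high.
# The voter passes through a healthy value and ignores the bad channel.
print(mid_value_select(100.2, 99.8, 250.0))  # 100.2
```

That is the intuition behind "sensor duality all the time": after one failure, a triplex system degrades to duplex and can still cross-check, while a duplex system is down to a single unverifiable channel.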

Actuators or servo drives were all dual. Even warning indicators had dual bulbs in case one lamp burned out, else pilots would not know something was amiss. Serious malfunctions were alerted with aural warnings, either an audible horn or voice warning.

Ultimately, the cockpit had 2 humans inside, just in case. :)
 
A project that I worked on was an autopilot with Autoland capability for a commercial jetliner.

The Autoland was the most useful feature to pilots during foggy weather, when they could not see the runway. The system used redundant sensors throughout to allow this, such as dual radar altimeters, dual ILS receivers, inertial sensors, etc... (this was 35 years ago, and GPS was not functional). If the dual sensors disagreed, the system would drop out and the pilots had to punch "Go Around".

In order to sustain a sensor hiccup and still be able to land, the system would need a 3rd set of sensors, as it needed sensor duality all the time. And of course, the computers also needed to be triple, not dual.

Actuators or servo drives were all dual. Even warning indicators had dual bulbs in case one lamp burned out, else pilots would not know something was amiss. Serious malfunctions were alerted with aural warnings, either an audible horn or voice warning.

Airplanes are a good analog. They have V2I (the airplane interacts with information from infrastructure, such as glide-slope signals). They have V2V (collision avoidance systems). Additionally, they have air traffic control, for which there is very little analog for cars.

And still, look at where we are. How many commercial airliners are there compared to cars? Or even general aviation aircraft? Far fewer than cars. General aviation pilots are extremely well trained compared to our car drivers, so they follow the rules and stay out of the way of commercial traffic. Do we expect the typical bozo driver to be so aware? There is a lot of work to do.

I feel very bad for the Tesla driver, but from what I see of his videos, he was very much sold on it. We really need to explain the reality better.

This is a weird diversion: but as an engineer, a turning point in my profession was the Challenger Accident of 1986. Until then, I was ready to assume engineers were invincible.

They are not. Tesla is not invincible. Elon Musk is not invincible. NASA engineers are not invincible. Read some history. Be real.
 
Living in a town of 4 million, half of whom are low-income immigrants, what are people who can't afford an expensive high-tech car loaded with redundant computer systems going to drive? I suspect many folks will keep their old cars and trucks for long-term use. Many of them have to go to work each day, of course.

And consider our lawn service, which depends on multiple stops, pulls a trailer full of lawn equipment, and is sometimes called upon to back into a wooded area to get access to tree branches. Boy, that would be a challenge for automated driving.

Will there be "self-driving kits" to adapt an old pickup truck into a high-tech unit? And pulling a boat out of a lake while managing trailer brakes - gosh, that would be exciting with a Tesla pickup on autopilot!

I see lots of daily uses associated with vehicles that are not easily programmed like just point A to B.
 
Airplanes are a good analog. They have V2I (the airplane interacts with information from infrastructure, such as glide-slope signals). They have V2V (collision avoidance systems). Additionally, they have air traffic control, for which there is very little analog for cars.

And still, look at where we are. How many commercial airliners are there compared to cars? Or even general aviation aircraft? Far fewer than cars. General aviation pilots are extremely well trained compared to our car drivers, so they follow the rules and stay out of the way of commercial traffic. Do we expect the typical bozo driver to be so aware? There is a lot of work to do.

I feel very bad for the Tesla driver, but from what I see of his videos, he was very much sold on it. We really need to explain the reality better.

This is a weird diversion: but as an engineer, a turning point in my profession was the Challenger Accident of 1986. Until then, I was ready to assume engineers were invincible.

They are not. Tesla is not invincible. Elon Musk is not invincible. NASA engineers are not invincible. Read some history. Be real.

No, as an engineer I never thought that we were invincible. It may be because of my first job being on the R&D for the aforementioned autoland autopilot. We took things very seriously, so I have been very alarmed with the cavalier attitude of many driverless car developers. Excuse me, but I think some of them border on stupidity.

The public also loves all this high tech stuff without knowing about their limitations. Look again at civilian air transport. We did not get there without regulation or at least guidelines to tell the manufacturers and also the airline operators what minimum safety measures they must satisfy.

When I was starting out, we were still flying with 3 guys in the cockpit. Remember the flight engineer in the jump seat? It took a while before the FAA allowed commercial jetliners to fly with 2 pilots. Also, back then, long-range aircraft overflying oceans had to have at least 3 engines; it took a while for people to accept that jet engines had become reliable enough that jetliners could be built with only 2. The standard used then (and probably still now) is that we had to demonstrate by study, analysis, and simulation that the probability of a crash on a given flight is no more than 10^-9, or one in a billion. That's how tough it is.
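Redundancy is how a budget like that gets met. A toy calculation (illustrative numbers only, and it assumes the channels fail independently, which real certification analyses must justify):

```python
# If a single channel fails with probability 1e-3 per flight, and the
# system only fails when every independent channel fails together,
# each added channel multiplies the failure probability down.
p_channel = 1e-3

p_duplex = p_channel ** 2    # two independent channels, ~1e-06
p_triplex = p_channel ** 3   # three independent channels, ~1e-09

print(p_duplex)
print(p_triplex)   # in the one-in-a-billion class the standard demands
```

This is why the autoland systems described above pile on dual and triple sensors, computers, and actuators: no single channel comes anywhere near the required number on its own.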

Cars probably do not need to be as good as civilian jetliners, but what should the expectation be? What is the minimum requirement? We do not have anything regarding autopilots the way we have requirements for how bright the headlights or brake lights must be, the strength of the seat belts, etc... Anything goes! Should we now demand that they be able to detect white semi-trailers so as not to drive under them?

By the way, the military projects I worked on were not held to the same stringent standard. This makes sense, because military aircraft get shot at (the risk of being shot is a lot higher than the 1-in-a-billion probability). It's also one or two pilots at risk vs. 300 passengers plus people on the ground. Still, if a military aircraft keeps crashing, we would have a hard time getting recruits to fly it. They are not stupid.

I have a lot of libertarian blood in me, but I can see that oversight and regulation in areas such as public transportation, medicine, and civil aviation are what separate us from 3rd-world countries. There, everybody is allowed to do anything to make money, and if naive consumers die because of defective products, tough luck! We would want better.
 
...

This is a weird diversion: but as an engineer, a turning point in my profession was the Challenger Accident of 1986. Until then, I was ready to assume engineers were invincible.

They are not. Tesla is not invincible. Elon Musk is not invincible. NASA engineers are not invincible. Read some history. Be real.

I'll +1 NW-Bound's comment - us (we?) engineers are certainly not invincible (especially when it comes to grammar!).

I'm going from not-so-good memory here, so I'll keep the numbers general to illustrate - don't take them literally. But from what I recall of my reading on Feynman's analysis of the Challenger disaster, it was much more a management failure than an engineering failure.

Feynman interviewed the engineers of each major subsystem individually, and they estimated the chance of a catastrophic failure as, say, 1/100. So with 10 subsystems, that is roughly a 1/10 chance of a catastrophic failure. But management took each number, added a bunch of imaginary safety factors they pulled out of their @$$, and came up with much higher reliability numbers. IIRC, management's numbers said you could launch a shuttle every week for decades before you ever saw a failure. That just doesn't pass the common-sense test.

https://en.wikipedia.org/wiki/Space_Shuttle_Challenger_disaster#Richard_Feynman

bold mine
Feynman was critical of flaws in NASA's "safety culture", so much so that he threatened to remove his name from the report unless it included his personal observations on the reliability of the shuttle, which appeared as Appendix F.[58] In the appendix, he argued that the estimates of reliability offered by NASA management were wildly unrealistic, differing as much as a thousandfold from the estimates of working engineers.
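The rough arithmetic above can be made concrete (using the illustrative round numbers from this post, not Feynman's actual figures):

```python
# If each of 10 major subsystems independently has a 1/100 chance of
# catastrophic failure per flight, the chance that at least one fails
# is 1 minus the chance that all of them survive.
p_subsystem = 1 / 100
n_subsystems = 10

p_mission_failure = 1 - (1 - p_subsystem) ** n_subsystems
print(f"{p_mission_failure:.3f}")  # 0.096 -- roughly the 1-in-10 above
```

(For small probabilities this is close to simply adding them up, 10 x 1/100 = 1/10, which is why the back-of-the-envelope works.)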

-ERD50
 