Self-Driving Cars?

I was answering Totoro's question about some mistakes being allowed, and gave him some examples.

No, some mistakes are definitely not allowed. That's why we do not have Level 4 and 5 cars yet, right? They are still working on it, right?
It's probably just me, but you seem to write as though you think the various developers haven't thought of even the most obvious requirements for safe self-driving cars, and that self-driving cars may cause a net increase in accidents over today's cars. I may be wrong, but I think that's highly unlikely; in fact, just the opposite. Self-driving cars won't eliminate all accidents, but they're being developed to dramatically reduce them compared to today's cars. They won't be released to the public otherwise, and if the makers get it wrong, the cars will be pulled off the road almost immediately. IOW, you're reluctant to keep "don't let the perfect be the enemy of the good" in mind.

As another poster showed, despite the (few) Tesla accidents, the evidence is that Teslas have had fewer accidents than comparable conventional cars - despite a few "ignoramuses" in Teslas.
 
It's probably just me, but you seem to write as though you think the various developers haven't thought of even the most obvious requirements for safe self-driving cars.
I'm also a bit pessimistic, but not to the level you describe.

Having worked in software development, I've seen a lot to make me skeptical. Of course the developers have thought about these things. But they also have been given aggressive deadlines and cost targets. Competitive pressure is enormous, and things fall through the cracks. This concerns me for code that is going to control small land missiles.

I think what Tesla has done is fantastic and I cheer on the engineering. But there is a lot more work required for true autonomy in congested environments.

Of course, Millennials will be doing the coding and they know everything so, no worries, it will be perfect. :) [Just a joke that refers to some other threads.]
 
Having worked in software development, I've seen a lot to make me skeptical. Of course the developers have thought about these things. But they also have been given aggressive deadlines and cost targets. Competitive pressure is enormous, and things fall through the cracks. This concerns me for code that is going to control small land missiles.
I'm not discounting the potential vulnerabilities; they can be real. But for every software shortcoming you know of, how many more successful software implementations have there been? It's pretty extraordinary how many tasks are routinely and successfully handled by software these days vs. 30 years ago.

And repeating myself, but car makers are well aware of the consequences of getting this wrong - the potential $ liabilities, and the long-term damage to their brands.

For those who point out potential flaws, it's nice to acknowledge the successes as well, for perspective. To do otherwise is what I call "proof by exception" - a tired old trick that many people fall for. There will never be a shortage of critics; criticism is easy, anyone can do it - no credentials required. Thank goodness innovators don't let it deter them. Innovation has moved us all forward remarkably, despite some failures along the way.

But this has been an interesting, thought provoking thread, thanks to all the POVs.
 
Incidentally, one of the cited reasons for splitting off and leaving apparently was that quite a few employees hit big milestones and received big payouts (several million, I think). Suddenly they were FI!

It was widely reported this way, but it is completely incorrect. There were some large payouts, but the engineers affected didn't leave to retire; they left because the market rate for their skills was higher than what they were being paid after the bonus was no longer in effect. Perhaps the huge bonuses contributed to the market rate for these skills rising precipitously, but it was the lure of huge future salaries at other places that made most of them leave.
 
It's probably just me, but you seem to write as though you think the various developers haven't thought of even the most obvious requirements for safe self-driving cars...

NOT AT ALL!

Again, I am sure car developers have been working these issues. And again, that is why they do not have Level 4 and 5 cars out now. I try to point out to laymen why it is so tough to drive inside a city, as opposed to on the freeway, and to handle things that seem simple for a human. Big difference. Things that are difficult for a human are easy for a computer. And vice versa!

Freeway driving was achieved long ago. Yet they have been working the tougher driving problems for years.

Excerpt from Wikipedia:
The first self-sufficient (and therefore, truly autonomous) cars appeared in the 1980s, with Carnegie Mellon University's Navlab and ALV projects in 1984 and Mercedes-Benz and Bundeswehr University Munich's Eureka Prometheus Project in 1987. A major milestone was achieved in 1995, with CMU's NavLab 5 completing the first autonomous coast-to-coast drive of the United States.
 
I do not know enough about the various autopilot/autothrottle modes of the 777 to know what modes the crew was selecting.

They accidentally selected the option to initiate a go-around; i.e., throttle up and climb to the designated go-around altitude (which I believe was 3,000 ft. in this accident). This was before they knew they were going to crash - they accidentally did this while they were still on a normal approach. As soon as the flight computer began advancing the throttles, the pilot manually grabbed the throttle levers and pulled them back, but left the auto-pilot mode where it was. This unusual sequence of commands is what caused the autothrottle to disengage.

Hmm... I wonder what happened to the stall warning subsystem.

Nothing. It worked as designed. Their airspeed didn't approach the threshold for stalling until they desperately tried to pull the nose up, just as they realized they weren't going to make the runway. At that point, the stall warning did indeed sound, and the stick shaker engaged, but only briefly, as the crash occurred almost immediately thereafter.

In an airplane with less automation, pilots are trained to be constantly "scanning" their instruments, ensuring their heading hasn't drifted, they haven't inadvertently climbed or descended away from their assigned altitude, their airspeed is in the safe range for conditions, etc. When pilots become overly reliant on automation, they become complacent and just assume the computer will take care of all of those things. Their airmanship skills suffer, and they miss critical warning signals that would be obvious to a pilot less accustomed to relying on automation.

In the Asiana Air crash, the pilots had a clear indicator that they were too low (the PAPI lights along the edge of the runway) and that their speed was falling dangerously low (the airspeed indicator was, of course, functioning perfectly normally - they just weren't paying any attention to it).
 
There are plenty of drivers that can stop faster than the best current ABS brake technology.

Well, that certainly can't be true. "The best" current ABS technology involves independent sensors on all 4 wheels that sample the rotation rate literally 1,000 times per second and adjust caliper pressure, independently for each individual wheel, to ensure maximum frictional engagement of the tire on the road surface. A driver has only one brake pedal applying the same braking force to all 4 wheels, and can't react nearly as quickly or as often as a computer.
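
To make the comparison concrete, here is a toy sketch of the per-wheel slip-control idea, in Python. It is grossly simplified and entirely hypothetical - no manufacturer's actual controller looks like this - but it shows why a computer sampling each wheel 1,000 times a second can out-brake a single pedal:

[code]
# Hypothetical, grossly simplified per-wheel ABS control loop (illustration
# only - real controllers are far more sophisticated and safety-certified).

def wheel_slip(vehicle_speed, wheel_speed):
    """Slip ratio: 0 = wheel rolling freely, 1 = wheel fully locked."""
    if vehicle_speed <= 0:
        return 0.0
    return max(0.0, (vehicle_speed - wheel_speed) / vehicle_speed)

def abs_tick(vehicle_speed, wheel_speeds, pressures, target_slip=0.2, gain=0.5):
    """One control tick (run ~1,000x per second): nudge each wheel's caliper
    pressure toward the slip ratio assumed here to give peak friction (~0.2)."""
    new_pressures = []
    for ws, p in zip(wheel_speeds, pressures):
        slip = wheel_slip(vehicle_speed, ws)
        # Too much slip -> wheel is locking -> release pressure; too little -> add.
        p = p + gain * (target_slip - slip)
        new_pressures.append(min(max(p, 0.0), 1.0))  # clamp to [0, 1]
    return new_pressures

# Example: left-front wheel nearly locked (5 m/s) while the car does 30 m/s.
print(abs_tick(30.0, [5.0, 29.0, 29.5, 28.0], [0.8, 0.8, 0.8, 0.8]))
[/code]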


Let's not let nostalgia overrule sound science.
 
Autopilot and autothrottle modes are always announced on the display panels, in case the pilots forget.

The pilot-flying's flight display was turned off.

I am sure that was annunciated in case the pilots forgot what was supposed to happen.

I don't believe there was an aural indication that autothrottle had been disengaged in this incident, but I could be wrong.

But they were too busy to look at the displays probably.

Indeed they were! They were on very short final at an airport the pilot flying had never landed at before!

This accident is a textbook example of utterly atrocious Crew Resource Management (CRM). The instructor pilot had no experience actually instructing, neither pilot was calling out what he was doing or why, nobody was verifying the other's actions... even when the pilot accidentally selected "Go-around" mode on the autopilot, instead of verbally announcing his mistake and how he intended to correct it, he just grabbed the throttles and pulled them back. Nor did the instructor ask what had just happened, or why the throttles advanced and then retarded. This was a catastrophic blunder by two people who both should have known better.
 
I'm not discounting the potential vulnerabilities; they can be real. But for every software shortcoming you know of, how many more successful software implementations have there been? It's pretty extraordinary how many tasks are routinely and successfully handled by software these days vs. 30 years ago.

And repeating myself, but car makers are well aware of the consequences of getting this wrong - the potential $ liabilities, and the long-term damage to their brands.

For those who point out potential flaws, it's nice to acknowledge the successes as well, for perspective. To do otherwise is what I call "proof by exception" - a tired old trick that many people fall for. There will never be a shortage of critics; criticism is easy, anyone can do it - no credentials required. Thank goodness innovators don't let it deter them. Innovation has moved us all forward remarkably, despite some failures along the way.

But this has been an interesting, thought provoking thread, thanks to all the POVs.

It isn't just the flaws. It is the completeness of the solution. As a software guy myself, I celebrate and acknowledge the current achievements. They are fantastic!

But there's a lot of work to do. It isn't just flaws. It is standards, it is complexity, it is sensor development (hardware is involved too), and much more. This will take time. This will take money. This will take government cooperation on a worldwide scale.
 
There's a lot of money being thrown at it right now.

They're getting the best engineers money can buy.
 
About the Asiana crash: I should have gone straight to the NTSB Web site. The Wikipedia page has misleading information.

See: https://www.ntsb.gov/news/events/Pages/2014_Asiana_BMG-Abstract.aspx

They accidentally selected the option to initiate a go-around; i.e., throttle up and climb to the designated go-around altitude (which I believe was 3,000 ft. in this accident).
No, they did not accidentally initiate a go-around.

What really happened was that they were too high above the glide slope when they were 5 nm out from the runway. In trying to descend, they tried to slow down by pulling the throttle levers back. This action overrode the Autothrottle, which had been controlling speed; it went into HOLD mode and stayed inactive. This was not noticed by the aircrew.

While they got down to the glide slope, the airspeed kept decreasing, but they failed to notice it. Eventually they were way below the glide slope and losing speed at the same time. At 200 ft altitude, they should have done a go-around, but they tried to salvage the landing. At 100 ft, they gave up and did a go-around. The Autothrottle probably responded properly and advanced the throttles, but large turbofan engines have a very long spool-up time, and it was too late.
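
For the software-minded, here is a toy state machine of the mode trap described above. The mode names and transitions are simplified illustrations of my description, not actual Boeing 777 autothrottle logic:

[code]
# Toy model of the mode behavior described above (simplified illustration;
# NOT actual Boeing 777 autothrottle logic).

class Autothrottle:
    def __init__(self):
        self.mode = "SPD"  # actively controlling airspeed

    def pilot_moves_levers(self):
        # Manually overriding the levers while the system is controlling
        # speed drops it into HOLD: the levers stay where the pilot left
        # them, and airspeed is no longer protected.
        if self.mode == "SPD":
            self.mode = "HOLD"

    def protects_airspeed(self):
        return self.mode == "SPD"

at = Autothrottle()
at.pilot_moves_levers()        # crew pulls the throttles back on approach
print(at.mode)                 # HOLD - annunciated, but easy to miss
print(at.protects_airspeed())  # False - airspeed is now unprotected
[/code]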

Nothing. It worked as designed. Their airspeed didn't approach the threshold for stalling until they desperately tried to pull the nose up, just as they realized they weren't going to make the runway. At that point, the stall warning did indeed sound, and the stick shaker engaged, but only briefly, as the crash occurred almost immediately thereafter.

On the aircraft system that I worked on, the system would warn the pilots if they let the approach speed drop too low, and gave them a lot of margin above stall speed. The 777's equivalent function was not effective at warning the pilots of this condition during the landing phase, and the NTSB did make a note of that.

In the Asiana Air crash, the pilots had a clear indicator that they were too low (the PAPI lights along the edge of the runway) and that their speed was falling dangerously low (the airspeed indicator was, of course, functioning perfectly normally - they just weren't paying any attention to it).
The landing phase is the most critical segment of flight, and the pilots' workload is very high. Hence aircraft designers try to provide all kinds of warnings. In this case, the warning was "weak".

The pilot-flying's flight display was turned off.

I don't believe there was an aural indication that autothrottle had been disengaged in this incident, but I could be wrong.

It appeared that they were flying manually and relying on the Autothrottle to take care of the aircraft's speed. Unfortunately, they overrode it early in the approach; it stayed inactive, and they were not aware of it.

If the Autopilot was off, then of course the corresponding display would be off. It's a big no-no to not know what the Autopilot is doing. I would be very surprised if they were allowed to turn the flight mode annunciator off. The Autothrottle was most likely on and annunciating HOLD, but that was ignored.
 
CA set to allow self-driving-car tests without ride-along human. Let the speculative hand wringing [-]begin[/-] continue...
Not sure what the hand wringing comment is about. Mine are pretty dry right now. :)

My guess would be starting in non-congested areas in specific zones. Makes sense.
There have been some one-sided, pessimistic posts (without supporting data, more proof by exception) on this thread - and that's fine. There are other POVs.
 
There have been some one-sided, pessimistic posts (without supporting data, more proof by exception) on this thread - and that's fine. There are other POVs.

My hands are dry, but don't worry, I'm still pretty pessimistic. Maybe my hands need some wringing. :)

Too many years of engineering development to be otherwise. BTW, the term "1%" is thrown around a lot. That last 1% of designing and developing self-driving scenarios is going to be a bear.
 
CA set to allow self-driving-car tests without ride-along human. Let the speculative hand wringing [-]begin[/-] continue...

https://www.cnet.com/news/california-self-driving-cars-tests-rules/

It will be interesting to watch (from a distance). See the excerpt from the same article. The Waymo cars I often see around my residential neighborhood still have at least one human in them. No hand wringing yet.

The day of Uber's launch in San Francisco, one of the self-driving cars was filmed running a red light. Similar incidents were reported throughout the city in the following days.
 
I look at it a bit differently. I think V2V will be essential for the switchover period we were speaking of. Cars ahead can signal to the cars behind: "Manual driver jerk ahead, slow down."

And once fully autonomous: "Deer on side of road, be ready, reduce pack speed." Or: "Tire inflation warning detected on my car, slow down, be ready for evasive action." Those kinds of things.
If you use Google Maps or Waze you get this now. Any detour, slowdown, debris in road, car on shoulder or cop is reported to all users in the vicinity. This should be easy for autonomous cars...
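
As a sketch of the kind of payload such a report might carry - field names invented for illustration, not any actual Waze or V2V message format:

[code]
# Hypothetical hazard-report message, whether crowd-sourced from a phone
# app or broadcast car-to-car. Field names are invented for illustration.

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HazardReport:
    kind: str           # e.g. "debris", "deer", "erratic_driver"
    lat: float
    lon: float
    reported_at: float  # Unix timestamp
    source: str         # "crowd" (phone app) or "v2v" (car-to-car radio)

report = HazardReport(kind="debris", lat=37.77, lon=-122.42,
                      reported_at=time.time(), source="crowd")
print(json.dumps(asdict(report)))  # what gets relayed to the cars behind
[/code]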
 
It isn't just the flaws. It is the completeness of the solution. As a software guy myself, I celebrate and acknowledge the current achievements. They are fantastic!

But there's a lot of work to do. It isn't just flaws. It is standards, it is complexity, it is sensor development (hardware is involved too), and much more. This will take time. This will take money. This will take government cooperation on a worldwide scale.
I'm a retired software developer as well. Pedestrian detection is just one tiny example of where they have to be right, because 99.99% is not even good enough. You can't kill or hospitalize 1 out of every 10,000 people.
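
A quick back-of-the-envelope calculation shows why. Every number below is a made-up assumption, purely to show how fast a small per-encounter failure rate compounds across a fleet:

[code]
# Back-of-the-envelope illustration - every number here is an assumption.

detect_rate = 0.9999         # assumed per-encounter detection probability
encounters_per_day = 200     # assumed pedestrian encounters per car per day
fleet_size = 1_000_000       # assumed number of cars on the road

misses_per_day = fleet_size * encounters_per_day * (1 - detect_rate)
print(f"Expected missed detections per day: {misses_per_day:,.0f}")
# -> 20,000 misses per day at "99.99%". Most would be harmless, but that
#    is far too many chances for a bad outcome.
[/code]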

Here is a picture from a quick google search of: mobileye pedestrian detection

I think the trouble is not recognizing pedestrians; it is how to react to them, or to multiples of them. Pedestrians don't always follow crosswalk rules!

[image: Mobileye pedestrian detection demo frame]
 
I'm a retired software developer as well. Pedestrian detection is just one tiny example of where they have to be right, because 99.99% is not even good enough. You can't kill or hospitalize 1 out of every 10,000 people...

Nah! A minor sacrifice on the altar of technology. How can we advance (and make some money) without losing some lives? ;)
 
I think the trouble is not recognizing pedestrians...

[image: Mobileye pedestrian detection demo frame]



I beg to differ. In the above picture, several people were not recognized.
 
If you use Google Maps or Waze you get this now. Any detour, slowdown, debris in road, car on shoulder or cop is reported to all users in the vicinity. This should be easy for autonomous cars...

Maybe - if the mobile phone infrastructure can be made more pervasive, reliable, and fast. The backhaul time is currently too slow to work well for critical events (a rock just fell on the curve ahead). The good news is that 5G may handle the fast part. Build-out? Many areas of the country are not covered. Reliability? Well, the mobile phone infra isn't built out to mission-critical standards yet. V2V can tolerate single-point failures (one car fails) and still work reliably.

The future 20 years hence could solve this, if the mobile phone infra is regulated to be mission-critical for this purpose.
 
If you use Google Maps or Waze you get this now. Any detour, slowdown, debris in road, car on shoulder or cop is reported to all users in the vicinity. This should be easy for autonomous cars...

So, what happens to the 1st car that comes upon a scene?

Right now, I stay alert when driving, and am responsible for detecting anything on the road that I come across.

If an autonomous car drives itself and takes away my steering wheel and brake pedal, should it do any less than what I am doing now?
 
I would like to see some numbers on pedestrian detection effectiveness of human drivers. Pretty sure it isn't 100%.
 
I would like to see some numbers on pedestrian detection effectiveness of human drivers. Pretty sure it isn't 100%.
No, it's not. Pedestrians and bicyclists get mowed down daily. :LOL:

But how can we be sure that the current test cars are better? Just hand waving, saying that computers are fast and technology is cool? :cool:

PS. Seriously, human drivers run over pedestrians because they just do not care, not because they try and fail to detect them. Occasionally, some creeps even run people down intentionally (good detection there!). Computer systems, on the other hand, always try to do the right thing, but may fail.
 
I apologize if this fully autonomous driving Chevrolet Bolt video was already posted. I thought it was pretty impressive, and it dealt with dozens of subtle things.

Within YouTube you can use the gear icon to slow it down to 1/2 or even 1/4 speed. It is moving pretty fast at "normal" speed.

http://www.detroitnews.com/story/bu.../2017/01/19/self-driving-bolt-video/96784678/
General Motors Co.’s Cruise Automation company has posted a video of a fully autonomous ride in San Francisco in a Chevrolet Bolt EV that included GM President Dan Ammann in the backseat.

 