Self Driving Cars?

Quote:
Originally Posted by growing_older
But we do not live in a world like that. Even without the tech, drivers are NOT kept engaged. There are hundreds of accidents daily from unengaged drivers. Why do you insist that automation must be better than some theoretical automation plus some theoretical perfect driver, when we already know that drivers do not and cannot be counted on to attain that level of engagement? They won't do it now with ZERO automation.

And then we can start discussing how engaged drivers can make mistakes that the automation might have avoided.
Quote:
Originally Posted by ERD50
You missed my point. I'm saying that the tech should keep the driver engaged. It's certainly do-able, and far easier to do than autonomous.

I'm betting it isn't getting the press, because it isn't as 'sexy' as a self-driving car. What, you want me to pay attention!?

-ERD50
I don't know why we are not connecting on this. I'll try again:

Certainly, an almost-autonomous driving system which calls on human assistance when it encounters a problem it cannot resolve is easier to develop. Further, over time the number of unresolvable issues will shrink and the system can get gradually better. But taking such an approach has problems which are not technical, but are critical. Drivers are somewhat inattentive at the best of times. Even if a self-driving car did its best to keep drivers engaged, they are still likely to be no more engaged than they are now, when their attention is already required 100% of the time on pain of death. Encouraging them to text, email, read, and watch movies, yet be instantly ready to take over, is a plan doomed to fail.

Further, any accident a semi-self-driving car gets into due to the error of a not-as-engaged-as-we-hoped driver will still be attributed to self-driving technology problems, and will set the field back a dozen years. Many engineering teams working on this technology have come to the same conclusion. That's one compelling reason why they are all targeting so-called level 5 autonomous cars.

Average drivers are pretty poor, accident-prone drivers. There is nothing a self-driving car could do to turn an average driver into your ideal model of an alert, engaged driver. The best it could hope for would be for the fully autonomous car to NOT request any human intervention at all.

And lastly, the promise of "self-driving" to bring mobility to infirm, elderly, blind, or simply non-driving people is completely defeated if the "self-driving" car requires a driver on standby.
 
I may be the only autonomous car geek here willing to sit through these long videos, but I found them very interesting and informative...
Thanks for posting.

Am I not the one who tried to get people to watch Google's TED talk earlier? These guys are the leader in this technology. And I am not an autonomous car geek, just a curious engineer who tries to understand this technology.

The first video is an expanded version of the TED talk. I did not learn anything new, except for the explanation of the incident where Google's test car hit a bus. It was an ambiguous scenario, where the computer made a presumption about the bus driver's intent, and that presumption turned out to be false.

The 2nd video is the most interesting, as Waymo's CEO introduced their own in-house sensor suite. It is installed on a fleet of Pacificas that is being tested. This is what I encountered several times when walking around my neighborhood. I thought I saw a lot of stuff poking out from the car, but could not make it all out.

Anyway, instead of just one roof-mounted medium-range lidar, they now have the following suite:

* Roof-mounted lidars: one long-range and one medium-range, 360-deg view
* Four small lidars: one front down-looking, two side-looking, one rear-looking
* Roof-mounted radar, 360-deg view
* Side-looking radars, mounted at 4 corners.
* Vision camera, top mounted, 360-deg view
* Vision camera, front-looking, high-resolution

Holy moly! That's a lot of sensors. They have all 3 types - lidars, radars, and vision cameras - covering 360 degrees. The roof-mounted lidars are for driving, and they cannot see down to objects close to the car, hence the 4 smaller lidars. The car is not going to run over, or back over, something that is close to it.
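Just to organize what he described, here's a toy inventory of that suite as a data structure. The names, fields, and groupings are my own guesses from the video, not Waymo's:

```python
# A toy inventory of the sensor suite described above.
# Names, fields, and groupings are my own guesses, not Waymo's.
from dataclasses import dataclass

@dataclass
class Sensor:
    kind: str      # "lidar", "radar", or "camera"
    mount: str     # where it sits on the vehicle
    coverage: str  # field of view / look direction

SUITE = [
    Sensor("lidar",  "roof",        "360 deg, long range"),
    Sensor("lidar",  "roof",        "360 deg, medium range"),
    Sensor("lidar",  "front",       "down-looking, short range"),
    Sensor("lidar",  "left side",   "side-looking, short range"),
    Sensor("lidar",  "right side",  "side-looking, short range"),
    Sensor("lidar",  "rear",        "rear-looking, short range"),
    Sensor("radar",  "roof",        "360 deg"),
    Sensor("radar",  "4 corners",   "side-looking"),
    Sensor("camera", "roof",        "360 deg"),
    Sensor("camera", "front",       "high resolution"),
]

# Three independent sensing modalities, each with 360-degree coverage:
for kind in ("lidar", "radar", "camera"):
    print(kind, sum(1 for s in SUITE if s.kind == kind), "entries")
```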

I love it. Just as I said in an earlier post: when safety is foremost, you cannot be cheap. You need lots of sensors, and these new sensors have far better resolution than what is on the market now, to boot. They hired people to build their own sensors and to make them better.

He said that the single lidar they used cost $75K, but they could make one for $7.5K. That is still only one sensor; how much the entire suite costs, he did not say.

I would feel very safe being in this car for level-2 or level-3 self-driving, even if the software is not yet smart enough to take it to level 4.

[Image: Waymo's fully self-driving Chrysler Pacifica Hybrid]
 
I don't know if the new cars have this, but the car I was familiar with had a roof-mounted lidar that could retract into the vehicle in storms, etc.
 
You are right that there are still many obstacles to overcome to achieve full 100% autonomy. However, 5,000 miles between human interventions is exactly 5,000 more miles than my current car, which requires human intervention for every single inch. In those terms, it's an astounding level of achievement. In some sense, it is, for all practical purposes, already fully autonomous! By that, I mean that the current state of the technology is already so far ahead of 99.5% of all cars on the road that it will *seem* as if it is already autonomous...

The severity depends on what the car is doing when the human has to take over.

If, while you are cruising at 70 mph and getting complacent after a couple of hours of uneventful driving, the car suddenly veers toward the ditch by the side of the road, you may not be able to react even if you are not dozing off. Or if, while the car is stopped at an intersection, it suddenly accelerates into traffic, you may not be able to stomp on the brake in time, even if you are not texting on your phone. In both cases, you would not like it if the car maker said it's your responsibility to stay alert and override it, even if they warned you that it can happen at any time, once every 5,000 miles on average.
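To put a number on the 70-mph case: the car covers a lot of ground during even an alert driver's reaction time. A quick back-of-the-envelope, using a commonly cited 1.5-second reaction time (my figure, just for scale):

```python
# Back-of-the-envelope: ground covered during driver reaction time.
# The 1.5 s reaction time is a commonly cited ballpark, not a measured figure.
speed_mph = 70
reaction_s = 1.5

speed_fps = speed_mph * 5280 / 3600   # mph -> feet per second
distance_ft = speed_fps * reaction_s

print(f"At {speed_mph} mph the car covers {speed_fps:.0f} ft/s.")
print(f"A {reaction_s} s reaction time costs ~{distance_ft:.0f} ft "
      "- several car lengths before the human even touches the wheel.")
```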

Now, if they are going to take away your steering wheel and brake pedal too :) for a truly autonomous car, they will have to be very very good.
 
See below
He said that the single lidar they used cost $75K, but they could make one for $7.5K. That is still only one sensor; how much the entire suite costs, he did not say. In the SXSW video he went on to say that none of the sensors are exotic, 'not made with unobtainium,' so at volume they shouldn't be that expensive.

I would feel very safe being in this car for level-2 or level-3 self-driving, even if the software is not yet smart enough to take it to level 4. The Pacifica is most certainly a level 4-5 car! And all their efforts over the past eight years have been toward level 5, with fewer and fewer disengagements (human taking control).
Krafcik said "we are all in on fully driverless solutions, that's what we're all about." He added that driver assist technologies, which often ask humans to take back control of the car, don't interest Waymo. But the car might still have steering wheels and pedals, even if the vehicle is fully capable of driving itself.

 
No car maker will be confident enough to take away the means of human control for a loong time!

As ERD50 said, by the time they could do that in production cars, cars may not be needed anymore. We would be using some means of transportation not known at this time.

By the way, Waymo (Google) says it will continue to use human test drivers for a loong time too. They are not a bunch of cocky guys. They are smart enough to know the limitations of the technology. If I find that official statement again, I will post it.
 
The Pacifica is most certainly a level 4-5 car! And all their efforts over the past eight years have been toward level 5, with fewer and fewer disengagements (human taking control).

Waymo's test Pacifica has enough sensors to be a level-4 car.

But a level-4 car needs more than a suite of sensors. It would need multiple computers running off multiple power sources, multiple actuators, etc...

On the software side, it would need to know how to deal with situations where one sensor fails or has its performance degraded, such as lidar in rain and snow, and to reduce its driving envelope to stay safe, etc...
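As a rough illustration of what "reduce its driving envelope" might look like in software - purely my own sketch, not anyone's actual architecture:

```python
# Toy sketch of degraded-mode logic: if a sensing modality is impaired,
# shrink the operating envelope instead of failing outright.
# Entirely hypothetical; no real system is this simple.

def driving_envelope(lidar_ok: bool, radar_ok: bool, camera_ok: bool):
    """Return (max_speed_mph, action) for a given sensor health state."""
    healthy = sum([lidar_ok, radar_ok, camera_ok])
    if healthy == 3:
        return 65, "normal operation"
    if healthy == 2:
        # e.g. lidar degraded in heavy rain: lean on radar + camera,
        # but slow down and widen the margins
        return 40, "degraded mode: reduced speed, larger following distance"
    # One or zero modalities left: get off the road safely
    return 15, "minimal-risk maneuver: pull over and stop"

print(driving_envelope(lidar_ok=False, radar_ok=True, camera_ok=True))
print(driving_envelope(lidar_ok=False, radar_ok=False, camera_ok=True))
```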

As for level 5, we may not get there for a very long time. We will know more about that when we get to level 4.
 
I was wondering how various self-driving programs compare to each other so far, and how they compare so far vs humans. Here's some data for both. [My reaction re: the first question is 'wow, I had no idea,' and to the second is 'very encouraging progress, but there is no good comparison metric.']

https://www.driverless.id/news/2016...hing-competition-every-single-metric-0176110/

Using the lowest disengagement rate figures (from Waymo), the best autonomous vehicles would be about 32 times worse than the average human driver - or 16 times worse if unreported crashes are allowed for. That being said, using disengagements as a "crash imminent" measure is unduly harsh. As an experienced industry source pointed out to Driverless, one disengagement is not equivalent to one crash being prevented.

While it's good that these measures are in place to protect the public, it also means that we don't actually have a metric for how safe the autonomous vehicles are in comparison to a human driver.
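For anyone who wants to see where a figure like "32 times worse" comes from, here is the arithmetic, using ballpark inputs consistent with this thread (roughly one disengagement per 5,000 miles for the best program, and roughly 160,000 miles per police-reported human crash); the article's exact inputs may differ:

```python
# Rough reconstruction of the article's comparison. The inputs are ballpark
# figures consistent with this thread, not the article's exact data.
av_miles_per_disengagement = 5_000    # best reported disengagement interval
human_miles_per_crash = 160_000       # police-reported crashes only (rough)

ratio = human_miles_per_crash / av_miles_per_disengagement
print(f"Treating every disengagement as a crash: {ratio:.0f}x worse than human")

# If unreported fender-benders roughly double the human crash count,
# the human interval halves and the gap shrinks accordingly:
ratio_adj = (human_miles_per_crash / 2) / av_miles_per_disengagement
print(f"Allowing for unreported crashes: {ratio_adj:.0f}x worse")
```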
 

Yes, as I keep saying, Waymo is way out in front of the pack. And that is before they have this new sensor suite that they are just testing!

It is true that not all disengagements mean an imminent crash was avoided. Waymo says its test drivers can disconnect the autopilot if they encounter a situation they feel uncomfortable with and are not sure the car can handle by itself. The drivers are trained, and know what the car can and cannot do. Waymo will not use the public as guinea pigs.

Waymo test cars are fully instrumented, and sensor and video data are recorded. Engineers later look at the data and can run simulations to see what the car would have done if allowed to proceed, etc... These guys have impeccable methodology.
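A sketch of what that replay workflow might look like, as I understand it. Every name here is invented for illustration; the point is just re-running logged data through the planning software offline:

```python
# Hypothetical sketch of post-disengagement analysis: replay logged sensor
# frames through the planner offline to see what the car WOULD have done.
# Everything here is invented for illustration.

class ToyPlanner:
    def decide(self, frame):
        # Stand-in logic: brake if an obstacle is close, else cruise.
        return "brake" if frame["obstacle_dist_m"] < 10 else "cruise"

def replay_disengagement(log, planner):
    """Re-run each recorded frame through the planning software, offline."""
    return [planner.decide(frame) for frame in log]

# A fake 4-frame log around the moment the test driver took over:
log = [{"obstacle_dist_m": d} for d in (40, 25, 12, 8)]
print(replay_disengagement(log, ToyPlanner()))
# Engineers can compare this simulated behavior against what the driver
# actually did, and judge whether the takeover was really necessary.
```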
 
Here is a random question: to what extent could a remote driver take over, with regard to those 5,000-mile disengagement intervals?

If these events aren't of the "crash within 2 seconds" type but more of the "huh, I'm not too confident how to proceed from here in this situation" type, that's very different.

I'm sure the Waymo guys thought of that.
 
Looking at the demos in YouTube videos, I can see how releasing more level-2 cars to the wider public without some education could cause a lot of hazards.

For example, in one video demo by Mercedes, an ignorant commenter pooh-poohed the performance, saying "Look, the test driver has his hands loosely on the steering wheel. Terrible. With the Tesla car, you can take your hands off."

That Mercedes was navigating a test course with no lane markings (not a mere lane-marking follower here), and the driver was following the protocol of being ready to take over as needed.
 
Here is a random question: to what extent could a remote driver take over, with regard to those 5,000-mile disengagement intervals?

If these events aren't of the "crash within 2 seconds" type but more of the "huh, I'm not too confident how to proceed from here in this situation" type, that's very different.

I'm sure the Waymo guys thought of that.

Of course all these companies know the difference between disengagement scenarios. How many of each, you are asking? That's a lot of detail, and one has to look at each disengagement individually.

Even the guys with the most "imminent crash" types may not be the poor performers. They may be pushing the envelope further than the other guys.

Waymo is testing its new sensor suite. When they start to test degraded modes - in rain and snow, when their superduper lidars don't work well and they have to rely on their new radars - they will have more disengagements while they tweak the system. Nothing wrong with that. They are testing, remember? And that's what the trained test drivers are there for.
 
Some of Waymo's disengagements are due to hardware failures.

People keep forgetting that things fail. Don't your TVs, laptops, and cell phones fail? How often do they fail? Do you want an equivalent failure in a car to result in your death, or just an inconvenience? Of course it should be the latter.

The design has to be reviewed, analyzed, and tested. In the aircraft world, this is called FMEA (Failure Mode & Effects Analysis) and FTA (Fault Tree Analysis). The SAE engineers know all about this. It takes time and money.
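To give a flavor of what redundancy buys you in an FTA, here's a toy calculation with made-up failure probabilities:

```python
# Toy fault-tree arithmetic with made-up failure probabilities.
p_computer = 1e-4  # hypothetical chance one computer fails during a trip
p_power    = 1e-4  # hypothetical chance one power source fails

# Single-string design: either failure takes the system down (OR gate).
p_single = 1 - (1 - p_computer) * (1 - p_power)

# Dual-redundant design: the system is lost only if BOTH computers fail
# or BOTH power sources fail (AND gates feeding an OR gate).
p_dual = 1 - (1 - p_computer**2) * (1 - p_power**2)

print(f"single-string failure probability: {p_single:.1e}")   # ~2e-4
print(f"dual-redundant failure probability: {p_dual:.1e}")    # ~2e-8
```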
 
Some of Waymo's disengagements are due to hardware failures.

People keep forgetting that things fail. Don't your TVs, laptops, and cell phones fail? How often do they fail? Do you want an equivalent failure in a car to result in your death, or just an inconvenience? Of course it should be the latter.

The design has to be reviewed, analyzed, and tested. In the aircraft world, this is called FMEA (Failure Mode & Effects Analysis) and FTA (Fault Tree Analysis). The SAE engineers know all about this. It takes time and money.
Yep, and fortunately today's cars don't fail. And none of us were aware that systems can and do fail.

And the Waymo and other self-driving programs don't know anything about FMEA, FTA, 6-Sigma, Lean, or other problem-solving protocols - which are not unique to the aircraft industry, BTW. You should write and tell them, Captain O...
 
You never seem to get it, Midpack! Shall I repeat one more time?

Car engineers know about this. That's why they - well, most of them - don't want to release stuff when they are not confident. And when asked when it will be ready, they do not make a commitment.

I am trying to tell laymen that things are not as simple as they seem. It seems impossible to convey that a lot of things are harder than they look on the surface. Perhaps that's why car makers (except for one) are even more reluctant to release stuff.

How else can I rephrase the above?

PS. Even you, as a mechanical engineer, did not seem to realize that a level-4 car needs more than just a complete suite of sensors.
 
You never seem to get it, Midpack! Shall I repeat one more time?

Car engineers know about this. That's why they - well, most of them - don't want to release stuff when they are not confident.

I am just trying to tell laymen that things are not as simple as they seem.
I don't remember a post where someone seemed to believe development of fully self-driving cars was/is a simple proposition. Maybe we're not as naive as you assume. If this were a Tesla-bashing thread, your continual broad-brush criticisms without acknowledging progress might be more understandable.
 
... without acknowledging progress...

Did you really read what I wrote about Waymo? Not enthusiastic enough for you?

Why is it that any time progress is made, I am expected to jump up and down saying that a truly autonomous car is imminent?

Well, it could be, and well within current technology, but it is going to be expensive because of all that redundancy.

And when I said some people still would not mind paying for such an expensive car, because it suits their needs and they can afford it, you acted offended.

What's your problem?
 
The downside is that eventually all vehicles will be connected to Big Brother. Your comings and goings could be subject to the restrictions and whimsy of some government agency.


I'd just have the car take me to a church, then walk a couple of blocks to the gentlemen's club...
 
Quote:
Originally Posted by ERD50
You missed my point. I'm saying that the tech should keep the driver engaged. It's certainly do-able, and far easier to do than autonomous.

I'm betting it isn't getting the press, because it isn't as 'sexy' as a self-driving car. What, you want me to pay attention!?

-ERD50
I don't know why we are not connecting on this. I'll try again:

Certainly, an almost-autonomous driving system which calls on human assistance when it encounters a problem it cannot resolve is easier to develop. ...

I think the reason we are not connecting is obvious - you are talking about something different from what I'm talking about.

I don't think it makes sense to call on human assistance when it encounters a problem it cannot resolve. That would be too late, unless it's the scenario that Totoro pointed out - the system asks the human to take over well in advance of the problem; like when it starts snowing/raining, and it says 'my sensors can't keep up, please take over'. But I'm thinking (and may be wrong) that most of the 'disengagements' are of a more time-critical nature. Maybe we will get some info on that.


Further, over time the number of unresolvable issues will shrink and the system can get gradually better. But taking such an approach has problems which are not technical, but are critical.

Do you really mean that? I don't agree - these problems are extremely technical!


Drivers are somewhat inattentive at the best of times. Even if a self-driving car did its best to keep drivers engaged, they are still likely to be no more engaged than they are now, when their attention is already required 100% of the time on pain of death. Encouraging them to text, email, read, and watch movies, yet be instantly ready to take over, is a plan doomed to fail.

I disagree on a bunch of fronts here. A system like the one Tesla is producing does seem to encourage some drivers to allow themselves to be even more distracted - and that's the problem I'm saying should be avoided.

You said: "Even if a self-driving car did its best to keep drivers engaged, they are still likely to be no more engaged than they are now". I say no way! A system designed to keep the driver engaged could absolutely succeed in keeping the driver more engaged than they are now. There is some work going on in that area - looking for head motions, eye movements, etc. A system could actively look for feedback from the driver to see that he is responding to inputs. If the system throws up a "Blind Spot" alert - did the driver turn their head/eyes to notice it? If not, maybe make it more aggressive: brighter, flash it longer, make a noise. All sorts of approaches.
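Something like that escalation logic can be sketched in a few lines. This is my own toy version of the idea, not any shipping system:

```python
# Toy sketch of an alert-escalation loop for driver-engagement monitoring.
# Entirely hypothetical; real driver-monitoring systems are far richer.

ESCALATION = [
    "show blind-spot icon",
    "brighten the icon and flash it",
    "add an audible chime",
    "vibrate the steering wheel",
]

def alert_driver(driver_looked):
    """Escalate until head/eye tracking confirms the driver saw the alert.

    driver_looked: callable returning True once gaze tracking shows the
    driver checked the mirror/blind spot (simulated below).
    """
    for step in ESCALATION:
        print(step)
        if driver_looked():
            return True   # engagement confirmed, stand down
    return False          # unresponsive: treat as a possibly impaired driver

# Simulate a driver who responds on the third prompt:
responses = iter([False, False, True])
alert_driver(lambda: next(responses))
```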


Average drivers are pretty poor, accident-prone drivers. There is nothing a self-driving car could do to turn an average driver into your ideal model of an alert, engaged driver.

I'm not talking 'ideal' or 'perfect'. I'm talking about getting better, faster, and cheaper (which means the system will go into more cars), and learning from it, so maybe we get to level 5 (though again, we may never need it for all conditions, but it might be nice for parking and fetching a car - some more limited use where the risks are lower but the benefits still good).

And there are all sorts of things a car could do to improve driver alertness. I gave examples, but also, it sure would not be hard to detect a significantly impaired driver, slow the car down, turn on the emergency lights, and eventually force the car to pull over. The system could detect that the driver's response to inputs is slow, or that the car is weaving from what the system expects - and it is far easier to detect a problem than to take over and do the driving itself. Just like in real life - we can often detect that someone is doing something wrong, but that doesn't mean we have the capability to do it better (a bad singer, a dropped infield fly).
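And detecting "weaving" really is the easy half of the problem. A crude sketch of the idea, with an invented threshold and made-up sample data:

```python
# Crude sketch: flag a possibly impaired driver from lane-position wobble.
# The threshold and sample data are invented for illustration.
from statistics import stdev

def looks_impaired(lane_offsets_m, threshold_m=0.35):
    """lane_offsets_m: recent lateral offsets from lane center, in meters."""
    return stdev(lane_offsets_m) > threshold_m

steady  = [0.05, -0.02, 0.08, 0.00, -0.04, 0.06]
weaving = [0.40, -0.55, 0.70, -0.30, 0.65, -0.50]

print(looks_impaired(steady))   # False: normal small corrections
print(looks_impaired(weaving))  # True: slow down, hazards on, pull over
```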


The best it could hope for would be for the fully autonomous car to NOT request any human intervention at all.

OK, that's one ideal. But I want lives to be saved in the meantime. And I don't think that following an ideal is the way to do that in the shorter term.

And lastly, the promise of "self-driving" to bring mobility to infirm, elderly, blind, or simply non-driving people is completely defeated if the "self-driving" car requires a driver on standby.

Under that definition, sure. But then again, we have to wait for that ideal situation.

In the meantime, the infirm, elderly, or blind may decide to have someone drive them around. And that driver will be safer if the car has systems that help detect problems, alert the driver, and take steps to keep the driver alert and engaged.

Furthermore, as I think I mentioned before, none of what I propose is at odds with getting to Level 5; in fact it supports it, I think in the most realistic way possible. As the systems get better and better, and take over some of the driving (but with an engaged driver, not a complacent one), the system will be gathering data. And when we get to the point that the alert and engaged driver is so rarely called on to do anything (the car reacted before the driver), then we can decide to do without the driver. Again, I think that is very far off (as even one additional death will be considered too many by some), and it may not even be needed/desired by that time.

-ERD50
 
Tesla Model X - 1st generation of Autopilot hardware and software - driver assist - 27K miles in the 1st year of ownership.
It has the 2nd-generation Autopilot software update that uses the updated Bosch radar API - the radar taking a more active role in detection vs. the camera (i.e., driven by the FL semi accident).

Yesterday I picked up my son and his roommate from college. It was 445 miles from ~8am to ~8pm. It was a long day, but that included lunch and dinner breaks while charging, as well as driving 1.5 hrs out of our normal pickup route to drop the roommate at his house.

Still felt good when we got home, and not drained! Why?!? For well over 90% of the 445 miles, Tesla AutoPilot assist steered in the center of the lane, maintained 2.5 seconds behind cars, and automatically changed lanes when I turned the blinker on. It did this in quite a mix of traffic, with people merging and bouncing around me, as well as various crosswinds from many changing angles. (A 3rd-party browser page updates that on my center screen.)
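For reference, a 2.5-second gap is a distance that scales with speed; here is the quick conversion (my arithmetic, not Tesla's spec):

```python
# Following distance implied by a 2.5 s time gap at a few highway speeds.
# Just unit conversion; the 2.5 s figure is from the post above.
gap_s = 2.5
for speed_mph in (55, 65, 75):
    speed_fps = speed_mph * 5280 / 3600   # mph -> feet per second
    print(f"{speed_mph} mph -> {speed_fps * gap_s:.0f} ft gap")
```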

I also like that all my driving contributes to Tesla's high-definition maps of lanes (vs. just general roads).

Where Tesla stands apart from the others is the way it's acquiring this data: through drivers. Every Tesla Model S, with Autopilot or not, is connected to the cloud; the company is constantly collecting data from each of its cars. Tesla is using the data it has, and will continue to collect, to develop its maps.

Elon Musk called this a "fleet learning network" where all its cars contribute to a shared database. "When one car learns something, all learn," said Musk.

On the left, what currently exists: just the roads themselves. On the right, what Tesla is aiming to do: mapping out every lane.
[Image: Tesla's map data - roads only (left) vs. every lane mapped (right)]


Musk highlighted a section of I-405 in California, a highway where lanes are terribly marked. Using the information from Model S drivers traversing this specific section of road, Tesla's Autopilot can still function well, even in the absence of lane markings.
<snip>
However, its maps are already quite developed, as evidenced by the map of the San Francisco Bay Area below.

[Image: Tesla's lane map of the San Francisco Bay Area]


Pay attention to how people drive within their lane the next time you are on a highway. AutoPilot (AutoSteer) does an amazing job, even with crosswinds!!
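The core of that fleet lane-mapping idea can be illustrated very simply: average many noisy per-car traces over the same stretch of road and the lane geometry falls out of the noise. A toy sketch with made-up data:

```python
# Toy illustration of fleet lane-mapping: average noisy per-car reports of
# lateral position to recover the lane center. All data are made up.
import random

random.seed(1)
true_center_m = 1.85  # pretend lateral position of the lane center

# 200 cars each report one noisy measurement at the same road point:
reports = [true_center_m + random.gauss(0, 0.5) for _ in range(200)]

estimate = sum(reports) / len(reports)
print(f"fleet estimate: {estimate:.2f} m (true value: {true_center_m} m)")
# One car's half-meter of noise averages away as more of the fleet
# drives by - "when one car learns something, all learn."
```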
 
I have never had a car get to 100,000 miles without it being in an accident... sometimes 2 or 3.... some are very minor, but still an accident... (note, very few are my fault)....

So how many people have had a car go 100,000 without being in any kind of accident?

I've never had a vehicle get to 100,000 miles without an accident. Most never got to 50K before I hit a deer. I may be wrong, but I don't think it's possible for a self-driving car to miss a deer crossing a snowy road in the dark.

Is blowing out a tire in a pothole an accident?

The number of accidents comes from police reports.

If we divide that into the number of miles driven, we get about 500K miles per accident. The 100K-mile number is for urban driving, I believe.

There are a lot of fender-benders that don't get reported if they result in no injuries. My daughter got her car backed into in three incidents (twice by my across-the-street neighbor). In all cases, we settled with the other side's insurer and did not involve the police.

I do not know about deer collisions, but people do not report their pothole accidents to the police.

So, the number of non-injury fender-benders is much higher. The Waymo guy quoted a source saying that something like 80% do not get reported.
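Putting those two figures together - roughly 500K miles per police-reported accident, and something like 80% of minor accidents unreported - gives a rough rate for accidents of any kind:

```python
# Rough arithmetic combining the figures above; both inputs are ballpark.
miles_per_reported_accident = 500_000
fraction_unreported = 0.80  # per the source the Waymo talk quoted

# If only 20% of accidents make it into police reports, the true accident
# count is 5x higher, so the interval between accidents is 5x shorter:
miles_per_any_accident = miles_per_reported_accident * (1 - fraction_unreported)
print(f"~{miles_per_any_accident:,.0f} miles between accidents of any kind")
```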
 
Some of the back-forth I see here in this thread is reminiscent of some discussions I see on the 'green/environmental' sites.

When an engineer/scientist comes along on one of those sites and points out that some things being discussed are just impossible (you can't make a solar panel 10x more efficient than the commonly available ones today - that takes you past 100%), or that some things are unlikely, or will be prohibitively expensive, they are shouted down as being anti-environment, unable to 'think outside the box', luddites, lacking vision, working for the oil companies, etc.

The greenies also seem resistant to considering any alternatives that aren't in alignment with their 'ideals/perfect'. For example, I've seen discussions where people are so gung-ho on EVs as being 'the solution' that they don't even want to hear about how advancements in hybrid cars will actually save more fuel in the near term, as they can be used by more people.

Now, this being a moderated forum with generally more polite people, the talk hasn't escalated to that level - but I see similarities.

-ERD50
 
Eroscott, thanks for sharing your experience with the Tesla. It makes sense that most of the time the car works very well. It would not have its supporters otherwise.

How about the rare instances where it has, shall we say, a hiccup? From what I have seen on the Web, these would result in a serious accident if the driver were not alert and did not react in time. You alluded to one instance yourself in an earlier post. Of course, these were always caused by some unusual environmental or road factors. Most accidents by humans are the same way.

Now, if that could be detected and the driver alerted beforehand, that would be great. Of course the computer did not know, else it would not have continued on a straight path to a collision, or veered off the road as I saw in another instance. So, let me change "detected" to "defined".

So, some driver education about the limitations of the system would be helpful. But when I mentioned that, a poster got violently upset. :)

Even if driver education, or shall we say user familiarity, is "acceptable", how do we deal with that in a rental car, where anyone unfamiliar with the system can drive it? Shall the rental car company sit the customer down for a 1-hour training session?

Would I feel comfortable loaning my car to my daughter?
 