Self-Driving Cars?

... I'm 100% convinced that self-driving vehicles are going to kill far, far fewer pedestrians, adult and child, than the current crop of human drivers. ...

I'll be convinced when I see the data. Until then, this is only a wish. How can you be convinced?

-ERD50
 
AAA's tests were done at 20 mph and 30 mph.

Son of a friend is working on SDC. He reports that the theoretical top speed for an SDC when:

* On a two lane street
* With cars parked along the side
* And in order to avoid hitting a pedestrian who walks out from between cars

is 20 mph. The upshot is that SDCs may be limited to this speed under these circumstances because of the legal liability to the manufacturer if there is a fatality.
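
For a rough sense of why the number comes out near 20 mph, here is a back-of-envelope stopping-distance calculation. The latency and deceleration figures below are my own assumptions, not numbers from his project:

```python
# Back-of-envelope SDC stopping distance. Assumed figures (not from
# any real system): 0.5 s from detection to full braking, 0.8 g decel.
MPH_TO_MPS = 0.44704
G = 9.81  # m/s^2

def stopping_distance_m(speed_mph, latency_s=0.5, decel_g=0.8):
    v = speed_mph * MPH_TO_MPS
    # Distance covered during system latency, plus braking distance v^2/(2a).
    return v * latency_s + v**2 / (2 * decel_g * G)

for mph in (20, 25, 30):
    print(f"{mph} mph -> {stopping_distance_m(mph):4.1f} m to stop")
# 20 mph -> ~9.6 m, 30 mph -> ~18.2 m. A pedestrian stepping out from
# between parked cars can easily be inside the 30 mph stopping distance.
```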

Human drivers typically assume the risk of driving faster under these circumstances, but the legal ramifications of a corporation intentionally exceeding the known safe speed are a potential issue.
 
Before we talk about detecting pedestrians, let's not forget that Tesla cars under autopilot control have driven under semi-trailers, into highway barriers, and into red fire trucks.

Would you not want to see these tests done and passed, before any attempt to detect and brake for pedestrians?

How hard is it to do these tests?

Would you not want these tests to pass 100%? Flawlessly?
 
Before we talk about detecting pedestrians, let's not forget that Tesla cars under autopilot control have driven under semi-trailers, into highway barriers, and into red fire trucks.

Would you not want to see these tests done and passed, before any attempt to detect and brake for pedestrians?

How hard is it to do these tests?

Would you not want these tests to pass 100%? Flawlessly?
For the last time, NO. Some of us only want the technology to perform better than we do, and no person passes all tests flawlessly 100% of the time. This could prevent thousands or tens of thousands of deaths per year even if thousands still die.
 
is 20 mph. The upshot is that SDCs may be limited to this speed under these circumstances because of the legal liability to the manufacturer if there is a fatality.

There was a big announcement about a self driving shuttle being deployed at NC State.

That's fine. Good PR. It is nice. It even has an onboard "ambassador," what we call a human "Kill Switch Operator" in the non-PR world. :)

How are they doing this? How are they deploying an SDC without a steering wheel?

Like this: top speed of 7 mph.
 
For the last time, NO. Some of us only want the technology to perform better than we do, and no person passes all tests flawlessly 100% of the time. This could prevent thousands or tens of thousands of deaths per year even if thousands still die.


Darn!

I never fail to brake for a parked fire truck, and I don't drive my car under semi-trailers.

To me, flawlessly here means under clear sky, no visibility limitation, in broad daylight.

And I think the public would agree with me.

How can anybody think that the current system is better than the average driver? I dare any Tesla owner to blindfold himself, engage the AP, and go on a drive. Show me that he survives a short trip.
 
I think a big fallacy some people make about the current, emphasis on current, Tesla system is this.

They claim and show statistics that Tesla cars have a lower accident rate than other cars.

But that is "a Tesla car plus a human driver", with the latter monitoring and overriding the car when the AP acts stupid.

(Man+machine) combination is better than man. That's your argument, and it has some validity.

But then, you also make the jump to "machine > man". That has not been proven true at all. Not now. Not that anybody can see.

Again, blindfold yourself and engage the AP and let it drive. Show it to the world. :)
 
I think a big fallacy some people make about the current, emphasis on current, Tesla system is this.

They claim and show statistics that Tesla cars have a lower accident rate than other cars.

But that is "a Tesla car plus a human driver", with the latter monitoring and overriding the car when the AP acts stupid.

(Man+machine) combination is better than man. That's your argument, and it has some validity.
....

As I've mentioned earlier in these threads, I'm concerned that some (Man+machine) combinations may be worse than man alone. We won't know without more data. But I am concerned that a "good SDC" system will lull the driver into a false sense of security, causing the driver to drop their attention. This could result in more accidents. We've already seen some of this.

Will it be a net gain over the next few years? Who knows? I don't think it's a sure thing at all.

-ERD50
 
Regarding pedestrian detection and braking systems, here's another test by IIHS (Insurance Institute for Highway Safety) conducted on several SUVs. Some did reasonably well, while others failed miserably.

If you watch the tests, you will see that the case of a child "darting out" into the street may be nothing more than a kid taking a crosswalk in front of a parked car. It may still be tough to avoid, even for a human driver. However, an automated system should apply maximum braking at the moment of late detection, so that any impact happens at a lower speed and causes less injury.

And even in the case of successful braking, if there is no obstruction and it is at a marked crosswalk, a human driver would notice the pedestrian and brake early to yield, instead of waiting to brake at the last moment like the cars did here. However, this is still quite acceptable in my view, because these are not meant as self-driving systems, only as safety systems to supplement the human driver. The human driver is supposed to notice in advance and act in anticipation in order to yield to the pedestrian; the computer is only there as a backstop. An SDC has to be more sophisticated.
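
To put rough numbers on why late maximum braking still helps: the impact speed after braking over a distance d from speed v0 is sqrt(v0^2 - 2ad). A quick sketch, with values that are my own assumptions rather than anything from the IIHS test:

```python
import math

MPH_TO_MPS = 0.44704
G = 9.81  # m/s^2

def impact_speed_mph(v0_mph, dist_m, decel_g=0.9):
    """Speed remaining after braking at decel_g over dist_m meters."""
    v0 = v0_mph * MPH_TO_MPS
    v_sq = v0**2 - 2 * decel_g * G * dist_m
    return math.sqrt(max(v_sq, 0.0)) / MPH_TO_MPS

# Pedestrian detected late, only 8 m ahead of the car:
print(f"{impact_speed_mph(30, 8):.0f} mph at impact")  # ~14 mph
print(f"{impact_speed_mph(20, 8):.0f} mph at impact")  # 0 -- stops in time
```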

Waymo is way ahead of everybody else, at least in anticipating these scenarios. In a presentation a few years ago to announce their in-house lidar, their CEO said that the new lidar had sufficient resolution to "read" the facial features of a human many yards away.

What was that capability useful for? He said that knowing which way a human is facing gives an indication of whether he might intend to step off the curb to cross the road. He showed a lidar scan in which the nose and ear of a human head were clearly distinguishable from a reasonable distance.

There are a lot of ramifications like this that laymen would not think of, and I was impressed with how Waymo foresaw and understood these issues. I have not heard any more about what progress they have made on these seemingly minor but important details.
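
The resolution claim is easy to sanity-check with basic trigonometry. The feature size and range below are my own guesses; I don't recall the exact distance he quoted:

```python
import math

# Hypothetical figures: resolving ~2 cm facial features at 50 m.
feature_m, range_m = 0.02, 50.0
required_deg = math.degrees(math.atan2(feature_m, range_m))
print(f"Required angular resolution: {required_deg:.3f} deg")  # ~0.023 deg
# For comparison, early automotive lidars were typically somewhere in
# the rough 0.1-0.4 deg range, which is why the claim stood out.
```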

 
Before we talk about detecting pedestrians, let's not forget that Tesla cars under autopilot control have driven under semi-trailers, into highway barriers, and into red fire trucks.

Would you not want to see these tests done and passed, before any attempt to detect and brake for pedestrians?

How hard is it to do these tests?

Would you not want these tests to pass 100%? Flawlessly?

Elon would say Autopilot is not their SDC software.

And that you still don't need lidars.
 
Elon would say Autopilot is not their SDC software.

And that you still don't need lidars.

Fine. A lot of people are waiting with bated breath to see him deliver an SDC in 2020 as he claimed he could. Lidars make the job a bit easier, and if he can do without them, all the more power to him. It's the results that count.

But if I were him and knew of some weakness in my system, I would test for it to make sure the software is fixed. It is not that hard to put a semi-trailer across the car's path and see whether the new software still drives the car under it. I would do that under different lighting conditions. I would make sure my computer now recognizes a parked red fire truck in the car's path, etc...

It did not matter so much when I could blame the drivers for not paying attention, but when I tell them they can trust the system and go to sleep or look down to text on the phone, I would want to be sure that these known problems are fixed. No? Some people here say it does not matter, because the system does not have to be perfect. Huh?

Here's the faulty syllogism as I see it (maybe I misread something).

Statement A - Human drivers are imperfect.

Statement B - Therefore, an SDC does not have to be perfect to save lives.

Statement C - Brand A autopilot is not perfect, therefore it can still save lives.

A and B are true. But C does not follow; it's totally baloney. :)

The problem is that while perfection can never be achieved in this life, certain minimum qualifications will be needed. Right now, nobody knows how to set the testing requirements. But I look at some of the egregious accidents, and I say that it is not going to work unless they fix the known problems.

There were 33,654 fatal traffic accidents in the US in 2018. That's 92 fatal accidents a day.

How many semi-trailers are there in the US? 2 million. How many cars? 272 million.

Now the tougher question: How many semi-trailer/car encounters on the road each day? How many million encounters?

I would say you have to do something to be sure you run a car under a trailer fewer than 92 times a day. Don't you want to test it out once or twice on a test track?

And that's just for "trailer avoidance". Then you have to worry about running into red fire trucks, which the system has already shown itself capable of doing. And then construction barriers. And then who knows what else.

You've got to do some tests to know how "imperfect" your system is.

PS. Let's say each of the 2 million trailers encounters a car only once a day (ridiculously infrequent, I know). If I want to keep my cars from running under trailers more than 92 times each day, that's at most one failure in every 22,000 encounters. How many tests do I need to run to be sure?

If I am able to do only 100 tests, then I want to be damn sure that the results are "flawless". If you already fail 1 out of 100 times, you are not going to have fewer than 92 accidents each day with millions of vehicles. And that's just one scenario where you can kill people.
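
To make the "how many tests" question concrete, there is a standard statistics shortcut (the "rule of three"): to show that a failure probability is below p with ~95% confidence, you need roughly 3/p consecutive failure-free trials. A quick sketch using my encounter numbers above:

```python
# "Rule of three": after N failure-free trials, the 95% upper bound
# on the per-trial failure probability is approximately 3/N.
encounters_per_day = 2_000_000   # one encounter per trailer per day (low!)
allowed_failures_per_day = 92    # the human-driver fatal-accident rate

target_p = allowed_failures_per_day / encounters_per_day
print(f"Target failure rate: 1 in {1 / target_p:,.0f} encounters")
print(f"Failure-free test runs needed: ~{3 / target_p:,.0f}")
# ~1 in 21,700, so ~65,000 clean runs just to match humans on this
# ONE scenario. 100 tests with even a single failure proves nothing.
```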
 
:horse:
:horse:
:horse:
Looking forward to SDC whenever and however they roll out. And they don’t need to be “flawless” - they only need to be substantially better than the status quo with humans. The people actually working on SDC are light-years beyond the concerns expressed here, but that won’t stop anyone from thinking otherwise and endlessly repeating their pet speculative talking points. Some people love to criticize things they don’t know much about...
 
Oh boy!

If there were 33,654 fatal accidents in the US in 2018, or 92 fatal accidents a day (33,654/365 = 92), then that is the rate that SDCs have to beat.

If it is flawless then it will be 0.

But it does not have to be 0, only lower than 92/day. That is the human driver "status quo". No? :confused:

What's wrong with my number? It's based on NHTSA statistics.

I did not say that SDC will never get there, just what it needs to achieve. And people get royally upset. I am confused.

PS. The 92 fatal accidents/day figure is for a total of 272 million cars in the US. If you have fewer cars, you will need to lower the number of accidents proportionally. Maybe it's better to compare by miles traveled or something like that, because some cars travel more than others.
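
If anyone wants to redo the comparison per mile instead of per day, it is a one-line calculation. The ~3.2 trillion annual vehicle-miles figure is my own rough approximation of the FHWA number:

```python
fatal_accidents = 33_654   # NHTSA, US, 2018
vehicle_miles = 3.2e12     # approx. annual US vehicle-miles (assumed)

rate = fatal_accidents / vehicle_miles * 1e8
print(f"~{rate:.2f} fatal accidents per 100 million miles")  # ~1.05
# An SDC fleet would need hundreds of millions of miles of driving
# before it could statistically demonstrate it beats this baseline.
```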

Will people be upset at me again for saying the above?
 
But it does not have to be 0, only lower than 92/day. That is the human driver "status quo". No?

But doesn't this pretty much ignore the issue of potential lawsuits?
 
However, this is still quite acceptable in my view, because these are not meant as self-driving systems, only as safety systems to supplement the human driver. The human driver is supposed to notice in advance and act in anticipation in order to yield to the pedestrian; the computer is only there as a backstop. An SDC has to be more sophisticated.
In my previous comment, I got a bit confused because this is the SDC thread.

Assist systems ≠ SDC

So, yeah, agree. These assist systems are just big old automated brake switches.

I fully expect SDCs to go way beyond this. First, they would know there is no car to their left and would take evasive action. Second, they would see ahead enough to anticipate a crosswalk. In the case of the Tesla turning the corner, I'd hope they'd see the pedestrian walking at a fast clip off-road towards the road. And finally, I'd hope they would anticipate children in neighborhoods, and maybe even see that football or soccer ball ahead as a warning. Not sure where they are on that yet.
 
Tesla gets confused by a vandalized road sign (85 mph instead of 35 mph).

https://finance.yahoo.com/news/electrical-tape-sign-tricked-tesla-090000044.html

I'm going to go out on a limb here and say that there is no way Waymo would make this mistake. Heck, Google Maps knows 95% of my speed limits today. It would surely reject this.
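
A rejection check like the one I'm imagining is simple in principle. This is purely my own sketch of the idea, not how Waymo, Mobileye, or anyone else actually does it:

```python
def accept_speed_limit(ocr_mph, map_mph, tolerance_mph=15):
    """Toy plausibility check: reject a camera-read speed limit that
    wildly disagrees with map data. A real system would fuse many more
    signals (road class, GPS, fleet history) than this sketch does."""
    if map_mph is None:
        return ocr_mph                  # no map prior; trust the camera
    if abs(ocr_mph - map_mph) > tolerance_mph:
        return map_mph                  # 85 vs. 35: fall back to the map
    return ocr_mph

print(accept_speed_limit(ocr_mph=85, map_mph=35))  # -> 35
```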

Hopefully Musk's SDC 2020 doesn't rely on this kind of awareness.

EDIT: OLD news. Researchers used an older system. Useless article. Musk's SDC 2020 won't have this problem. Here's a quote I initially missed and caught on a second read. News reporting just sucks. This should never have been reported.
Tests on Mobileye’s latest camera system didn’t reveal the same vulnerability, and Tesla’s latest vehicles apparently don’t depend on traffic sign recognition, according to McAfee.
Tesla didn’t respond to emails seeking comment on the research.
 
Three years later and we're still discussing this? :)

I have no doubt self-driving cars will eventually be developed that will be reliable around town or out on the highway, but there will always be situations the programmers did not consider. Computers excel at single, known tasks. Pick any operation and a computer can do it faster than a human with far fewer errors. Where computers fail is in dealing with unknowns. Even in a limited environment like a desktop computer, it is impossible to predict human behavior. As a programmer, I'm constantly amazed at the things people do when using software. Expand that to the real world, where you have millions of people doing unexpected things in unexpected environments (fog, flooding, strong winds, dirt/gravel roads, potholes, etc.), with unlimited possibilities, and I just don't see a computer ever being able to handle all these situations.

Humans are far from perfect but we excel at improvising and adapting to any situation we encounter.

As I said in my original post, every machine WILL fail at some point. Even if they somehow manage the self-driving task with AI or something, what happens when a sensor fails, or a throttle sticks, or a tire blows, or mud blocks the cameras? What happens when the computer chip burns out, or the connections corrode, or a wire breaks? I would hope they would design self-driving cars with a fail-safe mode. Of course, if you're not driving and the computer fails, who's in charge now?

I'm all for machines that "assist" me with my tasks, but I have no interest in machines replacing the things I enjoy doing. For example, I enjoy woodworking. Power tools make my job easier, and computer programs can help me design projects. But if a machine did the woodworking for me so it would be faster and safer, where's the fun in that?
 
But doesn't this pretty much ignore the issue of potential lawsuits?

Perhaps you are implying that SDC has to be much better than the human driver in order to be accepted and not sued for the rare mishap.

I don't know the answer to this. I simply looked for where the lousy human driver is in terms of accidents, in order to see how we can judge an SDC for its life-saving benefit.
 
Three years later and we're still discussing this? :) With almost no new information. Just the same guessing, as if anyone here has a fraction of the knowledge Tesla, Waymo and others working on SDC have. All our respective talking points are fine - but we're relative amateurs. It's repeating them over and over and over that seems pointless. For what purpose? That's why I check in only occasionally.

I have no doubt self-driving cars will eventually be developed that will be reliable around town or out on the highway, but there will always be situations the programmers did not consider. Computers excel at single, known tasks. Pick any operation and a computer can do it faster than a human with far fewer errors. Where computers fail is in dealing with unknowns. Even in a limited environment like a desktop computer, it is impossible to predict human behavior. As a programmer, I'm constantly amazed at the things people do when using software. Expand that to the real world, where you have millions of people doing unexpected things in unexpected environments (fog, flooding, strong winds, dirt/gravel roads, potholes, etc.), with unlimited possibilities, and I just don't see a computer ever being able to handle all these situations.

Humans are far from perfect but we excel at improvising and adapting to any situation we encounter.

As I said in my original post, every machine WILL fail at some point. Even if they somehow manage the self-driving task with AI or something, what happens when a sensor fails, or a throttle sticks, or a tire blows, or mud blocks the cameras? What happens when the computer chip burns out, or the connections corrode, or a wire breaks? I would hope they would design self-driving cars with a fail-safe mode. Of course, if you're not driving and the computer fails, who's in charge now?
It stands to reason human actions will be one of the most difficult coding problems SDC will face. So the years between today’s mostly level 1 and 2 cars and level 5 cars, when both are on the roads, will be the most difficult transition period. But imagine if there were only level 5 cars on the road: they wouldn’t act unpredictably, and they’d probably all be “talking” to one another, working out issues far faster than humans. I don’t expect to live to see only level 5 cars everywhere, but isn’t that where this ends? When humans are proven to be far less safe than SDCs, they’ll be removed from the picture, kicking and screaming.

Weather, road conditions, failure modes and stupid humans driving cars are among all the barriers, but they will all be solved eventually.

In 1900 I am sure there would have been threads here with a handful of people explaining over and over and over why humans would never fly...
 
In my previous comment, I got a bit confused because this is the SDC thread.

Assist systems ≠ SDC

So, yeah, agree. These assist systems are just big old automated brake switches.

I fully expect SDCs to go way beyond this. First, they would know there is no car to their left and would take evasive action. Second, they would see ahead enough to anticipate a crosswalk. In the case of the Tesla turning the corner, I'd hope they'd see the pedestrian walking at a fast clip off-road towards the road. And finally, I'd hope they would anticipate children in neighborhoods, and maybe even see that football or soccer ball ahead as a warning. Not sure where they are on that yet.

The systems tested were all ADAS (Advanced Driver-Assistance Systems), and only the Tesla Model 3 has some SDC capability.

The test results are interesting, because they show where the current state of the art is. And when someone mentioned that ADAS is currently offered in cars starting at $20K, I was very impressed, even though the systems did not work perfectly.

These ADAS will get better and better with time. I will be checking out the news on them from time to time.

About an SDC being able to swerve for pedestrians, not simply brake: yes, it should be able to do that. However, the first step in the process is to "see" or detect the pedestrians. I was very surprised to see that the Model 3 did not even detect the human forms in some tests. How is it going to serve as a driverless taxi in 2020, as Musk promised? In case people forget, 2020 is the current year.

And when I pointed out that in addition to testing for pedestrian detection, Tesla should test for "white semi-trailer detection" (2 fatal accidents) and "red fire truck detection" (1 non-fatal accident) in broad daylight, people got totally bent out of shape. They said an SDC would not need to be flawless. NO tests needed. Whoa!

I was thoroughly confused by their reaction. I was simply talking about basic capabilities here. What's going on?
 
I'm all for machines that "assist" me with my tasks, but I have no interest in machines replacing the things I enjoy doing. For example, I enjoy woodworking. Power tools make my job easier, and computer programs can help me design projects. But if a machine did the woodworking for me so it would be faster and safer, where's the fun in that?

I don't care for driving. If a car can drive itself, I can go to sleep.

See Volvo concept for an SDC below:

[Image: Volvo self-driving car concept]
 
But doesn't this pretty much ignore the issue of potential lawsuits?

+1

That was on my mind. I buy a 100% self-driving car. The car ignores a crosswalk and runs over a pedestrian.... Who is legally liable? Me? The company that built it? What if the car is getting old and I should have noticed it was not as accurate as it was before? Maybe the sensors are having 'issues'. Now what?

If this is not done properly, it will be the Lawyer's Full Employment Act all over again.
 
+1

That was on my mind. I buy a 100% self-driving car. The car ignores a crosswalk and runs over a pedestrian.... Who is legally liable? Me? The company that built it? What if the car is getting old and I should have noticed it was not as accurate as it was before? Maybe the sensors are having 'issues'. Now what?

If this is not done properly, it will be the Lawyer's Full Employment Act all over again.
If you're really concerned, start by reading about it. It's been written about for many years, and automakers, insurers, the tech community and their attorneys and many others have thought about it plenty. Legislators have started to think about it, but they're not up to speed (as usual with tech). We'll come to an understanding like any new technology - it may not be pretty or right the first time, but we'll get there.

https://en.wikipedia.org/wiki/Self-driving_car_liability
https://www.dolmanlaw.com/liability-and-self-driving-cars/
https://www.npr.org/2016/03/01/468751708/what-do-self-driving-cars-mean-for-auto-liability-insurance
Self-driving cars and the liability issues they raise « Protect Consumer Justice
https://www.hg.org/legal-articles/self-driving-cars-and-liability-39591
https://www.theatlantic.com/technology/archive/2018/03/can-you-sue-a-robocar/556007/
https://www.brookings.edu/research/...ssues-and-guiding-principles-for-legislation/
https://www.usatoday.com/story/mone...aises-issues-self-driving-liability/99880620/
 
... As I said in my original post, every machine WILL fail at some point. Even if they somehow manage the self-driving task with AI or something, what happens when a sensor fails, or a throttle sticks, or a tire blows, or mud blocks the cameras? What happens when the computer chip burns out, or the connections corrode, or a wire breaks? I would hope they would design self-driving cars with a fail-safe mode. Of course, if you're not driving and the computer fails, who's in charge now?...

I worked on autopilots for commercial jets. Not the simple cruise autopilots available on small private aircraft, but ones with autoland capability. These systems are all designed to be fail-safe. It's mandatory. The main problem is cost, due to the redundant hardware. The computers come in dual, triple, or quad configurations. Sensors are dually or triply redundant, etc...
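
For readers curious what that redundancy means in software terms, here is a toy majority voter. This is the textbook pattern only, not code from any actual autopilot:

```python
def vote(readings, max_spread=0.5):
    """Toy triple-modular-redundancy voter for redundant sensor channels.

    Textbook pattern only: real systems add cross-channel monitors,
    timeouts, and staged degradation on top of simple voting."""
    mid = sorted(readings)[len(readings) // 2]   # median reading
    good = [r for r in readings if abs(r - mid) <= max_spread]
    if len(good) < 2:
        raise RuntimeError("no majority -- disengage and alert the crew")
    return sum(good) / len(good)

print(vote([101.2, 101.3, 250.0]))  # bad channel voted out -> 101.25
```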

In a car, it's much simpler to deal with hardware failure than on an airplane. An SDC can slow down and pull over when it detects something wrong. An aircraft cannot park itself in the sky. But I digress...

And so, other than cost, which will come down, I do not worry about failure modes as much as I look at the performance of the system when everything is a go. That's the baseline.

All sensors working, CPU running, no inclement weather, broad daylight. Show me what you've got. When that is working, you can go from there.
 
All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident. — Arthur Schopenhauer, German philosopher (1788 – 1860)

Humans haven’t changed in 200 years...
 