Let's talk Self Driving Cars again!

Take my earlier example of a coffee pot. No coffee pot lasts 1 billion seconds. But when one goes kaput, it usually dies without setting itself on fire.

In the rare case that it does set itself on fire (probably much more likely than 10^-9), we now require smoke alarms in the home. You see, there are backups to provide safety.

Now, you say, but what if the battery of the smoke alarm fails? Yes. In the case of an aircraft system, everything critical is identified and monitored.

OK, suppose the battery is good, but the smoke sensor itself fails. How do we know if it can still detect smoke when it's old?

In the case of a critical aircraft sensor, it would have a built-in test, which is similar to injecting smoke into a smoke sensor to see if it still works.

BIT (Built-in Test) is exercised before each flight. Failure of a critical sensor must be repaired before the aircraft is dispatched. This keeps down the chance of a latent failure (hidden failure that goes undetected for a long time).
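To make the idea concrete, here is a toy sketch in Python. The sensor names and the self-test interface are made up by me for illustration, not taken from any real avionics system; the point is just that dispatch requires every critical sensor to pass its BIT, so nothing sits latent-failed.

```python
# Toy illustration of a pre-flight Built-In Test (BIT) gate.
# Sensor names and the run_self_test() interface are invented for this sketch;
# real avionics BIT is far more involved.

class Sensor:
    def __init__(self, name, self_test):
        self.name = name
        self._self_test = self_test   # callable returning True if the sensor passes

    def run_self_test(self):
        return self._self_test()

def preflight_bit(critical_sensors):
    """Return (ok_to_dispatch, list_of_failed_sensors)."""
    failures = [s.name for s in critical_sensors if not s.run_self_test()]
    # Any failed critical sensor blocks dispatch, so a fault cannot sit
    # latent (undetected) across many flights.
    return (len(failures) == 0, failures)

sensors = [
    Sensor("smoke detector A", lambda: True),
    Sensor("smoke detector B", lambda: False),   # pretend this one failed its test
]
ok, failed = preflight_bit(sensors)
print("dispatch" if ok else f"no dispatch, repair first: {failed}")
```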
 

Reminds me of a clock radio I had many years ago. For some reason, it made a slight noise just as it turned on the radio, which then took a couple of seconds to come on. So for 2 or 3 years, this reliable clock woke me - usually just with the little noise, though the radio dutifully came on 2 seconds later. One morning, I awoke, thinking I'd heard the "little noise," but the radio didn't come on, so I rolled over to go back to sleep. But a few seconds later it occurred to me that something must be wrong. I turned over to look at the time and saw the clock radio joyously in flames. So, SWAG, that's maybe 10^-3. YMMV
 

So, how did it catch fire? Was it UL listed? :)

OK. No one can make a clock radio that works for 1 billion hours. But if you want a failure to absolutely not burn down your house, there are measures one can take, if money is no object.

First, the radio should not catch on fire when it fails. The fact that yours did means something was very wrong with the design.

But if you don't know how to build such a radio, then you look for backups. For example, the clock radio can incorporate a smoke sensor, which when triggered will cut the power to the whole thing. This smoke sensor should have a periodic self-test to ensure that it actually works.
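If I were to sketch that backup logic in code, it might look something like this. The function names, the once-a-day test period, and the fail-safe behavior are my own inventions, purely to illustrate the idea of a self-tested cutoff:

```python
import time

# Hypothetical sketch of the "smoke sensor that cuts power" backup.
# read_smoke(), smoke_sensor_self_test(), and cut_power() are stand-ins for
# whatever real hardware interface the clock radio would have.

SELF_TEST_PERIOD_S = 24 * 3600    # e.g. exercise the sensor once a day

def read_smoke():
    return False                  # pretend: no smoke right now

def smoke_sensor_self_test():
    return True                   # pretend: the injected test stimulus was detected

def cut_power():
    print("power cut to the clock radio")

def monitor(run_seconds=1.0):
    last_self_test = 0.0          # forces a self-test on the first pass
    start = time.time()
    while time.time() - start < run_seconds:
        now = time.time()
        if now - last_self_test > SELF_TEST_PERIOD_S:
            if not smoke_sensor_self_test():
                cut_power()       # fail safe: if the backup cannot be trusted,
                return            # the protection is gone, so shut everything down
            last_self_test = now
        if read_smoke():
            cut_power()           # the backup doing its job
            return
        time.sleep(0.1)

monitor()
```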

If this does not pass 10^-9 by analysis, you can go further. Mount the whole thing inside a fire-proof enclosure (but arrange for the sound of the radio to emanate from it). Mount a heat sensor on the outside of the enclosure for more safety. If this heat sensor is triggered, it can also kill the electric power.

Ridiculous, isn't it? But this is the kind of thing one does to get to 10^-9 catastrophic failure rates. Again, failures are unavoidable and will happen. What we want to be sure of is that they don't become catastrophic.
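A quick back-of-the-envelope, with numbers I made up purely for illustration: if the protection layers really are independent, and they are self-tested so they are not sitting there latent-failed, their failure probabilities multiply, and that is how a mediocre appliance plus a couple of monitored backups can approach 10^-9 for the catastrophic outcome.

```python
# Made-up numbers, purely to show how independent layers multiply.
p_radio_fire       = 1e-3   # chance the radio fails in a way that starts a fire (the SWAG above)
p_smoke_layer_dead = 1e-3   # chance the smoke-sensor cutoff is unavailable when needed
p_enclosure_fails  = 1e-3   # chance the fireproof enclosure / heat-sensor layer also fails

p_house_fire = p_radio_fire * p_smoke_layer_dead * p_enclosure_fails
print(f"{p_house_fire:.0e}")   # 1e-09 -- but only if the layers are truly independent
                               # and the backups are self-tested so they are not latent-failed
```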

By the way, you have to design and analyze the extra added sensors to make sure they don't themselves fail in a way that causes a fire. Here, you can make an argument that they run on very low currents, such that when they fail they get only slightly warm and will not burn up.

And you have maintenance and operating procedures to confirm that everything is functional, before you set the alarm and go to sleep.

Safety costs money, labor, and time. :)
 

WHY it caught on fire was not a major issue to me. IIRC (40 years) it was UL tested. I assumed this must be some amazingly rare thing that never happened. Then within a couple of years, I stopped at my mom and dad's place. Mom was cooking dinner and dad was asleep in front of his TV as usual - except this time, the TV was blazing up the wall. A few seconds more and there would have been serious damage instead of a professional cleaning and paint job. So far, knock on wood, that's my experience with fires in appliances. Hope it's my last, but YMMV.
 
In those days, electronics used vacuum tubes, and they drew a lot of power, generating tons of heat. People knew the risk and tolerated them because there was nothing else, and they enjoyed the benefits enough to take the risk.

Back on SDC in its current form, some people are OK with the flaws, some are not. What I don't like is that people who do not want the risks may still get hurt as collateral damage. And that's why NHTSA is investigating accidents involving SDC.

Right now, it's not yet a big deal, because these systems are not yet widespread. Still, NHTSA needs to define certain safety standards, and they don't know how to do it.

SDC requirements are a lot more complicated than requiring a car to have a certain stopping distance from 60 mph, or to have a certain safety feature. Whatever the requirements are, they have to be clearly defined and testable.

If you define certain scenarios to be tested for compliance, I am sure all car makers will design their systems to pass those scenarios and call it done. Meanwhile, the variability in real life is infinite.
 
Motorcycles are hard. Humans can't see motorcycles half the time. We look right through them. The promise of machine vision was that it could solve the problem. Well, maybe it does better, but it has problems too. A little vindication for us poor humans.

Someone here kept talking about lidar vs pure vision, and I think they have a point. Lidar is going to be necessary. Sorry Elon.
 
As a retired SW engineer myself, I totally understand your point.

However, why do we accept things like crankshafts failing and causing loss of control? Or even more common, why do we accept the shoddy work of auto techs causing wheel lug failure and wheels falling off?

I mean, mistakes (bugs) are made every day in this world resulting in hundreds of wheels falling off daily around the world*. Cars frequently spectacularly careen out of control. Wheels become cruise missiles bounding along and crossing the center line causing impacts with oncoming traffic.

Yet we "accept" this.

* - Source: plenty of cam cars catching wheels falling off on the subreddit r/idiotsincars
+1
Retired software engineer here, and one of the things I was always assigned to was critical outages. These were typically OS and DBMS bugs the vendors missed during their QA/QC cycles. They're not perfect, as nobody's code is.

Totally agree there are many bugs we accept. Many years ago I had a rear wheel drop off my pickup and go boom. Luckily it was on a dirt road with no traffic.

I didn't purchase FSD on my Y because autopilot does most everything I need given where we drive. I use it only as an extra set of eyes in the vehicle. Want different music? No problem. Want to take a nap? Problem.

When I first got the vehicle I was doing 65mph on autopilot when a guy pulled out directly in front of me from a side road and STOPPED! Right in the middle of the highway. He couldn't go left because of oncoming traffic and just looked, I was close enough to see his eyes, preparing to die. He was going to be in a very bad accident getting T-boned at a high rate of speed. I was shocked when the vehicle went into collision minimization and slammed the brakes on full force. Anti-lock brakes were fully engaged as it went from 65->20mph. When I finally got my foot to move the brake pedal came up to meet my foot and I manually stopped barely avoiding the collision. Had it not been for the technology it would have been a bad day for both of us.

Calling the product full self driving isn't good for anyone because it's not. Nor do I believe I'll see full self driving in my lifetime without it being fenced to specific roads and areas. That problem set is too big for the technology as it exists.
 
Sorry to go too off topic, but hey, I want to add some anxiety to your breakfast reading. Although self driving cars still have issues, we have plenty of other issues to deal with every day out there. Look sharp and be careful.
 

Attachments: wheel.JPG

Tesla SDC cannot see a motorcycle?

I have shared YouTube videos where it failed to see a road barricade. On multiple occasions. Barricades that went across the road!

And I have seen videos where it did not see concrete support columns of an overhead metro. And a steel post. And a bollard.

About the bollard: a YouTuber showed a lot of videos of him testing SDC on the streets, and he was able to act fast enough to override the car when it was about to hit various objects. When he was not fast enough in his last encounter with a bollard, the car hit it.

When he posted it on YouTube, Tesla found out the poster was one of its own employees. He was fired, and his personal car was stripped of the FSD beta software.

So much for Musk's championing free speech. Total BS.
 

About the Tesla autopilot, or perhaps just the AEBS (Automatic Emergency Braking System), saving you that time: it's great as a driver-assistance feature. And I am sure that it has done the same for other drivers.

But to count on it and go to sleep? You are right that it does not perform so well every time.

In 2016, Joshua Brown was killed when his Tesla went under a semi-trailer in Florida. Prior to this, he posted many YouTube videos praising the autopilot, saying it saved him from many accidents. Such irony. I looked for and saw his videos.

The above accident was on old hardware. But just a few weeks ago, a couple were killed when their Tesla exited the freeway into a rest area, and plowed at full speed into a parked semi. It was not yet known if the autopilot was engaged, but apparently the AEBS did not work for them like it did for you that time.

In all these full-speed rammings of other vehicles, there were no skid marks. Neither the driver nor the car system applied the brakes. Both man and machine dozed off. Both were totaled. Well, at least the other vehicles were semi-trailers and not another car with occupants. This minimized the collateral damage.

teslacrash-crop.jpg
 
When we have self driving cars, and still get into an accident.... How is that handled in terms of insurance and fault?
 

A true self-driving car will have no steering wheel, accelerator, or brake pedal. Musk kept saying it would be available soon. Like the end of the year. Or next year.

With no human control possible, the riders (no longer drivers) cannot do anything other than scream "OMG, please STOP, STOP," so how do you blame them for whatever happens? It's like having your own robot taxi. If you get hurt in an accident in a taxi or Uber or Lyft ride now, is it your fault?

Any property damage, bodily injuries, or lost lives will have to be compensated by the car maker. Or perhaps society has to pitch in to pay, via a gummint compensation program, if the car has been tested to the gummint standard.
 

I wonder what electronic "fingerprints" remain within (say) a Tesla system after an accident. Would it show whether the driver braked, turned, etc., or if the self-driving system did "anything"? I would think there must be something "saved" that would be usable by the insurance company/court and perhaps the authorities. YMMV
 

If you talk about the current phony FSD, the driver is always liable. Tesla says so. You agree to the caveat when you engage the autopilot.

The law also says the driver is always liable.

With the future TRUE FSD, with no steering wheel or pedals, nobody knows how it will work.

PS. NHTSA, in its investigation of fatal crashes, wants to know the engagement state of the autopilot to gain an understanding that will aid future regulations. But whether the AP is engaged or not, the driver is always at fault.
 

It may take a while for our tort system to catch up to, in this case, self driving cars. Logically, if the driver has "no control" one would think that "fault" would be assigned to the FSD system (therefore the manufacturer.) Since the MFG has the deepest pockets, that would likely be the case. However, I don't think we will know until it happens.

One thing is certain: people (and their lawyers) will attempt to get out of paying, even if the car fails to successfully engage the collision-avoidance system in a non-FSD car. That is OUR system. Do not accept "blame," but push the fault onto someone else if at all possible. YMMV
 
From today's WSJ:
https://www.wsj.com/articles/self-d...tion-to-safety-at-tusimple-11659346202?page=1

A few quotes for those who can't get around the paywall:

On April 6, an autonomously driven truck fitted with technology by TuSimple Holdings Inc. suddenly veered left, cut across the I-10 highway in Tucson, Ariz., and slammed into a concrete barricade.

An internal TuSimple report on the mishap, viewed by The Wall Street Journal, said the semi-tractor truck abruptly veered left because a person in the cab hadn't properly rebooted the autonomous-driving system before engaging it, causing it to execute an outdated command. The left-turn command was 2 1/2 minutes old—an eternity in autonomous driving—and should have been erased from the system but wasn't, the internal account said.


But researchers at Carnegie Mellon University said it was the autonomous-driving system that turned the wheel and that blaming the entire accident on human error is misleading. Common safeguards would have prevented the crash had they been in place, said the researchers, who have spent decades studying autonomous-driving systems.


For example, a safety driver—a person who sits in the truck to backstop the artificial intelligence—should never be able to engage a self-driving system that isn't properly functioning, they said. The truck also shouldn't respond to commands that are even a couple hundredths of a second old, they said. And the system should never permit an autonomously driven truck to turn so sharply while traveling at 65 miles an hour.
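The staleness check the researchers describe is simple to picture. Here is a hypothetical guard, with field names, limits, and the 0.05 s budget being my own guesses (not TuSimple's or CMU's code), that a drive-by-wire executive could apply before acting on any steering command:

```python
import time

# Hypothetical staleness/interlock guard, loosely in the spirit of the safeguards
# the CMU researchers describe. All names and limits here are assumptions.

MAX_COMMAND_AGE_S  = 0.05    # reject anything older than a few hundredths of a second
MAX_STEER_DEG_FAST = 5.0     # made-up cap on steering angle at highway speed

def accept_steering_command(cmd_timestamp, steer_angle_deg, speed_mph, system_healthy):
    if not system_healthy:
        return False                                    # never act for a faulted system
    if time.time() - cmd_timestamp > MAX_COMMAND_AGE_S:
        return False                                    # a 2.5-minute-old command dies here
    if speed_mph > 55 and abs(steer_angle_deg) > MAX_STEER_DEG_FAST:
        return False                                    # no violent turns at highway speed
    return True

# A stale, hard-left command at 65 mph is rejected on two counts:
print(accept_steering_command(time.time() - 150, steer_angle_deg=30,
                              speed_mph=65, system_healthy=True))
```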
 
^^^ Good Grief!

Bad, bad, bad programmers. These guys don't even deserve to write an app for a smartphone, let alone for a self-driving semi-trailer. Their mistakes could have killed dozens of motorists, or more.

Off with their heads!

PS. The article quoted researchers at Carnegie Mellon University. As I mentioned, CMU was the birthplace of self-driving technology. Graduates of this school went on to lead SDC programs at many current commercial endeavors.
 
I was recently driving through San Francisco at midnight. A car pulled up along side at the lights. I looked over and there was no one in the car. No driver. No passengers.

Just a car out for a late drive by itself.

We shared the road for a few blocks before its life plan took it right.

The only thing that would have felt stranger would be a dog in the driving seat.
 

Was it this car?

GM Cruise recently obtained permits to operate true self-driving taxis in SF without a safety driver. And so far it's only for night operation.

From a Reuters article on June 2, 2022:

OAKLAND, Calif., June 2 (Reuters) - General Motors Co's (GM.N) Cruise on Thursday became the first company to secure a permit to charge for self-driving car rides in San Francisco, after it overcame objections by city officials.

Cars will be limited to a maximum speed of 30 miles per hour (48 km per hour), a geographic area that avoids downtown and the hours of 10 p.m. to 6 a.m. They will not be allowed on highways or at times of heavy fog, precipitation or smoke.


gm-cruise-driverless-taxi.jpg
 
A few days ago I got to experience my Cadillac CT6 Supercruise.
After getting on a limited-access highway which I knew had been lidar mapped, I engaged the gizmo, as per the instructions in the manual, centering the car in the lane. The green light came on in the steering wheel and the car cruised along fine with no hands on the wheel, maintaining lane centering at all times, remarkably precisely, at the preset cruise speed. At a few exits the wheel moved a tiny bit, but it maintained lane centering.

Played with changing cruising speed. No issues. Played with turning my head to find out if it will disengage. After several seconds, the green light in the wheel starts flashing and the driver seat vibrates, thus requiring a forward-looking head attitude. Per the manual, if you ignore the flashing green, it will flash red, and shortly after that the car will brake to a stop.
The system requires manual lane changes, during which the steering wheel light will flash blue. Upon re-centering it will give solid green and resume centered cruising.
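The attention handling sounds like a simple escalation state machine. Here is my guess at it in code, based only on the behavior described above; the state names and timings are assumptions, not GM's actual values.

```python
# Rough guess at the Super Cruise attention escalation, based only on the
# behavior described above; states, timings, and transitions are assumptions.

def next_state(state, seconds_eyes_off_road):
    if seconds_eyes_off_road == 0:
        return "GREEN_SOLID"        # attentive driver: normal hands-free cruising
    if state == "GREEN_SOLID" and seconds_eyes_off_road > 4:
        return "GREEN_FLASH"        # flash the wheel light and vibrate the seat
    if state == "GREEN_FLASH" and seconds_eyes_off_road > 8:
        return "RED_FLASH"          # final warning
    if state == "RED_FLASH" and seconds_eyes_off_road > 12:
        return "BRAKE_TO_STOP"      # give up and bring the car to a controlled stop
    return state

# A driver who keeps looking away walks through the whole escalation:
state = "GREEN_SOLID"
for t in (0, 5, 9, 13):
    state = next_state(state, t)
    print(t, state)
```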

IMHO it is a nice break on a long drive, though I think it creates extra tension in monitoring the system.
I prefer handling the wheel, thus controlling the car. I do like cruise control with automatic following distance and automatic braking. If I set the cruise speed slightly greater than the car ahead, it will maintain distance; the annoying part is that if the car ahead slows down a good bit, my car will too. Then I will usually go around and resume my preferred speed.
The car has a huge collection of driver assist stuff, but that is for other threads.
 

Yeah that would be the one. Exactly like that...empty ghost car
 

So, when the car centers itself in the lane, does it avoid potholes? Where I live, it would be a disaster to center your car in the lane at all times. Good way to break an axle. YMMV
 

What sensors can an SDC use to detect potholes?

Tesla's FSD cannot detect potholes, and I believe the camera resolution is not sufficient for that, meaning it's not just the software.

Waymo's lidars have sub-inch resolution and certainly can detect potholes. Whether their software can make use of all the info from the sensors, we don't know.
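For what it's worth, with a dense point cloud the detection step itself is not exotic. A toy version (thresholds invented by me, and nothing to do with Waymo's actual software) just flags points that dip well below the estimated road surface:

```python
import numpy as np

# Toy pothole check on a lidar point cloud: estimate the local road level, then
# flag points that sit well below it. Thresholds are invented; a real pipeline
# also has to handle plane fitting, occlusion, reflectance, moving objects, etc.

def find_pothole_points(points, depth_threshold_m=0.04):
    """points: (N, 3) array of x, y, z returns in the road region."""
    z = points[:, 2]
    road_level = np.median(z)                    # crude stand-in for a plane fit
    below = z < road_level - depth_threshold_m   # ~4 cm below the road surface
    return points[below]

# Fake data: flat road at z = 0 with a 6 cm deep dip
pts = np.zeros((1000, 3))
pts[:, :2] = np.random.rand(1000, 2) * 10
pts[100:120, 2] = -0.06
print(len(find_pothole_points(pts)), "suspect points")
```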
 
^^^^^ I doubt it would steer around potholes. Though for typical bumps and small holes, the magnetic ride's forward-looking radar compensates by softening the shocks. It calculates the time to bumps, etc., based on speed.
Maybe, if I brave speeding tickets, I could use David Letterman's method: do 100 MPH and just skim over the potholes.

Limited-access highways, even in PA where roads are generally crappy, are well maintained. In the past 14 years I have not found potholes on limited-access highways. These would be interstates.
The system will not allow self-driving (steering) on regular roads or even partially limited-access ones. It uses lidar-mapped roads; the maps are stored on 2 hard drives in the trunk.
 

If the car cannot dodge potholes, I don't want it. It won't last long on our roads. YMMV
 