Let's talk Self Driving Cars again!

The dumb thing about the AI/ML is that there's no caching or any concept of the car learning the roads and reapplying previously solved problems.

For instance, if you take the same route to the office or a store, the SDCs, at least the Teslas, are re-scanning and re-calculating each route, trying to recognize the road, signs, etc.

So where's the learning, the intelligence?

It's learning but then immediately forgetting?

They're probably talking about ML in the aggregate: as it combines all the data from the cars in the cloud, it improves at recognizing objects, detecting road paths, etc.?

They won't share what kind of milestones they've met or other measurables they've reached.

Presumably they will present data to NHTSA showing lower accident rates per 100k hours or something like that, to prove it's better than human drivers.
 
I don't see me ever buying a fully self-driving car. My 2021 Highlander has all of Toyota's advanced driver assist stuff and I leave it all activated. I set the cruise control to the dynamic mode that adjusts your speed based on the vehicle in front of you. But when it comes to slowing down in heavier traffic, I can't fully let go of putting my own foot on the brake to stop from hitting that car in front of me. I just can't do it. My trust just isn't there with the automation. I'm generally pretty hip with new technology. But not with self-driving cars.

I use cruise control even on regular non-highway roads.

It will brake aggressively if someone changes lanes ahead of me and slows down or stops.

I often wonder if it will brake as I come upon cars stopped in front of me at a red light.
 
Here's an interesting review of an Israeli company working on FSD. They have programmed the AI to "anticipate" what other drivers and pedestrians will do. Looks impressive, but I still think getting something to work in snow, rain, etc. is just beyond anything that can trickle down to the masses in the foreseeable future. If only the very rich can afford it, it doesn't help much overall, especially if it takes decades to even get to the rich. In the meantime (or forever), driver assistance makes so much more sense (as it always has, IMO).


-ERD50


I have shared some videos from Mobileye before. This Israeli company did the first autopilot for the Tesla S. After the fatal Florida accident in 2016, when Joshua Brown's Tesla S drove under a semi-trailer, shearing off the car down to the top of the doors, Mobileye and Tesla had a falling-out.


Mobileye is perhaps best known for sparring publicly with Tesla in 2016 over the use of its driver-assist technology following a fatal crash that grabbed headlines and spurred a federal investigation. The companies had worked together on the early version of Tesla’s driver-assist technology, Autopilot, but Mobileye felt Tesla hadn’t incorporated the technology safely, it said at the time of the dispute.

“There is much at stake here, to Mobileye’s reputation and to the industry at large,” Mobileye said then.


Tesla started doing its own autopilot, and Mobileye got bought out by Intel. Mobileye's camera/processor is used in many car driver assistance systems on the road today. Recently, Intel has spun off Mobileye as a public company again.

I had read about Mobileye's methodology of "dissimilar processing" in order to build a system safe enough to have a car with no steering wheel. This thinking is not new. Back in 1980, there was talk of building aircraft autopilots with "dissimilar processing". Aircraft autopilots with Autoland capability had always had multiple channels, typically triplex to quadruplex, for redundancy to safeguard against hardware failure. Dissimilar processing, using at least two different engineering teams to build the dissimilar channels, adds protection against generic CPU failures as well as software bugs. The basic idea here is that two different programmers may each make mistakes, but they will not make identical mistakes.

In the case of Mobileye, the dissimilar channels also have dissimilar sensors. This means that the channels will have dissimilar inputs, and to merge their outputs into a consensus to drive the car is going to be quite a challenge.
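As a rough illustration of the redundant-channel idea (this is not Mobileye's actual design; the channel outputs and the disagreement threshold here are made up), a consensus voter might look something like this:

```python
# Sketch of a consensus voter for dissimilar redundant channels.
# The outputs and thresholds are hypothetical illustrations only.

from statistics import median

def vote(channel_outputs, max_disagreement=0.5):
    """Merge steering commands (in degrees) from independent channels.

    Returns (command, healthy). If any channel disagrees with the
    median by more than max_disagreement, a fault is flagged so the
    car can fall back to a safe stop.
    """
    m = median(channel_outputs)
    healthy = all(abs(c - m) <= max_disagreement for c in channel_outputs)
    return m, healthy

# Three dissimilar channels (say camera-, lidar-, and radar-based)
# each produce a steering command:
print(vote([2.0, 2.1, 1.9]))    # channels agree
print(vote([2.0, 2.1, 10.0]))   # one channel diverges, fault flagged
```

The median makes a single runaway channel harmless, which is the same reason triplex aircraft autopilots vote rather than average.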

What I like about Mobileye is that the company clearly defines the problems that they have to solve in order to build an SDC. You may not agree with the way they solve the problem, but the first step is always to identify the difficulties. You cannot design a solution to a problem that you do not even understand.

A few years ago, watching presentations by Waymo, I was also impressed by the company's understanding of the problems that they still had to solve. In other words, they did not sound like an arrogant ignoramus.
 
The dumb thing about the AI/ML is that there's no caching or any concept of the car learning the roads and reapplying previously solved problems.

For instance, if you take the same route to the office or a store, the SDCs, at least the Teslas, are re-scanning and re-calculating each route, trying to recognize the road, signs, etc.

So where's the learning, the intelligence?

It's learning but then immediately forgetting?

They're probably talking about ML in the aggregate: as it combines all the data from the cars in the cloud, it improves at recognizing objects, detecting road paths, etc.?

They won't share what kind of milestones they've met or other measurables they've reached.

Presumably they will present data to NHTSA showing lower accident rates per 100k hours or something like that, to prove it's better than human drivers.

The above is a major misunderstanding of Tesla's autopilot. Musk keeps talking about "fleet learning", but that is really about collecting a lot of "edge cases" using many test vehicles. The company then compiles these cases to tune the software for the next release.

Many ignorant Tesla owners think that their cars will learn as they drive, and the cars then share the knowledge over the Web. Nope. The AI technology is not to that level yet. :)

Karpathy, the recent former director of AI at Tesla, finally put the above myth to rest when he said that every time a Tesla car drives a road, it drives that road for the 1st time. If it ran over a pothole yesterday, it will run over the same pothole today. It does not have the memory of a human driver.

To this day, I still see ignorant Tesla owners believing their cars will learn to avoid making the same mistake. Nope. Not until the software is updated to add new "smarts".

PS. I like Karpathy a lot, just from watching his presentation. This guy is a real engineer, and does not BS like a salesman. He left Tesla earlier this year.
 
That's the problem, it doesn't cache.

It literally doesn't map the roads so it's discovering the same roads over and over.

Waste of computing resources.
 
^^Before I retired (maybe 2012), we were working on a joint venture with Nokia and other companies using remote sensing technology to gather roadway sign data. They had technology available then where their Google-type car cameras could recognize signs by their shape, size, and other characteristics, i.e., a speed limit sign of a specific size with "Speed Limit" printed on it in a standard font and size, etc. So they could recognize a sign if it met a standard parameter set. At that time, I don't believe they were doing anything with that ability, but it doesn't surprise me that they have developed means by which to control cars now by using the data.
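Just to illustrate the idea of matching a detected sign against a standard parameter set (the sign classes, dimensions, and tolerance below are invented for illustration, not the actual Nokia system):

```python
# Toy classifier: match a detected sign against standard parameter sets
# (shape, size, legend). All specs here are hypothetical.

STANDARD_SIGNS = {
    "speed_limit": {"shape": "rectangle", "width_in": 24, "height_in": 30,
                    "legend": "SPEED LIMIT"},
    "stop":        {"shape": "octagon",   "width_in": 30, "height_in": 30,
                    "legend": "STOP"},
}

def classify_sign(shape, width_in, height_in, legend, tolerance_in=2):
    """Return the sign class whose standard parameters match, else None."""
    for name, spec in STANDARD_SIGNS.items():
        if (shape == spec["shape"]
                and abs(width_in - spec["width_in"]) <= tolerance_in
                and abs(height_in - spec["height_in"]) <= tolerance_in
                and legend.upper().startswith(spec["legend"])):
            return name
    return None

print(classify_sign("rectangle", 24, 30, "Speed Limit 55"))  # speed_limit
```

The point is that a sign within tolerance of a standard spec can be recognized without any learned model at all.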

As well as a reversing camera and a front view camera behind the rear view mirror, our Leaf also has a front view camera hidden in the Nissan badge on the front grille, plus another camera high up on the tailgate, sheltered from the weather and looking to the rear.

The lower front camera looks for pedestrians and also comes on when parking so you can see in front of the car as it approaches the wall or whatever you are trying to get close to. I had become accustomed to the aerial-type view showing the silhouette of the car with proximity sensors and sounds showing how close you are to your surroundings. With the Leaf it is very similar but looks as if a drone is hovering above the car, because you see the actual car and also the actual surroundings that you are close to. Some really clever software is taking the video from the cameras and creating that image from above.

The high-position camera at the top of the tailgate can be used instead of the rear view mirror, and I actually prefer this to using the mirror as a mirror. In a regular car you have the option of “dipping” the mirror with a lever to stop cars behind dazzling you with their headlights, and in the Leaf that same action provides the image from the high rear view camera in the mirror. At night the headlights of the cars behind are blurred to avoid any dazzle, but it mostly means that your view behind is not obstructed by objects in the car or by rain or mist on the rear window. I love it, particularly at this time of year when we do a lot more driving in the dark and in wet conditions. (We just returned from a week driving to and around some parts of Scotland.)
 
As well as a reversing camera and a front view camera behind the rear view mirror, our Leaf also has a front view camera hidden in the Nissan badge on the front grille, plus another camera high up on the tailgate, sheltered from the weather and looking to the rear.

The lower front camera looks for pedestrians and also comes on when parking so you can see in front of the car as it approaches the wall or whatever you are trying to get close to. I had become accustomed to the aerial-type view showing the silhouette of the car with proximity sensors and sounds showing how close you are to your surroundings. With the Leaf it is very similar but looks as if a drone is hovering above the car, because you see the actual car and also the actual surroundings that you are close to. Some really clever software is taking the video from the cameras and creating that image from above.

The high-position camera at the top of the tailgate can be used instead of the rear view mirror, and I actually prefer this to using the mirror as a mirror. In a regular car you have the option of “dipping” the mirror with a lever to stop cars behind dazzling you with their headlights, and in the Leaf that same action provides the image from the high rear view camera in the mirror. At night the headlights of the cars behind are blurred to avoid any dazzle, but it mostly means that your view behind is not obstructed by objects in the car or by rain or mist on the rear window. I love it, particularly at this time of year when we do a lot more driving in the dark and in wet conditions. (We just returned from a week driving to and around some parts of Scotland.)

Nice set of cameras. My Jeep Wrangler and Ford F150 only have the single back up cameras. My cameras get dirty and get covered with snow and ice in the winter.

What happens to a self driving car that depends on cameras to drive when their cameras get covered up?
 
Nice set of cameras. My Jeep Wrangler and Ford F150 only have the single back up cameras. My cameras get dirty and get covered with snow and ice in the winter.

What happens to a self driving car that depends on cameras to drive when their cameras get covered up?

I expect they won’t work. The second year we had our Prius, we had to drive to Scotland in January for my wife’s BIL’s funeral. A lot of folks were traveling and some of us booked into a B&B close by. The day before we were due to drive up, heavy snow was forecast, so we went up a day early in light snow. Next morning we got in the car, which had all the previous day’s wet road muck on it plus a layer of snow sticking to it, and on power-up a warning appeared saying that multiple sensors were blocked, so the proximity alarms, emergency braking and adaptive cruise control were all inoperative. I cleaned the dirt off all the sensors to fix the problem.

If we had actually driven in conditions with slush being thrown up and sticking to the car then I expect we would have received those warnings while driving and either had to continue without those safety features or pull off and clean off the gunge.
 
That's the problem, it doesn't cache.

It literally doesn't map the roads so it's discovering the same roads over and over.

Waste of computing resources.


The car does not have enough memory to store all the information it encounters.

How do humans do it? :) We don't have unlimited memory either. We try to remember what is important, and let go of things we don't need to retain.

Tesla used to talk about crowd-sourced mapping. The idea is that each car will report the road condition up to the cloud to share with the world. Then, Musk dropped this idea back a few years ago.

Mobileye is still pursuing and implementing the crowd-sourced idea. See video that ERD50 linked above.
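For a rough idea of how crowd-sourced mapping could aggregate reports (the report format, segment IDs, and threshold below are my own invention, not Mobileye's actual protocol), the cloud side might only accept a road condition once enough independent cars agree:

```python
# Sketch of crowd-sourced map aggregation: accept a condition for a
# road segment only after enough independent reports agree. All names
# and thresholds are hypothetical.

from collections import Counter, defaultdict

class CrowdMap:
    def __init__(self, min_reports=3):
        self.reports = defaultdict(Counter)  # segment_id -> condition counts
        self.min_reports = min_reports

    def report(self, segment_id, condition):
        self.reports[segment_id][condition] += 1

    def consensus(self, segment_id):
        counts = self.reports[segment_id]
        if not counts:
            return None
        condition, n = counts.most_common(1)[0]
        return condition if n >= self.min_reports else None

m = CrowdMap()
for _ in range(3):
    m.report("I-80/mile-42", "pothole")
m.report("I-80/mile-42", "clear")     # one dissenting report
print(m.consensus("I-80/mile-42"))    # pothole
```

Requiring agreement from multiple independent cars also gives some natural resistance to a single bogus report.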
 
As well as a reversing camera and a front view camera behind the rear view mirror, our Leaf also has a front view camera hidden in the Nissan badge on the front grille, plus another camera high up on the tailgate, sheltered from the weather and looking to the rear.

The lower front camera looks for pedestrians and also comes on when parking so you can see in front of the car as it approaches the wall or whatever you are trying to get close to. I had become accustomed to the aerial-type view showing the silhouette of the car with proximity sensors and sounds showing how close you are to your surroundings.

The rental Opel I drove had this look-down view of the car and the surrounding. Makes backing up in tight quarters easy. I loved it.
 
Nice set of cameras. My Jeep Wrangler and Ford F150 only have the single back up cameras. My cameras get dirty and get covered with snow and ice in the winter.

What happens to a self driving car that depends on cameras to drive when their cameras get covered up?

Years ago, Google showed how their roof-mounted camera/lidar dome had a wiper.

The Mobileye video linked by ERD50 shows nozzles blowing cleaning fluid to clean the camera lens, then blowing it dry.

In the case a sensor is impaired, I expect the car to pull over and limp to a stop using the remaining redundant sensors. And you have to have redundant sensors. For example, two forward-looking cameras are typically used. One with a wide-angle view, and one more narrow. A car can limp to a stop with just one camera working.

You have to design for cases like the above if you want to build a Level 4/5 SDC. And test for them.
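A toy sketch of that kind of degradation logic (the states and sensor names here are hypothetical, not any vendor's actual design): with both forward cameras healthy the car drives normally, with one lost it limps to a stop, and with none it must already be stopping.

```python
# Hypothetical sensor-degradation logic for a Level 4/5 SDC:
# decide the driving mode from the health of the two forward cameras.

def drive_mode(wide_cam_ok, narrow_cam_ok):
    if wide_cam_ok and narrow_cam_ok:
        return "normal"
    if wide_cam_ok or narrow_cam_ok:
        return "limp_to_stop"    # pull over using the remaining camera
    return "emergency_stop"      # no forward vision left

print(drive_mode(True, True))    # normal
print(drive_mode(True, False))   # limp_to_stop
```

The real design space is of course much larger (every sensor and actuator needs a fallback), but each failure combination has to map to a defined safe behavior like this, and each mapping has to be tested.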
 
...If we had actually driven in conditions with slush being thrown up and sticking to the car then I expect we would have received those warnings while driving and either had to continue without those safety features or pull off and clean off the gunge.

.....In the case a sensor is impaired, I expect the car to pull over and limp to a stop using the remaining redundant sensors. And you have to have redundant sensors. For example, two forward-looking cameras are typically used. One with a wide-angle view, and one more narrow. A car can limp to a stop with just one camera working.....

So - in the case where a sensor is impaired, will the car limp to a stop automatically? Or can you continue to drive in a manual mode with self driving features disabled?
 
So - in the case where a sensor is impaired, will the car limp to a stop automatically? Or can you continue to drive in a manual mode with self driving features disabled?

In our Leaf, which has driver assist but is not self driving, then yes, you can continue to drive in manual mode, and you can choose to be always in manual mode. Whenever the driver assist mode cannot operate while you are using it, it bings at you and the driver assist icon goes gray. If you are in driver assist and take your hands off the wheel and don’t put them back, then the car will slow down and pull over safely, although I haven’t yet tried that in person :)
 
Tesla used to talk about crowd-sourced mapping. The idea is that each car will report the road condition up to the cloud to share with the world. Then, Musk dropped this idea back a few years ago.


How would they secure crowd sourced mapping to ensure that bad guys don't mess with it?
 
So - in the case where a sensor is impaired, will the car limp to a stop automatically? Or can you continue to drive in a manual mode with self driving features disabled?

Yes, an SDC of Level 4/5 will have to be able to limp to a stop. That's by definition.

About a human taking over, it depends on whether the car has a steering wheel or not. Recall how the SDC enthusiasts talk about removing the steering wheel and pedals.

And then, it's awkward for a passenger of a robot taxi to climb into the driver's seat to drive himself. Talk about liability. :)
 
Yes, an SDC of Level 4/5 will have to be able to limp to a stop. That's by definition.

About a human taking over, it depends on whether the car has a steering wheel or not. Recall how the SDC enthusiasts talk about removing the steering wheel and pedals.

And then, it's awkward for a passenger of a robot taxi to climb into the driver's seat to drive himself. Talk about liability. :)

No way will I ever get a car without a steering wheel.
 
No way will I ever get a car without a steering wheel.

Will you get into a robot taxi, even if it has a steering wheel? :)

If it is going to act up, can you climb into the driver's seat to steer or brake soon enough?

It's about whether the SDC car is properly designed to be safe enough. If it is not, there will be a lot of Joshua Browns.

(Image: Tesla crash in Florida, via Reuters)
 
Will you get into a robot taxi, even if it has a steering wheel? :)

If it is going to act up, can you climb into the driver's seat to steer or brake soon enough?

Just rip the driver out and take over the wheel (or joystick). It worked for Arnold/Quaid in Total Recall. :)
 

The car does not have enough memory to store all the information it encounters.

How do humans do it? :) We don't have unlimited memory either. We try to remember what is important, and let go of things we don't need to retain.

Tesla used to talk about crowd-sourced mapping. The idea is that each car will report the road condition up to the cloud to share with the world. Then, Musk dropped this idea back a few years ago.

Mobileye is still pursuing and implementing the crowd-sourced idea. See video that ERD50 linked above.

They could clear the cache by evicting the least recently used tiles.

Instead of local storage they seem to be uploading tons of location data over mobile networks.
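A least-recently-used tile cache like that is straightforward to sketch (the tile names and capacity here are made up for illustration):

```python
# Sketch of an LRU cache for local map tiles: when the cache is full,
# evict the tile that was touched longest ago.

from collections import OrderedDict

class TileCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.tiles = OrderedDict()

    def get(self, tile_id):
        if tile_id not in self.tiles:
            return None
        self.tiles.move_to_end(tile_id)     # mark as recently used
        return self.tiles[tile_id]

    def put(self, tile_id, data):
        if tile_id in self.tiles:
            self.tiles.move_to_end(tile_id)
        self.tiles[tile_id] = data
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)  # evict least recently used

c = TileCache(capacity=2)
c.put("home", "tile A")
c.put("office", "tile B")
c.get("home")                 # touch "home" so it stays fresh
c.put("store", "tile C")      # evicts "office", the least recently used
print(list(c.tiles))          # ['home', 'store']
```

The route you drive daily stays cached while roads you visited once age out, which is the behavior the post is asking for.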
 
They could clear the cache by evicting the least recently used tiles.

Instead of local storage they seem to be uploading tons of location data over mobile networks.

I am not an SDC developer, but suspect that things are not so simple.

If a car encounters a stopped vehicle, whether that is a delivery truck or a broken-down car, and has to swerve around it, does it help to retain that memory for the drive over the same road tomorrow?

If a section of the road is flooded because of rain today, does it help to remember it for tomorrow? Probably not, but perhaps for the next time it rains hard, which could be next year.

There are all kinds of situations that a human has no problems with, but to program a computer to do the same ain't easy.

One should read what Karpathy, the former AI director at Tesla, says about building a humanoid robot to do human activities. The difficulties he describes should be obvious, but people often have to be reminded of them. I suspect that was why he couldn't hack it anymore at Tesla, and had to quit. :)
 
Tesla used to talk about crowd-sourced mapping. The idea is that each car will report the road condition up to the cloud to share with the world. Then, Musk dropped this idea back a few years ago.

Isn't that what Waze does? Not mapping, but for traffic, road hazards, and police presence? Seems to work pretty well.
 
Yes. But wait… Do EVs have gas and brake pedals?

EVs and SDCs are not synonymous.

The Waymo SDC prototypes are hybrid cars. So are prototypes by many other developers. They don't want the burden of having to recharge these test cars, on top of tweaking the SDC hardware and software.
 