Let's talk Self Driving Cars again!

Anecdotal info - A friend's son is an engineer working on self-driving cars for a major corporation. He reports that there was quite a bit of initial progress but getting the rest of the way there has been very discouraging. Apparently a very big issue is that society is willing to allow a level of error from individual drivers - lawsuits, if any, are limited to those involved. An error by a self driving car exposes the whole corporation to legal action. Consequently, self driving parameters need to be set on the "safe side", which makes the experience of riding in a self-driving car frustrating for occupants - 10 mph on residential streets.
I’m not sure I agree that relative safety will ultimately block adoption of level 5 cars IF they prove to prevent 90% of accidents, as was the original mission of many self driving car programs.

I think “getting the rest of the way” is the overriding barrier, and doing so cost-effectively. I still believe we’ll have level 5 cars, but it’s going to take much longer to handle those last details. No matter how good an autonomous car’s vision and “intelligence” are, there’s no end to the variations in environment (missing/vandalized road markings, signs, etc., construction zones) and other moving objects (cars and people acting unpredictably). It’s great that AI will almost immediately help other cars deal with new situations, but there will be some failures when each new situation is first encountered.

Ironically, we’d probably get there quicker if all cars were replaced at once so all cars would be predictable - though that won’t happen. The period when human-operated and computer-operated cars coexist on roadways will be more difficult.
 
Anecdotal info - A friend's son is an engineer working on self-driving cars for a major corporation. He reports that there was quite a bit of initial progress but getting the rest of the way there has been very discouraging. Apparently a very big issue is that society is willing to allow a level of error from individual drivers - lawsuits, if any, are limited to those involved. An error by a self driving car exposes the whole corporation to legal action.
It's going to be very hard to implement fully self-driving vehicles that aren't as good as the most capable individual drivers--probably the best 5%, say career professional drivers with those "million-mile" safety awards. That's the point at which it gets hard to argue that a hands-on driver would have done better than the automation in a particular instance.
 
It's going to be very hard to implement fully self-driving vehicles that aren't as good as the most capable individual drivers--probably the best 5%, say career professional drivers with those "million-mile" safety awards. That's the point at which it gets hard to argue that a hands-on driver would have done better than the automation in a particular instance.
Sorry to disagree. If FSD vehicles can reduce accidents and fatalities by 80-90%, it’ll be very hard to stop adoption of FSD if they’re affordable. Whether or not that matches the “best 5%” (how would you measure that?) wouldn’t matter with that kind of reduction. Insurance for human driven cars will become prohibitively expensive if/when FSD cars are dramatically safer, again if they’re affordable. And human driven cars will probably be prohibited in high traffic areas. I’m talking decades from today, not soon.
 
It's going to be very hard to implement fully self-driving vehicles that aren't as good as the most capable individual drivers--probably the best 5%, say career professional drivers with those "million-mile" safety awards. That's the point at which it gets hard to argue that a hands-on driver would have done better than the automation in a particular instance.

But it will be better than the other 95%, of whom at least 50% would self-identify as top 5'ers... And if everyone's level were raised, then the need for the 5% drops - one needn't be an expert if there are no twits to avoid.
 
I was banned from Facebook once for 24 hours for posting the wrong response on an issue even though it was factually accurate. It's a known and proven fact that Google (Waymo) already blocks factual information that they deem inappropriate. Just wait until they are able to stop people from driving.

As Koolau says "Anything which can be used can be misused. Anything which can be misused will be."
 
But it is not certain that Waymo or the Chinese will have the first true robot cars.

There are so many companies working on this, I suspect the development will be like with the Covid vaccines. There will be several solutions presented, and there is no monopoly in this technology.
 
But it is not certain that Waymo or the Chinese will have the first true robot cars.

There are so many companies working on this, I suspect the development will be like with the Covid vaccines. There will be several solutions presented, and there is no monopoly in this technology.

It's a guarantee that whatever system is implemented, some people will receive preferential treatment and others will have their privileges reduced or revoked. And no one will be able to do anything about it.
 
It's a guarantee that whatever system is implemented, some people will receive preferential treatment and others will have their privileges reduced or revoked. And no one will be able to do anything about it.


Society will evolve with time. Right now, nearly all large cities have set aside large city blocks where car traffic is banned or severely restricted. You can say pedestrians are given preferential treatment over car drivers.

Where are we going with this robotcar? I suspect no impact for a few decades, simply because nobody has demonstrated a workable and affordable one.

The banning of ICE cars to make way for EVs is something more immediate and worth debating.
 
The banning of ICE cars to make way for EVs is something more immediate and worth debating.

Agreed. As long as the full impact of EVs is covered. Questions about source material, how it is sourced, the grid, disposal, and other things are currently downplayed or ignored.

There is no such thing as a zero emission vehicle. That term should be banned outright.
 
Here's a "fun" incidence with Waymo Robotaxi.

The car was exiting a residential area and about to make a right turn onto a larger street when it encountered a benign road construction site with traffic cones blocking the inner lane. It did not know what to do, and had to call home for assistance. Then, before human assistance arrived, it decided to move on its own, and correctly chose the outer lane. But then, it got stuck again when it tried to go to the inner lane. Then, it so happened that the construction crew removed the cones, and the car decided to move again once it was free of the dilemma. Then, it encountered traffic cones again further down the road, and got confused once more, so it stopped in the middle of the road.

It's interesting to see how AI (Artificial Intelligence) may still have problems resolving some ambiguous situations that humans have no problems dealing with.

The interesting episode starts at 10:50.




I don’t see it that way at all. The car created a major hazard by partially blocking both lanes of a two-lane road. The Waymo operator was overly casual and said multiple times “roadside assistance has arrived” in error. The inconsistent response when the car suddenly decided to go, only to stop again in an even more precarious spot, would be scary to a novice rider. Human drivers can figure out how to navigate around an unexpected situation. Imagine if a second robot car encounters the first car frozen and blocking both lanes. They have a long long way to go and that’s without the challenges of our crappy weather and potholes.
 
The interesting thing about automated driving is that a vehicle you would put a 12-year-old in alone at night to send to a friend's house looks further away than it did five years ago.

At the same time, there are a whole range of assistance systems for driving that are reducing crashes right now, with more coming every year. Fixed route automated vehicles (like electric shuttles) don't seem to be far away.

Perhaps full automation of driving isn't as important as we believe?
 
... They have a long long way to go and that’s without the challenges of our crappy weather and potholes.


Yes. Real life experience is a far cry from a conceptual design. And that's why Waymo tests and tests. I give them credit for that.

This YouTuber has taken a lot of rides on experimental Waymo robotaxis, and as far as I know this was his first ride with an "interesting" incident.

One can easily see why Waymo selected this most benign town for this trial. And the cars have been driving around here with a test driver onboard for many years, starting back in 2016, before getting to the point of taking passengers on a trial basis.

As I mentioned earlier, every time I go out on an errand, I see Waymo robotcars more than once on a trip. And I still have not encountered a driverless car. It appears that they are still limiting the number of driverless rides with passengers.

Who knows how much money Google has spent on this effort. Perhaps management's patience is running thin at Google (the parent corp), and that causes the management turnover at Waymo.

This thing is not simple. It's harder than rocket science.
 
Besides Waymo, another company that shows good capabilities on its test cars is Cruise Automation, which was acquired by GM in 2016. It's based in San Francisco, and has been conducting drive tests there.

Here's a video showing a Cruise car navigating the narrow streets of SF. There's a more interesting traffic scene with other cars and pedestrians starting at 7:50.

Of course, this video is posted by the company itself, so we will not see anything other than flawless performance. The problem with AI software is that we cannot be sure how it will react under an unusual situation, so just one video does not prove that the system is ready to go.

 
And here's a photo of the Cruise prototype car. The photo was taken in 2017, and I have seen different variations of the configuration. Same as Waymo, they have been tweaking not just the software, but also the hardware and the sensors, the latter being the only thing that we can observe in photos of the car exterior.

Note the side-looking lidars on the roof, looking down to the sides of the car. Waymo has its side lidars mounted lower on the fenders, and can look down right next to the car.

Waymo even has a chin-mounted lidar on the front grill. This would allow Waymo cars to detect a parking curb, or an obstacle, a dog, or a person lying right in front of the car and not observable by a windshield camera or roof-mounted lidar.
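As a rough illustration of why sensor placement matters, here's a toy similar-triangles estimate (in Python) of how much road directly ahead a high-mounted sensor can't see compared to a bumper-height one. All the dimensions and the function name are made up for the example; they're not from any actual Waymo or Cruise specification.

```python
# Toy estimate of the ground "blind zone" in front of the car for a
# high-mounted sensor, illustrating why a low chin-mounted lidar helps.
# All dimensions are invented, round numbers.

def ground_blind_zone(sensor_height_m, setback_m, hood_height_m):
    """Distance past the hood edge at which a sensor mounted
    sensor_height_m above the ground, setback_m behind the hood edge,
    first sees the road surface while looking over a hood that is
    hood_height_m tall (simple similar triangles)."""
    return setback_m * hood_height_m / (sensor_height_m - hood_height_m)

# Roof-mounted sensor: 1.6 m high, 2.0 m behind the hood edge, 1.0 m hood.
print(round(ground_blind_zone(1.6, 2.0, 1.0), 2))  # ~3.33 m of hidden road
# Chin-mounted lidar at the bumper: 0.4 m high, nothing in the way.
print(round(ground_blind_zone(0.4, 0.0, 0.0), 2))  # 0.0 m
```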

Are these numerous sensors necessary? What would happen if you don't have them? Wait for my next post.

[Attached photo: gm-cruise-6.jpg]
 
Compared to Waymo and Cruise cars, which have lidars looking all around the car in addition to vision cameras and radars, Tesla uses only vision cameras and sonar sensors to perceive the car's surroundings. How well does that work?

In this video, the Tesla owner regretted using the car's "Summon" mode via her smartphone to command the car to drive out of the garage. The car did not "see" the garage side jamb and ran into it. This video is dated 9/30/2019.

What's strange is that the car could have just driven straight out of the garage. Instead, you could see it kept steering right/left, not making up its mind where to go.


Here's another incident, in two videos dated 3/18/2021, so it's quite recent. The car owner was using the "Summon" mode to command the car to go forward out of its parking spot. He also regretted it.

The videos were taken from the car cameras themselves, so they are what the car saw. One video is the forward-looking camera, and the 2nd one is the side camera, on the right side where the car turned right into a column and scraped it.


 
This thing is not simple. It's harder than rocket science.



I listened carefully for the rider to identify the location and he finally mentioned Chandler, AZ. I saw the (Waymo?) test cars when I was in Scottsdale in Jan 2018 riding around with human test drivers but no public riders. They were using Volvos, not Chrysler products. It occurred to me then how relatively “perfect” the streets are in that valley. Long, wide, straight. Not much traffic. No potholes and not much weather. A short while later they had the fatality with the woman walking her bike across the road in the dark. The test driver was caught looking down at her phone just prior. The engineers had disabled the emergency braking that was standard on the Volvo.
 
I listened carefully for the rider to identify the location and he finally mentioned Chandler, AZ. I saw the (Waymo?) test cars when I was in Scottsdale in Jan 2018 riding around with human test drivers but no public riders. They were using Volvos, not Chrysler products. It occurred to me then how relatively “perfect” the streets are in that valley. Long, wide, straight. Not much traffic. No potholes and not much weather. A short while later they had the fatality with the woman walking her bike across the road in the dark. The test driver was caught looking down at her phone just prior. The engineers had disabled the emergency braking that was standard on the Volvo.
I think you’re mistakenly conflating Waymo (Chryslers and a few Jaguars?) and Uber (Volvo, since discontinued), very different programs and capabilities. There hasn’t been a Waymo fatality IIRC and the short lived Uber SD program was very poorly conceived, they sold the business after the fatality.
 
The Volvo car accident caused by a negligent Uber test driver was covered earlier in the thread.

Here's one of my earlier posts.

There has been only one fatal accident involving an experimental SDC car with a test driver on board. It was a test car by Uber, and the accident happened in Tempe, AZ, about 10 miles from my house. Yes, Uber also chose to test its cars here, along with Waymo.

The test driver was an employee of Uber. The accident happened in March 2018 at night, when the Uber test car hit and killed a homeless woman who was crossing the road pushing a bicycle. The victim was jaywalking, meaning not using a marked crosswalk, but the accident would not have happened if the driver had not been looking down at her phone, watching a streamed video.

After a lengthy investigation involving the NTSB, the test driver was charged with negligent homicide in Sep 2020, more than 2 years after the accident. The DA's office decided not to press criminal charges against Uber. The company settled a civil lawsuit brought by the victim's family quite early, a mere 2 weeks after the accident.

The above accident was the topic of a long thread on this forum. I am surprised that people's memory is so short. :)

See: https://www.bbc.com/news/technology-54175359, which has a good video of the accident.

For more details, see: https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg.


PS. The Uber test cars have a lidar among their sensor suite, like Waymo cars do. Recorded data shows that the lidar picked up the jaywalker more than 6 seconds before impact, but the software for various reasons decided not to apply the brakes until 1.3 seconds before impact. The car was traveling at 39 mph, within the speed limit of that street.
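To put those numbers in perspective, here's a quick back-of-the-envelope calculation. The hard-braking deceleration figure is my own assumption for illustration, not something taken from the NTSB report.

```python
# Rough numbers behind the timeline above (a sketch; the deceleration
# value is an assumed figure, not from the investigation).

MPH_TO_MPS = 0.44704
v = 39 * MPH_TO_MPS               # ~17.4 m/s

dist_at_6s   = v * 6.0            # distance still to travel ~6 s before impact
dist_at_1_3s = v * 1.3            # distance left when braking finally began

decel = 7.0                       # assumed hard-braking deceleration, m/s^2
stopping_dist = v**2 / (2 * decel)

print(round(dist_at_6s, 1))       # ~104.6 m available at first detection
print(round(dist_at_1_3s, 1))     # ~22.7 m available at 1.3 s
print(round(stopping_dist, 1))    # ~21.7 m needed to stop from 39 mph
```

In other words, braking at first detection would have left an enormous margin, while braking at 1.3 seconds left essentially none.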

From an excerpt of the above Wikipedia article, which quoted the NTSB, it was clear that Uber relied on its test driver to be constantly alert and to monitor the experimental system, and she failed.
 
I think you’re mistakenly conflating Waymo (Chryslers and a few Jaguars?) and Uber (Volvo, since discontinued), very different programs and capabilities. There hasn’t been a Waymo fatality IIRC and the short lived Uber SD program was very poorly conceived, they sold the business after the fatality.



Point taken. I did indicate I was uncertain who was running the test. The point however is that the roads and terrain in that area seem to be less of a challenge for SD tech than many other areas.
 
Tesla has officially dropped the radar from its Models 3 and Y. The plan is to use vision cameras for all functions that formerly involved the radar. This impacts more than Autosteer or FSD; it also affects AEB (Automatic Emergency Braking), which previously used the radar, as it does on many other makers' cars.

While Tesla works to improve the software for the vision cameras, some features previously available, such as AEB, will be disabled in the new cars without radar, pending a software update.

See: https://www.caranddriver.com/news/a36542541/tesla-model-3-model-y-pure-vision/.

This is an interesting development. Earlier, Tesla made a big deal about being able to use the radar return from two cars ahead (signals bouncing back under the car immediately ahead) in order to slow down preemptively even before the leading car slows down. Now, it doesn't care about that. It is however true that the radar never did help with detection of stopped vehicles or highway barriers, because it only detects moving objects as I described earlier. It is now declared excess baggage and deleted.

The loss of the AEB caused Consumer Reports to drop the Model 3 from its Top Pick list.

See: https://www.consumerreports.org/car...ick-status-and-iihs-safety-award-a2421791602/
 
... It is however true that the radar never did help with detection of stopped vehicles or highway barriers, because it only detects moving objects as I described earlier. ...

I was curious about this. I have very limited knowledge of radar, but it seems to me that it would be possible to detect reflections from fixed objects; they would arrive either earlier or later than those from the surrounding objects?

I'm assuming the detection systems in use are designed to look for differences, to isolate moving objects from everything else (Doppler effect, or looking for deltas between successive samples?). Maybe this is to give better resolution of the moving objects, or maybe processing all the fixed objects is just beyond the capabilities of these types of radar systems?

I've always been curious about how the type of radar we see in old movies worked with 1940's technology. The beam sweep on the CRT was obviously a representation of the rotating antenna. I guess since they were looking at open sky or open ocean, the normal signals would all look the same, very little reflected energy. So some reflected power would indicate something other than air/water there. Hmmm, but then they have to convert the time delay or phase shift to a distance, and map that to the screen?

Ahhh, this wiki article seems to explain it. Very clever: they modulate the frequency of the radar signal at a known rate, the frequency of the returned signal is measured, and it is then easy to determine how much time elapsed since that frequency was transmitted. I guess that was easier to implement in 1940s circuitry than measuring small time delays in the reception of the signal (if that is actually how it was done back then)?
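Here's a tiny sketch of that frequency-modulation idea, just to make the arithmetic concrete. The numbers are invented for illustration, not taken from any particular radar.

```python
# Toy FMCW range calculation: the transmit frequency ramps linearly, so the
# beat frequency between transmitted and received signals is proportional
# to the round-trip delay, and hence to range.

C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s):
    """Range from the beat frequency of a linear FMCW sweep.
    Ramp slope S = B / T; echo delay tau = 2R/c; beat f_b = S * tau,
    so R = c * f_b / (2 * S)."""
    slope = sweep_bandwidth_hz / sweep_time_s
    return C * beat_freq_hz / (2.0 * slope)

# Example: a 300 MHz sweep over 1 ms gives a slope of 3e11 Hz/s.
# A 100 kHz beat frequency then corresponds to:
print(fmcw_range(100e3, 300e6, 1e-3))  # -> 50.0 meters
```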

-ERD50
 
The radar used in Tesla cars is the same type used in other cars to implement the AEB function (Automatic Emergency Braking). It does not have a focused beam like the radar dishes used for aircraft detection; there's a reason those dishes have to be swept mechanically via rotation. The common car radar used in Tesla cars cannot distinguish a car from another metallic object ahead, such as a road sign by the side of the road. The only difference is that one is moving and the other is not, and that info is crucial.
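A minimal sketch of that moving-vs-stationary distinction, assuming the radar reports a range and a closing speed for each detection. The function and numbers here are hypothetical, not any real automotive radar interface.

```python
# Sketch of why a radar that keys on motion can't separate a stopped
# vehicle in the lane from a sign beside the road.  Invented interface.

def classify_returns(ego_speed_mps, detections):
    """detections: list of (range_m, closing_speed_mps) pairs, where
    closing_speed is how fast the range is shrinking (from Doppler).
    For an object roughly straight ahead, its own ground speed is about
    ego_speed - closing_speed; anything near zero looks "stationary"."""
    labeled = []
    for rng, closing in detections:
        ground_speed = ego_speed_mps - closing
        label = "moving" if abs(ground_speed) > 1.0 else "stationary"
        labeled.append((rng, label))
    return labeled

# Ego car at 30 m/s; a lead car doing 25 m/s, a roadside sign, and a
# stopped truck in the lane, all 60-100 m ahead:
print(classify_returns(30.0, [(60.0, 5.0), (80.0, 30.0), (100.0, 30.0)]))
# -> [(60.0, 'moving'), (80.0, 'stationary'), (100.0, 'stationary')]
```

The last two returns look identical to the radar, which is exactly why stationary objects tend to get filtered out rather than braked for.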

In the Florida accident where a driver was decapitated when his car on Autopilot drove under a semi-trailer, Musk said the radar was not useful to prevent that, because you cannot tell a tall vehicle crossing the road from an overhead sign on an overpass. This should not be surprising. What I don't understand is why the Tesla computer cannot tell the difference from its vision cameras, in good daylight with no inclement weather. And this has happened time and time again, twice with a semi-trailer and several times with other vehicles, including a red fire truck.

As described earlier, there's now a new type of radar under development for car applications with multiple beams with a claimed superior resolution. Interested people can search the Web for "4-D imaging radar". I don't know much about this technology myself.
 
In other news, it was reported a few days ago that a Tesla car was spotted in Florida sporting lidars mounted on its roof. What? Musk changed his mind about lidars and now has to use them?

People who follow SDC technology know that Musk has been anti-lidar. He said humans could drive with just their vision, that lidars were just a crutch, and that using lidars is a fool's errand. :) Yes, in theory a computer should not need anything more, but the problem is to have AI software with the same capability as a human brain. And the world is still waiting for that.

Now, it was reported that Tesla confirmed that it was testing lidar and had some technical agreement with Luminar, a lidar maker. However, it is not true that Musk has conceded that he needs a lidar for his FSD (full self driving). On the Web, it was suggested that Tesla now needs data from a lidar as the reference to train its vision camera, and/or to validate the info that the AI computer infers from the camera image.

Say, if the computer does not see a semi-trailer ahead, yet the lidar says that there's a humongous object ahead, you've got a huge problem. Or if the AI computer guesses that the car in front is 100 ft ahead, but lidar says it's only 50 ft, you also have a problem.
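If that speculation is right, the cross-check could in principle be as simple as comparing the two distance estimates per object and flagging large disagreements for training or validation. A minimal sketch, with hypothetical names and a made-up tolerance:

```python
# Compare camera-estimated distances against lidar ranges and flag large
# disagreements.  Hypothetical function and threshold, for illustration only.

def depth_errors(camera_estimates_m, lidar_ranges_m, rel_tol=0.2):
    """Return indices of objects where the camera's distance estimate
    differs from the lidar measurement by more than rel_tol (20% here)."""
    flagged = []
    for i, (cam, lidar) in enumerate(zip(camera_estimates_m, lidar_ranges_m)):
        if abs(cam - lidar) > rel_tol * lidar:
            flagged.append(i)
    return flagged

# Camera thinks three objects are at 100, 48 and 32 m; lidar says 50, 47, 33 m.
print(depth_errors([100.0, 48.0, 32.0], [50.0, 47.0, 33.0]))  # -> [0]
```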

As I always suspected, the data from the many Tesla cars out on the road that Musk claims is so valuable in order for him to develop SDC is not so useful after all. It lacks any reference or truth data.

On the other hand, his competitors like Waymo and Cruise Automation have been driving test cars around for millions of miles with vision cameras and lidars running. They have been collecting and recording much more useful information. Their data is far more valuable for determining whether advanced software can get by with just the camera image. They can simply run their vision algorithm through the millions of miles of test data they already have and see if their AI vision software can match the concurrently recorded, precise lidar data.

Lidars are not a fool's errand after all. :)

[Attached photo: Why-Tesla-Using-LiDAR-Is-Way-More-Complex-Than-It-Seems-800x445.jpg]
 
I was curious about this. I have very limited knowledge of radar, but it seems to me that it would be possible to detect reflections from fixed objects; they would arrive either earlier or later than those from the surrounding objects?

I'm assuming the detection systems in use are designed to look for differences, to isolate moving objects from everything else (Doppler effect, or looking for deltas between successive samples?). Maybe this is to give better resolution of the moving objects, or maybe processing all the fixed objects is just beyond the capabilities of these types of radar systems?

I've always been curious about how the type of radar we see in old movies worked with 1940's technology. The beam sweep on the CRT was obviously a representation of the rotating antenna. I guess since they were looking at open sky or open ocean, the normal signals would all look the same, very little reflected energy. So some reflected power would indicate something other than air/water there. Hmmm, but then they have to convert the time delay or phase shift to a distance, and map that to the screen?

Ahhh, this wiki article seems to explain it. Very clever: they modulate the frequency of the radar signal at a known rate, the frequency of the returned signal is measured, and it is then easy to determine how much time elapsed since that frequency was transmitted. I guess that was easier to implement in 1940s circuitry than measuring small time delays in the reception of the signal (if that is actually how it was done back then)?

-ERD50

I've seen the old kinescopes of the earliest radars, and it must have been quite a learning curve to figure it all out. There wasn't the more familiar "dot" or "dots" on a screen within a grid. It was more like a gemisch of squashed pulses across the screen. From this "picture" a good radar operator was able to determine range, speed, even numbers to an extent. Imagine if folks had believed their radars and properly interpreted the results in the early hours of Dec. 7, 1941. Things might look a bit different in Pearl Harbor now. Of course, we DO learn more from our mistakes than our successes in most cases. But some mistakes are more costly than others. YMMV
 
There is no such thing as a zero emission vehicle. That term should be banned outright.

I disagree. Given that our solar photovoltaic system overproduces energy relative to our home car charging (almost 100% of our charging), I believe I can claim that our cars are solar powered. After an "offset" period to cover production emissions, we get to the point where we are emission free. On top of that, the cars have proven to be almost maintenance free as well. I don't miss the gas station or oil changes one bit.
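For anyone curious what that "offset period" arithmetic might look like, here's a purely illustrative version; every number below is an assumption plugged in for the sake of example, not data from my own setup.

```python
# Illustrative offset-period arithmetic.  All figures are assumed for the
# sake of example, not measured values.

battery_mfg_co2_kg    = 6000.0   # assumed extra CO2 from building the EV/battery
gas_car_co2_per_mile  = 0.40     # assumed kg CO2 per mile for a gasoline car
ev_solar_co2_per_mile = 0.0      # charging covered by rooftop solar overproduction
miles_per_year        = 10000.0

saved_per_year = (gas_car_co2_per_mile - ev_solar_co2_per_mile) * miles_per_year
offset_years = battery_mfg_co2_kg / saved_per_year
print(round(offset_years, 1))    # -> 1.5 years under these assumptions
```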
 