Self-Driving Cars -- Needs of the Many vs Numero Uno

Probably, but only for detecting large objects. Having multiple vision cameras for stereoscopic depth would be a lot better than being a cyclops. Potholes are tough, though.

There are a lot of decisions the computer must make if it has no human to help. It would have to know not to drive onto a soft shoulder after a rain and get stuck. It has to detect and avoid puddles, or slow down to avoid hydroplaning or splashing pedestrians and other cars... It's a lot more involved than detecting a semi-trailer. One can go on and on.
 
[Image: google-new-self-driving-car-prototype.jpg]

I wouldn't be caught dead in something like this that looks like an oversize marshmallow.:LOL:
 
So, you don't think the cool Lidar on top makes up for the vehicle shape?

Hey, if it is affordable and reliable, I will be the one to have it. However, I would want it larger and beefier to protect me from the texting idiots who will still be around. For me, function and safety come before form.

I will even pay a lot more if it comes with an automatic turreted machine gun that's computer controlled.

PS. By the way, the vehicle shape and the high position of the Lidar allow the latter to look down very close to the vehicle. I believe that's intended. Far superior to the ultrasonic proximity sensors in other production cars. Sitting up high, that Lidar has 360-degree vision. Excellent, and lots of data for the computer to analyze.
 

Whatever the self-driving car manufacturers come up with, they will still have to meet NHTSA safety standards for bumpers, air bags, glass type, etc.

I realize that little marshmallow is a test vehicle, but it is butt ugly. :LOL:
 
By the way, as I repeatedly pointed out, the current lidar is an expensive dome ($75K initially) that sits on top of the car. Tesla owners who want a sleek looking car would not care for it.

I really didn't bother with the comparison because I thought it was obvious that Google has a superior -- and much more costly -- sensor system.

I don't know enough about lidar, but I'm guessing some of this can be scaled down. I assume Google is using what is available and reliable, and that more research can and will be done in this area. You can correct me if I'm wrong.

But people want Tesla. I think it was a mistake for Google to show their marshmallow car. Tesla's marketing has been brilliant. Google's has been nerdy.
 
:LOL::LOL::LOL:

I was just kidding about the look of the Google car.

It was an experimental car that Google built to drive around to collect data. Its speed is limited to 25 mph. They try to get as much real-world data as possible to fine-tune their software, and to encounter weird situations that one would not know to stage in a lab. Nowadays, hard drives are cheap, and Google is known for having tremendous data banks. Using the stored data, they can go back and look for mistakes, or refine their software for object detection and tracking. Note that they use their own test drivers, not the public.

About the lidar, yes, in production it would be a lot lower. However, it is a spinning mechanical thing, so concealing it in an aesthetic manner may be difficult.

Again, in production, whoever uses Google technology will have to work with them on a major redesign that somehow does not compromise that ideal sensor mounting location too much. I was just linking the photo so that people know what kind of sensor it currently takes to do an honest-to-goodness autopilot. Up until the Tesla fatal accident, laymen thought that Google was stupid and Tesla was smarter because the latter could do the job with a sleek-looking car.

The other type of simple radar in current production cars is really meant to avoid rear-ending somebody in an "adaptive cruise control" mode. Using that for an autopilot leads to serious shortcomings, as the public is just now learning. And the ultrasonic proximity sensors for clearance around a car are really inexpensive and meant for very close range. You can get one for a few dollars to play with.
 
To boot, the radar is typically in the bumper, while the cameras are typically in the rear-view mirror. As noted earlier, the combination of radar and cameras is used for adaptive cruise control and crash mitigation. I don't know what combination is used for the pedestrian detection and low-speed braking that some models of cars have, but I don't think people show up that well on radar; perhaps they add ultrasound. Blind-spot detection, though, is based on cameras only.
 
I have seen some demos showing that Google relies on the Lidar the most. It has much higher resolution than radar and can see finer details, because the laser's beamwidth is much narrower than the radar's beam. In rain, fog, snow, or dusty conditions, I do not know how each of them would be affected.
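To put rough numbers on that resolution difference: the spot a beam paints on a target is roughly range times beamwidth. The beamwidths below are my own illustrative assumptions (not vendor specs), just to show the order of magnitude:

```python
import math

def cross_range_resolution(range_m, beamwidth_deg):
    """Approximate spot size (m) of a beam at a given range:
    resolution ~ range * beamwidth, using the small-angle approximation."""
    return range_m * math.radians(beamwidth_deg)

# Assumed beamwidths: lidar divergence ~0.1 deg, automotive radar ~3 deg.
for name, bw_deg in [("lidar", 0.1), ("radar", 3.0)]:
    print(f"{name}: {cross_range_resolution(50, bw_deg):.2f} m spot at 50 m")
```

At 50 m the assumed radar beam smears over a couple of meters, a car-sized blob, while the lidar spot stays under 10 cm, which is why it can pick out finer shapes.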

It ain't easy, I keep saying, but people kept saying Tesla has got it, so what's the big deal.

PS. By the way, the early and most popular blind-spot detection systems are sonar. Maybe some use video cameras now, but then you would need a computer to interpret the image. Ultrasonic sensors can simply tell you whether there is something in proximity or not by listening to the echo, just like bats do.
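The echo-ranging idea really is that simple; here is a toy sketch (the speed of sound is the usual room-temperature figure, and the timing numbers are assumed):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 C

def echo_distance_m(round_trip_s):
    """Distance to an obstacle from an ultrasonic echo: the ping travels
    out and back, so halve the round-trip time before converting."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# A 6 ms round trip corresponds to roughly 1 m of clearance.
print(f"{echo_distance_m(0.006):.2f} m")
```

That is the entire algorithm in a few-dollar sensor: time the ping, halve it, multiply by the speed of sound, compare against a threshold.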
 
By the way, there are many reports that blind-spot detection systems have problems with fast vehicles overtaking you. That makes sense, because with a range of 10 to 16 ft, a fast vehicle will cross that distance in a very short time. So it is not as good as looking in the side mirrors yourself to judge the speed of the approaching vehicle.

A Lidar that looks all around you can spot that overtaking vehicle from a few hundred feet away if it has a clear line of sight. And the computer tracking that vehicle can warn you way in advance of any proximity sensor.
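Rough numbers on that warning window (the 16 ft sensor range comes from the figures above; the 300 ft lidar sight line and 20 mph closing speed are my assumptions for illustration):

```python
MPH_TO_FT_S = 5280 / 3600  # 1 mph is about 1.47 ft/s

def warning_time_s(detect_range_ft, closing_speed_mph):
    """Seconds between first detection and the overtaking
    vehicle drawing level with you."""
    return detect_range_ft / (closing_speed_mph * MPH_TO_FT_S)

# Assumed scenario: a car overtaking 20 mph faster than you.
print(f"16 ft proximity sensor: {warning_time_s(16, 20):.2f} s of warning")
print(f"300 ft lidar sight line: {warning_time_s(300, 20):.1f} s of warning")
```

Half a second versus ten seconds is the difference between a chime as the car is already beside you and a track the computer has been following for a while.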

Now, if you are followed by a big semi, then the center-mounted Lidar is blocked, and will not see to the rear as well as your own side mirrors. Man, it's not easy, I keep saying.
 
Sorry for the detour off the thread topic, which is the dilemma of whether a self-driving car should sacrifice its single passenger to save a larger number of pedestrians. Most people do not like that, simply because the self-preservation instinct is so strong.

But I ran across some other discussions about situations that do not involve harm to the car owner. That's clever, because when oneself is removed from the equation, perhaps we will focus on how the difficulty still remains in having the computer make the choice.

Should the computer not choose to mow down a single person instead of two? Here, I think we can get a consensus.

A tougher example: driving down the road, the computer sees two people jaywalking and stepping off the curb. One is an elderly person, the other a child. If the car has to hit one of them, should it not be the elderly person, in order to save the child who has more years ahead of him?

We can think of zillions of cases like the above. When the autonomous car gets here, it will have to have the smarts to recognize human features in order to avoid them, and it can infer age from height. Should it not use that to make the decision? (It will pay to be short here, I tell you.) That is not far-fetched at all, as Google's driving software can already recognize the general human shape of pedestrians.

Such trade-off dilemmas are academic now, but will come up in the years ahead. Perhaps I was wrong in saying it's too early to think about it.

Heck, many Tesla proponents say that fatal accidents of early self-driving cars are worthwhile because they save lives in the long run. They are already making that trade-off.
 
If you think about it, if the occupants of a car are belted in with airbags, a pedestrian collision should not do too much damage to the occupants; it would take a vehicle-to-vehicle collision. So consider the following scenario: a head-on collision at high speed, or plowing down pedestrians by going right off the road.
 
That's a good one, though it brings back the self-preservation issue, where the passenger would prefer that his car hit a soft target even if that target is a human.

The above is a likely scenario. A child chasing a ball dashes out into the street. A human driver would not be fast enough to react and would most likely hit the child. A computer is fast enough to see and react, but the laws of physics do not allow it to stop in time. So it has to combine braking with steering. But steer where? Into a harder target, or an approaching car, to save the child?
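To see why the computer's reflexes alone cannot save the child, here is a back-of-the-envelope stopping-distance sketch (the reaction times and friction coefficient are textbook-style assumptions, not measured values):

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_m_s, reaction_s, friction=0.7):
    """Reaction distance plus braking distance v^2 / (2 * mu * g).
    friction=0.7 is a typical dry-asphalt assumption."""
    return speed_m_s * reaction_s + speed_m_s**2 / (2 * friction * G)

speed = 13.4  # about 30 mph, in m/s
# Assumed reaction times: ~1.5 s for a human, ~0.1 s for a computer.
print(f"human:    {stopping_distance_m(speed, 1.5):.1f} m")
print(f"computer: {stopping_distance_m(speed, 0.1):.1f} m")
```

The computer shaves off the entire reaction distance, but the braking term is pure physics and identical for both, so if the child is inside that remaining distance, braking alone is not enough and steering has to enter the picture.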

I think programmers working on a truly autonomous car have to work on this dilemma already. They have to write the software to steer the car somewhere. Whether the car will skid or follow the path they program is not really relevant here. The question is "what path do they program"? Even if there is no product yet, they are writing some software now. What's their choice, their intention?

Congress needs to subpoena these programmers and ask them. :)
 
I spent two days with a 2016 Tesla 90 with Autopilot several months ago.
I was very impressed with it. It currently has lots of limitations, which have been discussed in this thread.

However, I think it's a safer driver than I am now in situations where I'm not paying complete attention to the road.

In fact, it's better than most drivers are in rush-hour bumper-to-bumper traffic, which I sadly experience three times a week in Hawaii coming back from the gym. If I am going 5 or 10 mph, I am often not paying close attention (especially with a 17" screen with internet access in front of me). By the 2nd day, I was totally trusting the software, after observing how good it was about keeping its distance from the next car while still making room for people making lane changes.
 
Just like there are no green or red lights in the sky, there will be no TCLs.

There are no traffic lights in the sky because human controllers are guiding every single airplane, telling them exactly what direction, altitude, and speed to fly at all times. In addition, every airplane carries a little box (a transponder) that is connected to every other plane, and it can recognize when the human guide and the human pilots have all screwed up and a collision is imminent, and alert everybody.

That said, even with all that interconnectivity and all that airspace to work with, they still haven't even managed to automate air traffic control. Why do we need human controllers at all? Numerous crashes have occurred throughout history because the human controller inadvertently put two planes on a collision course. A computer would never make such a mistake. Yet they still have not found a way to safely turn the task entirely over to the computer.
 

Which is why I think fully autonomous level 5 cars are many decades away.

Interestingly, in the air they have TCAS along with the controllers. There was an event where, had both pilots followed TCAS, all would have been OK. But one followed TCAS and the other followed the controller, and a horrific crash occurred. What this says to me is that it will be a difficult process to give up our control to the computer.
https://en.wikipedia.org/wiki/Überlingen_mid-air_collision
 
I spent two days with a 2016 Tesla 90 with Autopilot several months ago.
I was very impressed with it. It currently has lots of limitations, which have been discussed in this thread.

However, I think it's a safer driver than I am now in situations where I'm not paying complete attention to the road.

In fact, it's better than most drivers are in rush-hour bumper-to-bumper traffic, which I sadly experience three times a week in Hawaii coming back from the gym. If I am going 5 or 10 mph, I am often not paying close attention (especially with a 17" screen with internet access in front of me). By the 2nd day, I was totally trusting the software, after observing how good it was about keeping its distance from the next car while still making room for people making lane changes.

This is my experience as well. I just used it yesterday in terrible traffic. I use the driving assistance features LESS when I'm at high speed because there are things that make it unreliable: dips in road with sharp turns, barriers in HOV lanes, disappearing dividing lines, crazy cars crossing many lanes really quickly. I don't think anyone should turn it on and forget about it.

The most dangerous thing about it is that it creates a false sense of security. This is similar to when you're driving on an open, straight road for an hour or so and want to fumble around for that song you want to listen to so you put your knee on the wheel while you look down and search for the song. If ANYTHING goes wrong during that moment, it'll be ugly.

The difference with the driving assistance is that it can make you think you can do that for minutes when you shouldn't.

Yes... it has warnings and updates, etc. but I think human psychology is more powerful. Unless there was a constant annoying noise when I took my hands off the wheel, I think people will abuse it.... and if it had that noise, no one would ever use it and we'd go back to the "have a knee on it" method :).

I think in aggregate it'll be safer than not having it and over time the whole tech will improve, but there will be a transition where the tech isn't as good as people think it is and stuff like this will happen.

Worse will be when cars are totally self-driving and errors in programming cause larger accidents. Right now car accidents don't get much press because they happen one at a time and it's humans that cause them. When a networked computer makes an error, it may look more like a train crash, and when 100 people die all at once it's 1000x worse than if 200 die in different incidents at different times, even if the former is "safer".
 
I wonder what the target for "success" will be for autonomous cars? If the claim re: the Tesla of "first fatality in 130 million miles" is true, it appears they're in the ballpark - without even claiming their cars are autonomous.

Of course, I realize statistics probably won't govern the debate at all. Just as people (disproportionately) fear dying in airline crashes, lightning strikes, swimming in the ocean, or acts of terrorism, OR expect to win the lottery, when dying in a car is FAR more likely, yet almost all of us do it every day.

While the fatality rate roughly leveled off around 2000–2005 at around 1.5 fatalities per 100 million miles traveled, it has resumed a downward trend and reached 1.27 in 2008.
https://en.m.wikipedia.org/wiki/Transportation_safety_in_the_United_States#
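For what it's worth, here is the arithmetic that puts the quoted Tesla claim on the same footing as the Wikipedia figure (with the obvious caveat that a single fatality is far too small a sample to draw conclusions from):

```python
def fatalities_per_100m_miles(fatalities, miles):
    """Normalize a fatality count to the standard per-100-million-mile rate."""
    return fatalities / miles * 100e6

# Tesla's claim as quoted above: 1 fatality in 130 million miles.
autopilot_rate = fatalities_per_100m_miles(1, 130e6)
us_2008_rate = 1.27  # per 100M vehicle miles, from the Wikipedia figure above
print(f"Autopilot: {autopilot_rate:.2f} vs US 2008: {us_2008_rate}")
```

So the claim works out to roughly 0.77 per 100 million miles against the 2008 national figure of 1.27, in the same ballpark, which is presumably why Tesla quotes it that way.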
 
From what I see, the common and inexpensive sensors in the Tesla and other production cars may be adequate for automatic driving on freeways during rush hours. I am sure it is a lot less tiring than driving yourself in stop-and-go traffic. You can just sit back and watch it, but with your foot ready to stomp on the brake if something bad happens (which is not likely because the computer should have no problem seeing the car in front).

Sadly, not everyone is as cautious as the Tesla owners on this forum. "Look Ma, no hands" at 75 mph is what they like to show on youtube.

... A computer would never make such a mistake. Yet they still have not found a way to safely turn the task entirely over to the computer.

A computer can fail too, or someone may just unplug it :) . A central computer is the worst arrangement; you want a distributed system.

Fault-tolerant computing was a hot topic for research 35 years ago. I do not know how far they have advanced this area.
 
There are no traffic lights in the sky because human controllers are guiding every single airplane, telling them exactly what direction, altitude, and speed to fly at all times. In addition, every airplane carries a little box (a transponder) that is connected to every other plane, and it can recognize when the human guide and the human pilots have all screwed up and a collision is imminent, and alert everybody.

That said, even with all that interconnectivity and all that airspace to work with, they still haven't even managed to automate air traffic control. Why do we need human controllers at all? Numerous crashes have occurred throughout history because the human controller inadvertently put two planes on a collision course. A computer would never make such a mistake. Yet they still have not found a way to safely turn the task entirely over to the computer.

What you describe is instrument flight rules (IFR), which commercial airlines use (IFR is also required universally above 18,000 feet). Most private pilots use visual flight rules, which are based on see-and-avoid and require operating clear of clouds, etc.
 
I think the needs of the many and the few would both be served by having the cars behave in a predictable way. That means not swerving into oncoming traffic and crap. That's nuts! What about the jaywalkers who "know" the cars will stop for them, so they simply get used to walking in front of cars? And of course, people being people (some people), they'll continue to walk in front of ever-nearer cars until the car can't stop, and they get hit. Then it'll be "waaah, waah, bad old car". I used to hope to be around in the days of true self-driving cars, but now I don't know.
 
... people being people (some people), they'll continue to walk in front of ever nearer cars until the car can't stop, and they get hit... I used to hope to be around in the days of true self-driving cars, but now I don't know.

Particularly if you don't want to see a group of teenage pranksters jumping out in front of your autonomous car at the last minute, causing it to solve the ethical dilemma by driving itself and you into a wall. :)
 
Particularly if you don't want to see a group of teenage pranksters jumping out in front of your autonomous car at the last minute, causing it to solve the ethical dilemma by driving itself and you into a wall. :)
Presumably the car would first have stomped on the brakes, since its strategy would be to mitigate the damage; even 5-10 mph can make a difference. Then, assuming you are belted in, the air bags and seat belts should mean only bruises and the like.
Let me give an alternative issue: where I live, deer-car collisions are common. How would the car distinguish a deer or a large dog from a person? Deer have a habit of running right in front of the car, as do some dogs.
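On the mitigation point above: a quick sketch of how much speed hard braking can scrub off before impact (the friction coefficient and scenario numbers are assumptions for illustration):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_speed_m_s(v0_m_s, brake_dist_m, friction=0.7):
    """Speed remaining after braking hard over a given distance:
    v_impact = sqrt(v0^2 - 2 * mu * g * d), clamped at zero."""
    v_squared = v0_m_s**2 - 2 * friction * G * brake_dist_m
    return math.sqrt(max(0.0, v_squared))

# Assumed scenario: 30 mph (13.4 m/s), obstacle appears 10 m ahead.
v = impact_speed_m_s(13.4, 10)
print(f"impact at {v:.1f} m/s ({v / 0.447:.0f} mph)")
```

Even when a full stop is impossible, the braking distance available cuts the impact speed substantially, and since crash energy scales with the square of speed, every mph scrubbed off counts double.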
 
What you describe is instrument flight rules (IFR), which commercial airlines use (IFR is also required universally above 18,000 feet). Most private pilots use visual flight rules, which are based on see-and-avoid and require operating clear of clouds, etc.

You're of course correct (I'm a private pilot myself); I was trying to keep the explanation simple, as when most people think "airplanes," they think of the big, commercial ones, which are almost exclusively IFR.
 
Particularly if you don't want to see a group of teenage pranksters jumping out in front of your autonomous car at the last minute, causing it to solve the ethical dilemma by driving itself and you into a wall. :)
Maybe autonomous cars should be indistinguishable, at a distance, from traditional cars. Then there would probably be fewer pranksters to worry about, or at least it would make them think twice before trusting a human driver to be quick enough.
 