...I'm no expert on this, but from what I've been able to ascertain, GPS is only part of the solution: it supports general location and route selection, but it isn't used on a millisecond basis to position the car... that's done by vision systems. So the same cues that let a human driver stay in the shifted lanes through a construction zone (orange drums) are also used by the vehicle guidance system. And the software has been coded to use the same clues that keep a human driver from drifting off the road when rain or darkness obscures the lane markings...
GPS and the road database serve these systems the way a paper map serves a human driver: mainly for routing.
Humans drive by looking at the road: trying to stay in the lane, watching for other cars, obeying traffic lights and road signs, avoiding furniture or junk falling off vehicles in front, swerving around potholes if possible or driving over them slowly, etc... In an autonomous vehicle, these tasks are done by computer vision, with obstacle detection and avoidance augmented by lidar and millimeter-wave radar. GPS has nothing to do with that.
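To make that combination of sensors concrete, here is a toy "late fusion" sketch in Python. Everything here is illustrative, not from any real AV stack: the idea is simply that when two independent sensors both report an obstacle, the system trusts the more conservative (nearer) estimate, and a single-sensor detection is still honored (say, radar in heavy rain when the camera sees nothing).

```python
def fuse_obstacle(camera_dist_m, radar_dist_m):
    """Conservative late fusion of two independent range estimates.

    Each argument is the distance (in meters) to an obstacle as reported
    by that sensor, or None if the sensor produced no detection.
    Returns the distance to plan braking around, or None if neither
    sensor saw anything.
    """
    detections = [d for d in (camera_dist_m, radar_dist_m) if d is not None]
    if not detections:
        return None  # no obstacle seen by either sensor
    # If the sensors disagree, assume the nearer report is real:
    # a false brake is cheaper than a missed obstacle.
    return min(detections)
```

Real systems fuse full tracks (position, velocity, classification) rather than single ranges, but the conservative-by-default principle is the same.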
An autonomous system must also be fail-safe. It must know when one of its crucial sensors fails or becomes erratic, and refuse to drive, or slow down and pull over safely if it is already on the road. How quickly people forget about Toyota's unintended acceleration! Such a simple function, and they messed it up and then denied it.
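The "refuse to drive or pull over" logic above amounts to a sensor watchdog. A minimal sketch, assuming a simple staleness check (all names and the 200 ms threshold are hypothetical; a real system would also check value plausibility and cross-sensor consistency):

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    value: float
    timestamp: float  # seconds, from a monotonic clock

class SensorWatchdog:
    """Tracks the freshest reading from each sensor and decides
    whether it is safe to keep driving."""

    def __init__(self, max_age_s=0.2):
        self.max_age_s = max_age_s   # a sensor silent longer than this is "failed"
        self.last = {}               # sensor name -> latest SensorReading

    def update(self, name, value, timestamp):
        self.last[name] = SensorReading(value, timestamp)

    def decide(self, critical_sensors, now):
        """Return 'drive' only if every critical sensor has reported
        recently; otherwise command a safe stop."""
        for name in critical_sensors:
            reading = self.last.get(name)
            if reading is None or now - reading.timestamp > self.max_age_s:
                return "pull_over"
        return "drive"
```

The point of the sketch is that the *decision* is trivial; the hard engineering is making the failure detection itself trustworthy, which is exactly the part that tends to get shortchanged.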
But right now, they are still trying to get it to work when everything is in tiptop condition, let alone dealing with self-diagnosis of failures.
The kind of AI I talked about earlier is even harder. Consider how a human acts in the following situation.
You come up to an intersection with a four-way stop. You slow down, preparing to make a full stop. Then you notice a woman pushing a stroller on the sidewalk, and it looks like she is about to cross the road. Of course, you pause at the intersection, yielding her the right of way. But the woman stops there, turns, and looks back at her husband and another kid trailing behind. So it is clear she intends to wait for them to catch up. Releasing the brake, you slowly accelerate through the intersection.
Now, a human can read body language and interpret another human's intention. How does a computer do that? From what I have last seen of AI (artificial intelligence), recognizing and identifying human faces is still not all that great, and software that estimates a person's age through computer vision is even worse.
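The stroller scenario above boils down to an intent-inference problem. A toy rule-based sketch (every predicate name here is hypothetical) makes the gap obvious: the decision rule is trivial once you have the cues, but extracting cues like "turned back toward her family" from pixels is exactly what computers are bad at.

```python
def yield_decision(near_crossing, facing_road, moving_toward_road):
    """Toy stop-sign logic: yield only when the cues suggest the
    pedestrian actually intends to cross.

    The inputs are boolean perception cues. Producing them reliably
    (reading posture, gaze, and body language from camera frames)
    is the hard, unsolved part; this function is the easy part.
    """
    if near_crossing and (facing_road or moving_toward_road):
        return "yield"
    return "proceed_slowly"
```

A human fills in those booleans instantly, including the subtle negative cue (she turned *away* from the road); a vision system that misreads one of them either blocks the intersection forever or rolls through when it shouldn't.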
I, for one, agree with the Oracle of Omaha: automated vehicles will be here faster than we think. Countries that pass laws inhibiting their use will be giving up the chance to get a productivity leg up on the rest of the world.
There are a lot of people working on this technology. If it works, then it works. Nobody would be able to stop it, and why would anyone want to stop it (other than, perhaps, truck driver unions)? But it is just not as easy and simple as people make it look.
Think about technologies that we had for decades, then gave up because of insurmountable problems: supersonic commercial flight, reusable orbital vehicles (read: the Space Shuttle), etc... They worked, but they were far from safe and cost so much money that we gave them up. And autonomous vehicles are not even here yet for us to price them and talk about commercial viability.
So I find this area interesting, but I am not going to hold my breath. I would not bet either for or against it.