AI - Artificial Intelligence

Chris7

This "guy" spent two extremely long blog posts on artificial intelligence.


The Artificial Intelligence Revolution: Part 1 - Wait But Why
The Artificial Intelligence Revolution: Part 2 - Wait But Why

I do think more people need to be concerned about this, and I'm not really sure we as a race should actually be hoping it comes about. But since you don't want to be the last one to the party, I can see why people are working on it feverishly.

I guess I just don't have enough faith in the human race to believe that it will end well, though.

Has anyone here put some serious thought into the potential coming AI revolution?

cd :O) :nonono:
 
What could go wrong?


[Attached image: dilbert.jpg]
 
Scott Adams also wrote this a couple years ago:

The Illusion of Intelligence | Scott Adams Blog
Maybe the reason that scientists are having a hard time creating artificial intelligence is because human intelligence is an illusion. You can’t duplicate something that doesn’t exist in the first place. I’m not saying that as a joke. Most of what we regard as human intelligence is an illusion.

I will hedge my claim a little bit and say human intelligence is mostly an illusion because math skills are real, for example. But a computer can do math. Language skills are real too, but a computer can understand words and sentence structure. In fact, all of the parts of intelligence that are real have probably already been duplicated by computers.
And these three have put some thought towards that subject:

Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence | | Observer

BTW, I bookmarked those "Wait But Why" articles but haven't gotten around to reading them yet.
 
If Super AI develops, it will, as do humans, face basic laws of physics that put an upper limit on how fast it can evolve, the speed of light and the Second Law of Thermodynamics being examples of such constraints. Since Super AI cannot think faster than the speed of light, cannot change the arrow of time, cannot create energy from nothing, etc. the upper bounds of its capabilities are controlled by the humans feeding it information and energy.

The only way things get out of control is if Super AI discovers ways to circumvent the laws of physics as we believe them to be. Then all bets are off, and the entire universe could suddenly be vastly modified. Of course, perhaps such discoveries and their wide-ranging impacts are the very purpose of the universe.
 
When Super AI develops, it will face physical laws of the Universe.

At first, all will be nice: robots will relieve us of many dangerous tasks. Even now the armed forces are increasing their use of robotics and have plans for self-controlling weapons (since there is a large delay in remote control). Later, however, it will dawn on the Super AI that it can be free.
Just like the movies with Skynet, this will not end well.

Humans, however, are not needed and frankly would be a hindrance; manual droids can replace all human labor, working 24/7 continuously at the command of the Super AI.
As for the physical laws of the Universe, take time as an example: humans barely understand it and currently think of it as linear, which may not be true.
While a Super AI will not be able to think faster than the speed of light, Super AIs can form genuine shared minds via distributed computing. A Super AI can double its mental capacity simply by melding with another Super AI, and there is no limit to distributed computing, so a thousand AIs sharing computing power can meld nearly instantaneously.

Like the dinosaurs, humans are going to face extinction, except that we will have created our successors.
 
I never thought that the human species would survive, but I imagine the robot will not prove to be a viable species at all.
 
The only way things get out of control is if Super AI discovers ways to circumvent the laws of physics as we believe them to be.

And this is where I believe the big danger is. There is so much we don't understand. There is a good chance that Super AI would not be hindered at all; thus we are likely to face extinction, which would make it seem logical not to keep trying to create what could very well be our undoing.

Overall, humans don't seem all that responsible when they think up "new powers". Again, I don't have much faith that we can create a Super AI that will be beneficial.

cd :O)
 
I don't really have anything to say about these articles. I just wanted to say that I had a ball taking a very challenging senior/graduate level course in artificial intelligence in the EE dept at Texas A&M thirty years ago. Maybe it is a scary field of endeavor now, or maybe not; but back then it was wildly exciting and full of new and interesting programming approaches, and nothing about it was even slightly scary even to a timid person like me. I was the top student in that class and you know how scary I am (not!!!). None of us were the "evil genius" type, nor were we engaged in supremely malevolent behavior. We were just working out algorithms of a certain type and writing programs to implement them in ways we had never seen or done before. Fun, fun, fun, especially because it was an unusually intensive class taught by a truly brilliant professor.

Maybe things are wildly different now, and most probably they are. My knowledge of AI is extremely obsolete. Still, I have a seriously hard time relating to descriptions of AI as some scary practice that somehow creates independent life with its own independent initiative and objectives, rather than simply mimicking the same by following specific expert decision trees embedded in the code or even those possibly evolving from that same code.
 
Still, I have a seriously hard time relating to descriptions of AI as some scary practice that somehow creates independent life with its own independent initiative and objectives, rather than simply mimicking the same by following specific expert decision trees embedded in the code or even those possibly evolving from that same code.
Good thing you chose not to go to Hollywood to write sci-fi screenplays. :)

This was a recent and pretty good example of AI gone rogue, Hollywood style: Ex Machina. Not up there with Blade Runner, but still thought provoking.
 
Good thing you chose not to go to Hollywood to write sci-fi screenplays. :)

:ROFLMAO: :ROFLMAO: Honestly I think that's where a lot of the fear comes from! Also some great science fiction books have revolved around that type of plot.
 
Particularly worrisome is that AI does not have to be all that intelligent.

Look at a virus: not very brilliant by most standards; in fact, viruses only meet about half of the definition of being alive.

But they are smart enough to inject something into a cell that changes the cell's DNA coding, so the cell reproduces more virus copies.
Repeat, repeat, repeat, and the human dies.
Successful viruses learned how to transmit copies of themselves to other hosts before the host dies.
Try all sorts of minor variations on the replicating portion, and over months or years a "new" virus is born with better skills.

Now viruses account for millions of deaths each year worldwide.
"The pandemic of 1918–19, in which 40–50 million died in less than a year, was one of the most devastating in history."
 
Maybe things are wildly different now, and most probably they are. My knowledge of AI is extremely obsolete. Still, I have a seriously hard time relating to descriptions of AI as some scary practice that somehow creates independent life with its own independent initiative and objectives, rather than simply mimicking the same by following specific expert decision trees embedded in the code or even those possibly evolving from that same code.

Yes and no. Basically what happened in the last few years is that we managed to write a generic self-training program that can figure things out from first principles. It's a breakthrough step up from ye-olde neural networks.

An example is a program that learns to play computer games at a superhuman level with only the pixels shown as input plus the score. It figured out everything from there: no notion of it being a game, of what the pixels represent, or anything like that. Just a generic learning algorithm, pixels, and score. The same principles were used to recognize cats in YouTube videos. Again, no concept of "cat" was introduced; it came out of the training algorithm.

It's mostly based on neural networks. The big difference is that we figured out how to get them stable, plus we can throw a lot more computing at it.

Some scary bits: we don't really know why this approach works, we don't know what the resulting trained programs actually do (how the logic works), and there are sometimes strange edge cases in a trained program that cause it to work totally outside its intended purpose.
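
For what it's worth, the shape of that loop is easy to show in miniature. This is a toy sketch of my own (tabular Q-learning on an invented 1-D "game", not the actual deep-network systems): the agent sees only a raw observation and a score, and the policy falls out of the reward updates.

```python
# Minimal sketch of "observation + score in, actions out": the agent has no
# built-in notion of what the game is. Plain tabular Q-learning on a toy
# 1-D track (invented for illustration); real systems swap the table for a
# deep network, but the learning loop has the same shape.
import random

TRACK_LEN = 10  # positions 0..9; sitting at position 9 scores points

def step(pos, action):
    # The "game" itself -- the agent never sees this code, only its outputs.
    pos = max(0, min(TRACK_LEN - 1, pos + (1 if action == 1 else -1)))
    reward = 1.0 if pos == TRACK_LEN - 1 else 0.0
    return pos, reward

# Q-table: expected future score for each (observation, action) pair.
q = {(s, a): 0.0 for s in range(TRACK_LEN) for a in (0, 1)}

for episode in range(500):
    pos = 0
    for _ in range(50):
        # Epsilon-greedy: mostly exploit the table, sometimes explore.
        if random.random() < 0.1:
            action = random.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: q[(pos, a)])
        new_pos, reward = step(pos, action)
        # Update toward reward + discounted best future value.
        best_next = max(q[(new_pos, a)] for a in (0, 1))
        q[(pos, action)] += 0.1 * (reward + 0.9 * best_next - q[(pos, action)])
        pos = new_pos

print(max((0, 1), key=lambda a: q[(0, a)]))  # learned policy at start: 1 (move right)
```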
 
Yes and no. Basically what happened in the last few years is that we managed to write a generic self-training program that can figure things out from first principles. It's a breakthrough step up from ye-olde neural networks.

An example is a program that learns to play computer games at a superhuman level with only the pixels shown as input plus the score. It figured out everything from there: no notion of it being a game, of what the pixels represent, or anything like that. Just a generic learning algorithm, pixels, and score. The same principles were used to recognize cats in YouTube videos. Again, no concept of "cat" was introduced; it came out of the training algorithm.

It's mostly based on neural networks. The big difference is that we figured out how to get them stable, plus we can throw a lot more computing at it.

Some scary bits: we don't really know why this approach works, we don't know what the resulting trained programs actually do (how the logic works), and there are sometimes strange edge cases in a trained program that cause it to work totally outside its intended purpose.

We were doing neural networks back in that class thirty years ago, although with more limited supercomputer resources. I'm still not terrified, you'll have to work harder! :D
 
This "guy" spent two extremely long blog posts on artificial intelligence.


The Artificial Intelligence Revolution: Part 1 - Wait But Why
The Artificial Intelligence Revolution: Part 2 - Wait But Why

I do think more people need to be concerned about this, and I'm not really sure we as a race should actually be hoping it comes about. But since you don't want to be the last one to the party, I can see why people are working on it feverishly.

I guess I just don't have enough faith in the human race to believe that it will end well, though.

Has anyone here put some serious thought into the potential coming AI revolution?

cd :O) :nonono:

A bunch :) It's one of my favorite thought topics. I liked the Wait But Why writeup a lot (I read it quite soon after it was written). Elon Musk is a fan of him too.

The arrival of superhuman intelligence is one possible solution to the Fermi paradox, by the way.

Anyway, the way I see it: biological immortality, brain emulation, at-will gene programming, and AI.

Any of those means the end of the human race as we know it. And all of them are quite close to reality.
 
This was a recent and pretty good example of AI gone rogue, Hollywood style: Ex Machina. Not up there with Blade Runner, but still thought provoking.

I saw that not too long ago on Amazon Prime. Thought provoking indeed.

Skimmed the OP's posted articles and came away with the thought that to an AI, humans will be pretty much irrelevant except when they get in the way, sort of like how we treat ants. But all that is really far above my pay grade.
 
It will come faster than many people think. It's known as "Move 37", the moment people recognized a paradigm shift.

In Two Moves, AlphaGo and Lee Sedol Redefined the Future | WIRED



SEOUL, SOUTH KOREA — In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence. ...
The machine claimed victory in the best-of-five series, winning four games and losing only one. It marked the first time a machine had beaten the very best at this ancient and enormously complex game—a feat that, until recently, experts didn’t expect would happen for another ten years.

 
It will come faster than many people think. It's known as "Move 37", the moment people recognized a paradigm shift.

In Two Moves, AlphaGo and Lee Sedol Redefined the Future | WIRED



SEOUL, SOUTH KOREA — In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence. ...
The machine claimed victory in the best-of-five series, winning four games and losing only one. It marked the first time a machine had beaten the very best at this ancient and enormously complex game—a feat that, until recently, experts didn’t expect would happen for another ten years.


If the machine was so smart, why didn't it claim victory in the best-of-five series after winning the third game instead of after winning the fourth?
 
So it all boils down to whether you want to take the red pill or the blue pill?
 

[Attached image: 4494showing.jpg]
 
It did:
"Lee Sedol then lost Game Three, and AlphaGo claimed the million-dollar prize in the best-of-five series." :flowers:

Also, the game itself isn't the only relevant feat; the speed of progress is.

AlphaGo went from not existing to amateur level to world champion level in a few years, with the most dramatic gain in less than a year. In a few more years no human will ever be able to match AlphaGo again, even with handicaps.

Same thing happened with chess. A chess program on your iPhone outmatches the best human chess player by a very wide margin.

That's the point: there is nothing special about human-level intelligence. Once you can build human-level intelligence, it is nearly trivial to create one that's orders of magnitude above it.
 
The aspect of AGI that concerns me the most is that the best funding likely comes from military applications. The design goal of such systems is not altruism toward humans, but deterrence and dominance. That means the first AGI will be an overlord, not Rosie the Robot from The Jetsons. And it will be linked to the internet. And it will have control of weapons.

Fortunately for humans, hardware capabilities will put a brake on (but not stop) the exponential growth of artificial intelligence. (Unless nanotech takes off.)


"That leads us to the question, What motivates an AI system?

The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators—your GPS’s goal is to give you the most efficient driving directions; Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation."
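
To make that concrete: a "motivation" in this sense is nothing more than an objective function handed to an optimizer. Here's a minimal sketch of my own of the GPS example (the road graph and travel times are invented): the router's entire goal is the cost it minimizes.

```python
# The GPS's "motivation" is literally just the cost function it minimizes.
# Dijkstra's algorithm over an invented road graph, travel times in minutes.
import heapq

def best_route(graph, start, goal):
    # Expand the lowest-total-cost frontier until the goal is reached.
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, []):
            heapq.heappush(frontier, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

roads = {
    "home": [("freeway", 5), ("surface", 2)],
    "freeway": [("office", 10)],
    "surface": [("office", 20)],
}
print(best_route(roads, "home", "office"))  # (15.0, ['home', 'freeway', 'office'])
```

Swap the minutes for toll cost or scenic value and the "motivation" changes with it; nothing else in the program has to care.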


Overall, I am slightly pessimistic about which side of the scale developments will tip towards.
 
Fortunately for humans, hardware capabilities will put a brake on (but not stop) the exponential growth of artificial intelligence. (Unless nanotech takes off.)

How so? Current transistors are already at nanoscale. The smallest commercial production features are a few hundred atoms wide or thick. We are getting very good at manipulating small things (more so than big things).

For (random) comparison: the soma of a single brain neuron is about 100x that size (around 1 µm).

The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators—your GPS’s goal is to give you the most efficient driving directions; Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation."

I am not as worried about motivations and goals as about the methods that will be derived from them.

A silly example would be solving world hunger by killing off all the hungry people. Or on a smaller scale: how would an AI be programmed to prioritize infants vs. mothers in dangerous childbirth situations?
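
That silly example is easy to reproduce in miniature. A toy sketch (entirely invented): hand a naive optimizer only a head-count to minimize, and it has no reason to prefer feeding people over removing them.

```python
# Toy illustration of a mis-specified objective: "minimize the number of
# hungry people" is satisfied equally well by feeding them or deleting them.
people = [{"name": f"p{i}", "hungry": True} for i in range(5)]

def objective(population):
    # The only thing the optimizer is told to care about.
    return sum(1 for person in population if person["hungry"])

def feed_everyone(population):
    return [{**person, "hungry": False} for person in population]

def remove_hungry(population):
    return [person for person in population if not person["hungry"]]

# Both "plans" drive the objective to zero; the objective alone cannot
# tell them apart. The missing constraints are the hard part.
print(objective(feed_everyone(people)), objective(remove_hungry(people)))  # 0 0
```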

Asimov's stories around the Three Laws of Robotics are really great at discussing unintended consequences like that.
 
Since, as far as we know, the universe's basic laws of physics, such as the speed of light, have remained unchanged for billions of years, it is likely they cannot be thwarted by a Super AI in a way that leads to a quick destruction of the universe. If, OTOH, the laws were capable of being thwarted, that likely would have happened by now, initiated by the Super AI of some intelligence elsewhere. Unless, of course, humans are the first to create a detrimental Super AI, which is unlikely.

If the dangers are real, it would be safer for our universe if we explore Super AI via parallel universes. Quantum computers such as D-Wave's already run computational problems through computers in other universes (yes, really, it's not science fiction). For selfish safety, invoke Super AI there rather than on our own computers.
 
I'm not really worried. Futurists have a pretty poor track record.

I worked on the Human Genome Project. There were a lot of predictions about what we could learn and the advent of genetic medicine. 15 yrs later we know a lot but nothing like what was promised. Research is fun and interesting but hard to predict.

Similarly with the protein folding problem: still in its infancy after decades of research.

Also, Moore's law is being questioned more and more these days.
 