The Singularity... and you

Regardless of the technology, humans will be involved. Will they be like the engineers who worked on Apollo, the Space Shuttle, Hubble, the rovers...

Or the 'engineers' that launched healthcare.gov? One of these two choices scares me.
MRG
 
Well, I actually look forward to it, but I also know that it likely won't come at the time predicted. Basically, that's because computing has stopped following Moore's Law: after keeping up so well through the mid-'00s, chip makers started having trouble increasing computing power due to the size constraints of silicon. Multi-core processors, the last big attempt at significantly increasing computing power, don't add much when you keep adding cores; the difference becomes negligible past about eight cores.

It is going up, but not nearly as quickly as before; it would take a breakthrough technology to revive it.
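To put a rough number on why extra cores stop helping, here is a minimal sketch using Amdahl's law (my framing, not something from the post above) with an assumed 10% serial fraction; the actual figures depend entirely on the workload:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup when `serial_fraction` of the work can't be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    s = 0.10  # assumption: 10% of the workload is inherently serial
    for n in (1, 2, 4, 8, 16, 32):
        print(f"{n:>2} cores -> {amdahl_speedup(s, n):.2f}x speedup")
    # With s = 0.10 the speedup is about 4.7x at 8 cores and can never
    # exceed 10x, no matter how many cores you add.
```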

While the singularity will eventually remove a lot of jobs, people tend to find new jobs to replace them, for the most part. Also, the increase in wealth due to the singularity should help more people retire earlier, if that is what they choose.

As for body transfers and health discoveries, there is a good reason why those are in a totally different category from technological discoveries. There are no moral or legal rules governing experiments on robots and computers, but there is a whole slew of them when it comes to experimenting on humans. So progress that requires human experimentation goes very, very slowly and carefully, for the most part; it is on a whole different scale in terms of the quantity of testing required compared with technological innovation (decades of clinical studies vs. some genius in their basement working for a year or two).
 
Just remember, those so-called "engineers" were hand-picked by the government........:rolleyes:

Yup, the same Gummint that picked the engineers who put us on the moon nearly 50 years ago... Notice how many other countries have matched what our Gummint did nearly 50 years ago.
 
Hmmm... limits...
I don't know about that... perhaps.

In the case of health-based progress... it is to some degree limited by testing, but genetic research and the scale of studies are just getting off the ground. Hardly a day goes by without major new discoveries.

As to computer speed and capabilities, that is beyond my knowledge and understanding; however, not all technological advances are based on speed. We reached some limits with the Fermilab accelerator over 30 years ago, but that was not on the same track as the other rails of knowledge that brought us to today's level of technology.

In the OP, I mentioned Charlie Rose's interview with Larry Page. Here is a take on the interview: Larry Page Lays Out His Plan for Your Future | Wired Business | Wired.com

And a quote that is intriguing:

Rose asked him about a sentiment that Page had apparently voiced before that rather than leave his fortune to a cause, that he might just give it to Elon Musk. Page agreed, calling Musk’s aspiration to send humans to Mars “to back up humanity” a worthy goal. “That’s a company, and that’s philanthropical,” he said.

Basically, we relate to things that we understand. Usually that means things that are "logical" and within the framework of a social structure with which we are familiar.

When I was young (1940s and early '50s), my dad worked in the textile industry. One day he came home and explained that some of the looms in the mill had been fitted with a new "robot" ("computer" was not a familiar term at the time) that would cut the workload by automatically changing the pattern of the weave. It seemed logical that because productivity would improve, the company would make more money and the workers' hours would be cut while they received the same pay. That was logical... Within two years the textile industry moved from New England to the South. In ten more years, the industry moved from the South to the Far East and third-world countries.

We look at knowledge and progress as a continuum... albeit a somewhat compressed one of late. Indeed, until now, much of our progress has come from the discoveries of individuals working alone or in small groups. If what Page proposes in fact becomes reality, marshalling the forces of mega-corporations could accelerate the move toward the singularity, moving the "due date" to an earlier time than current projections.

One of the comments that fascinated me was that Google did not yet know what we know... Something to ponder.
 
Having been in the IT world for about 25 years, much of it writing code, my own personal feeling is that a truly intelligent, self-aware, sentient machine will never happen.

In fact, I think you could give the smartest people on the planet regarding AI (artificial intelligence) access to an unlimited number of infinitely fast processors, and an unlimited amount of infinitely fast memory and storage, and I still don't see it happening.

A sentient machine would have to be "bootstrapped" by something, and I think that something would have to be our own knowledge of how we as humans think and are sentient. But we don't know enough about all that to reproduce it (short of biological reproduction, i.e., having a baby).

I would be willing to bet that even 2000 years from now, assuming technology continues to advance, we still wouldn't see a truly sentient machine. I just don't see it happening. Ever.
 
Regardless of the technology, humans will be involved. Will they be like the engineers who worked on Apollo, the Space Shuttle, Hubble, the rovers...

Or the 'engineers' that launched healthcare.gov? One of these two choices scares me.
MRG
Let's see: one Apollo burned on the launch pad, Space Shuttles exploded, Hubble had a bad mirror because some engineer screwed up the math, and one Mars mission burned up when it reached Mars because someone forgot to convert between metric and English units.

Those were somewhat unexpected. Did anyone expect healthcare.gov to go smoothly?
 
Yup, the same Gummint that picked the engineers who put us on the moon nearly 50 years ago... Notice how many other countries have matched what our Gummint did nearly 50 years ago.

Yeah, things have really deteriorated in the "pickers" since then, for sure.
 
I started reading Ray Kurzweil's books many years ago, starting with "Live Long Enough to Live Forever". Frankly, I choose to have an optimistic outlook on the future, and so I have tried to incorporate some of his thinking and ideas into my own life ... so that I could "live long enough". The Singularity does not require a sentient computer ... just computers that design their own successors ... at rapidly increasing rates. Given that, progress is easily exponential. Imagine a computer that contains all the information on the human genome and all the past studies and data produced by thousands of highly intelligent humans over decades, and that can duplicate and then expand on all of it within weeks, days, or minutes. Could that lead to the ability to extend human longevity? I certainly think so, and I'm looking forward to living at least 300 years ...
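Just to make that compounding explicit, here is a toy sketch, with purely hypothetical numbers, of the "each generation designs a faster successor" idea:

```python
def generations_to_reach(target_multiple: float, speedup_per_generation: float = 2.0) -> int:
    """Generations needed for cumulative capability to pass `target_multiple`
    of the starting capability (taken as 1.0)."""
    capability, generations = 1.0, 0
    while capability < target_multiple:
        capability *= speedup_per_generation
        generations += 1
    return generations

if __name__ == "__main__":
    # With a purely hypothetical 2x gain per generation, a millionfold
    # improvement takes only 20 generations -- that's the compounding at work.
    print(generations_to_reach(1_000_000))  # prints 20
```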
 
Well, I actually look forward to it, but I also know that it likely won't come at the time predicted. Basically, that's because computing has stopped following Moore's Law.... It is going up, but not nearly as quickly as before; it would take a breakthrough technology to revive it.
Kurzweil's plot includes many historical breakthrough technologies and expects (indeed requires) more of the same to keep pace. Although we might not be able to imagine them, I don't see why breakthroughs are going to stop happening. The world demand for mainframe computers was once deemed to be in the dozens, and of course nobody will ever need more than 640KB of memory... We suck at predicting specifics, but so far we have managed to use the breakthroughs in unexpected ways and move the ball downfield.

Having been in the IT world for about 25 years, much of it writing code, my own personal feeling is that a truly intelligent, self-aware, sentient machine will never happen.
Never is a long time, but I hear you. I used to get paid to code, and it's a pretty blunt instrument when you look at how the brain works (or might work, since we really don't know). No flying cars, no decent artificial intelligence... it's looking like humanity failed, hehe.
 
I started reading Ray Kurzweil's books many years ago, starting with "Live Long Enough to Live Forever". Frankly, I choose to have an optimistic outlook on the future, and so I have tried to incorporate some of his thinking and ideas into my own life ... so that I could "live long enough". The Singularity does not require a sentient computer ... just computers that design their own successors ... at rapidly increasing rates. Given that, progress is easily exponential. Imagine a computer that contains all the information on the human genome and all the past studies and data produced by thousands of highly intelligent humans over decades, and that can duplicate and then expand on all of it within weeks, days, or minutes. Could that lead to the ability to extend human longevity? I certainly think so, and I'm looking forward to living at least 300 years ...

Could it happen? Maybe. It starts with brave humans who believe they can do the impossible.

Yes, there will be 'failures' along the way. Are they failures, or just part of learning?
MRG
 
While the singularity is not about Google, the company's new and ongoing projects certainly point in that direction.
Google mapping the brain, plotting face recognition, analyst asserts - Network World

In particular, the part about mapping the brain, and the implied reason for doing so.

Probably the company's most ambitious project to date, however, is its ongoing effort to completely map the structures of the brain.

"What they are doing is taking information from brain-scanning technologies to replicate the functionality of the human brain in machines," Strawn said.

And while the time frame on this project is a matter of decades, many subsidiary accomplishments could result, he added, citing improvements to natural language processing and computer vision as examples.

... and Google is not alone in brain mapping initiatives, though from a different perspective.
http://www.scientificamerican.com/article/brain-mapping-projects-to-join-forces/
 
Having been in the IT world for about 25 years, much of it writing code, my own personal feeling is that a truly intelligent, self-aware, sentient machine will never happen.

In fact, I think you could give the smartest people on the planet regarding AI (artificial intelligence) access to an unlimited number of infinitely fast processors, and an unlimited amount of infinitely fast memory and storage, and I still don't see it happening.

A sentient machine would have to be "bootstrapped" by something, and I think that something would have to be our own knowledge of how we as humans think and are sentient. But we don't know enough about all that to reproduce it (short of biological reproduction, i.e., having a baby).

I would be willing to bet that even 2000 years from now, assuming technology continues to advance, we still wouldn't see a truly sentient machine. I just don't see it happening. Ever.

I remember back in the late '70s and early '80s, just as microprocessors were starting to become established, everyone was talking about AI, robots, and chess-playing computers. I remember writing programs myself, coding in assembly language on a soldered-together SWTP 6800, trying to write "thinking" programs, "evolving" programs, etc. It was magic (to my young self, anyway).

The trouble was that these were just increasingly complex systems; think about the avionics of a modern airliner, Deep Blue, etc. What was called AI one year was just a normal program a few years later. In the end even the term AI fell out of vogue, as it had no real meaning.

IMO we are still on the same track as we were when I started in computer software in the '70s: increasingly complex, connected, amazing software doing increasingly wonderful things, and in the process making our lives better.

But I agree with LoneAspen: there will never be a silicon-based sentient or conscious machine. Complex, yes; amazing, yes; but sentient or conscious, no. They are simply machines that do what they are programmed to do.

On the other hand, as silicon systems begin to reach the limits imposed by physics, we are seeing the rise of genetically engineered biological systems, which, while fundamentally different, may eventually surpass silicon systems in importance. These are by their very nature not simply machines that do what they are told.

Will they eventually be able to have consciousness? We know biological systems can, but who knows. To me this is where the real excitement of the next 100 years lies.
 
Having been in the IT world for about 25 years, much of it writing code, my own personal feeling is that a truly intelligent, self-aware, sentient machine will never happen.

In fact, I think you could give the smartest people on the planet regarding AI (artificial intelligence) access to an unlimited number of infinitely fast processors, and an unlimited amount of infinitely fast memory and storage, and I still don't see it happening.

A sentient machine would have to be "bootstrapped" by something, and I think that something would have to be our own knowledge of how we as humans think and are sentient. But we don't know enough about all that to reproduce it (short of biological reproduction, i.e., having a baby).

I would be willing to bet that even 2000 years from now, assuming technology continues to advance, we still wouldn't see a truly sentient machine. I just don't see it happening. Ever.

I'm not so sure about that. I'm a programmer/engineer, and I know that the stuff I'm personally working on doesn't fit the bill in any way, shape, or form. Nor should it.

But I chaperoned my FLL robotics team on a field trip to the Neurosciences Institute in La Jolla. Those folks are approaching the programming in a completely different way. They're trying to learn how the brain works and are simulating it with a completely different style of programming. I didn't understand all of it, but it was pretty mind-blowing. (We spent more time talking to the guys who built their crazy cool robots that play soccer on Segway platforms, so I didn't get a chance to dig deeper into it.)

There are people working on a whole new paradigm, much closer to the way the brain operates... as a way of confirming how the brain operates. Not just decision trees, but preferences and biases; some preferences learned, others random and innate, etc. (Why do some people like chocolate, and others prefer candy corn?)

Synthetic Neural Modeling - The Neurosciences Institute
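For what it's worth, here is a deliberately toy sketch of the contrast I'm describing, nothing like the Institute's actual models: an agent whose choices start from innate, random biases and are then nudged by learned preferences, rather than following a fixed decision tree:

```python
import random

class PreferenceAgent:
    """Toy agent: innate random biases plus preferences adjusted by experience."""

    def __init__(self, options, learning_rate=0.2, seed=None):
        rng = random.Random(seed)
        # Innate, random starting biases -- not hand-coded decisions.
        self.preferences = {option: rng.random() for option in options}
        self.learning_rate = learning_rate

    def choose(self):
        # Pick whichever option is currently preferred most.
        return max(self.preferences, key=self.preferences.get)

    def experience(self, option, reward):
        # Nudge the learned preference toward the reward actually received.
        current = self.preferences[option]
        self.preferences[option] = current + self.learning_rate * (reward - current)

if __name__ == "__main__":
    agent = PreferenceAgent(["chocolate", "candy corn"], seed=42)
    print("innate biases:", agent.preferences)
    for _ in range(5):
        agent.experience("candy corn", 1.0)      # repeated pleasant experiences
    print("after learning:", agent.preferences)  # candy-corn value climbs toward 1.0
    print("current favorite:", agent.choose())
```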
 
Just don't forget to program in the Three Laws before turning the robots loose to replicate themselves, write their own programs, etc.
 
Regarding impossibilities, I'm always mindful of Arthur Clarke's First Law: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
 
Did you guys notice in that article that Ray K. is referred to as "Futurist and Google’s Director of Engineering Ray Kurzweil"... How long has he been Google's Director of Engineering?
 
He became Google Director of Engineering in December 2012. Good choice, in my opinion.
 