Digging into AI

Can we escape into these higher dimensions to evade these homicidal AI machines? Or will they be able to follow us everywhere?

PS. Oh, never mind. All I wish for is a sure cure for cancer, so that we can all die of old age and not of some terrible disease.

Oh wait. Dying of real old age may be a horrible lingering death worse than death by cancer. We are doomed no matter what.

Maybe not.

Not too far off from what I've been reading. How about this... subatomic particles and genetic science.

https://www.omicsonline.org/subatomic-medicine-and-the-atomic-theory-of-disease-2161-1025.1000108.php?aid=9053
 
Agree that's not AI at work; it's merely data mining via algorithms designed by humans.

How would you define AI?

Pattern recognition is data mining; it's what humans do too. The algorithms being used are self-learning: the rules they derive for making decisions are not written explicitly by humans, and are usually not transparent even to the designers of the algorithms.

We're nowhere near general AI, if defined as full human reasoning in all kinds of contexts, yet in many narrow areas it is no longer a contest.
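That point about self-derived rules can be shown with a toy example. Below is a minimal pure-Python sketch (made-up data, nothing from any real system) of a perceptron that learns the logical-OR pattern from examples: the programmer writes only the generic update rule, and the decision rule itself emerges as numeric weights that nobody coded explicitly.

```python
# A perceptron "learns" a rule from examples rather than having it
# written explicitly by a programmer.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) pairs; no rule is hand-coded."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # Nudge weights toward reducing the error on this sample.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data for logical OR -- the "pattern" the machine must discover.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The learned weights encode the rule, but nobody wrote "return x1 or x2".
print(w, b, [predict(x1, x2) for (x1, x2), _ in data])
```

Printing the trained weights shows only small numbers, not an if/then rule a human wrote; deep networks with millions of weights compound this opacity, which is the transparency problem mentioned above.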
 
As to the OP... One of the subjects that I left off was "How far we've come in five, ten, or twenty years." I am putting together a long list of things that are second nature today but that, back then, would have gotten me laughed out of the room. AI is a new term, but the work has been going on for some time.
Think "television." How many here on ER don't remember when this was a "miracle"?
How far we've come in 20 years, compared to the 200,000+ years we've been around.
Exactly. There is nothing magical or mystical or inherently evil about AI. It's just an approach to programming that can be helpful at times and that has been around, in some realization or other, for a very long time.

When I was at Texas A&M completing my B.S. in Electrical Engineering back in the early 1980's, I took every class on AI that was offered. Although I am hardly an expert on the subject, still I was the top student at that time both in the lecture parts and the lab/programming parts of every one of these classes. It was right up my alley, so to speak and so interesting to me and tremendously fun to learn about and implement.

Granted, AI has come a long way since then, but I think I learned enough that I can pretty much follow how it got from point A to point B, and I do not fear it. It is simply an approach to programming and problem solving IMO. I think that a lot of the fear that has developed around it, evolved due to its use as a topic in science fiction and fantasy stories. Like any other scientific, computational, or engineering advance, it can be used for good or evil depending upon who is using it. But so what. There is nothing new about that, either. I admit that articles about it can be great clickbait.
 
Good perspective.

For the average person, who doesn't understand the basics of applied science, let alone anything newish like AI, it is easy to make AI seem oh so scary. It's profitable too, in terms of sci-fi books, movies, website hits, etc.
 
I was "in the field" back in the 1980s excitement over AI (which fell completely flat). Neural networks were supposed to learn and solve huge problems too big for humans. It didn't happen.


I was there, too, a few years behind you. Back then, there was a lot of hype, and what could be achieved was just theory. I remember many in the field believing that some AI would never get there because there couldn't be HW powerful enough to run the SW. As HW computing power grew exponentially for decades, many of those theories can now be implemented.



Anyone remember AlphaGo, the Go-playing AI? It was only a few years ago that the world's then 3rd-ranked player could beat it just once (out of the five matches held). A year or two later, AlphaGo 2.0 beat the top-ranked player 3-0. Another year later came the self-learning game AI. Now, the game AIs (there are many) have to give handicaps to top pro players to even up the matches.



I saw many movies that portray AI vs. humans. Humans eventually triumph in most of these movies. Now, I wonder if that would be the case. SW is incredibly complex and will always have bugs. Like nuclear facility disasters, one day we may see an AI disaster of equal (Chernobyl, Fukushima) magnitude.
 
Somehow, back in the dark ages of the early 80s, people thought Lisp programming was the way to AI. I took a brief course in early AI, and to me it was BS. Seriously? We wrote ELIZA programs in LISP to ask you questions about how you feel. It was more history of AI than actual progress. (Yes, AI was thought about for a long time, even before electronic computers.)

We discussed, but did nothing about, the incredible problem of computer vision. Etc. I guess the point of the course was to make us realize we were in the infancy of the field.

Flash forward 35+ years. Progress is made. We're in the small child stage. Children can start fires. It may start getting dangerous.
 
Flash forward 35+ years. Progress is made. We're in the small child stage. Children can start fires. It may start getting dangerous.




I always ridiculed the movies with AI threatening the human race. Now, I don't think that is too far-fetched in the future. AI SW will have bugs, and it will be hacked. That's danger enough for AIs that diagnose diseases, drive cars, ....
 
It used to take people to make fake news.

From this page: https://blog.openai.com/better-language-models/#sample2


Human written prompt:
A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.
Machine generated "news report":
The incident occurred on the downtown train line, which runs from Covington and Ashland stations.


In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.


“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”


The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.


The Nuclear Regulatory Commission did not immediately release any information.


According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.


“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.
 
Flash forward 35+ years. Progress is made. We're in the small child stage. Children can start fires. It may start getting dangerous.

I always ridiculed the movies with AI threatening the human race. Now, I don't think that is too far-fetched in the future. AI SW will have bugs, and it will be hacked. That's danger enough for AIs that diagnose diseases, drive cars, ....

Yes, AI can be dangerous because of the way we use it for things it cannot yet do. Look at people who trusted their lives to inadequate autopilots that cannot see semi-trailers, fire trucks, and highway barriers. It's not too different from people who shoot themselves with their own firearms.

As for AI getting to the point where it can produce something its creator cannot (and I don't mean just doing it faster, as even a $1 calculator from a dollar store can multiply two numbers vastly faster than I can), I am waiting for an AI program to tell us how to build a better and cheaper lithium battery. We need one badly. This alone would solve a lot of problems for humankind.

Next, it can tell us how to cure cancer, and how to deflect the asteroid we keep being threatened with. If AI is so smart, I would like to think it can be benevolent, and not aspire to dominate mankind. :)
 
Somehow, back in the dark ages of the early 80s, people thought Lisp programming was the way to AI. I took a brief course in early AI, and to me it was BS. Seriously? We wrote ELIZA programs in LISP to ask you questions about how you feel. It was more history of AI than actual progress. (Yes, AI was thought about for a long time, even before electronic computers.)

We discussed, but did nothing about, the incredible problem of computer vision. Etc. I guess the point of the course was to make us realize we were in the infancy of the field.

Flash forward 35+ years. Progress is made. We're in the small child stage. Children can start fires. It may start getting dangerous.
What a rotten course! My sympathies. ELIZA was more of a big thing back in the 1960's than it was by the early 1980's, and the summer of 1972 was when I first had that amazing experience of interacting with ELIZA in a beginning programming class. But by the early 1980's neural networks were real, and were being warned about by the mass media as being a terribly, hideously dangerous new form of Artificial Intelligence (OMG! Danger Will Robinson! Batten the hatches! :eek: ) that would probably lead to the end of life as we know it, I suppose in less than a decade. :D
 
What a rotten course! My sympathies. ELIZA was more of a big thing back in the 1960's than it was by the early 1980's,
Yep. In retrospect, the professor was not good. I think he was stuck back in his thesis years. I think he really only wanted it to be a LISP programming course, of which he did a good job, because I still remember CAR() and CDR(), etc. We wrote our ELIZAs well enough to get some good laughs out of the dialogs.

After the first part, he then jumped us forward to the math involved in computer vision. It was basically a different course, and not as much fun. However, it was more relevant to the problems of the time and possibly sparked someone's future thesis.
 
I've incorporated AI neural networks into my business. They do things that I couldn't possibly do, and do them way faster. That said, when one does something wrong, it's not an easy fix: the process is a black box, input and output. I wish I understood it more, so I could replace even more parts of my business!
 
Re: health and disease... one small paragraph from this site, cited above and below.

Medical oncologists rely on nuclear medicine to aid in the detection, diagnosis, staging, and surveillance of several cancers including lung, lymphoma, melanoma, breast, colorectal, esophageal, head and neck, pancreatic, ovarian, cervical, and thyroid cancers. For the majority of tumors, detection is through targeted radionuclides, such as 18FDG, which enables the localization of neoplastic lesions with increased glucose metabolism. Another example is the localization and characterization of neuroendocrine tumors by means of the somatostatin analog octreotide, radiolabeled with Indium-111 (111In-pentetreotide) or, more recently, with the positron emitter Gallium-68 (68Ga-octreotide). For most tumors 18FDG-PET imaging is the most commonly used modality. 18FDG-PET imaging can depict the whole-body distribution of areas of increased metabolic activity, indicating the relative underlying metabolic activity of a tumor. Over 90% of PET utilization is in the field of oncology, with cardiology and neuroscience at a distant 5% and 3%, respectively [24]. As reported by Gambhir et al., 18FDG-PET imaging sensitivity and specificity in oncology are estimated at 84% (based on 18,402 patient studies) and 88% (based on 14,264 patient studies), respectively [25]. It has been shown that adding 18FDG-PET imaging to conventional staging of cancer has altered the management of 13.7-36.5% of patients [25]. Currently, exciting applications of PET and SPECT include the indirect visualization of gene expression using reporter probes that aid in the diagnosis as well as monitor the therapeutic treatment of the disease [


https://www.omicsonline.org/subatomic-medicine-and-the-atomic-theory-of-disease-2161-1025.1000108.php?aid=9053

It's not easy reading, but it appears that gene therapy is not tomorrow, but today.
 
... when it does something wrong, it's not an easy fix; the process is a black box, input and output. I wish I understood it more, so I could replace even more parts of my business!


Deep-learning neural networks are the hot thing right now, with image recognition being of primary significance to self-driving car technology.

Yet, witness how a neural network can be fooled. It shows that people do not really understand how it works. I shared some of these in another thread about self-driving cars. The following videos are some examples from universities.
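The "fooling" phenomenon can be sketched in miniature. The example below is a hypothetical pure-Python illustration using a linear classifier with made-up weights: nudging each input feature slightly in the direction that raises the score flips the decision, which is the core idea behind gradient-based adversarial attacks (e.g. FGSM) on deep networks.

```python
# Toy linear classifier with made-up weights; a tiny coordinated nudge
# to the input flips its decision, mimicking an adversarial example.

w = [0.8, -0.5, 0.3]          # weights of a "trained" linear classifier
b = -0.2

def classify(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

x = [0.3, 0.5, 0.4]           # an input classified as 0

# Adversarial nudge: move each feature by epsilon in the sign of its
# weight, i.e. the direction that most increases the score.
eps = 0.1
x_adv = [xi + eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(classify(x), classify(x_adv))   # the small perturbation flips the label
```

For an image classifier the same trick spreads an imperceptible change across thousands of pixels, which is why the fooled network's mistake looks so baffling to a human observer.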


 
AI may be more advanced than we acknowledge. AI systems have now been written that were able to defeat the world champions in two complex games, Go and chess.

OTOH, a Microsoft Twitter bot, which is similar to AI, was easily defeated.
 
AI may be more advanced than we acknowledge. AI systems have now been written that were able to defeat the world champions in two complex games, Go and chess.

OTOH, a Microsoft Twitter bot, which is similar to AI, was easily defeated.
Complex to a degree. There is still a well-defined set of rules for these games.

Consider self-driving cars. Humans process so many cues that we take for granted that are "rules" to us.

Example: the traffic lights are out due to a power failure. The rule is supposed to be that this is a four-way stop. But the real rules are more complex. We know that the hopped-up muscle car behind the large SUV to your right might very well draft the SUV on its turn through. It is just something we know to be true. Similarly, we see the group of inattentive teenagers in the other car and really don't know what they'll do, so we use extra caution. Etc...
 
Complex to a degree. There is still a well-defined set of rules for these games...

It did not surprise me that machines would one day beat humans at chess. Or Jeopardy.

But has a machine been able to decide if the game of chess is a "trivial" game, meaning that either the side who moves first will always win, or the game will always end in a draw when two infinitely smart players face off?

For more practical applications, I am still waiting for a machine to use its encyclopedic storage of physical and chemical properties of all materials, along with its "superior" reasoning, to tell us how to build a better lithium battery.

"Elementary, my dear Watson. From the properties of materials, it is obvious that if you add 1% of phosphorus in the form of sodium phosphate in the electrolyte, and then instead of using graphite for the anode..."
 
Exactly. There is nothing magical or mystical or inherently evil about AI. It's just an approach to programming that can be helpful at times and that has been around, in some realization or other, for a very long time.

When I was at Texas A&M completing my B.S. in Electrical Engineering back in the early 1980's, I took every class on AI that was offered. Although I am hardly an expert on the subject, still I was the top student at that time both in the lecture parts and the lab/programming parts of every one of these classes. It was right up my alley, so to speak and so interesting to me and tremendously fun to learn about and implement.

Granted, AI has come a long way since then, but I think I learned enough that I can pretty much follow how it got from point A to point B, and I do not fear it. It is simply an approach to programming and problem solving IMO. I think that a lot of the fear that has developed around it, evolved due to its use as a topic in science fiction and fantasy stories. Like any other scientific, computational, or engineering advance, it can be used for good or evil depending upon who is using it. But so what. There is nothing new about that, either. I admit that articles about it can be great clickbait.


+1

The technology itself is morally neutral. Can the same thing be said about all the eventual developers of this technology and their respective motivations?
 
What people are talking about is the future where intelligent machines revolt, take over the world from humans, and put us all in a "Matrix".

https://en.wikipedia.org/wiki/The_Matrix.




That's sci-fi. To me, the likely scenario is that some key AI that has taken over important human tasks will malfunction due to SW bugs.



BTW, the AlphaGo SW technology/methodology can be extrapolated to apply in other areas. IIRC, that's one of the main reasons they invested so much in AlphaGo.
 
That's sci-fi. To me, the likely scenario is that some key AI that has taken over important human tasks will malfunction due to SW bugs.

BTW, the AlphaGo SW technology/methodology can be extrapolated to apply in other areas. IIRC, that's one of the main reasons they invested so much in AlphaGo.

Yes, I worry about AI being misapplied and trusted where its deficiencies can do harm. In other words, the machines may not yet be as smart as people believe, and may do stupid things, like not seeing a semi-trailer and driving under it.

But many people are afraid of the reverse, that these machines can get too smart, and they scheme to do us harm in the manner of Skynet. :)


PS. Do people here remember the incident where two AI computers in a Facebook lab started exchanging gibberish? People immediately said that they were inventing a new language so that we could not eavesdrop on their scheming. The truth was that buggy software was just emitting garbage. Nothing new here.

See: https://towardsdatascience.com/the-truth-behind-facebook-ai-inventing-a-new-language-37c5d680e5a7.
 
PS. Do people here remember the incident where two AI computers in a Facebook lab started exchanging gibberish? People immediately said that they were inventing a new language so that we could not eavesdrop on their scheming. The truth was that buggy software was just emitting garbage. Nothing new here.


That's funny. People forget that AI is only as smart as the people who write the SW.




Speaking of AIs going at each other... they let the AlphaGo AI learn by itself. The resulting AI can't be beaten by its predecessor. See https://www.scientificamerican.com/...ought-alphago-zero-vanquishes-its-predecessor/.
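The self-play idea can be illustrated with a toy analogy. Below is a minimal sketch, in pure Python, of tabular Q-learning on a made-up game (Nim with 5 stones: take 1 or 2, whoever takes the last stone wins) that improves purely by playing against itself. This is only a loose analogy to AlphaGo Zero's method, which combines self-play with deep networks and tree search.

```python
import random

random.seed(0)

ACTIONS = (1, 2)

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

# Q[(pile, action)] = estimated value for the player about to move.
Q = {}

def best(pile):
    return max(legal(pile), key=lambda a: Q.get((pile, a), 0.0))

# Self-play training: both "players" share and update the same table.
alpha, eps = 0.5, 0.3
for _ in range(5000):
    pile = 5
    while pile > 0:
        a = random.choice(legal(pile)) if random.random() < eps else best(pile)
        nxt = pile - a
        if nxt == 0:
            target = 1.0   # taking the last stone wins
        else:
            # Opponent moves next, so our value is minus their best value.
            target = -max(Q.get((nxt, b), 0.0) for b in legal(nxt))
        q = Q.get((pile, a), 0.0)
        Q[(pile, a)] = q + alpha * (target - q)
        pile = nxt

# Optimal play: always leave the opponent a multiple of 3 stones.
print({n: best(n) for n in range(1, 6)})
```

After training, the table dictates the game-theoretically correct move from every winnable position, even though no strategy was ever programmed in; that is the sense in which the self-taught version surpasses hand-tuned predecessors.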
 
That's funny. People forget that AI is as smart as the people who write the SW.
That's an un-scary thought, and why I'm not worried. Although I do kind of fear "AI kiddies": not because they'll take over the world or exterminate humanity, but because they'll make us all do the equivalent of taking off our shoes at the TSA.


I've written what the lay press might call an AI app for Android. It recognizes images in a narrowly defined realm using a combination of OCR and graphical interest points. There are so many "knobs" to set, though, that it's far from optimal. I just picked some settings and ran the library images, which populated a database. Then, in user mode, camera frames are used to find matches in the database. It actually works, and I think it could be made faster and more accurate, though usually those two are at odds.
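The database-matching step described above can be sketched roughly as follows. This is a hypothetical, simplified illustration (toy 4-dimensional descriptors, made-up image names), not the actual app: real descriptors would come from interest-point detectors such as ORB or SIFT, and the ratio test shown is the standard trick for rejecting ambiguous matches.

```python
import math

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# "Database" built from library images: image name -> list of descriptors.
database = {
    "logo_a": [[0.9, 0.1, 0.2, 0.8], [0.1, 0.9, 0.7, 0.2]],
    "logo_b": [[0.5, 0.5, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9]],
}

def best_match(frame_desc, ratio=0.8):
    """Return the library image whose descriptors best match the frame's."""
    votes = {}
    all_desc = [(name, d) for name, ds in database.items() for d in ds]
    for fd in frame_desc:
        ranked = sorted(all_desc, key=lambda nd: dist(fd, nd[1]))
        (name1, d1), (_, d2) = ranked[0], ranked[1]
        # Ratio test: accept only if clearly closer than the runner-up.
        if dist(fd, d1) < ratio * dist(fd, d2):
            votes[name1] = votes.get(name1, 0) + 1
    return max(votes, key=votes.get) if votes else None

# A camera frame producing descriptors close to logo_a's.
frame = [[0.88, 0.12, 0.22, 0.79], [0.12, 0.88, 0.68, 0.21]]
print(best_match(frame))
```

The "knobs" mentioned above correspond to parameters like the ratio threshold and the descriptor settings; each trades accuracy against speed, which is why tuning them all is hard.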
 
Some applications of deep neural networks have worked quite well, despite occasional glitches. Examples include image recognition and speech recognition.

AI still has a long way to go, but it has improved a lot in the last 20-30 years. In the late 90s, my aerospace megacorp looked into applications of speech recognition in an aircraft cockpit, as another means of pilot input to the flight system. It worked so poorly that people laughed and said it would surely misinterpret voice commands and get everyone killed. I wonder if they have tried again.
 