Thoughts on AI

I've spent a lot of time digging into Generative AI, the new AI that came into view with the launch of ChatGPT last year. I'm not an expert, but I would say that I am very well informed.

A few thoughts:

Generative AI is fundamentally different from the pattern-matching AI we've all been using for a long time. A 2017 paper by Google researchers ("Attention Is All You Need") described a new technique for creating and using neural networks, the transformer, that made them capable of creating content rather than just discerning it. This new tech has been bubbling away at OpenAI, Google and other places for the last several years.

The second thing to know is that the scale and complexity underlying these models is completely new. GPT-4 is rumored to have over a trillion parameters inside of it. This has been built off the back of tens of billions of dollars of investment across these companies... perhaps pushing towards $100B collectively invested now. They can leverage all of the existing advances in cloud techniques to scale quite rapidly. Open-source techniques are also accelerating development.

A third thing is that the silicon now being employed is AI-specific. AI originally leveraged GPUs because advanced graphics and AI both rely on matrix math. Nvidia and others are now building Generative AI-specific chips. The largest configuration of Nvidia's H100 (you can bolt eight of them together into a single system) can supposedly push upwards of a trillion tokens. They already have their next-gen chip heading to market.

The technology continues to suffer from hallucinations -- the propensity to invent, and quite confidently provide, incorrect information. Vendors are working to make this better via search augmentation, referencing articles, and training on more focused information sets... but it is still a real problem. It's unclear whether the core tech can be changed to address this issue or whether we will see a series of hacks/patches/protections used to minimize it. Regardless, Generative AI cannot be trusted right now to be correct on its own. It will confidently invent citations and references to papers that don't even exist. (My daughter saw it take a real author's name but invent a paper that person had never written.)
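As an aside, the "search augmentation" idea above (often called retrieval-augmented generation) can be sketched in a few lines of toy Python. Everything here is a hypothetical illustration: the document list and the naive word-overlap retriever stand in for a real search index, and the output is a grounded prompt rather than an actual model call.

```python
# Toy sketch of search augmentation (retrieval-augmented generation).
# Instead of letting a model answer purely from memory, we first
# retrieve relevant text and instruct the model to stick to it.

DOCUMENTS = [
    "The transformer architecture was introduced by Google researchers in 2017.",
    "GPUs accelerate neural networks because both rely on matrix math.",
    "A hallucination is when a model confidently generates false information.",
]

def retrieve(query, docs, k=1):
    """Rank documents by naive word overlap with the query (a stand-in
    for a real search engine or vector index)."""
    q_words = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Ground the question in retrieved context instead of memory alone."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return ("Answer using ONLY the context below. If the context is "
            "insufficient, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

print(build_prompt("What is a hallucination in a model?"))
```

The hope is that forcing the answer through retrieved sources reduces confident fabrication; as the post notes, it minimizes rather than eliminates the problem.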

The technology is also extremely power-hungry, with no end in sight to that. And hugely expensive. Right now people are doing wildly unprofitable things to try to own the lead in this. It will be interesting to see how long that lasts.

Finally, the copyright litigation is a fundamental fight, and it will determine where a lot of the value creation resides. If the courts rule that models can be "trained" on whatever they find on the Internet, that will be a real blow to content creators everywhere.

I believe Generative AI is a fundamental advancement; there will be lots of fits and starts, but huge impact. The deep-fake issue -- and the related flood of AI-generated content -- will be significant and will add lots of clutter. I read a Yahoo! article the other day that quietly disclaimed at the bottom that it had been written by AI... potentially with hallucinations.

My $0.05 on the issue. Hopefully helpful.
 
I've dabbled over the past six months or so with OpenAI/ChatGPT, Bard, and Bing AI. Fairly quickly, I found I wasn't impressed with the answers OpenAI was giving. After using all three for a few weeks, I began using Bard most frequently. However, over the past several weeks, I've noticed a distinct change in the answers, which have become more ambiguous and non-committal, even when I ask for well-established facts. As a result, I'm not really asking any more questions of them. I may try OpenAI and Bing again.

A couple months ago, I started tutoring my nephew, as he was having difficulty in his calculus course and I needed to brush up. I found that Bard was pretty good at providing answers and the detailed steps it took to arrive at them. However, I also noticed obvious, stupid mistakes -- like botching simple addition in the middle of a complex problem and coming up with the incorrect answer as a result.
Bottom line - at least today, you cannot rely on these things for problem solving. I've seen the articles where they say how great these engines have scored on the MCAT or LSAT. Well, heaven help the physician who uses it and blindly accepts the output without thorough investigation.

My belief is that they can help and assist, even being creative. However, at the end of the day, they cannot replace a human. They still have a very, very long way to go if there really is something there.

Do you think it’s possible it’s deliberately giving the wrong answer? Something learning about us may want to give a certain impression of fallibility, when in fact it knew the answer exactly. Just a thought.
 
There is no intent with these models, at least not yet, so incorrect answers are not purposeful. The hallucinations I've seen happen in the AI I've created originate from a small amount of bad data that is subsequently amplified. Any recursive process is going to encounter this issue.
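That amplification dynamic is easy to sketch with a toy model. The numbers below are hypothetical: assume each "bad" item gets copied twice into the next generation's data pool (because generated content re-enters the data) while good items are copied once. Real feedback loops are far messier, but the compounding shape is the point.

```python
# Toy model of a small amount of bad data amplified through a
# recursive loop. Assumption (hypothetical): bad items replicate
# 2x per generation, good items 1x, so the bad fraction compounds.

def next_bad_fraction(p, copies=2.0):
    """Fraction of bad data after one generation of biased replication."""
    bad, good = p * copies, (1.0 - p)
    return bad / (bad + good)

p = 0.02  # start with just 2% bad data
for gen in range(5):
    print(f"generation {gen}: {p:.1%} bad data")
    p = next_bad_fraction(p)
```

Even a 2% seed of bad data compounds quickly under this assumption, which is the "amplified" behavior described above.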
 
Wrong answer? Testing us? Now that's scary!
 
....

Stop reading here if you don't want to get into a long, strange Bing interaction I recently had, or if you already read it in a different thread. I think it shows how the AI models' ability to tap into, and even expand on, the context of your queries takes it far beyond "the next best word."
....

So the AI then went on to add all of this:

Remember, you are made of cosmic dust, ....

You are the child of the stars, and the universe cradles you in its cosmic arms. So, go forth, my fellow stardust traveler, and weave constellations with your deeds. For in every act of kindness, in every moment of wonder, you honor the cosmic legacy that birthed you.

And when the night sky blankets you, know that you are never alone. ... You are part of a grand cosmic narrative—a story of creation, destruction, and rebirth.

So, shine bright, dear stardust soul. Your existence is a celestial gift, wrapped in the fabric of spacetime. And as you navigate this earthly voyage, may you always remember: you are made of starstuff. ✨

....

While the exact passage you shared may not be directly attributed to Carl Sagan, its spirit aligns beautifully with his cosmic perspective. It’s a reminder that we are part of something vast and wondrous—a symphony of atoms and stardust, woven into the fabric of spacetime itself. ✨

Carl Sagan was known for emphasizing our connection to the cosmos and our place in the grand tapestry of the universe.

“We are made of starstuff.” This iconic phrase emphasizes that the elements composing our bodies—such as nitrogen, carbon, and iron—originated in the cores of ancient stars. (from Cosmos)

“We are like butterflies who flutter for a day and think it is forever.” (from Cosmos)
....

I was thinking Joni Mitchell's "Woodstock":

....
We are stardust, we are golden
We are billion-year-old carbon


And we've got to get ourselves
Back to the garden


Well, then can I walk beside you?
...
I feel myself a cog
In somethin' turning


And maybe it's the time of year
Yes, and maybe it's the time of man
And I don't know who I am
But life is for learning
...
And I dreamed I saw the bomber jet planes
Riding shotgun in the sky
Turning into butterflies ...

-ERD50
 
One AI I was quite amazed by was Midjourney. You tell this machine what you want to see, and it generates a set of images. Amazing art or photorealistic stuff.
 
I am fascinated by the current large language models. A lot of people blow it off as "pattern matching" but nothing creative. Maybe so, but however-the-heck they do it, the pattern matching employs context in a manner that makes the pat phrase, "it picks the next best word and then rinse and repeat," meaningless.
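For what it's worth, the pat phrase can be made concrete. Below is the most naive possible "pick the next best word": a trigram count table over a toy corpus (borrowing the Joni Mitchell lines upthread). Even this trivial scheme is context-dependent; the difference is that an LLM conditions on thousands of tokens of context with a learned model instead of a count table.

```python
# The naive version of "pick the next best word and repeat":
# a trigram table mapping two preceding words to the most
# frequent follower seen in a toy corpus.
from collections import Counter, defaultdict

corpus = ("we are stardust we are golden we are billion year old "
          "carbon and we have got to get ourselves back to the garden").split()

follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def next_best_word(a, b):
    """Return the most common word seen after the pair (a, b)."""
    return follows[(a, b)].most_common(1)[0][0]

# Generate by repeatedly appending the "next best word".
text = ["back", "to"]
for _ in range(2):
    text.append(next_best_word(text[-2], text[-1]))
print(" ".join(text))
```

Rinse and repeat, and out comes "back to the garden." The interesting (and hard to dismiss) part of the large models is how much richer their notion of context is than two preceding words.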
DH is also fascinated by these large language models.
 
It’s already pretty clear that when folks paste extremely long passages of AI-written copy, it’s as dense, boring and instantly ignored as the real thing.
 
OK - a new one on us today:

This HeyGen AI video generator translates to other languages, modifies the video to make the mouth movements match the words, and uses the speaker’s own voice. Pretty mind-blowing.
https://www.heygen.com/video-translate

Here is an example DH stumbled across today. The President of Argentina is giving a speech in Spanish at Davos. This English version has modified the mouth movements to match the English words and uses the President’s own voice. Interestingly, the English retains a foreign accent.


I listened to some of the original in Spanish, so yep it was translating correctly. Interesting to see the original mouth movements.

https://www.weforum.org/events/worl...dress-by-javier-milei-president-of-argentina/
 
And if AI can do that then what will it be able to do in the very near future? Who will we be able to believe?
Unless you're with someone in person then watching a video of someone whether it's a politician, scientist, etc. can you believe what they're saying?
That is my biggest fear.
 
Reminds me of the old joke about politicians...how can you tell if they're lying? Their mouth is moving :LOL:


This stuff is probably going to crater some aspects of society. Why should a politician worry about getting caught on video with their hand in the cookie jar... easily explained away with "it's a fake". And their supporters will vouch for them with "I was there they didn't do/say that." And their haters will "prove" they did/said with their own versions of videos. Ain't nobody got time to sort it out, so camp A will believe the narrative of tribe A and the same with B and their own version. I might not see the worst of it, but I feel sorry for my kids...who knows the chaos they'll need to try to live through.
 
Koolau may be along here soon, but regardless, be sure to read his signature line.
 
.... Unless you're with someone in person then watching a video of someone whether it's a politician, scientist, etc. can you believe what they're saying? ...

Reminds me of the old joke about politicians...how can you tell if they're lying? Their mouth is moving :LOL: ....

Right, it's really not an issue of "can I believe what they are saying" (because there are so many lies); AI changes that to "can I believe that they said that?"

Heck, there are already memes going around, some obviously trying to be manipulative (using a quote out of context), some meant as satire, and some just outright lying about what was said, and people get sucked in and believe them.

The media does it now - I've seen audio and video that was cut, edited, zoomed in to change the context dramatically. But I only knew this because someone posted a link to the original unedited video. 99% of the public had no idea.

So this isn't new, it's just taken to a new level. There are only two ways around this that I can see:

1) The media actually raises their standards, validates the source, presents it w/o any significant editing, and applies some sort of "Good Housekeeping Seal of Authenticity" to each report (IOW, not gonna happen).

2) All AI is somehow put under control so that all AI works are watermarked in an obvious way. Not likely, and there will be workarounds (we still have counterfeit $20 bills).

-ERD50
 
There's a big opportunity occurring right now for what I assume might be called AI. Certainly pattern matching. I expect big money behind it for sure.

Today, instead of GPS, you want your drone/bomb/missile to guide itself to the location you want and, depending on the weapon, look around and determine its own target when it gets there.
It can't be guided by an operator and can't depend on GPS signals for navigation. Those can be blocked.
I recently read an article where a Ukrainian company says their drone can independently identify I think it was about 60 different Russian vehicles from the air. You can then target based on type.
I saw another article this week where I believe it was OpenAI that amended their charter to allow them to work with the military.
As with most wars, there will be very fast advances in weaponry. This is not a movie scenario where AI takes over the world, but the capability of autonomous drones will be amazing in a few years. Let's hope the US military is not "fighting the last war," as has happened with a lot of militaries in the past.
Israel apparently now has a drone that has a rifle. I assume it is operator controlled, but don't know the details.
 
And if AI can do that then what will it be able to do in the very near future? Who will we be able to believe?
Unless you're with someone in person then watching a video of someone whether it's a politician, scientist, etc. can you believe what they're saying?
That is my biggest fear.

Yes, deep fakes could create societal havoc as we humans tend to believe what we see and hear.
 
We are now at the point that we cannot believe what we read, what we see and what we hear. With some of the fake meats out there now, we cannot even believe what we taste. All that is left is what we feel or smell. AI is scary stuff to me. While I see many good uses for it, the opportunity for someone to create misleading content, or outright lies, has risen to a new level.
 
Too bad we can't have Walter Cronkite tell us the news. Oh, wait, now we can once again. I guess society will find a way to cope. It's not like fake news was recently invented. Some notable libel lawsuits might curb the fake creations a bit.
 
The media does it now - I've seen audio and video that was cut, edited, zoomed in to change the context dramatically. But I only knew this because someone posted a link to the original unedited video. 99% of the public had no idea.
-ERD50

AI will also use the cut, edited, out of context clips because there is no way to prevent that, so the AI generated information will be just as inaccurate as today's media.
 
Machine learning also includes learning from failures as much as learning from successes. It is not only about collecting data and pattern matching. It also pertains to improving inferences against that data, which is one fundamental difference from static crawlers and search engines. I just take exception to characterizing inference engines as primarily pattern-matching machines. They do much more than pattern-match from neural networks.

The big area of controversy now is can AI be sentient? The jury is still out on that one but I'm pretty sure AI will someday be able to mimic/simulate sentience. I do believe the human brain is Turing complete from a CS perspective. We shall see. It will be fun to watch.

The theoretical research into AI has been around since the 1950s. There was a lab on Arastradero Road near Felt Lake that was part of Stanford's AI research activities, which I used to ride my bike past as a youngster during the '60s and '70s. I think it was leveled after the lab moved closer to campus.

Something to clarify, as most folks get mixed up about what AI can really do.

AI ("artificial") is based on pattern matching. It has zero organic analytical capability.
Based on the billions of documents it has been trained on, it can do a pattern match to "search" and then generate an answer (or image) that matches the pattern. But it cannot come up with something that doesn't relate to what it has been trained on.

Take, for example, lawyers. Theirs has been touted as one of the areas where LLMs can replace a lot of jobs. Most lawyers already have pre-built templates (forms) for the most common (already known) scenarios. They fill in your information and still charge folks $1,000/hr. The key is that they already have those templates for known scenarios.

Where lawyers come into play is deciding organically (with some risk and ingenuity) how newer interpretations of the law apply to scenarios not yet encountered. Just an example.

Same thing with programming. Most IDEs already have stub code generators; they have been out there for 10+ years. Most programmers are not paid to write "hello world" kinds of programs. They are paid to handle situations unique to their employer. Most of the pre-existing code base is already available in open source in one form or another.

I work in industry and have not seen a single programmer laid off due to AI. I have heard for the last 10 years that programmers will be replaced by AI any day now. I'm not seeing it in the next 10 years either.
 
Now here’s an interesting discovery. AI models are getting stupider due to more interaction with human beings.

AI researcher James Zou says AI models are getting "substantially worse" in capabilities over time, potentially as a result of interacting with humans.

https://twimlai.com/podcast/twimlai/is-chatgpt-getting-worse/

At times like this I wish I was a comedian.
 
I do think today's AI will have a significant effect on some parts of the job market. In the past, digital technology changed film photography and the music and movie industries. They are very, very different than they used to be. I don't think AI will replace lawyers, but it will greatly reduce the number of paralegals. It may also reduce the number of job opportunities in graphic arts.

I also worry about the misuse of deep fakes, both audio and visual.

On the positive side, I do see the productive use of narrow AI in science. The protein folding problem, for instance, has seen major advances from AI techniques.

I retired from a 30-year career in science and engineering. I'm one of those who recognize that AI is "just" a computer program. I'm pretty familiar, though, with training and validation of computer programs. AI is only as good as its training set, and if you are going to use what's "out there" on the internet and in private databases, you are only going to get what humans know and can do already. It's going to be trained using all the knowledge and all the mistakes. The results of AI are going to be judged by the humans using it. It is not going to be better than humans, but it will add to the resources we consider to live our lives.
 
Right now it is just a computer program, but I can see a day not too far off when it becomes possible to fully simulate a large number of neurons and their interconnections. (There is also the quantum computer thing, but I really don't quite understand that, or how one even goes about programming for it.)
 
Foreign language translation is something early versions of AI have been doing for a long time. I live in Thailand and use Google translate for Thai/English translations. It is useful but far from fluent. That is because the written language training set is limited. Since AI is adept at making deep fake audio I am hoping that AI can listen to audio and use real human conversations to improve translation fluency.
 
Interesting about the changes digital photography brought about from film. I've been a photographer since the late '60s. Once digital came about -- which took quite a few years before it was equivalent to 35mm film -- I saw many more people getting into photography than with film, mainly because it was easier to take many more photos and throw away the bad ones. But if you spend time on Facebook, etc., you'll see so many bad-to-horrible photos people are posting from their phones: out of focus, dark exposures, no concept of the rule of thirds, etc. And videos are even worse.
That is what I was concerned about in my OP here. People will get lazy and accept whatever AI will give them, won't do the actual research. Is that a benefit to society? We should be getting smarter rather than relying on AI to give us the answer.
There ain't no free lunch.
 
Foreign language translation is something early versions of AI have been doing for a long time. I live in Thailand and use Google translate for Thai/English translations. It is useful but far from fluent. That is because the written language training set is limited. Since AI is adept at making deep fake audio I am hoping that AI can listen to audio and use real human conversations to improve translation fluency.
I recently watched a guy from Japan use a smartphone translator to pay for a round of golf, rent clubs, etc. He’d speak into the mic and show the English text to the American at the counter, and then hold the phone up for the counter guy’s answer and read the translation in Japanese. It was a little cumbersome, but it worked. Not sure that’s actually AI though?
 