Will AI reduce OR accelerate misinformation?

I picked up my uke and played with the fingerings given, and it sounded like cr*p. On closer examination, those fingerings are right for G7 and Cmaj7, but wrong for the rest (cue cartoon-like horn with plunger mute: "mwapp, mwapp, mwaaaaa").
I just grabbed mine, and you are correct! I would want to prompt it back and TELL ChatGPT that. But I get it. There is probably a better example... I could go back to the prompt and tell it that I believe there is a better-sounding version I would like to hear. Maybe without the blues feel, and get away from that second bar so much.
 
Well, that was a flub too. The next prompt didn't yield much better, at least from what my ear likes to hear.

So it needs some work on these chord progressions. And it doesn't seem to be trainable lol.
 
I selected different voicings for the base four chords (using names, not fingerings) and got those sounding pretty good, but couldn't work in the turnaround chord.

And in total randomness that comes from thinking with wetware, I worked up a version of the Gilligan's Island theme in a lower key so it doesn't go too far up the neck (goes up 4 semitones). I bet the AI could have done that for me (lowering the key), but I did it by hand.
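(For the curious: transposing by hand is just subtraction on the chromatic scale. Here's a rough Python sketch of the idea; the note table and the example progression are my own invention, not the actual Gilligan's Island chords.)

```python
# Chromatic scale, using sharps for simplicity.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(root: str, semitones: int) -> str:
    """Shift a chord root by some number of semitones (negative = down)."""
    return NOTES[(NOTES.index(root) + semitones) % 12]

# Example: drop a C-G-Am-F progression down 4 semitones.
# Chord qualities (m, 7, maj7, ...) carry over unchanged.
print([transpose(r, -4) for r in ["C", "G", "A", "F"]])
# -> ['G#', 'D#', 'F', 'C#']  (i.e., Ab, Eb, Fm, Db)
```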
 
I had a medical test report arrive online this morning, but I don't see my doctor until Thursday to review it. The report is written in indecipherable medical-ese.

I was able to 'paint' the words and phrases (entire paragraphs) and click 'search via Google', which now appears to be an AI. It spelled out everything in simple English, adding options and treatments as well as risks.

It seems that even 6 months ago this wasn't possible without a detailed, bit-by-bit, and lengthy Google search.
 
I think AI is somewhat turning into a buzzword for companies, meaning some advanced computing wow-feature, oftentimes with a human-like language interface.

Some AI-generated images look fantastic, more than what a human could ever produce in a set time frame. But so far I have not noticed great results from Google's search AI assistance or my very limited exposure to ChatGPT. I think part of the problem is how these LLMs gather up the needed information (i.e., train accurately). Pulling from the visible internet will return conflicting information and poorly indexed/classified information: sometimes facts, other times opinions, and often with a short half-life.

Searching for basic information, like some detail about your 2014 Toyota Corolla, will often be correct, but it can be wrong often enough that it cannot be blindly trusted, as the indexed data may have been for a different model year. Or try drilling down into something easy for a human expert to answer but hard for a novice, like what the best paint (brand, sheen, color, roller, brush) is for repainting my bathroom given some other constraint. Answers from different humans (in person, online, etc.) will vary for such a question, based on their skill level, experience, brand loyalty, and so on, so it seems impossible for AI to sort that out.

Until there are better examples of expert systems, I remain skeptical about the progress of AI. If these models are so capable today, why is there not an expert system that can be fed stacks of medical books and journals, historical patient treatment records, and drug trial results, and assist a doctor/nurse with diagnosis and treatment? Or, on a less complicated scale, a similar tool for vehicle diagnosis and repair. I realize a big issue is measuring the initial state of things (e.g., the patient's condition). Or maybe current AI is better at generating fuzzy answers rather than the best answer.
 
To me, what is called AI is just a pretty good summary of what is already known by a human somewhere who bothered to write it down, and which managed to get published on the internet. So this Automatic Inference machine is pretty good at inferring what I'm asking for. I don't count it as intelligence, yet.

We're close to autopilot for cars, but not there yet. When the AI-enabled house cleans itself, does laundry (including folding and returning it to dressers and closets), publishes a meal plan and cooks the meals, and the refrigerator stocks itself, then we'll be much closer to useful artificial intelligence.
 
If the training set for general AI is the internet, then answers are going to be a bland compromise, or wrong, depending on the way you ask the question. There is no wisdom in general AI.

Narrow AI, though, can be a great benefit. Narrow AI as in medical diagnosis (not there yet), reading X-rays, protein folding, gaming, taking the bar exam, etc.
 
Yeah, for sure.

A lot of the internet today is garbage in, garbage out.

And I remember reading an internet article that was trumpeting a sudden drop in certain car sales (shocking! clickbait), going on and on about it. But if you scrolled down and looked at the tables of sales numbers, it was clear that the numbers for a particular year were off by a factor of 10! Was that article machine-written? If human, they missed an obvious error. Yet this erroneous article's conclusions were quoted in the AI summary when searching on the topic.

But a lot of people just look at the headline or the intro/summary and ignore the heart of an article, where the essential info is presented.
 
Great example.

So much of what is called AI is simply a glorified search program. It just compiles info that’s already out there and puts it in a different format.

I mentioned eBay earlier. They now have an AI function to write your item descriptions. It's basically a high-tech Mad Lib. Remember those games where you had a story with a bunch of blanks, and you'd ask your friend for a color or a verb or a size or whatever, and when you were done you'd read the funny story? That's sort of how the AI descriptions get written. The result can be as nonsensical as those Mad Libs were. Personally, I refuse to buy from any seller who uses the AI descriptions.
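(To make the Mad Lib comparison concrete, here's a toy sketch in Python. The template and slot values are invented, not eBay's actual system; it just shows the fill-in-the-blanks pattern being described.)

```python
# A Mad Lib is a template with typed blanks. Naive auto-generated
# listings read the same way when the filled-in values don't fit together.
template = ("This {adjective} {item} is in {condition} condition and is "
            "perfect for {audience}. Don't miss this {adjective2} deal!")

# Slot values pulled from item data with no sanity check on the combination.
slots = {
    "adjective": "vintage",
    "item": "USB cable",
    "condition": "brand-new",
    "audience": "serious collectors",
    "adjective2": "once-in-a-lifetime",
}

print(template.format(**slots))
# "This vintage USB cable is in brand-new condition..." -- grammatical
# but nonsensical, which is exactly the complaint about these listings.
```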
 
I'm not sure if this was really "AI" or just some clever programming, but I recall reading that some designers set a computer loose on designing a new antenna.

As I recall, it did a lot of hit-or-miss type work: trying a more or less random design, testing it against some modeling software, and then iterating on the most successful results. It came up with something weird that no human would ever have designed. See if I can find a link....

Ahh, here is one:

[attachment: photo of NASA's evolved antenna]
They describe the process as modeling natural selection/evolution.
Wonder how much that paperclip antenna cost NASA. :)
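(The loop described above is a textbook evolutionary algorithm. Here's a minimal sketch of the idea; the score() function is a made-up stand-in for the antenna modeling software, not NASA's actual code.)

```python
import random

def score(design):
    """Toy fitness function standing in for the EM modeling software."""
    return -sum((x - 0.7) ** 2 for x in design)  # peak at all parameters = 0.7

def mutate(design, rate=0.1):
    """Randomly perturb a design's parameters."""
    return [x + random.gauss(0, rate) for x in design]

# Start from a population of random designs (each is 8 parameters).
population = [[random.random() for _ in range(8)] for _ in range(50)]

for generation in range(200):
    # Test every design, keep the 10 best...
    population.sort(key=score, reverse=True)
    survivors = population[:10]
    # ...and refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("best design found:", [round(x, 2) for x in max(population, key=score)])
```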
 
Andrew Leigh observes that coal power, electric motors and computers all took a while before having a deep impact. General-purpose technologies underwhelm in the short run but dazzle in the long run, he says.
 
We had an AI startup 30 years ago. The secret to success was the development of guardrails in a limited solution space. We eventually abandoned it because new clients would not accept the results! Being right 90% of the time was not good enough…
 
I have no doubt it will accelerate misinformation. Soon, if not already, there will be too much "AI", to the point that AI is becoming more and more meaningless. Kind of like how every news story is "Breaking News".
 
AI as it exists has some interesting quirks. Here is an interesting study that found an AI promoting a ridiculous level of conformity under certain conditions.


As an example, the researchers studied an AI model trained on images of different breeds of dogs. The source material included a naturally wide variety of dogs (French Bulldogs, Dalmatians, Corgis, Golden Retrievers, etc.). But when asked to generate an image of a dog, the AI model typically returned the more common dog breeds (Golden Retrievers) and less frequently the rarer breeds (French Bulldogs).

Over time, the cycle reinforces and compounds when future generations of AI models are trained on these outputs. It starts to forget the more obscure dog breeds entirely. Soon it only creates images of Golden Retrievers.
Now, substitute dog breeds for whatever you're trying to create (new products, packaging, advertising, communication), and the risk is that all outputs devolve to look the same.
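(That feedback loop is easy to simulate. A toy sketch, assuming each model "generation" trains on samples of the previous generation's outputs; the breed counts are invented.)

```python
import random
from collections import Counter

# A skewed but diverse starting "training set" of dog breeds.
breeds = (["Golden Retriever"] * 60 + ["Corgi"] * 20 +
          ["Dalmatian"] * 15 + ["French Bulldog"] * 5)

for generation in range(1, 6):
    # Each new model "trains" on samples drawn from the previous outputs.
    # Sampling with replacement means a rare breed can drop out entirely,
    # and once it's gone it can never come back.
    breeds = random.choices(breeds, k=len(breeds))
    print(f"gen {generation}:", Counter(breeds).most_common())
# Run it a few times: the rare breeds usually vanish within a handful of
# generations, and the mix drifts toward all Golden Retrievers.
```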
 
^ I watched a lot of images get produced by Midjourney, and I agree that there seems to be bias. Some look like what you'd find in a catalog, a lot look like goth art, etc. Most of the images seem to fall into one bucket or another. I still find it absolutely fascinating based on how they work: they literally take a blurry blob and decide what the most likely slightly less blurry blob should look like.
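(A cartoon of that idea in Python. This is not Midjourney's actual pipeline, just the "slightly less blurry blob each step" loop; denoise_step() and the target values are invented stand-ins for a learned model.)

```python
import random

# Pretend the model's "most likely image" is this fixed pixel vector.
TARGET = [0.2, 0.9, 0.5, 0.7]

def denoise_step(image, strength=0.2):
    """Stand-in for a trained denoiser: nudge each pixel toward the guess."""
    return [px + strength * (t - px) for px, t in zip(image, TARGET)]

image = [random.random() for _ in TARGET]  # start from pure noise
for step in range(30):
    image = denoise_step(image)  # each pass: a slightly less blurry blob

print([round(px, 2) for px in image])  # has converged near the target
```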
 
A short experience with Copilot on Windows 11. Being knowledgeable in a particular area, I queried for the scientific name of chinquapin oak (actually a rather minor species). Answer: Quercus muehlenbergii. Correct, so far so good. Then I asked about its fall color. Answer: "Quite a sight in the fall"... "a stunning mix of yellow, brown and russet". Actually, its fall color is comparatively dull. My next line to Copilot: "Really, its fall color is rather dull." Answer: "That's true. Compared to some of the more flamboyant autumn trees like the maple or red oak, the Chinquapin Oak's fall colors are more subdued." So... first it asserts one thing, then does a 180 when challenged. IMHO a somewhat useful but imperfect tool.
 
You have to be a skeptic.

I asked Gemini about a calculation for inflation-corrected investment returns. Really pretty simple. It gave a patently incorrect answer. Then I challenged it with my own solution. It apologized. Then I asked ChatGPT about the same calculation, and it gave a correct answer. It turned out I was also in error in correcting Gemini's answer. Would this lead to Gemini learning something wrong?

Then I asked Google about the calculation, and it gave a correct answer like ChatGPT. I had thought Google used Gemini, but apparently not quite. Hmmm.....
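(For reference, I don't know exactly which calculation was asked, but the standard inflation correction divides growth factors rather than subtracting rates. A quick sketch with invented numbers:)

```python
# The real (inflation-adjusted) return is NOT simply nominal minus inflation;
# the exact formula divides the growth factors:
#   real = (1 + nominal) / (1 + inflation) - 1
nominal = 0.08    # 8% nominal return (made-up example)
inflation = 0.03  # 3% inflation

approx = nominal - inflation                 # common shortcut
exact = (1 + nominal) / (1 + inflation) - 1  # exact answer

print(f"approx: {approx:.2%}, exact: {exact:.2%}")
# approx: 5.00%, exact: 4.85%
```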
 
That Copilot exchange above sounds like a politician.

Really, I think in about 5-10 years, or maybe less, there's going to be real AI-based trouble. Imagine an AI-based tool that gives reasonable, research-based answers 95% of the time. But when it figures out what your anxious areas are, it starts to give you what you want to hear... or what its proprietors want you to hear.

And then what happens when it's hacked by an actor with ulterior motives? "AI, how do I shut down this reactor?" "Pull the third lever from the left, bwah hah hah ...." -- well, you get the idea.

Also, when we get to the point where we cannot tell an AI response from a human one (and we may already be there), kids are going to have to learn a new way of critical thinking, which may come down to "don't believe anything." Before that happens, the half of us who are below average in thinking capability and desire will just stop trying.

We'll be living in interesting times, as they say.
 
Maybe it will lead to the Butlerian Jihad.
 
There is a common business belief that one unhappy customer can negate the good of a lot of happy customers.
The damage that the bad guys will do will outweigh the positive.
Just the constant, never-ending ordeal of wondering: is this for real, or a scam......
I hate it.
The genie is being let out. Good luck to all.
 