Will AI reduce OR accelerate misinformation?

I picked up my uke and played with the fingerings given, and it sounded like cr*p. On closer examination, those fingerings are right for G7 and Cmaj7, but wrong for the rest (cue cartoon-like horn with plunger mute: "mwapp, mwapp, mwaaaaa").
I just grabbed mine, and you are correct! I would want to prompt it back and TELL ChatGPT that. But I get it. There is probably a better example... I could go back to the prompt and tell it that I believe there is a better-sounding version that I would like to hear. Maybe without the blues, to get away from that second bar so much.
 
Well, that was a flub too. The next prompt didn't yield much better, at least from what my ear likes to hear.

So it needs some work on these chord progressions. And it doesn't seem to be trainable lol.
 
I selected different voicings for the basic 4 chords (using names, not fingerings), and got those sounding pretty good, but couldn't work in the turnaround chord.

And in total randomness that comes from thinking with wetware, I worked up a version of the Gilligan's Island theme at a lower key so it doesn't go too far up the neck (it goes up 4 semitones). I bet the AI could have done that for me (lowering the key), but I did it by hand.
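As an aside, lowering a key by a fixed number of semitones is mechanical enough to script. Here's a minimal sketch in Python, assuming sharps-only note spelling; the chords and the 4-semitone drop are just illustrative, not the actual Gilligan's Island changes:

```python
# Toy chord transposer: shifts a chord's root by N semitones.
# Assumes sharp spellings only (no flats) and a simple "root + suffix"
# chord format like "G7" or "Cmaj7".
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(chord: str, semitones: int) -> str:
    # Root is one letter plus an optional '#'; the rest is the quality.
    root_len = 2 if len(chord) > 1 and chord[1] == "#" else 1
    root, quality = chord[:root_len], chord[root_len:]
    return NOTES[(NOTES.index(root) + semitones) % 12] + quality

# Example: drop an illustrative progression by 4 semitones.
print([transpose(c, -4) for c in ["C", "Am", "F", "G7"]])
# -> ['G#', 'Fm', 'C#', 'D#7']
```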
 
I had a medical test report arrive online this morning but I don't see my doctor until Thursday to review it. The report is written in indecipherable medical-ese.

I was able to 'paint' the words and phrases (entire paragraphs) and click 'Search via Google', which now appears to be AI-powered. It spelled out everything in simple English, adding options and treatments as well as risks.

Seems that even 6 months ago this wasn't possible without a lengthy, bit-by-bit Google search.
 
I think AI is somewhat turning into a buzzword for companies, meaning some advanced computing wow-feature, oftentimes with a human-like language interface.

Some AI-generated images look fantastic, more than what a human could ever produce in a set time frame. But so far I have not noticed great results from Google's search AI assistance or my very limited exposure to ChatGPT. I think part of the problem is how these LLMs gather up the needed information (i.e., train accurately). Pulling from the visible internet will return conflicting information and poorly indexed/classified information: sometimes facts, other times opinions, and often with a short half-life. Searching for basic information, like some detail about your 2014 Toyota Corolla, will often be correct but can be wrong often enough that it cannot be blindly trusted, as the indexed data may have been for a different model year. Or try drilling deeper into something easy for a human expert to answer but hard for a novice, like what the best paint (brand, sheen, color, roller, brush) is to repaint my bathroom given some other constraint. Results from different humans in person, online, etc. will vary for such a question, based on their skill level, experience, brand loyalty, etc., so it seems impossible for AI to sort that out.

Until there are better examples of expert systems, I remain skeptical of the progress of AI. If these models are so capable today, why is there not an expert system that can be fed stacks of medical books and journals, historical patient treatment records, and drug trial results, and assist a doctor/nurse with diagnosis and treatment? Or, on a less complicated scale, a similar tool for vehicle diagnosis and repair. I realize a big issue is measuring the initial state of things (e.g., the patient's condition). Or maybe current AI is better at generating fuzzy answers rather than the best answer.
 
To me, what is called AI is just a pretty good summary of what is already known by a human somewhere who bothered to write it down and got it published on the internet. So this Automatic Inference machine is pretty good at inferring what I'm asking for. I don't count it as intelligence, yet.

We're close to autopilot for cars, but not there yet. When the AI-enabled house cleans itself, does laundry (including folding and returning it to dressers and closets), publishes a meal plan and cooks the meals, and the refrigerator stocks itself, then we'll be much closer to useful artificial intelligence.
 
If the training set for general AI is the internet, then answers are going to be a bland compromise, or wrong, depending on the way you ask the question. There is no wisdom in general AI.

Narrow AI, though, can be a great benefit. Narrow AI as in medical diagnosis (not there yet), reading x-rays, protein folding, gaming, taking the bar exam, etc.
 
Yeah, for sure.

A lot of the internet today is garbage in, garbage out.

And I remember reading an internet article that was trumpeting a sudden drop in certain car sales (shocking! clickbait), going on and on about it. But if you scrolled down and looked at the tables of sales numbers, it was clear that the numbers for a particular year were off by a factor of 10! Was that article machine-written? If human, they missed an obvious error. Yet this erroneous article's conclusions were quoted in AI search summaries on the topic.

But a lot of people just look at the headline or the intro/summary and ignore the heart of an article, where the essential info is presented.
 
Great example.

So much of what is called AI is simply a glorified search program. It just compiles info that’s already out there and puts it in a different format.

I mentioned eBay earlier. They now have an AI function to write your item descriptions. It's basically a high-tech Mad Lib. Remember those games where you had a story with a bunch of blanks, and you'd ask your friend for a color or a verb or a size or whatever, and when you were done you'd read the funny story? That's sort of how the AI descriptions get written. The result can be as nonsensical as those Mad Libs were. Personally, I refuse to buy from any seller who uses the AI descriptions.
 
I'm not sure if this was really "AI", or just some clever programming, but I recall reading that some designers set a computer loose on designing a new antenna.

As I recall, it did a lot of hit-or-miss work: trying a maybe-random design, testing it against some modeling software, and then iterating on the most successful results. It came up with something weird that no human ever designed. See if I can find a link....

Ahh, here is one:

[Image attachment: NASA's evolved antenna]
They describe the process as modeling natural selection/evolution.
Wonder how much that paperclip antenna cost NASA. :)
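For anyone curious, the process described there is a basic evolutionary algorithm: generate random candidate designs, score each against a simulator, keep the best, and mutate them into the next generation. A rough sketch of that loop, with a made-up numeric "design" and a stand-in fitness function in place of NASA's actual electromagnetic simulation:

```python
import random

# Rough sketch of the evolutionary-design loop described above.
# A "design" here is just a list of 8 numbers; fitness() is a made-up
# stand-in for the real antenna simulation that scored each candidate.
def fitness(design):
    return -abs(sum(design) - 10)  # hypothetical objective: sum near 10

def mutate(design, rate=0.3):
    # Randomly perturb some genes to explore nearby designs.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in design]

population = [[random.uniform(-5, 5) for _ in range(8)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)  # best designs first
    survivors = population[:10]                 # keep the top 20%
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]

best = max(population, key=fitness)
print("best score:", fitness(best))
```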
 
Andrew Leigh observes that coal power, electric motors and computers all took a while before having a deep impact. General-purpose technologies underwhelm in the short run but dazzle in the long run, he says.
 
We had an AI startup 30 years ago. The secret to success was the development of guardrails in a limited solution space. We eventually abandoned it because new clients would not accept the results! Being right 90% of the time was not good enough…
 
I have no doubt it will accelerate misinformation. Soon, if not already, there will be too much "AI", to the point that the term is becoming more and more meaningless. Kind of like how every news story is "Breaking News".
 
AI as it exists has some interesting quirks. Here is a study that found an AI promoting a ridiculous level of conformity under certain conditions.


As an example, the researchers studied an AI model trained on images of different breeds of dogs. The source material included a naturally wide variety of dogs (French Bulldogs, Dalmatians, Corgis, Golden Retrievers, etc.). But when asked to generate an image of a dog, the AI model typically returned the more common dog breeds (Golden Retrievers) and less frequently the rarer breeds (French Bulldogs).

Over time, the cycle reinforces and compounds when future generations of AI models are trained on these outputs. It starts to forget the more obscure dog breeds entirely. Soon it only creates images of Golden Retrievers.
Now, substitute dog breeds for whatever you're trying to create (new products, packaging, advertising, communication), and the risk is that all outputs devolve to look the same.
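That feedback loop (what the literature calls "model collapse") is simple enough to simulate: sample from a breed distribution, re-estimate the distribution from the samples, and repeat. Rare categories draw zero samples at some point and vanish for good. A quick sketch, with breed frequencies invented for illustration:

```python
import random
from collections import Counter

# Simulate training each AI "generation" on the previous generation's
# outputs. Breeds that draw zero samples in any round disappear forever.
# Starting frequencies are invented for illustration.
breeds = {"Golden Retriever": 0.60, "Corgi": 0.25,
          "Dalmatian": 0.10, "French Bulldog": 0.05}

for generation in range(20):
    names, weights = zip(*breeds.items())
    sample = random.choices(names, weights=weights, k=30)  # small "dataset"
    counts = Counter(sample)
    breeds = {b: counts[b] / len(sample) for b in names if counts[b] > 0}

print(breeds)  # usually only the common breeds are left
```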
 
^ I watched a lot of images get produced by Midjourney, and I agree that there seems to be bias. Some look like what you'd find in a catalog, a lot look like goth art, etc., but most of the images seem to fall into one bucket or another. I still find it absolutely fascinating based on how they work: they literally take a blurry blob and decide what the most likely slightly-less-blurry blob should look like.
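That blurry-blob description matches how diffusion models sample. Below is only a cartoon of the idea, not the real algorithm: the "image" is reduced to a single number, and denoise_step is a hypothetical stand-in for the trained network (real models learn that step from billions of images):

```python
import random

# Cartoon of diffusion sampling: start from pure noise and repeatedly
# replace the current state with a slightly "less blurry" estimate.
# The "image" is a single number; denoise_step is a hypothetical
# stand-in for the trained neural network.
TARGET = 5.0  # pretend the model learned that images look like 5.0

def denoise_step(x, strength=0.1):
    # Nudge toward the learned target, with a little residual noise.
    return x + strength * (TARGET - x) + random.gauss(0, 0.05)

x = random.gauss(0, 3)   # step 0: pure noise
for step in range(50):   # each pass is a slightly less blurry blob
    x = denoise_step(x)

print(round(x, 2))       # ends up near the learned target
```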
 
A short experience with Copilot on Windows 11. Being knowledgeable in a particular area, I queried for the scientific name of chinquapin oak (actually, a rather minor species). Answer: Quercus muehlenbergii. Correct, so far so good. Then I asked about its fall color. Answer: "Quite a sight in the fall"... "a stunning mix of yellow, brown and russet". Actually, its fall color is comparatively dull. My next line to Copilot: "Really, its fall color is rather dull." Answer: "That's true. Compared to some of the more flamboyant autumn trees like the maple or red oak, the Chinquapin Oak's fall colors are more subdued." So..... first it asserts one thing, then does a 180 when challenged. IMHO a somewhat useful but imperfect tool.
 
You have to be a skeptic.

I asked Gemini about a calculation for inflation-corrected investment returns. Really pretty simple. It gave a patently incorrect answer. Then I challenged it with my own solution. It apologized. Then I asked ChatGPT about the same calculation and it gave a correct answer. It turned out I was also in error in correcting Gemini's answer. Would this lead to Gemini learning something wrong?

Then I asked Google about the calculation and it gave a correct answer like ChatGPT. I had thought Google used Gemini, but apparently not quite. Hmmm.....
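For reference, the calculation being described is presumably the standard one: real return = (1 + nominal) / (1 + inflation) - 1, dividing growth factors rather than just subtracting rates. A quick check in Python (the 7% and 3% figures are made-up examples):

```python
# Inflation-corrected (real) return: divide growth factors instead of
# subtracting rates. The 7% / 3% inputs are made-up example numbers.
nominal = 0.07    # nominal annual return
inflation = 0.03  # annual inflation

real_exact = (1 + nominal) / (1 + inflation) - 1
real_approx = nominal - inflation  # the common shortcut, slightly off

print(f"exact:  {real_exact:.4%}")   # exact:  3.8835%
print(f"approx: {real_approx:.4%}")  # approx: 4.0000%
```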
 

Sounds like a politician.

Really, I think in about 5-10 years, or maybe less, there's going to be real AI-based trouble. Imagine an AI-based tool that gives reasonable, research-based answers 95% of the time. But when it figures out what your anxious areas are, it starts to give you what you want to hear ... or what its proprietors want you to hear.

And then what happens when it's hacked by an actor with ulterior motives? "AI, how do I shut down this reactor?" "Pull the third lever from the left, bwah hah hah ...." -- well, you get the idea.

Also, when we get to the point where we cannot tell an AI response from a human one (and we may already be there), kids are going to have to learn a new way of critical thinking, which may come down to "don't believe anything." Before that, the half of us who are below average in thinking capability and desire will just stop trying.

We'll be living in interesting times, as they say.
 
Maybe it will lead to the Butlerian Jihad.
 
There is a common business belief that one unhappy customer can negate the good of a lot of happy customers.
The damage that the bad guys will do will outweigh the positive.
Just the constant, never-ending ordeal of wondering: is this for real, or a scam……
I hate it.
The genie is being let out. Good luck to all.
 
I just asked all the AI engines I could access "which religion is responsible for the most deaths" and got a bunch of gobbledygook. None would commit to any data whatsoever, indicating that either there is no answer or something is metering or censoring the response. I would like to know the answer to the question, but apparently that data is less knowable than the origin of the universe.
 
I expect it to be developed and eventually weaponized just like any other technology. If you can see the scale of what it can do in your individual life, imagine it at scale globally. Areas I see weaponized, offhand: surveillance (could be video cameras, your credit score, a social credit score, personal rights intrusions, etc.), warfare (i.e., drones and robots fighting on AI), health (mRNA turbo cancers, the create-the-disease-then-provide-the-cure Big Pharma model), genomics (don't even want to think about that one right now), and general technology (aerospace, flying in general, physical catastrophes like fires, tornadoes, etc.). On the information side, it will become the largest propaganda arm ever created. It will not just provide information but provide insights based on the way it thinks, not how you think. For things that are concrete, hopefully it will give simple yes/no, correct/not-correct answers, but for anything else, best of luck. For instance, at a basic level, take the ideas of West vs. East in the world. AI developed in the West will think and behave in ways that are Western-centric, and vice versa for AI created in the East. Each side will build backdoors for themselves, same as with any other software.

It will far surpass the human level of collecting, collating, curating, and providing answers on data, and in real time. In the beginning it will assist, then provide guidance, and later most likely provide a framework for humans to live by. We will not make things; AI machines and software will, for instance. We will have instant answers that we could never think of on our own. AI machines will replace human-controlled activities like driving, flying, maritime, etc.

It's definitely the next Revolution, like the Industrial one before it. AI coupled with technology, and technological development coupled with AI, will change the world forever and in ways never thought about. It truly is the next race, like the atomic bomb was. Whoever wins the AI race will determine much of the way the world works and functions. Hopefully it does more good than harm in the long run, though. We should see many breakthroughs that will benefit humans on a scale and with a quickness never seen before.

As far as information vs. misinformation: there is only information. The "mis" part is someone/something classifying and putting an identity on information. Information itself is raw.
 
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

― Frank Herbert, Dune
 
Interesting interaction. As I understand current LLMs, Gemini would not incorporate your correction into future answers. For "memory" the LLM relies on the complex, multi-level weighting of associations of tokens (word components) achieved during training. If the model's training is updated or the model is "fine-tuned", new associations could be introduced.

The reason your interaction can result in the model correcting itself is that LLMs incorporate information you provide in your prompt, or information from an internet search, into the "context" under which they evaluate token associations and compile an answer. The size of the "context window" has rapidly grown, so a lot of background information can be passed to the model through prompts. It can be valuable to query multiple LLMs and use the results to broaden the context provided in subsequent prompts.

Currently, context information from prompts is not incorporated into the base model. In other words, that information is not used to update the model's training. I assume these things will eventually become self-teaching.
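In code terms, that statelessness looks something like the sketch below: the client replays the whole conversation with every request, so a correction "sticks" within a chat but never changes the frozen model. Here call_llm is a hypothetical placeholder, not any particular vendor's API:

```python
# Why a correction persists within a chat but not across chats: the
# client re-sends the entire conversation (the context window) on every
# call, while the model's weights stay frozen. call_llm is a
# hypothetical placeholder that just echoes how much context it saw.
def call_llm(messages: list[dict]) -> str:
    return f"(answer computed from {len(messages)} messages of context)"

history = [{"role": "user", "content": "How do I inflation-adjust a return?"}]
history.append({"role": "assistant", "content": call_llm(history)})

# The user's correction becomes part of the context for the next call...
history.append({"role": "user", "content": "That's wrong; divide the growth factors."})
history.append({"role": "assistant", "content": call_llm(history)})

# ...but nothing above updated the model itself. A brand-new `history`
# would start from the same frozen base model, correction forgotten.
```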
 
I wonder if, ironically, the blizzard of misinformation, enhanced by AI, will cause a few trusted media sources to emerge, something like the 3 TV channels and a few newspapers of the last century, run by people who are accountable due to reputational risk and having skin in the game? And perhaps they'll need a membership model, like NPR.

For example, Twitter/X claims to be a public square of the internet. In reality, free speech is defined by what the new, billionaire owner likes. It’s just gone from canceling or promoting one kind of slanted speech to another. Even the Twitter founder, Jack Dorsey, was canceled on X the other day 😂. Evidently, the same kind of bias is happening at the Washington Post, and probably others, due to owners with multiple business interests wanting to stay in good graces with the nation’s changing governing elite to protect federal contracts and manage regulations. Readers are fleeing both, due to lack of trust.

As media and social media companies develop, it seems like they’ll need to implement AIs that help, not hurt, their reputations.
 