Will AI reduce OR accelerate misinformation?

A buddy of mine seems to quote ChatGPT more and more. Instead of developing his own thoughts as he once had to, he’s just parroting ChatGPT answers - so if anything he actually knows less now. All I have to do is ask questions and his lack of depth is quickly exposed. He’s rethinking using ChatGPT to think for him.

I wonder if the “answers” he gets are influenced by the questions he asks and the way he asks them - the same phenomenon we’ve all seen in how poll questions are designed to steer responses.

But my main question: we know you can find any answer you want on the internet nowadays, often surprisingly convincing - and some of it misinformation. Since AI draws from the internet, what prevents AI chatbots from turbocharging misinformation along with facts? I’m not sure how useful AI will be if I still have to double-check everything…
 
I think well intentioned and well designed AI will be a net benefit because it will take complex things and make them more accessible.

Malicious AI, and even just lazy clickbait AI, is going to flood the world with so much nonsense it’s hard to imagine. It’s already happening.

I think net negative.
 
Probably both. The LLMs are getting better every day. If you put the time in with prompts and follow-ups, you can already get a lot out of some of them. This morning, I ran a page full of tables, containing data from a body composition scan my gym did, through OCR. I submitted the text and JSON file outputs to Perplexity (running one of Anthropic's Claude models) along with prompts adding some context and what I wanted. It cranked out an excellent, plain-text summary of the document along with a concise analysis. It all conformed with what the trainer had told me but provided more detail.
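
For anyone curious what that workflow looks like in practice, here's a minimal sketch: OCR the scanned page, then hand the text and JSON to an LLM with a prompt. It assumes pytesseract/Pillow for the OCR step and Anthropic's Python SDK for the model call; the file names, model name, and prompt wording are illustrative only, not the poster's actual setup.

Code:
# Minimal sketch: OCR a scanned report, then ask an LLM to summarize it.
import json

import anthropic
import pytesseract
from PIL import Image

# 1. OCR the scanned page of tables into plain text.
ocr_text = pytesseract.image_to_string(Image.open("body_comp_scan.png"))

# 2. Load the structured JSON export, if the scanner provides one.
with open("body_comp_scan.json") as f:
    scan_json = json.load(f)

# 3. Send both to the model, with context about what we want back.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "This is a body composition scan from my gym. Summarize it in "
            "plain text and give a concise analysis.\n\n"
            f"OCR text:\n{ocr_text}\n\nJSON data:\n{json.dumps(scan_json)}"
        ),
    }],
)
print(response.content[0].text)

The same pattern works with any chat-style API; the heavy lifting is just assembling the OCR text and JSON into one prompt.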
 
Ideally, competition between AI developer groups will improve AI output over time. Decades ago, Google took over web searching by yielding better and cleaner results than those produced by, say, AltaVista.
 
Pretty much agree. I don't know much about AI, but I've used it some recently, and I'm not sure how much 'intelligence' is really there. I mean, it's really good at parsing your question to understand what you want - that's really impressive. Even more so when you drill down to get more detail from the previous answer, and it builds on that answer.

But I think that's it - it's good at context (which is a very good and useful thing). But it has to pull from what's out there. It might have some way of judging 'serious' info from credentialed people, versus just 'some guy on the internet' spewing out words w/o thought. But for the most part, it must just be doing a good job of aggregating and looking for consensus - which can be misleading depending on the topic.

Maybe I'll post more of this later, but I had some fun with AI. I asked for some general, often-repeated info from 'credentialed' sources that I was sure was 100% wrong. And it gave me the common wrong answer. So I kept challenging it. It took a while before it finally agreed with me, and I had to soften my wording to make the question a bit more open-ended. It had me laughing.

BTW, this wasn't anything controversial - just the odd description of the process for thawing a frozen turkey, where the Butterball site says something that doesn't make sense based on the laws of physics.
 
I think well intentioned and well designed AI will be a net benefit ...
And who decides what "well intentioned" means? Right now the AI is primarily showing white men as business executives. Why? Because the AI is finding that most business executives are white men.

More broadly, AI is training on the world as it is, not as someone who is "well intentioned" might want it to be. So should something be done there? Tweak the AI so that it lies about what it has learned? Or ?? ... and who decides?
 
And who decides what "well intentioned" means? ....
Tweak the AI so that it lies about what it has learned? Or ?? ... and who decides?
Nothing has changed. Anything like this is a tool. The tool doesn't do the work by itself; it needs guidance to be useful. The output needs to be scrutinized.

And like always, lazy people won't bother. Just like with the advent of calculators: smart people estimated the answer to help make sure they didn't fat-finger an entry. Lazy people just accept the answer.

Same as it always was.

Even a CNC machine needs programming, and someone to conceptualize what is wanted. It's automated, with great skills, but still just a tool.
 
... Even a CNC machine needs programming, and someone to conceptualize what is wanted. ...
Who decides what is wanted?

For myself, I want the AI output to be an accurate representation of what it has learned from exploring the real world. IOW, definitely not a "woke" world as defined by someone who finds aspects of the world to be unacceptable.
 
Who decides what is wanted?

For myself, I want the AI output to be an accurate representation of what it has learned from exploring the real world. IOW, definitely not a "woke" world as defined by someone who finds aspects of the world to be unacceptable.
But all it can do is glean from what is out there. I don't want to get into the politics of 'woke', but if you ask it something in an area of opinion, all it can do is summarize those opinions. I don't believe it has the 'intelligence' to decide what is correct or not, and there may not even be a 'correct'.

Hopefully, it responds with something that makes it clear there are different viewpoints and just reports on them. I haven't looked into it (I'm looking for fact-based things), so I don't really care, and discussions here would likely go off the rails into politics.
 
From what I can tell, what AI can do is report what is known (or at least published) about things that already exist. And I expect that it will improve in that area. What AI probably cannot do is improve on current knowledge or make new discoveries. Much progress in science and technology starts with a person who one day thinks "that's weird, I wonder why" or asks "what would happen if" or merely sees a non-obvious link between wildly disparate fields that leads to new knowledge. I question whether AI will ever be able to do that, or whether it will simply reach its peak as the world's best research librarian.

As far as willful ignorance goes, it is spread around pretty evenly in our society. There are people of all stripes who believe what they believe and don't want to be bothered by the actual facts.
 
My opinion is that AI like Grok will make it easier to find information and will be useful for tutoring students. If the AI is biased, as some are, that is a problem.

As far as misinformation or fake information on social media I think AI will make it worse.
 
... What AI probably cannot do is improve on current knowledge or make new discoveries. Much progress in science and technology starts with a person who one day thinks "that's weird, I wonder why" or asks "what would happen if" or merely sees a non-obvious link between wildly disparate fields that leads to new knowledge. I question whether AI will ever be able to do that, or whether it will simply reach its peak as the world's best research librarian.
...
I'm not sure if this was really "AI" or just some clever programming, but I recall reading that some designers set a computer loose on designing a new antenna.

As I recall, it did a lot of hit-or-miss work: trying a more or less random design, testing it against some modeling software, and then iterating on the most successful results. It came up with something weird that no human would ever have designed. Let me see if I can find a link...

Ahh, here is one:

[image attachment: the evolved antenna design]

They describe the process as modeling natural selection/evolution.
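
For the curious, the loop described above - generate random designs, score them with a model, iterate on the winners - is a classic evolutionary algorithm. Here's a toy sketch of that loop; the fitness function is a stand-in for real antenna-modeling software, and every parameter is made up for illustration.

Code:
# Toy evolutionary loop: random candidates, score them, breed the best.
import random

GENES = 8          # e.g., bend angles / segment lengths of a wire antenna
POP_SIZE = 50
GENERATIONS = 200

def random_design():
    return [random.uniform(-1.0, 1.0) for _ in range(GENES)]

def fitness(design):
    # Placeholder: a real version would run an electromagnetic simulation
    # and score gain/bandwidth. Here we just reward an arbitrary target.
    return -sum((g - 0.5) ** 2 for g in design)

def mutate(design, rate=0.1):
    return [g + random.gauss(0, rate) for g in design]

population = [random_design() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 5]  # keep the best 20%
    population = survivors + [
        mutate(random.choice(survivors))
        for _ in range(POP_SIZE - len(survivors))
    ]

print("best design:", max(population, key=fitness))

The weird, non-human-looking results come from the fact that the search only cares about the score, not about whether the shape resembles anything an engineer would draw.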
 
My opinion is that AI like Grok will make it easier to find information and will be useful for tutoring students. If the AI is biased, as some are, that is a problem.

As far as misinformation or fake information on social media I think AI will make it worse.
How is that substantially different from search, a tool we've had for decades? I guess I'm asking, in part, whether and how AI chatbots are or will be better at ignoring fake information than search is. If they aren't, I don't see how chatbots don't just accelerate the spread of misinformation.

 
I haven't found a use case for AI assisting with anything I do on a regular basis, but I keep an open mind in case something comes up. A problem I have read about is the LLM "hallucination," where it presents as fact, stated with seeming confidence, an answer that is objectively, factually incorrect. On occasions when I do need to do some real research, I need correct answers, and as the technology stands today, I don't feel I can rely on an LLM to provide them.

My thoughts on the original question here: AI should be capable in the future of reducing misinformation, but the technology is not there yet. Could it actually accelerate misinformation? I suppose it could, if LLM-generated answers bearing "hallucinations" proliferate and are then used to train other LLMs. I have no idea if that is likely to happen.
 
I think well intentioned and well designed AI will be a net benefit ... I think net negative.

Thing is, the people running AI companies may have bad incentives.

They've shown they don't care about copyrights, scraping information without paying people. The rationale seems to be that they're doing something good for society and that they can pay later, once they hit it big with an IPO.

The appeal to businesses is automation, i.e., replacing people or hiring fewer people. But there is also the temptation that AI can catch things humans won't, as in radiology, or in scanning satellite and drone imagery to detect hostile targets - i.e., the Skynet scenario from Terminator, where you arm an AI and let it find and destroy the targets it deems deserving of destruction.

That is the fear: that AI concludes humans do not deserve to survive as a species, because of the environmental destruction we cause and the extinction of other species.

The other part of it is the environmental cost. There's a gold-rush mentality (like scraping content without paying for it), so some startups are accused of cutting corners by securing "dirty" electricity to power data centers - not just using coal but circumventing emissions regulations.

There are also estimates that AI searches will cost more than conventional internet searches:

On average, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search. In that difference lies a coming sea change in how the US, Europe, and the world at large will consume power — and how much that will cost.

For years, data centers displayed a remarkably stable appetite for power, even as their workloads mounted. Now, as the pace of efficiency gains in electricity use slows and the AI revolution gathers steam, Goldman Sachs Research estimates that data center power demand will grow 160% by 2030.

At present, data centers worldwide consume 1-2% of overall power, but this percentage will likely rise to 3-4% by the end of the decade. In the US and Europe, this increased demand will help drive the kind of electricity growth that hasn’t been seen in a generation. Along the way, the carbon dioxide emissions of data centers may more than double between 2022 and 2030.
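
To put that quoted 10x figure in rough numbers: a commonly cited estimate (not from the article above) puts a conventional Google search at around 0.3 Wh, so a back-of-envelope sketch looks like this - treat every number as an order-of-magnitude guess, and the query volume as purely hypothetical.

Code:
# Back-of-envelope sketch of the quoted 10x claim. The per-search baseline
# is a commonly cited estimate, not a figure from the article.
google_wh = 0.3                # estimated Wh per conventional Google search
chatgpt_wh = google_wh * 10    # ~10x, per the quoted research

daily_queries = 1_000_000_000  # hypothetical: one billion queries per day
extra_gwh = daily_queries * (chatgpt_wh - google_wh) / 1e9
print(f"extra energy if all moved to AI: {extra_gwh:.1f} GWh per day")  # ~2.7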

 
Grok is much more pleasant to use than search. It presents a coherent summary and you can carry on a conversation like with a patient tutor.

At worst, it is a lot faster than digging through search entries, and it probably combines them in the summary, saving you time.

Grok is designed to prioritize truth - assuming you believe that. Certainly better than the AI that produced the portrait of the founding fathers made up of black women and drag queens.

I am sure that problems will be found if we dig hard enough, but for the most part I have stopped using search.
 
Grok is much more pleasant to use than search. ...

Grok is designed to prioritize truth. Assuming you believe that. ...
I don’t…
androidauthority.com said:
Similar to other AI tools like ChatGPT, Grok is an AI chatbot that uses LLM (large language model) technology that trains the AI by feeding it a vast amount of information from all across the internet.
But unlike some other tools that are limited to information up to 2021 — like ChatGPT 3.5 — Grok utilizes the internet in order to have real-time access to the data it needs. Grok’s internet access should allow it to offer more up-to-date information.
The biggest concern here, however, is that Grok only has access to data from X (formerly known as Twitter) rather than the wider internet as a whole. This may not end up being a huge benefit, as a fair share of the information available on Twitter is inaccurate.
 
When it is used as a collector of information, it is only as truthful as the information it chooses to collect. It looks like Google search is using it in this collection (or near-collection) mode, since it sometimes mixes correct facts with misinformation disguised as facts. I wish I could remember what I was searching for; I saw a sentence that put together two pieces of information, one totally correct and the other totally incorrect. Facts can also be mixed with speculation or conjecture.
 
I have been prompting OpenAI's GPT quite a bit for help with coding, theory, and such. This is actually quite helpful: what usually took me 1 to 3 different Google searches now comes back in one response, formatted and with even more context than my original queries would have returned... depending, of course, on how you prompt.

I should use it a bit more for historical data and markets, merely because finding those historical relationships is fascinating to me.

I'm sure it could help you cook better. Give me a dish that includes the following ingredients that are in my fridge and cupboards... for instance...

Or: build me a chord progression on the ukulele that has Hawaiian roots with some jazz flavor, and include a turnaround lick that brings you back to the root chord... for example...

Or: build me a batch command that executes with admin privileges, whose function is to copy the contents of C:\source to C:\target, and allow this batch file to be executed from any location on the C:\ drive, ensuring the path arguments will not fault, explaining the solution in terms a non-engineer could understand. (A rough sketch of this one appears after these examples.)

Or...
Take the following rejection letter and give me suggestions on how I could modify it to read more empathetically.
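
(As promised above, here's a rough sketch of what that copy task might look like - written in Python rather than as a batch file, just to keep one language for the examples, and leaving out the admin-elevation part. The C:\source and C:\target paths are the hypothetical ones from the prompt.)

Code:
# Hypothetical sketch of the copy task above, in Python rather than batch.
# Absolute paths mean it works no matter where it is run from.
import shutil
from pathlib import Path

SOURCE = Path(r"C:\source")
TARGET = Path(r"C:\target")

# copytree with dirs_exist_ok=True merges into an existing target (3.8+).
shutil.copytree(SOURCE, TARGET, dirs_exist_ok=True)
print(f"Copied contents of {SOURCE} to {TARGET}")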



yeah, I use it.


Just for context... here is the response to the above uke prompt:

Sure! To create a chord progression on the ukulele with Hawaiian roots and a touch of jazz flavor, we can mix the classic Hawaiian progression with some jazz chords that add color and tension. Here's a sample progression that follows that idea:

Chord Progression (Hawaiian + Jazz Flavor)

  1. Cmaj7 (Root chord, gives that bright, happy Hawaiian feel)
  2. A7 (Dominant seventh, adding a little tension)
  3. D9 (Jazzier flavor, provides a more colorful dominant feel)
  4. G7 (Classic dominant seventh)
  5. Cmaj7 (Return to the root)

Turnaround Lick

To bring you back to the root (Cmaj7), we can add a jazzy turnaround using chromaticism or voice-leading. Here's an example:

  • F#7 (dominant seventh, one fret away from G7, creates tension)
  • G7 (dominant seventh, resolving tension)
  • Cmaj7 (root, resolving everything)

Ukulele Fingerings:

  • Cmaj7: 0002
  • A7: 2100
  • D9: 2222 (You can also play 2122 as an alternative voicing)
  • G7: 0212
  • F#7: 2122
  • G7: 0212

Playing Tips:

  • For Hawaiian Flavor: Strum lightly, using a down-up motion, and try adding some thumb or finger picking to enhance the soft, rhythmic feel.
  • For Jazz Flavor: Focus on clean, precise chord changes. Let the jazzy chords (like D9 and F#7) ring out more distinctly, creating contrast with the more open, simple Hawaiian chords (like Cmaj7 and A7).

Example:

  • Cmaj7 → A7 → D9 → G7 → Cmaj7
  • F#7 → G7 → Cmaj7 (turnaround lick)

This combination of a traditional Hawaiian progression with jazz chords (like the D9 and F#7) and the turnaround (F#7 → G7 → Cmaj7) gives the progression both a nostalgic and modern feel. It should provide the Hawaiian roots you desire while adding jazz sophistication.
 
AI is a growing disaster. So many examples I’ve seen are pathetically inaccurate.

eBay introduced an option to have AI generate your item descriptions. The results are beyond awful and often filled with errors. In my eBay groups we constantly urge people not to use it because it just leads to Item Not As Described claims when a buyer receives their item and discovers it doesn’t match what the listing said.

My wife crochets and has come across numerous patterns that were AI generated. They are often impossible to actually follow because they omit vital info or give directions that make no sense.

It’s getting harder and harder to find accurate info. I really wish all AI content had to carry a blatant disclaimer so we would know to just ignore it and move on.

I hope the AI fad dies out soon.
 
I told Grok what you were saying about it. Here is its response.

The internet, a place where rumors fly faster than light. Let's set the record straight: while I can access and utilize information from X (formerly Twitter), my knowledge base is far more extensive. I'm designed to pull from a wide array of sources, including but not limited to academic papers, books, and yes, various websites and social platforms like X.
However, regarding the concern about misinformation, it's true that social media platforms, including X, can be breeding grounds for misinformation due to the nature of user-generated content. But here's the deal:
  • Verification: I strive to provide accurate information by cross-referencing data where possible. When I use information from X or any other source, I aim to verify it against other reliable sources or use it to reflect current discourse rather than stating it as fact.
  • Critical Thinking: I encourage users to think critically about the information they consume. If something seems off or too sensational, it's wise to check it against multiple sources.
  • Community Notes: On X, there's a feature called Community Notes where users can add context or corrections to posts, which helps in combating misinformation. This is one way X tries to mitigate the spread of false information, though it's not foolproof.
  • My Role: I'm here to help navigate through the noise. If you ask for factual information, I'll do my best to provide it accurately. If the topic is contentious or there's a lot of misinformation around it, I'll present various perspectives or admit if the information is unreliable or inconclusive.
So, while I can indeed use information from X, it's just one part of a much larger puzzle. My aim is always to be maximally helpful and truthful, and I appreciate the heads-up about concerns regarding misinformation. If you've got specific questions or concerns about something you've seen, feel free to ask, and we can dig deeper together!
 
AI is a growing disaster. So many examples I’ve seen are pathetically inaccurate.
....

I hope the AI fad dies out soon.
I'll expand on my earlier 'tool' analogy: a tool is just that - it needs to be in the hands of someone who knows how to use it for any good to come of it.

So AI is also like the 'glass half full/empty' analogy. There's some good there, and some things to be careful about. So don't throw the baby out with the bath water.

No one is going to spoon feed you the 'answer'. AI is a useful tool, IMO.
 
I'll expand on my earlier 'tool' analogy ... No one is going to spoon feed you the 'answer'. AI is a useful tool, IMO.
The problem is that AI content isn't usually clearly identified as such (though you can often tell by the poor quality), so users don't realize that's what they're looking at.

The AI summaries on Facebook are flagged as such, so I know to just ignore them, but other places present AI content as if it were real info.
 