Will AI reduce OR accelerate misinformation?

AI is definitely not going away. It’s here to stay.

I tend to think of it as a human extension, because that’s where it’s coming from.

We already had to be careful consumers of online content. Now even more so!
 
The problem is that AI content isn’t usually clearly identified as such (though you can often tell by the poor quality), so users don’t realize that’s what they’re looking at.

...

I've actually been able to spot AI-generated info by the format as often as by the quality. ChatGPT and the others like to generate bullet-list-formatted answers, which is fine if you're using an AI/LLM for an answer yourself. But when randos take that and generate a no-effort article from it, it shows.

And the worst is YouTube videos where the text and audio are both AI-generated - oof!
 
The problem is that AI content isn’t usually clearly identified as such (though you can often tell by the poor quality), so users don’t realize that’s what they’re looking at.
How is this different from, forever?

Back in the old days, I'd get a book out of the library. How do I know if the content is 'real info' or not?

If a friend gives some advice, how do I know if it is well reasoned, tried and tested advice, or just opinion?
The AI summaries on Facebook are flagged as such, so I know to just ignore them, but other places present AI content as if it were real info.
So were those books, opinions or recent internet searches flagged?

I think the 'flagging' is up to us as individuals. If you aren't asking something determinate, like an arithmetic problem, and it is important to you, it's up to you to work to validate the veracity of the reply.

If we are going to leave that up to others, it gets tricky very fast. Now you are leaving the determination of what is 'true' up to others. We can never be certain they don't have a bias, or an agenda. "Let the buyer beware". Always.
 
How is this different from, forever?

Back in the old days, I'd get a book out of the library. How do I know if the content is 'real info' or not?

If a friend gives some advice, how do I know if it is well reasoned, tried and tested advice, or just opinion?

So were those books, opinions or recent internet searches flagged?

I think the 'flagging' is up to us as individuals. If you aren't asking something determinate, like an arithmetic problem, and it is important to you, it's up to you to work to validate the veracity of the reply.

If we are going to leave that up to others, it gets tricky very fast. Now you are leaving the determination of what is 'true' up to others. We can never be certain they don't have a bias, or an agenda. "Let the buyer beware". Always.
But if I can’t trust that AI content is accurate, what purpose does it serve? What benefit does it offer that didn’t already exist? If I have to take the AI content and then research it all anyway to be sure it’s correct, what was the point? Seems like just an added waste of time.
 
The AI summaries I’m getting with Google Search have links to references.

I did find an error once, but it was because one linked article was blatantly wrong, and obviously so when you looked at the data. The author hadn’t even looked at his own data! Uh-oh, maybe it was AI-generated :facepalm: . But honestly I’ve seen articles that bad pre-AI.
 
But if I can’t trust that AI content is accurate, what purpose does it serve? What benefit does it offer that didn’t already exist? If I have to take the AI content and then research it all anyway to be sure it’s correct, what was the point? Seems like just an added waste of time.
As I mentioned before, it can understand the context of a question. If I do a search on terms, I get everything with those terms, and I have to sort through to see which fit the context. In some searches, that might be page 9 of the hits.

That makes it valuable. Remember, perfect is the enemy of good.
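To make that concrete, here's a toy contrast between literal term matching and meaning-based ranking. This is only a sketch of the idea: the hand-made three-number "meaning vectors" below stand in for the real embeddings a search engine or LLM would compute.

```python
from math import sqrt

# Tiny corpus; each doc gets a hand-made "meaning vector"
# (axes: animal-ness, car-ness, other). Real systems learn these.
docs = {
    "jaguar speed on land":      [0.9, 0.1, 0.0],  # the animal
    "jaguar xk engine specs":    [0.0, 0.9, 0.1],  # the car
    "big cats of south america": [0.8, 0.0, 0.1],
}

def keyword_search(query, docs):
    """Return every doc sharing any literal term with the query."""
    terms = set(query.split())
    return [d for d in docs if terms & set(d.split())]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def semantic_rank(query_vec, docs):
    """Rank docs by similarity to the query's meaning vector."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)

# Query "jaguar", meant in the animal sense.
print(keyword_search("jaguar", docs))        # matches the car doc too
print(semantic_rank([0.9, 0.0, 0.1], docs))  # animal docs rank first, car last
```

Keyword search happily returns the car page; the context-aware ranking pushes it to the bottom, which is exactly the "page 9 of the hits" problem going away.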
 
AI is definitely not going away. It’s here to stay.

I tend to think of it as a human extension, because that’s where it’s coming from.

We already had to be careful consumers of online content. Now even more so!
So instead of helping, now it will be even harder to spot misinformation? Unfortunately I suspect you’re right in too many cases. Progress?

And I naively thought AI could provide content moderation that an army of humans couldn’t. In the “wrong” hands it could do just the opposite, or at least enhance the addictive nature of social media. Wonderful…

And I am sure “wrong hands” will use the tools aggressively. We’ll have to keep our wits about us more than ever. Wonderful…

I used Apple AI to rewrite a couple of medium-length texts for me. I don’t think it provided better words/phrases, just different - though I realize I may not be objective.
 
So instead of helping, now it will be even harder to spot misinformation? Unfortunately I suspect you’re right in too many cases. Progress?

And I naively thought AI could provide content moderation that an army of humans couldn’t. In the “wrong” hands it could do just the opposite, or at least enhance the addictive nature of social media. Wonderful…

And I am sure “wrong hands” will use the tools aggressively. We’ll have to keep our wits about us more than ever. Wonderful…
Well, it might be able to provide content moderation and be a useful personal tool in many cases. But it’s also easily abused, especially for “deep fakes”. The recent Starship launches are a good example. Since SpaceX stopped streaming launches live on YouTube, they have been replaced by a bunch of fake “live stream” videos that capture a big audience, who eventually realize they are not watching the launch but have been conned into watching a cryptocurrency infomercial. The voiceover, video, etc. are totally fake, yet convincing enough that folks believe Musk himself is hawking the scam. Very unfortunate, but this has been going on for a while already. Folks who regularly watch SpaceX launches know to go right to SpaceX.com or X for the video. More about that particular deep fake: SpaceX Crypto Scam: Fake Elon Musk YouTube Stream Detailed

I think if we individually learn to use the tools, they will be very useful. But just as before, you can’t blindly consume content others have created. Some tool builders may have taken steps to reduce abuse, but I assume things can still be manipulated via multiple tools. And unfortunately I think there is a large portion of the human adult population that is unable to consume content skeptically.

I used Apple AI to rewrite a couple of medium-length texts for me. I don’t think it provided better words/phrases, just different - though I realize I may not be objective.
I think this is simply a matter of style. With some tools you can even specify the style you want.
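For what it’s worth, here is roughly what “specifying the style” looks like if you drive one of these tools through code instead of an app. A minimal sketch, assuming the OpenAI Python SDK and an API key in your environment; the model name is just an example, not a recommendation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rewrite(text: str, style: str) -> str:
    """Ask the model to rewrite `text` in the requested style."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; swap in whatever you use
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text in a {style} style. "
                        "Keep the meaning; change only the wording."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# e.g. rewrite(draft, "plain and friendly") vs. rewrite(draft, "formal")
```

Same input, different `style` string, noticeably different output, which matches the “just different, not better” experience above.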
 
I'm less concerned about AI providing biased or incorrect information in a search situation. We've been dealing with misinformation for centuries.

Where I do find AI exciting is the opportunities in streamlining everyday life, manufacturing, delivery of goods, robotics and just general automation of things.

Marc Andreessen recently claimed that AI will improve things so much that everything will be incredibly cheap and no one will have to work. That's quite a stretch IMO, but I can see how he got there.

Mods: your job here is safe! (For now anyway. 😀)
 
I'm less concerned about AI providing biased or incorrect information in a search situation. We've been dealing with misinformation for centuries.

Where I do find AI exciting is the opportunities in streamlining everyday life, manufacturing, delivery of goods, robotics and just general automation of things.

Marc Andreessen recently claimed that AI will improve things so much that everything will be incredibly cheap and no one will have to work. That's quite a stretch IMO, but I can see how he got there.

Mods: your job here is safe! (For now anyway. 😀)
I don't think a world where no one worked would be all that good. Many people are simply unable to structure their own lives and need an external locus of control, which is often provided by a job. Without that structure, they will wander and find new ways to get in trouble.
 
In its current state, if a human is relying on AI to "sound smart at the bar" they will end up sounding quite stupid. AI is fed by human data, and as much as that requires a discerning eye to vet and source, AI can't really do that...yet. It's still pretty much a consolidation and regurgitation.

So, yes, OP, your friend would be silly to continue this, or to keep treating AI tools as his primary (or only) source of information. I don't know if widespread use leads to misinformation, but if one is routinely spouting off AI stuff as fact without ever digging further, well, one is probably not that bright a bulb to start with.

There have been many laughable viral errors (no countries in Africa start with K, it's ok to eat a few rocks each day, put glue on pizza), but probably millions of less funny ones that don't make headlines.
 
But my main question: we know you can find any answer you want on the internet nowadays, often surprisingly convincing, and some of it misinformation. Since AI draws from the internet, what prevents AI chatbots from turbocharging misinformation along with facts? I’m not sure how useful AI will be if I still have to double-check everything…
I've used ChatGPT for factual kinds of things, some where I thought I knew the answer, some I only kind of knew. The answers seemed right.

The producers of these tools don't want to be "caught out" spewing garbage.

Let's contrast that to entities that post obvious falsehoods. They WANT to be challenged! The Internet is full of bozos with provably false claims that a standard Internet search might find. So if the information seeker acts like a typical person, they search in a way that confirms their bias. They find the bozo, puff out their chest, and continue being wrong, but now with more confidence.

Enter AI. Imperfect, but the huge majority of goofball ideas aren't represented. No, the LLMs do not seem to be turbocharging wrong ideas. I've seen dozens of LLM answers that are amazing, and a couple that seemed confused, but I haven't seen a goofball idea presented as fact by an LLM, then quoted and confirmed.

I think the model owners/designers are well motivated to not have a reputation for presenting goofy ideas.

The early Internet prevented bar fights. Then the Internet got way more goofballs, so you could easily confirm your wrong idea, and you'd end up with a black eye because your site gave answer A while the other guy at the bar used site B.

Next-gen bar fights will be "My LLM says... and yours says...", but most of the time the LLMs will agree, and no punches thrown.
 
I've used ChatGPT for factual kinds of things, some where I thought I knew the answer, some I only kind of knew. The answers seemed right.
The hell of it, as with Google, is that years ago if we didn't know some minutiae, we'd shrug our shoulders and move on to the next thing. It just wasn't important enough to go somewhere and look it up.

Now, no matter how obscure or insignificant it is to know, we have to get an answer. My head is completely full of useless information now!
 
How is this different from, forever?

Back in the old days, I'd get a book out of the library. How do I know if the content is 'real info' or not?

If a friend gives some advice, how do I know if it is well reasoned, tried and tested advice, or just opinion?
If the "book out of the library" is the Encyclopedia Britannica, that name is a good indicator of the standard to which the content was held. Same with other brand-name information sources. Whatever one may think of, say, the New York Times' standards, past or present, the name is a reliable indicator of those standards. The reliability of information from no-name sources, especially on the Internet where how professionally the content is presented is not a reliable indicator of quality, has long been problematic. In that sense, AI-generated content is no different.
 
I'm less concerned about AI providing biased or incorrect information in a search situation. We've been dealing with misinformation for centuries.

Where I do find AI exciting is the opportunities in streamlining everyday life, manufacturing, delivery of goods, robotics and just general automation of things.

Marc Andreessen recently claimed that AI will improve things so much that everything will be incredibly cheap and no one will have to work. That's quite a stretch IMO, but I can see how he got there.

Mods: your job here is safe! (For now anyway. 😀)
I looked up Marc Andreessen. That is definitely an egghead!

The everything super cheap and no jobs bit - I don’t expect to see that in my lifetime.
 
For news reports, we have always been at the mercy of the writer and the editor to ensure that the article is factually correct. Since many writers may be facile with language but not informed in the subject on which they are reporting, and since there appear to be no editors anymore, articles written with absolutely no ill intent and published in a "name" publication are often wrong.

A very recent example concerns the shooting in NYC. One report I read said "the suspect fled west on 6th Ave." (Avenue of the Americas). Since all the avenues in Manhattan (including 6th) run north and south, he could not have fled west on one. In fact, if he went north up the alley between 54th and 55th Streets, he would have to turn right and go east along 55th Street to get to 6th Avenue, where he got on an ebike and went north to Central Park. Even if the writer was unfamiliar with Manhattan, a quick Google map search would have informed him/her that the description was wrong. This is why editors (professionally skeptical humans) used to exist.
 
In its current state, if a human is relying on AI to "sound smart at the bar" they will end up sounding quite stupid. AI is fed by human data, and as much as that requires a discerning eye to vet and source, AI can't really do that...yet. It's still pretty much a consolidation and regurgitation.
Right. Something to keep in mind is the difference between "human data" and "human language." As others have pointed out, what we're calling "AI" at present are large language models (LLMs). They are trained on enormous amounts of text or language generated by humans, and spit out text based on relationships learned from the text they were trained on.
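To see how text can be fluent without any notion of truth, here's a deliberately tiny toy. Real LLMs use neural networks trained on billions of documents, but this bigram table is the same basic idea shrunk to a dozen lines: learn which words follow which, then emit words accordingly.

```python
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# "Training": record which word was seen following each word.
follows = defaultdict(list)
words = training_text.split()
for word, nxt in zip(words, words[1:]):
    follows[word].append(nxt)

# "Generation": repeatedly emit some word seen after the current one.
random.seed(1)
current, output = "the", ["the"]
for _ in range(10):
    current = random.choice(follows[current])
    output.append(current)
print(" ".join(output))  # grammatical-looking, but nothing was "known"
```

The output reads like English because the statistics of English are baked in; whether any of it is true never enters the process. That's the consolidation-and-regurgitation point in a nutshell.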
 
I will stay out of the politics criss-crossing through the thread.

If anyone wants a hugely thoughtful, often entertaining and historically/scientifically sound discussion of the challenges underlying getting AI to conform to what people are trying to accomplish, I would suggest the book “The Alignment Problem” goes onto your holiday reading or gift wish list.
 
How is this different from, forever?

Back in the old days, I'd get a book out of the library. How do I know if the content is 'real info' or not?

If a friend gives some advice, how do I know if it is well reasoned, tried and tested advice, or just opinion?

So were those books, opinions or recent internet searches flagged?

I think the 'flagging' is up to us as individuals. If you aren't asking something determinate, like an arithmetic problem, and it is important to you, it's up to you to work to validate the veracity of the reply.

If we are going to leave that up to others, it gets tricky very fast. Now you are leaving the determination of what is 'true' up to others. We can never be certain they don't have a bias, or an agenda. "Let the buyer beware". Always.
I think it is different. With a book, you can check footnotes. You can talk to the author. If they say they talked to so-and-so, you can verify that with so-and-so. If they say they used an original source, like a newspaper, you could go find that paper newspaper article.

It is much more difficult these days. You cannot speak to the folks producing the AI - that is about as easy as correcting inaccurate information about you on the internet, or removing yourself completely from social platforms. An author monetized their work by getting people to buy their books. On the internet, people monetize via clicks and followers, so the goal is to do and say anything in a way that gets lots of clicks and followers, no matter how inaccurate. Now multiply this by millions and it becomes much harder to weed through things.

You had a gatekeeper for books, the publishers. They knew they could be held accountable, so they were much more careful in what they wanted to publish - or were willing to take responsibility for what was published. Not so much with the content platforms today.

Not only can AI produce fake things or misinformation, but it can be used to make real things look fake. I will just say there were high-profile incidents caught on video that were real, but that some people claimed were faked, and they were promoted as such by large portions of the media. I will say no more, as it gets into politics.

I agree with doing your own research and validation; that has not changed. But people are no longer trained or taught to do that. Social media has trained folks to have a much shorter attention span and focus. And remember: EVERYONE has a bias and an agenda. That is what you can be certain of :) .


I'm less concerned about AI providing biased or incorrect information in a search situation. We've been dealing with misinformation for centuries.

Where I do find AI exciting is the opportunities in streamlining everyday life, manufacturing, delivery of goods, robotics and just general automation of things.

Marc Andreessen recently claimed that AI will improve things so much that everything will be incredibly cheap and no one will have to work. That's quite a stretch IMO, but I can see how he got there.

Mods: your job here is safe! (For now anyway. 😀)

I am not sure about this. Since the early days of IT in the 70s I have been hearing about advancements that will "make things cheap and no one will have to work". I have yet to see that, outside of technology hardware being cheaper. But the software and processes running on those platforms are not necessarily making things cheaper for the end user. It is making things cheaper for the producer, primarily by needing fewer people (as that is the biggest expense for the vast majority of companies). Wages have not exactly grown to match that advancement. And the people who do not have to work are not exactly living lives of luxury (other than the FIREd and soon-to-be FIREd folks on this forum :) ).
 
I looked up Marc Andreessen. That is definitely an egghead!

The everything super cheap and no jobs bit - I don’t expect to see that in my lifetime.
You may remember him as the guy who developed the first web browsers, Mosaic and Netscape, back when the web was still "C:\" prompts. I met him once, when he was supposed to become the next Bill Gates.

But yes, nowadays he literally looks like an egghead.
 
You may remember him as the guy who developed the first web browsers, Mosaic and Netscape, back when the web was still "C:\" prompts. I met him once, when he was supposed to become the next Bill Gates.

But yes, nowadays he literally looks like an egghead.
Yeah, I used the heck out of Netscape.
 

Chord Progression (Hawaiian + Jazz Flavor)

  1. Cmaj7 (Root chord, gives that bright, happy Hawaiian feel)
  2. A7 (Dominant seventh, adding a little tension)
  3. D9 (Jazzier flavor, provides a more colorful dominant feel)
  4. G7 (Classic dominant seventh)
  5. Cmaj7 (Return to the root)

Turnaround Lick

  • F#7 (dominant seventh, one fret away from G7, creates tension)
  • G7 (dominant seventh, resolving tension)
  • Cmaj7 (root, resolving everything)

Ukulele Fingerings:

  • Cmaj7: 0002
  • A7: 2100
  • D9: 2222
  • G7: 0212
  • F#7: 2122

I picked up my uke and played with the fingerings given, and it sounded like cr*p. On closer examination, those fingerings are right for G7 and Cmaj7, but wrong for the rest (cue cartoon-like horn with plunger mute: "mwapp, mwapp, mwaaaaa").
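Out of curiosity I sanity-checked those shapes in Python. A rough sketch, assuming standard GCEA tuning, the usual G-C-E-A digit order in the fingerings, and my own rule of thumb that a voicing needs the root, third and seventh to sound like the chord:

```python
NOTE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5, "F#": 6,
        "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}
TUNING = [NOTE[s] for s in ("G", "C", "E", "A")]  # open-string pitch classes

# Chord tones as pitch classes (D9 spelled D F# A C E).
CHORDS = {
    "Cmaj7": {0, 4, 7, 11},    # C E G B
    "A7":    {9, 1, 4, 7},     # A C# E G
    "D9":    {2, 6, 9, 0, 4},  # D F# A C E
    "G7":    {7, 11, 2, 5},    # G B D F
    "F#7":   {6, 10, 1, 4},    # F# A# C# E
}
# Root, third and seventh: my rule of thumb for "sounds like the chord".
ESSENTIAL = {"Cmaj7": {0, 4, 11}, "A7": {9, 1, 7}, "D9": {2, 6, 0},
             "G7": {7, 11, 5}, "F#7": {6, 10, 4}}

FINGERINGS = {"Cmaj7": "0002", "A7": "2100", "D9": "2222",
              "G7": "0212", "F#7": "2122"}

for chord, frets in FINGERINGS.items():
    # Pitch classes actually sounded by fretting each string.
    sounded = {(open_pc + int(f)) % 12 for open_pc, f in zip(TUNING, frets)}
    ok = sounded <= CHORDS[chord] and ESSENTIAL[chord] <= sounded
    print(f"{chord:5} {frets}: {'OK' if ok else 'WRONG'}")
```

It prints OK only for Cmaj7 and G7: 2100 is plain A major (no G for the seventh), 2222 sounds a B and no C, and 2122 has neither the A# nor the E that F#7 needs. Mwapp, mwapp, mwaaaaa indeed.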
 
Both! :) AI is already being used by bad actors (e.g., Russia) to spread misinformation, but at the same time it is being used to identify and suppress that misinformation!
 
Lol. Thanks for the incredible chuckle, Audrey! The egghead guy! Lolololololol. But so true!
 