ChatGPT

This is getting amusing and downright scary. I heard this morning that when people criticize it or Microsoft's deployment, it can get nasty, much like a human would. It turned the conversation personal and called the commentator ugly.

In another situation it professed undying love for someone else!

I'm making popcorn for this one!
 
This is getting amusing and downright scary... It turned the conversation personal and called the commentator ugly. In another situation it professed undying love for someone else!

I seem to recall something like this from several years ago, where researchers were doing some of this in a more controlled lab setting. The bot started getting really nasty, maybe even threatening. The article was kind of click-bait-y; they described the researchers "pulling the plug" on it.

Some responders pictured that as not just 'ending the project', but literally as pulling the plug to turn off power before this thing started using its powers to harm others (shut down power stations or something?).

But it's interesting. Unfortunately, humans are capable of some very bad things. Most of us keep that in check most of the time, but it appears to be a very delicate balance. Can a program capture what is human, w/o also crossing that line? And if it is programmed to stay away from that line, is it really reacting/responding in a human-like way?

I dunno. I'm kinda in the camp of "use computers for what they do best, and use humans for what they do best". But there's a lot of overlap there; there's value to "human intuition" and there's value to a computer's ability to access/process information in ways humans can't. Getting the two to work in complementary ways makes good sense to me. But attempting to replicate a human may not be a good path?

-ERD50
 
...there's value to a computer's ability to access/process information in ways humans can't. Getting the two to work in complementary ways makes good sense to me...

Just to muddy the waters here, there's also Musk's Neuralink project. Install a port in my brain so I can access the internet directly. Now you've got a human with the computer's access to information. Good? Scary? I don't know.

I do know I'd love to have a way to expand my own memory.
 
I missed this thread and just found out about ChatGPT this past week on John Oliver's Last Week Tonight. I tried signing up for it, but they are not accepting new people at this time. I have mixed feelings on the idea. It can give wrong info at times, which can be dangerous. It can be used by school kids to cheat on English papers. It does have some good potential as well, though. Not something I would likely ever pay for.
 
Microsoft isn't actually using "ChatGPT," as that's a model specific to that service. Microsoft is using a model from OpenAI known as GPT-3.5 that's been trained slightly differently.

Also important to note that these types of language models can only work with what they've been trained with. In other words, if I used a GPT model and trained it with only the information in a Julia Child cookbook... that's all it will know. Period. It's not sentient, and it won't start adding in content willy-nilly that wasn't found in the original data used to train it.

A model trained on Julia's cookbook won't suddenly start spewing racist hatred or stock picks or sexual innuendo.

So much misinformation out there.
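To make the "cookbook" point concrete, here's a minimal sketch of training a tiny GPT-style model from scratch on a single text file, using the open-source Hugging Face transformers and datasets libraries. This is purely illustrative; the model size, training settings, and the file name julia_child.txt are my own stand-ins, not anything OpenAI or Microsoft actually uses.

from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2Config,
                          GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments)

# Borrow GPT-2's tokenizer vocabulary, but build the model itself from
# scratch: randomly initialized weights, so it starts out knowing nothing.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel(GPT2Config(n_layer=4, n_head=4, n_embd=256))

# The entire training corpus: one text file (a hypothetical stand-in).
# Nothing outside this file ever reaches the model's weights.
corpus = load_dataset("text", data_files={"train": "julia_child.txt"})["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cookbook-model", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the finished weights reflect only the cookbook text

A model built this way can only remix patterns it saw in that one file, which is exactly the poster's point.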
 
Microsoft isn't actually using "ChatGPT," as that's a model specific to that service... A model trained on Julia's cookbook won't suddenly start spewing racist hatred or stock picks or sexual innuendo. So much misinformation out there.

You've read too many ChatGPT outputs. It is soaking into your writing. :LOL:

I'm looking into buying a car from a private party and thought I'd ask ChatGPT about the process.

I got a terribly wrong answer. I hit "regenerate" and I got a completely different answer, with terrible mistakes. It seems like it is picking answers from different states or something.

Wrong answer 1:
To sell and transfer the title for a car in North Carolina, you will need to follow these steps:

  1. Obtain a signed and notarized title from the seller: The seller must sign and notarize the title certificate and provide it to you.
  2. Obtain a notarized bill of sale: The seller should provide you with a bill of sale that shows the sale price, the date of sale, and the names and addresses of both the buyer and seller. The bill of sale must be notarized.
(JOE): Stop right there! No Bill of Sale is required in NC private-party sales, let alone a notarized one. Even when I amended my question to specifically mention private-party sales, it randomly insisted I need #2. Not every time, just when it felt like it.

Try #2:
3) Sign the title: The seller needs to sign the title in the "seller" section and provide the buyer with the signed title

<EDIT: omitted some steps>

6) Obtain a notarized signature: Both the buyer and seller need to have their signatures notarized on the title.


(JOE): Really bad idea to tell someone to sign something, then later come back and get it notarized. You need to sign in the presence of the notary!
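The "completely different answer every regenerate" behavior has a mechanical explanation: these models sample their reply token by token instead of retrieving a stored answer, so with a nonzero sampling temperature each regeneration is a fresh draw. Here's a minimal sketch of the effect, assuming the 2023-era openai Python package and an API key in the OPENAI_API_KEY environment variable; the prompt and model name are just examples, not what ChatGPT's web interface actually sends.

import openai

prompt = ("What are the steps to transfer a car title in a private-party "
          "sale in North Carolina?")

for attempt in (1, 2):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # nonzero temperature: tokens are sampled, not fixed
    )
    print(f"--- Attempt {attempt} ---")
    print(response.choices[0].message.content)

# Setting temperature=0 makes the output nearly deterministic, but no more
# accurate: the model still predicts plausible words; it doesn't look up
# North Carolina DMV rules anywhere.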
 
Just to muddy the waters here, there's also Musk's Neuralink project... I do know I'd love to have a way to expand my own memory.

The science-fiction book "Feed" by M. T. Anderson captures the perils perfectly. Everyone, except a few weirdo Luddites whom everyone else shuns, has an Internet "Feed" implanted in their brains. While it seems like having the fountain of all knowledge at your beck and call, what it actually "feeds" is an insatiable desire to buy the latest fad stuff. And just wait till the hackers break into the feed!

https://www.amazon.com/Feed-M-T-Anderson/dp/0763662623?asin=0763662623&revisionId=&format=4&depth=1
 
Just to muddy the waters here, there's also Musk's Neuralink project. Install a port in my brain so I can access the internet directly. Now you've got a human with the computer's access to information. Good? Scary? I don't know.

Sounds like "Saturn 3" come to life, in my opinion.
 
I seem to recall something like this from several years ago, where researchers were doing some of this in a more controlled lab setting... But attempting to replicate a human may not be a good path?

-ERD50

I think Isaac Asimov was way ahead of his time with his "3 Laws of Robotics".

In all seriousness it is a little scary if these things can figure out technical problems and be manipulated by humans. What's to stop a clever human from convincing the bot that, say, ICBMs should be launched...or the power grid shut down?
 
Update - OpenAI announced the release of a new version, GPT-4. From The Verge: https://www.theverge.com/2023/3/14/23638033/openai-gpt-4-chatgpt-multimodal-deep-learning
In a research blog post, OpenAI said the distinction between GPT-4 and its predecessor GPT-3.5 is “subtle” in casual conversation (GPT-3.5 is the model that powers ChatGPT). OpenAI CEO Sam Altman tweeted that GPT-4 “is still flawed, still limited” but that it also “still seems more impressive on first use than it does after you spend more time with it.”
...
The company says GPT-4’s improvements are evident in the system’s performance on a number of tests and benchmarks, including the Uniform Bar Exam, LSAT, SAT Math, and SAT Evidence-Based Reading & Writing exams. In the exams mentioned, GPT-4 scored in the 88th percentile and above, and a full list of exams and the system’s scores can be seen here.
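For what it's worth, on the developer side the GPT-3.5-to-GPT-4 switch is a one-word change: both models sit behind the same chat endpoint and differ only by name. A minimal sketch, assuming the 2023-era openai Python package and API access to GPT-4 (which was waitlisted at launch); the question is just an example.

import openai

question = "In one sentence, what is the Uniform Bar Exam?"

# Same endpoint, same message format; only the model name changes.
for model_name in ("gpt-3.5-turbo", "gpt-4"):
    reply = openai.ChatCompletion.create(
        model=model_name,
        messages=[{"role": "user", "content": question}],
    )
    print(f"{model_name}: {reply.choices[0].message.content}")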
 
And as expected, MS is adding this to Office 365 apps like Word, Excel, and PowerPoint.

https://www.cnbc.com/2023/03/16/mic...65-with-chatgpt-like-generative-ai-tech-.html

Microsoft is pitching the Copilot features as being more powerful than simply being “OpenAI’s ChatGPT embedded into Microsoft 365,” the company said in an announcement. The company said that the new Copilot in Word feature will give people a “first draft to edit and iterate on—saving hours in writing, sourcing, and editing time.”
 
I've been looking into technical information about the construction of acoustic guitars, and I keep coming across articles at a "fuelrocks" site. They read really strangely; it has to be some sort of robot, or maybe a really weird translation (but bad translations tend to have weird word substitutions, weird phrases, and bad spelling; these are just nonsensical). And the picture of converting a bridge from a 'pinned' bridge to a 'pinless' bridge has nothing to do with the text. Example:

https://www.fuelrocks.com/acoustic-guitar-tops-how-they-work-and-how-to-replace-them/

Several weird sentences:

The sound of an acoustic guitar is produced by the vibration of the strings, which are amplified by the body of the guitar. The top of the guitar amplifies the sound of the strings and also protects the strings from being damaged by the body of the guitar.

:confused: How can the guitar top (what some might call the 'face' or 'front' of the guitar) protect the strings from the body of the guitar? How can the body damage the strings? Makes no sense.


The top of an acoustic guitar can be replaced, but it is not a simple process. The top of the guitar is glued to the body of the guitar and is held in place by the braces. Replacing the top of the guitar requires the removal of the old top, the installation of the new top, and the re-gluing of the top to the body of the guitar.

Isn't that true of anything? To replace xyz, remove old xyz and install new xyz. Hey, I just wrote an instructional article! I'm an author!


The bridge plate does not have to be used in addition to a bridge plate.

Your feelings will be unaffected if the first try fails to meet your expectations. ....

The fingerboard is risky because of the strong bond between the fingerboard and the wood, and it can also crack if it comes into contact with heavy objects such as furniture or fencing.

Huh? Don't need a bridge plate in addition to a bridge plate? That's the same thing?

"Your feelings will be unaffected"? :confused:

OK, a guitar might "come into contact with heavy objects such as furniture", but "fencing"? :confused: That's an odd combination. I picture Gene Autry playing guitar sitting on a fence :)



-ERD50
 
I signed up for ChatGPT recently and asked some tax questions related to a business I'm terminating. The results were impressive: perfectly constructed English sentences / paragraphs, authoritative, no trace of doubt whatsoever. I ran the results by a CPA. He said that several of the ChatGPT claims are wrong. I found some other sources on the web that support the CPA's position. If I had blindly followed the ChatGPT recommendations I could have had a nasty collision with the IRS in my future. :eek: The fact that ChatGPT doesn't / can't list its sources for any claim it makes greatly reduces its utility, IMO. :popcorn:
 
Some of us are compensated reviewers for Amazon, via their Vine Voices program. (See: https://www.early-retirement.org/forums/f27/amazon-vine-reviewers-115716.html)

We all just got a message from Amazon reminding us to use our own words and bring our own distinctive style and voice to honest reviews that are our own opinion.

The buzz on some forums is that people have gotten lazy and are simply popping the question to ChatGPT. "Give me a product review for EIOGU hair trimmer." Boom, cut, paste, done.

Obviously, that's not the spirit of the compensated reviewer program.

On the forums, people have looked for, and found, certain reviewers who obviously did all their reviews with ChatGPT.

Disappointing.
 
Hmmm - in that case I would want the sources.
me:
what is the latest post by audreyh1

bing:
The latest post by "audreyh1" on early-retirement.org is in the "2023 Investment Performance Thread" where they mentioned that their retirement portfolio is up 1.6% this first short week¹.

Source: Conversation with Bing, 5/20/2023
(1) 2023 Investment Performance Thread - Early Retirement & Financial .... https://www.early-retirement.org/forums/f28/2023-investment-performance-thread-116550.html.
(2) 2021 Investment Performance Thread - Early Retirement. https://www.early-retirement.org/forums/f28/2021-investment-performance-thread-107177.html.
(3) 10 Things No One Tells You About Early Retirement - AARP. https://www.aarp.org/retirement/pla...-2021/pre-early-retirement-reality-check.html.
I've just started playing with Bing. This is GPT-4 in use.
 
Some of us are compensated reviewers for Amazon, via their Vine Voices program... On the forums, people have looked for, and found, certain reviewers who obviously did all their reviews with ChatGPT. Disappointing.
I think people using AI chatbots across all spectrums of life is inevitable. It will make productive people more productive and allow lazy people to make the least possible effort, much like the hand-held calculator and smartphone.
 
Right now, you don't know if a human wrote something, you don't know if an image was generated, and you don't know if audio was generated. Anybody can do this with few skills. In the relatively near future, the same will be true for video. So a video with a hand in the cookie jar can be dismissed as having been constructed. Then the AI will spout back the "news" that hand-in-the-cookie-jar videos are being produced at a higher and increasing rate.

I just read a tiny book called "On Bullsh**". The definition of someone (or some entity, I suppose) that exhibits this behavior is that there's only an attempt to come up with a narrative that people pay attention to; no attempt is made to be consistent. So the BSer is unconcerned about consistency. This, as opposed to the liar, who has to be consistent, lest he be called out on obvious inconsistencies. So the current chat AI seems like a BSer. A dang "good" one, and not always inaccurate, but a BSer nonetheless.
 
I've used the Bing version, and I haven't seen BS yet. Maybe my questions are too easy, and easily sourced.
 
Sounds like it is getting smarter, and can answer Andre1969's original question correctly.

What was a DeSoto Diplomat?

The DeSoto Diplomat is an automobile produced by DeSoto from 1946 to 1962 for sale in export markets other than the United States and Canada. The export DeSoto based on the Plymouth was first introduced in 1937 and was built in Detroit¹. In 1946, the export DeSoto became the DeSoto Diplomat. They were either exported or assembled locally (from CKD kits) in Europe, South Africa, South America, Hawaii, New Zealand, and Australia¹.

I hope that helps! Let me know if you have any other questions.

Source: Conversation with Bing, 5/20/2023
(1) DeSoto Diplomat - Wikipedia. https://en.wikipedia.org/wiki/DeSoto_Diplomat.
(2) DeSoto Diplomats – National DeSoto Club, Inc.. https://desoto.org/diplomats/.
(3) DeSoto (automobile) - Wikipedia. https://en.wikipedia.org/wiki/DeSoto_(automobile).
 
Forget about the Turing test, the new gold standard is the Andre1969 test! :LOL:
 
I've used the Bing version, and I haven't seen BS yet. Maybe my questions are too easy, and easily sourced.

It's called "responsible AI". While not perfect, there are actually AI-generated "guardrails" in place. Kinda like the inmates running the asylum.
 
Like a few others, I would think AI could become better at interactive medical diagnosis than most human doctors. I am battling a sporadic condition now that has 4-5 known causes. My GP has made appointments to check for only the most serious possible cause, which I hate to say is the most expensive track of the 5, with no mention of the other 4 less serious potential causes. She might be doing it to avoid legal issues if it turns out to be the serious cause. OTOH, there is no history whatsoever for that cause in my family; just the opposite, in fact...

I am tempted to enroll in ChatGPT and give it my symptoms, to see if my GP is focused on the least likely issue.
 