Do you ask ChatGPT questions? If so, what kind?


badatmath
I was noticing the other day how inaccurate AI can be on some searches. I had just looked up an event that happened near me some years back (food poisoning at a local place) . . . but I did not feel the AI captured it accurately. It downplayed the severity, and the facts weren't strictly in accordance with my memory or a local news article of the time. (So where does it learn, if not from news articles?) But I digress . . .

I notice online that some people are getting diet/workout/relationship advice from ChatGPT, and I wondered if that is limited to the younger generation, or if anyone here has tried it and found a use for it? It seems even worse to me than asking SGOTI . . . but I'm old and grumpy, so maybe not.

I have not and likely will not.

I will try a ride in a driverless car some day. I see them buzzing around, and it appeals to me not to have to chat up the drivers . . . or tip them.
 
No, I do not "chat" with AI. Like pretty much everyone I know, I do online searches through Google and DuckDuckGo, but those are focused on facts, definitions, etc. As an aside, neither my wife nor I have any interest in or desire to ride in a driverless car.
 
Try asking ChatGPT how many "r"s there are in the word "strawberry". I wouldn't call that "intelligence". ;)
 
Try asking ChatGPT how many "r"s there are in the word "strawberry". I wouldn't call that "intelligence". ;)
I asked Copilot. Since the app is prone to human-like errors (it read too fast, and told me so in a follow-up conversation), I asked it to develop a more thorough prompt. It came up with the following:

"Can you count the number of R's in 'strawberry' and list their positions?"

Then you get a thorough, correct answer.
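For what it's worth, the ground truth is easy to verify outside any chatbot; a couple of lines of Python will do it (nothing here is specific to Copilot):

```python
word = "strawberry"
# 1-based positions of every 'r' in the word
positions = [i + 1 for i, ch in enumerate(word) if ch == "r"]
print(len(positions), positions)  # prints: 3 [3, 8, 9]
```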

There is much to learn.
 
I asked how to drive from spot A to spot B while avoiding major urban areas and steep road grades. Was mostly satisfied with the results.
 
I use Copilot sometimes but mostly use Grok.

I asked one (forget which) to write C# code to send and receive a message using TCP. I compiled the two programs, and aside from changing the IP address, they worked without any changes.
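For anyone curious what that exercise produces, here is a minimal sketch of the same send/receive pattern, written in Python rather than the C# the model generated; the host and port are placeholders:

```python
# Minimal TCP send/receive pair using only the standard library.
import socket

def receive_one(host="127.0.0.1", port=5000):
    """Listen for one TCP connection and print the message it sends."""
    with socket.create_server((host, port)) as server:
        conn, _addr = server.accept()
        with conn:
            print(conn.recv(1024).decode())

def send_one(message, host="127.0.0.1", port=5000):
    """Open a TCP connection and send a single message."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message.encode())

# Run receive_one() in one process, then send_one("hello") in another.
```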

I asked Grok to explain many instances of medical shorthand from my recent ER visit. It seemed to do a good job.

I have also asked Grok to explain medical questions on how drugs like statins work, contraindications etc. But more as a way of confirming what I already know or suspect. I would verify before taking any drastic action. Grok seems to hedge its answers by suggesting consulting a real doctor.
 
I don't converse with AI every day. But I do go deep in my genealogy group(s), where there are early adopters who know far more than I do about this technology. Explaining the technology's broad impact on our lives, one genealogist said, "You won't be replaced by AI, as most workers believe; you will be replaced by another worker using AI."

It is of no grave consequence to me, but I recognize there are areas of application that make more sense. For example, a common genealogy task is to review a basket of census forms written in different years, by different individuals, and so on. One goal is to uncover facts that are not clear due to handwriting and other stumbling blocks. Others before me have created templates and made them available on the AI platform. Rather than fumbling through this new tech, you use the engineered prompts and syntax, fill in your details, and get a structured analysis in very little time. Then it's your task to consider what's before you and determine how well it applies.

An experienced newswriter would handle the OP's request differently. Their AI app would be a professional-level subscription, with the publisher's entire library of news articles added to it. There would be suggested templates to follow, and the result would be much different from the OP's. For example, you would define the required level of completeness in some way. By prompting, re-prompting, and examining where I fall short, I make progress.
 
I wonder why you don't hear about it being used to make predictions about how to invest, or what horse to bet on...
 
Which ChatGPT? 3, 4, o1-preview? There isn't one AI, there are thousands, and they are improving rapidly. Yes, they make mistakes, but good ones are far better writers than you and me, and they can crank out volumes of information in nanoseconds, most of it spot-on to the prompt you give them. AI is already producing 25% of Google's code. Developers everywhere who are good at using AI will quickly outperform those who ignore it.

I almost never bother RTFM'ing (reading the F'ing manual) anymore. If I have a question about what to do in an application, I ask an AI. When I wanted a plain-English summary of a complex trust (as a supplement for the kids and me), I got it from Claude 3.5, and it was excellent. I did take a very close look to see if there were significant errors. And yes, you do have to be careful still. But this is not some flash-in-the-pan toy; it is the future of work.
 
I don't use any of them, as they are so unreliable. Most of them don't seem to be able to tell good info from bad (and they pull from so many sources). There are so many examples of them being horribly wrong, and I am not at all a fan of how much energy they consume.
 
I loaded the ChatGPT app on my phone and used it probably 30 or 40 times. Usually just one question, without follow-up.

To me, the value is in getting to a proven answer quickly, as opposed to asking something it might mess up on. For instance, if a question is going to produce a mess of search results trying to sell you stuff, and you're not trying to buy anything, then it makes sense to use an LLM-based inquiry. I also use the trick of doing a web search and adding "site:Wikipedia.org", which also gets to an answer more quickly.
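The Wikipedia shortcut can also be scripted. As a rough sketch, Wikipedia exposes a public REST summary endpoint; the page title below is only an illustration, and this assumes the third-party requests package:

```python
import requests

def wiki_summary(title):
    """Fetch the lead-section summary for a Wikipedia page title."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "")

print(wiki_summary("Large_language_model"))
```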
 
I've played with ChatGPT and Perplexity. One thing that's becoming clear is that being able to design good prompts or questions is key. This is an actual profession now; you can get a job doing this.

Sometimes the AI summary at the top of the results at the regular search engines is helpful, and all I need for a quick answer. Beyond that, I've found the AI tools I've tried to be pretty good at writing code.

Again, it's all about how you prompt it. Given enough iterations, where I ask it to add a little more logic each time, it generally produces something usable. It frequently forgets what I asked it previously, though, and I have to tell it to add something back in or fix a mistake it had made earlier. For languages I don't use frequently, it's quicker than looking up each step in a reference manual, although the process is very similar: start simple and add functionality one chunk at a time.
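To make that workflow concrete, here is the shape of it; the prompts and the resulting code are invented for illustration:

```python
# Iteration 1 prompt: "Write a function that reads a CSV file and returns its rows."
# Iteration 2 prompt: "Now only return rows where the 'status' column is 'active'."
# Iteration 3 prompt: "You dropped the header handling from iteration 1; add it back."
import csv

def read_active_rows(path):
    """Return header-aware rows whose 'status' column equals 'active'."""
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f) if row.get("status") == "active"]
```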

Luckily I don't do this for a living. My guess is the whole job of "coder" will devolve to "AI prompter." Frankly, the kids coming out of school before I retired were far more suited to that task than to actually understanding what their code does on the hardware, so no great loss there.
 
I use Perplexity to help me research companies. I recently asked it for the top 10 dividend-paying MLPs to give me a starting point for researching some additions to my portfolio.
 
I write press releases for an organization I volunteer for, and I use ChatGPT to help write them. First, I'll write something myself, then I'll feed it into ChatGPT and ask it to revise what I've written. It sometimes comes up with a better way of getting my message across. It usually takes multiple passes, and I tell it things like "make it shorter", "make it longer", or "don't use the word nnn". It's really good at grammar; I don't trust it with facts.
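For anyone who wants to script that revise-and-repeat loop, here is a minimal sketch assuming the official openai Python package; the model name, file name, and editing instructions are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def revise(draft, instruction="Fix the grammar; keep the facts unchanged."):
    """Send a draft plus an editing instruction and return the revision."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a press-release editor."},
            {"role": "user", "content": f"{instruction}\n\n{draft}"},
        ],
    )
    return response.choices[0].message.content

# Multiple passes, as described above:
draft = revise(open("release_draft.txt").read())
draft = revise(draft, "Make it shorter.")
draft = revise(draft, "Don't use the word nnn.")  # 'nnn' as in the post above
```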
 
Grok seems to hedge its answers by suggesting consulting a real doctor.
I'm not a real doctor, but I play one on TV.
 
I'm surprised it's unreliable for some. I use it instead of generic search, since it provides an answer to my question, not a bunch of links to websites where I have to dig for an answer.

But I don’t blindly go with the result. If something doesn’t seem right, then I’ll double check, especially if it’s for something more serious. Sometimes I’ll ask for the reference sites if I want to dig in deeper.
 
This is not an answer to how we might use ChatGPT but I thought of this thread when I read it in a newsletter. It speaks to the common view that AI is all hype and BS:

An 82-year-old man named Paul had aggressive blood cancer. After six rounds of long and unpleasant chemotherapy failed, things looked bleak. Doctors had tried all the usual cancer killers, but none worked.
Enter Exscientia, a UK startup using AI to play matchmaker between patients and treatments. They took a tiny piece of Paul's tumor and exposed it to a buffet of drugs. The AI analyzed the results and recommended a drug Paul's doctors skipped over.
Two years later, Paul isn't just alive; he's cancer-free. The AI-chosen drug obliterated every last cancer cell!
Drugmakers have toyed with AI for decades. But this new breed of AI systems, like ChatGPT, is a game changer. They can connect dots from medical journals, clinical trial data, and personal genetic information that human doctors never see.

This is a specialized AI, but it builds on the same underlying "Transformer" approach as LLMs like ChatGPT.
 
I test software for a living, and I've never seen a community of testers come together to align on a singular complaint the way they have about LLMs and ChatGPT . . . It's untested.

Unverified.

Out in the wild, free to roam and do what it pleases. There has been a surge in exactly what the testing community is always on the lookout for: poor-quality products and features.

I will say, if you are okay sifting through data that may or may not be accurate, then go for it. I build and test software every day, and I really don't use it that much. So maybe I am doing it wrong.
 
... Again, it's all about how you prompt it. Given enough iterations, where I ask it to add a little more logic each time, it generally produces something usable. It frequently forgets what I asked it previously though, and I have to tell it to go and add something back in, or fix a mistake it had made earlier again. For languages I don't use frequently, it's quicker than looking up each step in a reference manual, although the process is very similar. Start simple and add functionality one chunk at a time. ...
Thanks for sharing your specifics. I've never used it to generate code. I'm sure it would be helpful, but I never think of it. I often use an example from StackOverflow. I'm not up on all the newer language features, and want something I can easily read and understand. I suppose I can ask it for the dummy version, hehe!
 
I use free versions of Copilot and Claude, and I prefer them to regular search engines, because they just give me the answer, as others have said, not the usual Google and Safari page of links and ads.

I also subscribed to Grammarly, which has an AI that makes my English perfect as I write for my consulting clients; and I can prompt it to write whole passages in various styles.

Extraordinary.
 
Other than maybe using it to start a creative thing to break writer's block, I'm unimpressed. Every time I tried finding the answer to a multi-parameter question that could be researched using public info, it failed. When I added additional criteria or clarified my question to try to help it, it just started telling me what I asked for: making it up, hallucinating, etc. I tried a few graphics things . . . neat, but they still had many errors (for instance, I asked for the image to incorporate specific text, and it would misspell it or use words other than what I requested . . . a very odd error). Maybe the paywalled versions are smarter, but for now I see little value for myself.
 
Like most things, I think it's a matter of the right tool for the right job. In that regard, it helps to know how they work.

The latest AI systems are called "generative AI" because they create new things, unlike traditional AI, which excels at discerning A from B and using that to recommend an action: play this song, stop the car, that picture has a rhino in it, etc.

Under the hood, generative AI systems are still massive prediction engines, but of unparalleled complexity. ChatGPT-4 ingested 10% of the Internet. These models predict the next best "token" at the sub-word level, 2-3 characters at a time. The sheer volume of information they were trained on gives them exceptional range and depth of topics and skills . . . but at its core it remains probabilistic prediction.
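A toy illustration of that prediction step; the vocabulary and the scores here are invented, since in a real model billions of parameters produce the scores:

```python
import math
import random

vocab = ["straw", "berry", "##s", "."]
logits = [0.2, 2.1, 0.4, -1.0]  # stand-in for a model's raw output scores

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(x) for x in logits)
probs = [math.exp(x) / total for x in logits]

# Sample the next token in proportion to its probability.
print(random.choices(vocab, weights=probs)[0])  # usually "berry"
```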

The systems are now updated to access the Internet and augment their knowledge through searches. The newest models, like o1, embed new techniques to come closer to real reasoning and math. They are also pushing their prediction capabilities by allowing users to enter prompts of huge complexity and specificity: 500k characters in some cases. The jury is out on how successful these techniques are at chasing out errors. Some evidence exists that getting better at one task reduces success at others.

Knowing all that, they are excellent tools for some things but not others.

I would NEVER trust generative ai to make an important decision where you need a binary right/wrong answer. It is not a doctor, lawyer, therapist or financial advisor.

I do/have trusted it with:

1 - items I’m just curious about
2 - brain storming headlines for articles I write
3 - critiquing my articles
4 - drawing logos
5 - reviewing contracts to give me the gist of what they say and running down summaries of laws I don't know about (but I would ALWAYS engage a lawyer for actual work); looking up specific questions within a contract (e.g., when does my non-compete expire)
6 - compiling research (with links) on a complex topic to help me get ready for a panel I spoke on
7 - giving me quick summaries of technical terms I hadn’t heard while I’m on a conference call
8 - helping me learn Python by creating scraps of code; same for advanced Excel functions. You can even give it a scenario and ask it to suggest an approach in Excel (a sketch of the kind of scrap I mean follows just below)
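The kind of scrap I mean, as a sketch: a SUMIF-style rollup that would otherwise be an Excel formula. The file name and column names are made up:

```python
import csv
from collections import defaultdict

# Total the 'amount' column per 'category', like Excel's SUMIF.
totals = defaultdict(float)
with open("expenses.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["category"]] += float(row["amount"])

for category, amount in sorted(totals.items()):
    print(f"{category}: {amount:.2f}")
```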

I like Gemini the most but haven’t really tried Perplexity.

Google's new NotebookLM service is shockingly powerful for knowledge development/writing/learning. I'd suggest trying that . . . if it's the right tool for the job!

(Hope that massive post is useful!)
 