Do you ask ChatGPT questions? If so, what kind?

Like most things, I think it’s a matter of the right tool for the right job. In that regard, it helps to know how these systems work.

The latest AI systems are called “Generative AI” because they create new things, in contrast to traditional AI, which excels at discerning A from B and using that to recommend an action: play this song, stop the car, that picture has a rhino in it, etc.

Under the hood, generative AI systems are still massive prediction engines, but of unparalleled complexity. ChatGPT-4 ingested some 10% of the Internet. They predict the next best “token” at the sub-word level, 2-3 characters at a time. The sheer volume of information they were trained on gives them exceptional range and depth of topics and skills … but it remains a probabilistic prediction at its core.
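That next-token loop can be sketched in a few lines of Python. This is a toy illustration with a made-up vocabulary and invented probabilities, not a real model:

```python
import random

# Toy "model": maps a context string to a probability distribution over
# possible next tokens. A real LLM computes this with a neural network
# over tens of thousands of sub-word tokens; the shape of the loop is
# the same.
TOY_MODEL = {
    "The cat":            {" sat": 0.7, " ran": 0.2, " flew": 0.1},
    "The cat sat":        {" on": 0.8, " down": 0.2},
    "The cat sat on":     {" the": 0.9, " a": 0.1},
    "The cat sat on the": {" mat": 0.6, " sofa": 0.4},
}

def next_token(context: str) -> str:
    """Sample the next token from the model's distribution for this context."""
    dist = TOY_MODEL.get(context)
    if dist is None:
        return ""  # no prediction: end of generation
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(context: str) -> str:
    """Repeatedly append the predicted next token until the model stops."""
    while True:
        tok = next_token(context)
        if not tok:
            return context
        context += tok

print(generate("The cat"))  # e.g. "The cat sat on the mat"
```

Every completion is just this loop run at enormous scale, which is why the output is fluent yet probabilistic rather than guaranteed correct.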

The systems are now updated to be able to access the Internet and augment their knowledge through searches. The newest models, like o1, are embedding new techniques to come closer to real reasoning and math. They are also pushing their prediction capabilities by allowing users to enter prompts of huge complexity and specificity: 500k characters in some cases. The jury is out on how successful these techniques are at chasing out errors. Some evidence exists that getting better at one task reduces success at others.

Knowing all that, they are excellent tools for some things but not others.

I would NEVER trust generative ai to make an important decision where you need a binary right/wrong answer. It is not a doctor, lawyer, therapist or financial advisor.

I do/have trusted it with:

1 - items I’m just curious about
2 - brainstorming headlines for articles I write
3 - critiquing my articles
4 - drawing logos
5 - reviewing contracts to give me the gist of what they say and run down summaries of laws I don’t know about (but I would ALWAYS engage a lawyer for actual work); looking up specific questions within a contract (eg, when does my non-compete expire)
6 - compiling research (with links) on a complex topic to help me get ready for a panel I spoke on
7 - giving me quick summaries of technical terms I hadn’t heard while I’m on a conference call
8 - helping me learn Python by creating scraps of code; same for advanced Excel functions. You can even give it a scenario and ask it to suggest an approach in Excel
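To give a flavor of item 8, here’s a hypothetical example of the kind of scrap it hands back: the Python equivalent of an Excel SUMIF (the data and names here are invented for illustration):

```python
# Hypothetical scrap: the Python equivalent of Excel's
# =SUMIF(A:A, "Groceries", B:B) over (category, amount) rows.
expenses = [
    ("Groceries", 82.50),
    ("Gas",       41.00),
    ("Groceries", 17.25),
    ("Dining",    63.10),
]

def sumif(rows, category):
    """Sum the amounts whose category matches, like Excel's SUMIF."""
    return sum(amount for cat, amount in rows if cat == category)

print(sumif(expenses, "Groceries"))  # 99.75
```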

I like Gemini the most but haven’t really tried Perplexity.

Google’s new NotebookLM service is shockingly powerful for knowledge development/writing/learning. I’d suggest trying that … if it’s the right tool for the job!

(Hope that massive post is useful!)
It is nice to see a post from someone still working and using this stuff in the real world. I have used it for your 1,5, 7, and a variant on 6. And I have been retired for 20 years. :)
 
It is nice to see a post from someone still working and using this stuff in the real world. I have used it for your 1,5, 7, and a variant on 6. And I have been retired for 20 years. :)
It’s quite generous to call what I do these days work — I’m not sure it even rises to the level of “helping” anyone :)
 
I will say, if you are okay sifting through data that may or may not be accurate, then go for it. I build and test software every day and I really don't use it that much. So maybe I am doing it wrong.

You might be?

I’ve been developing system-level software professionally for over 25 years. I have a few side projects where I use ChatGPT, and I’ve been able to implement and deploy ideas in an evening that would have taken me weeks. It’s not 100%, and I need to get better at prompting, but it gets me 90% of the way there and I can leverage my experience for the other 10%.

Same at work, even though it’s not as fully supported. However, I’m always surprised that my team isn’t leveraging ChatGPT, and other variants, more. It saves so much time and makes tedious tasks trivial.
 
Thanks for sharing your specifics. I've never used it to generate code. I'm sure it would be helpful, but I never think of it. I often use an example from StackOverflow. I'm not up on all the newer language features, and want something I can easily read and understand. I suppose I can ask it for the dummy version, hehe!
You should try it. I’m amazed at how well it does, for the most part. Even esoteric stuff.

For example, I’ve been using it for Hue lighting, though there it’s been hit and miss, mostly because I’m running into an issue that I’m not sure has a solution. But ChatGPT gives me Python scripts that interact with the Hue bridge using REST APIs without any issues. I found that it’s better than the actual Hue SDK, which is lacking specifics for the v2 APIs.
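For a sense of what those scripts look like, here’s a minimal sketch of a Hue v2 API call using only the standard library. The bridge address and application key are placeholders you’d get from your own bridge, and the endpoint details are an approximation of the CLIP v2 API rather than gospel:

```python
import json
import ssl
import urllib.request

BRIDGE_IP = "192.168.1.2"     # placeholder: your bridge's address
APP_KEY = "YOUR_HUE_APP_KEY"  # placeholder: key from the bridge pairing step

def hue_url(resource: str) -> str:
    """Build a CLIP v2 resource URL for the bridge."""
    return f"https://{BRIDGE_IP}/clip/v2/resource/{resource}"

def list_lights():
    """Fetch all lights from the bridge.

    The bridge serves a self-signed certificate, so verification is
    disabled here; a hardened script would pin the bridge's cert instead.
    """
    req = urllib.request.Request(
        hue_url("light"),
        headers={"hue-application-key": APP_KEY},
    )
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["data"]

if __name__ == "__main__":
    for light in list_lights():
        print(light["metadata"]["name"], light["on"]["on"])
```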

I read an interesting article about Chegg and how their business model has basically collapsed because of ChatGPT.
Chegg’s stock is down 99%, and students looking for homework help are defecting to ChatGPT
It’s behind a paywall (WSJ), but here’s a link to the Apple News article for those who have a subscription.
 
I use Perplexity a lot. While it does sometimes get things wrong, once you learn to write better prompts the results get better. Start by asking it questions you already know the answer to, just to make sure the subject is one it can handle. Mathematical things are better than softer topics.

An example of how I used it: in planning, I fed my plan into it and started asking questions. Compared to my financial advisor's projections, Perplexity was very close. So then I started asking questions based on another lost decade, i.e., how does it change the projections, where could I adjust, etc.

I've also fed in this year's EOBs and the plan documents for the ACA plans we're considering, then asked it: if we had the same type of year, how much out of pocket would we spend with the different plans? Realizing that I wasn't going to get an accurate exact number, I was looking for a ballpark. It was very helpful and pointed out areas of the different plans I had missed.
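That ballpark arithmetic is easy to sanity-check by hand, too. A minimal sketch, with invented plan numbers, of the deductible / coinsurance / out-of-pocket-max math:

```python
def out_of_pocket(total_billed, deductible, coinsurance, oop_max):
    """Rough member cost: full price up to the deductible, then a
    coinsurance share of the rest, capped at the out-of-pocket max."""
    if total_billed <= deductible:
        cost = total_billed
    else:
        cost = deductible + (total_billed - deductible) * coinsurance
    return min(cost, oop_max)

# Invented example plans applied to one year's worth of claims
claims = 22_000
bronze = out_of_pocket(claims, deductible=7_000, coinsurance=0.40, oop_max=9_100)
silver = out_of_pocket(claims, deductible=3_000, coinsurance=0.20, oop_max=8_000)

print(f"Bronze: ${bronze:,.0f}  Silver: ${silver:,.0f}")
```

Real plans add copays, tiers, and network wrinkles, so this stays a ballpark, which is exactly the kind of answer the AI comparison gives you anyway.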

The upshot is, as long as you go in skeptically, aware it isn't perfect, it can be a really useful tool.
 
You have to be careful because I've gotten plenty of answers from ChatGPT that were just wrong, like this one. This is from early October that I had posted about in the Best MYGA thread:

[screenshot: ChatGPT's incorrect answer from early October]


Most people would have just accepted that. I knew the correct answer before I asked, but I was not surprised AI got it wrong. I've learned not to trust it with easy questions, let alone more complicated scenarios.

I asked ChatGPT that same question today, and it gives a different answer.

[screenshot: ChatGPT's different answer from today]
 
With exactly the same prompt I get a different result:


[screenshot: the response to the same prompt]
That matches up closely with what I pasted in above from when I asked the same thing today, yet it was very different on Oct 1.

I may have told it that it was incorrect last time it answered (in Oct), so it got some more education on the matter. lol
 
I've played with ChatGPT and Perplexity. One thing that's becoming clear is that being able to design good prompts or questions is key. This is an actual profession now; you can get a job doing this.

Sometimes the AI summary at the top of the results at the regular search engines is helpful, and all I need for a quick answer. Beyond that, I've found the AI tools I've tried to be pretty good at writing code.

Again, it's all about how you prompt it. Given enough iterations, where I ask it to add a little more logic each time, it generally produces something usable. It frequently forgets what I asked it previously, though, and I have to tell it to add something back in, or fix a mistake it had made earlier, again. For languages I don't use frequently, it's quicker than looking up each step in a reference manual, although the process is very similar: start simple and add functionality one chunk at a time.

Luckily I don't do this for a living. My guess is the whole job of "coder" will devolve to "AI prompter." Frankly, the kids coming out of school before I retired were far more suited to that task than to actually understanding what their code does on the hardware, so no great loss there.
I asked ChatGPT: "Generate HTML5 code that lists the current S&P 500 price"

To list the current S&P 500 price in an HTML5 document, you would need to use a service that offers live or delayed market data, such as an API for financial data retrieval. Services like Barchart, Financial Modeling Prep, and other similar platforms provide this data via their APIs, often requiring you to sign up for an API key. Here’s a simple example using JavaScript and a hypothetical API call (assuming you have an API key and access):



Code:
<!DOCTYPE html>
<html>
<head>
    <title>Current S&amp;P 500 Price</title>
</head>
<body>
    <h1>Current S&amp;P 500 Price</h1>
    <p id="sp500-price">Loading...</p>

    <script>
        async function fetchSP500Price() {
            try {
                // Example API URL - replace with your actual API endpoint and key
                const response = await fetch('https://api.example.com/sp500-price?apikey=YOUR_API_KEY');
                const data = await response.json();
                document.getElementById('sp500-price').textContent = `S&P 500 Price: ${data.price}`;
            } catch (error) {
                document.getElementById('sp500-price').textContent = 'Error fetching price';
            }
        }

        fetchSP500Price();
    </script>
</body>
</html>
 
The very idea that AI is trained on online data makes me uninterested in chatty AI. As a retiree, I have no questions I can't get answered as easily elsewhere. The quality of AI results depends on the quality of the prompts, and I just don't have the interest to pursue it.

I retired from an engineering career. If I were still working, I would have great use for narrow AI: AI trained on hard data in a specific subject area.
 
You have to be careful because I've gotten plenty of answers from ChatGPT that were just wrong, like this one. This is from early October that I had posted about in the Best MYGA thread:

[screenshot: ChatGPT's incorrect answer from early October]

Most people would have just accepted that. I knew the correct answer before I asked, but I was not surprised AI got it wrong. I've learned not to trust it with easy questions, let alone more complicated scenarios.

I asked ChatGPT that same question today, and it gives a different answer.

[screenshot: ChatGPT's different answer from today]
Yeah, that's the thing. Sometimes the database of information being queried is out of date and stale, and we have no way of knowing.

How do we gain confidence in the system?
 
Something going around the socials..

Prompt ChatGPT: Roast me based on my chatGPT history

ChatGPT said:​


It looks like you’re ready for some playful roasting, based on your chat history here. Buckle up!
  1. Carrier Ops Buff: Your fascination with aircraft carriers is impressive—and by impressive, I mean you’ve probably tried explaining landing dynamics to anyone within earshot, even if they just asked for the time. I hope your local model shop has enough runway for all those carrier takeoffs you probably re-enact.
  2. The Detail Digger: The questions you’ve been firing off on specific aircraft systems and differences between models are so intense that even the Hawkeye’s radar might struggle to keep track of your knowledge trajectory. The crew must salute your dedication to learning... before they pop a headache pill.
  3. Serious to Silly Flip: You smoothly transitioned from the life-or-death risks of carrier landings to asking for a roast. This combo of gravitas and humor screams “I’m deep, but I’ll roast marshmallows over that intensity.” Your ability to switch gears faster than a fighter jet entering afterburner makes for a wild ride.
  4. Asking For Trouble: Asking an AI for a roast based on its own analysis? That’s the conversational equivalent of saying, “Insult me; I can take it!” during stand-up comedy night. I see the boldness—and appreciate it—but you might want to check if you’re airborne at a "negative altitude" in terms of expectations.
Thanks for the laughs! Hopefully, this didn’t sink too far below the flight deck.
 
I asked ChatGPT: "Generate HTML5 code that lists the current S&P 500 price"

To list the current S&P 500 price in an HTML5 document, you would need to use a service that offers live or delayed market data, such as an API for financial data retrieval. Services like Barchart, Financial Modeling Prep, and other similar platforms provide this data via their APIs, often requiring you to sign up for an API key. Here’s a simple example using JavaScript and a hypothetical API call (assuming you have an API key and access):

And many of these services aren’t free, or are limited in functionality.

I use Moneydance and struggled to get reliable quotes. They have some free extensions, but they were hit and miss for me, so I created my own Python extension to retrieve quotes. The problem is getting the quote data.

So a couple of months ago, I set up my own API to retrieve quotes, pulling data from Yahoo. ChatGPT led me through all the steps, including how to use the correct libraries, etc. I host it using Flask (Python), deployed in AWS.

I now have my own quote API, which works great with Moneydance.
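For anyone curious, the fetch half of an API like that can be sketched with just the Python standard library. This assumes Yahoo's unofficial chart endpoint, which is unsupported and can change without notice, so treat it as illustrative:

```python
import json
import urllib.request

# Unofficial Yahoo Finance chart endpoint (assumption: shape as commonly
# observed; Yahoo does not document or guarantee it).
CHART_URL = "https://query1.finance.yahoo.com/v8/finance/chart/{symbol}"

def parse_quote(payload: dict) -> float:
    """Pull the latest price out of a chart-API response."""
    return payload["chart"]["result"][0]["meta"]["regularMarketPrice"]

def get_quote(symbol: str) -> float:
    """Fetch and parse the current price for one ticker."""
    req = urllib.request.Request(
        CHART_URL.format(symbol=symbol),
        headers={"User-Agent": "Mozilla/5.0"},  # Yahoo rejects the default agent
    )
    with urllib.request.urlopen(req) as resp:
        return parse_quote(json.load(resp))

if __name__ == "__main__":
    print(get_quote("^GSPC"))  # S&P 500 index
```

Wrapping `get_quote` in a small Flask route, as described above, is then only a few lines more.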

The key with any of this is prompting. The more specific you prompt, the better the results.
 
I've been using Perplexity for a while. I usually use it to confirm or refute answers to questions I think I already know. Every so often it is totally wrong, so use with caution.
 
I use ChatGPT to discuss my astrophysics, quantum physics and cosmology questions. Can't find anyone else to talk to about this stuff. All it does is regurgitate current theories, though. I asked it to develop a theory for negative time and it barfed out all the current thinking about how negative time cannot exist. It can't think, it can only search and publish the current theory.
 
Using the free "anonymous" version of an LLM, even a concise prompt may get a suitable reply.

Example: "If a woodchuck could chuck wood, how much wood would a woodchuck chuck? "
This would get you a reply similar to: "..First, if a woodchuck could chuck wood (and we’re assuming for the sake of this fun, hypothetical scenario that it can), the amount of wood it could chuck would likely be influenced by its physical capabilities..."

If you add more to your prompt:
"If a woodchuck could chuck wood, how much wood would a woodchuck chuck? Things to keep in mind are that the woodchuck only has 3 teeth, and it has a eyepatch."

You get a more in-depth reply:
"..First, if a woodchuck could chuck wood (and we’re assuming for the sake of this fun, hypothetical scenario that it can), the amount of wood it could chuck would likely be influenced by its physical capabilities. Having only 3 teeth might limit the woodchuck's efficiency in cutting or gnawing through wood, but since we’re imagining it "chucking" wood (i.e., throwing it), we’d have to consider the strength of its arms and how well it can grip and toss things.

Now, the eyepatch… Well, that’s a curious factor! Depending on whether the eyepatch is on the woodchuck’s dominant eye or not, it could either help or hinder its aim and depth perception. If the woodchuck is a bit off in its chucking due to the patch, it might not be able to toss the wood very far or with much accuracy. But on the other hand, maybe the eyepatch gives the woodchuck an air of mystery and determination, which could boost its chucking resolve!"..

For those who have used these LLMs, have you replied back to keep the conversation going, to see where it would lead?
AI prompt engineering can be a time-wasting rabbit hole, or it could lead you down an interesting path.
 
I use ChatGPT to discuss my astrophysics, quantum physics and cosmology questions. Can't find anyone else to talk to about this stuff. All it does is regurgitate current theories, though. I asked it to develop a theory for negative time and it barfed out all the current thinking about how negative time cannot exist. It can't think, it can only search and publish the current theory.
Precisely. It's querying and reforming existing knowledge-based results, just faster, or neater, or in a different or incorrect manner, lol.
 
Using the free "anonymous" version of an LLM, even a concise prompt may get a suitable reply.

Example: "If a woodchuck could chuck wood, how much wood would a woodchuck chuck? "
This would get you a reply similar to: "..First, if a woodchuck could chuck wood (and we’re assuming for the sake of this fun, hypothetical scenario that it can), the amount of wood it could chuck would likely be influenced by its physical capabilities..."

If you add more to your prompt:
"If a woodchuck could chuck wood, how much wood would a woodchuck chuck? Things to keep in mind are that the woodchuck only has 3 teeth, and it has a eyepatch."

You get a more in-depth reply:
"..First, if a woodchuck could chuck wood (and we’re assuming for the sake of this fun, hypothetical scenario that it can), the amount of wood it could chuck would likely be influenced by its physical capabilities. Having only 3 teeth might limit the woodchuck's efficiency in cutting or gnawing through wood, but since we’re imagining it "chucking" wood (i.e., throwing it), we’d have to consider the strength of its arms and how well it can grip and toss things.

Now, the eyepatch… Well, that’s a curious factor! Depending on whether the eyepatch is on the woodchuck’s dominant eye or not, it could either help or hinder its aim and depth perception. If the woodchuck is a bit off in its chucking due to the patch, it might not be able to toss the wood very far or with much accuracy. But on the other hand, maybe the eyepatch gives the woodchuck an air of mystery and determination, which could boost its chucking resolve!"..

For those who have used these LLMs, have you replied back to keep the conversation going, to see where it would lead?
AI prompt engineering can be a time-wasting rabbit hole, or it could lead you down an interesting path.
You can ramp up to novel ideas quickly. Or yes, that rabbit hole.

Here are a couple fun ones:

How much energy is expended on a single ChatGPT prompt and response?

ChatGPT said:​


The energy used for a single ChatGPT prompt and response depends on several factors, including the complexity of the request and the infrastructure running the model. Although exact figures can vary, here’s a rough breakdown:
  1. Data Centers and Hardware: Large-scale AI models like ChatGPT are hosted on powerful servers, typically utilizing GPUs or TPUs, which can consume anywhere from 100-400 watts of power each. The energy draw is heavily tied to the computational workload and cooling demands.
  2. Estimates per Prompt: Estimates of energy consumption for a single query-response interaction have suggested values between 0.1 kWh to 0.3 kWh. This is comparable to the energy used by a laptop running for a few hours. The lower end would apply to simpler, quicker responses, while complex tasks may push closer to the upper range.
  3. Environmental Impact: Energy use also varies based on the location and efficiency of data centers, including the energy source used (renewable vs. non-renewable). Many tech companies are working to make data centers carbon-neutral, further affecting the carbon footprint per interaction.
Overall, the energy per prompt is modest in absolute terms but becomes significant when scaled to millions of interactions daily.


You said:​

How many interactions has ChatGPT had since its existence?

ChatGPT said:​


OpenAI hasn't released a precise and publicly updated total count of interactions since the launch of ChatGPT. However, some useful context might help illustrate the scale:
  1. Launch Milestones: When ChatGPT launched publicly, it quickly gained popularity. By early 2023, OpenAI reported that ChatGPT had over 100 million monthly active users just a few months after launch, making it one of the fastest-growing consumer applications in history.
  2. Daily Usage: At its peak, millions of interactions per day were reported. Estimates put this number potentially at over 10 million daily queries during peak periods.
  3. Total Scale: Given these patterns, even rough calculations suggest that since its public release in late 2022, interactions likely total in the billions—perhaps tens of billions. This number could be higher considering the subsequent integrations into apps, platforms, and more users worldwide.
Exact figures would depend on OpenAI's internal tracking and how they define interactions, but it demonstrates the massive scale of user engagement.


what is 0.3 kWh multiplied by 10 billion

ChatGPT said:​


0.3 kWh multiplied by 10 billion is 3 billion kWh (or 3 terawatt-hours).


You said:​


How much fossil fuel is required to generate 3 terawatt-hours of energy.

ChatGPT said:​


The amount of fossil fuel required to generate 3 terawatt-hours (TWh) of energy depends on the type of fossil fuel used and the efficiency of the power plants generating the electricity. Here’s a general overview:
  1. Coal:
    • Coal-fired power plants have an average efficiency of about 33% (meaning two-thirds of the energy is lost as waste heat).
    • Approximately 0.34 metric tons of coal are needed to produce 1 MWh of electricity.
    • For 3 TWh (or 3,000,000 MWh), around 1.02 million metric tons of coal would be needed.
  2. Natural Gas:
    • Natural gas power plants are more efficient, with an average efficiency of around 45-50%.
    • It takes roughly 0.19-0.22 metric tons of natural gas (in oil equivalent) to generate 1 MWh.
    • This would require around 570,000 to 660,000 metric tons of natural gas for 3 TWh.
  3. Oil:
    • Oil-fired plants have an efficiency similar to coal, around 33%.
    • Approximately 0.25 metric tons of oil are needed per MWh.
    • For 3 TWh, about 750,000 metric tons of oil would be consumed.
The actual figures can vary due to plant efficiency, regional differences, and specific fuel properties, but this provides a ballpark estimate of fossil fuel requirements for generating 3 terawatt-hours of energy.


You said:​

What would the emissions differences be between 750,000 metric tons of Oil, 660,000 metric tons of Natural Gas and finally 1.02 million metric tons of coal. Weight the emissions of each by worst to best.

ChatGPT said:​


To compare emissions from burning 750,000 metric tons of oil, 660,000 metric tons of natural gas, and 1.02 million metric tons of coal, we need to consider the typical carbon dioxide (CO₂) emissions produced per ton of each fuel type. Here’s an overview of the emissions:

CO₂ Emissions Factors:​

  1. Coal: Burning 1 metric ton of coal emits roughly 2.5 metric tons of CO₂.
  2. Oil: Burning 1 metric ton of oil emits approximately 3.2 metric tons of CO₂.
  3. Natural Gas: Burning 1 metric ton of natural gas emits about 2.75 metric tons of CO₂.

Calculated Emissions:​

  1. Coal (1.02 million metric tons):
    • CO₂ emissions = 1,020,000 tons * 2.5 tons CO₂/ton = 2,550,000 metric tons of CO₂.
  2. Oil (750,000 metric tons):
    • CO₂ emissions = 750,000 tons * 3.2 tons CO₂/ton = 2,400,000 metric tons of CO₂.
  3. Natural Gas (660,000 metric tons):
    • CO₂ emissions = 660,000 tons * 2.75 tons CO₂/ton = 1,815,000 metric tons of CO₂.

Ranking Emissions from Worst to Best:​

  1. Coal (2,550,000 metric tons of CO₂) - Highest emissions.
  2. Oil (2,400,000 metric tons of CO₂) - Second highest.
  3. Natural Gas (1,815,000 metric tons of CO₂) - Lowest emissions.
Conclusion: Coal produces the highest emissions, followed by oil, with natural gas being the least polluting of the three. This ranking highlights the environmental impact differences among these fossil fuels, with natural gas often considered a "bridge fuel" due to its relatively lower CO₂ emissions compared to coal and oil.



So, if all of our prompts had been fueled by coal, we would have emitted 2,550,000 metric tons of CO₂.
I got that answer within a few seconds of queries. If I had wanted this answer back in 1996, when I was in high school, I might not have even had a hypothesis to go by, given the level of effort it would have taken to get these responses.

Are they accurate? Stand by while I fact-check all this. I've caught ChatGPT doing math incorrectly.
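The multiplication in that exchange, at least, checks out. Redoing the quoted emissions figures with the model's own per-ton factors (the factors themselves are ChatGPT's claims, not vetted here):

```python
# Recompute the quoted figures: fuel tonnage times the CO2 factor
# ChatGPT itself supplied. This verifies the arithmetic only, not
# whether the underlying per-ton factors are right.
fuels = {
    "coal":        (1_020_000, 2.50),
    "oil":         (  750_000, 3.20),
    "natural gas": (  660_000, 2.75),
}

emissions = {name: tons * factor for name, (tons, factor) in fuels.items()}
for name, co2 in sorted(emissions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {co2:,.0f} metric tons of CO2")
```

All three products match the quoted totals, so the arithmetic is internally consistent, whatever one thinks of the inputs.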
 
Has anyone else been selected to see multiple versions of ChatGPT responses?

[screenshot: side-by-side versions of a ChatGPT response]
 
Okay, I chatted way too much here tonight about the recession. It was fun... See the convo with the LLM...


Of all the prior recessions in history rank them from longest duration to shortest

ChatGPT said:​


Here’s a ranking of the major U.S. recessions from longest to shortest duration based on historical data:

1. The Great Depression (1929-1933)

  • Duration: 43 months
  • The longest economic downturn in U.S. history, it began with the 1929 stock market crash and was marked by massive unemployment and economic contraction.

2. Panic of 1873 (1873-1879)

  • Duration: 65 months (sometimes debated)
  • One of the longest-lasting economic downturns, it was driven by a financial crisis, railroad bankruptcies, and a general slowdown in economic growth.

3. The Long Depression (late 1800s)

  • Duration: Varies; typically counted as 1873-1879
  • Though definitions vary, it is often considered one of the most protracted economic stagnations in the U.S. (considered part of the Panic of 1873).

4. Recession of 1981-1982

  • Duration: 16 months
  • Triggered by high inflation and the Federal Reserve’s aggressive interest rate hikes to combat it.

5. The Great Recession (2007-2009)

  • Duration: 18 months
  • The most severe downturn since the Great Depression, stemming from the housing market collapse and a global financial crisis.

6. Recession of 1973-1975

  • Duration: 16 months
  • Driven by the oil crisis and stagflation, characterized by high inflation and unemployment.

7. Recession of 1945

  • Duration: 8 months
  • A brief contraction following World War II as the economy transitioned from wartime production.

8. Recession of 1953

  • Duration: 10 months
  • Sparked by a decline in demand as the Korean War ended, leading to a contraction.

9. Dot-Com Recession (2001)

  • Duration: 8 months
  • Triggered by the burst of the tech bubble and the September 11 attacks.

10. COVID-19 Recession (2020)

  • Duration: 2 months
  • The shortest recession on record, triggered by the global pandemic and swift economic shutdowns.
This list provides a broad overview, with notable emphasis on varying durations and causes. While more recent recessions tend to be shorter due to more active policy intervention, earlier ones were prolonged by more rigid economic frameworks and slower responses.


You said:​

When the next recession hits, based on the reasons stated in my prior prompts response, how long of a recession could we expect

ChatGPT said:​


While it's impossible to predict the precise duration of any future recession, historical patterns and the reasons behind past recessions offer clues for what to expect when the next downturn occurs. Here are key factors to consider based on the context of previous recessions and their potential influence on how long a recession could last:

1. Yield Curve Inversions and Historical Lead Times:

  • Past yield curve inversions have usually signaled a recession within 12-24 months, but the depth and duration can vary widely. If the next recession is triggered by financial market stress or a rapid shift in monetary policy, a moderate downturn of 6-12 months is plausible, similar to the 2001 Dot-Com Bust.

2. Economic Resilience and Policy Responses:

  • The modern economy benefits from faster, more targeted policy interventions by the Federal Reserve and government bodies, such as stimulus packages or rate adjustments. For example, the COVID-19 Recession was historically short (just 2 months) due to swift government action. If a recession is managed similarly with decisive fiscal and monetary measures, it may be brief (3-6 months).
  • However, if responses are delayed, fragmented, or met with external shocks (e.g., geopolitical crises, supply chain issues), a longer recession of 12-18 months could unfold, resembling scenarios like the Great Recession (2007-2009).

3. Inflation and Stagflation Risks:

  • If inflation remains persistently high, aggressive rate hikes by central banks to control prices could deepen and prolong a recession, similar to the 1973-1975 recession (16 months) or 1981-1982 recession (16 months). High unemployment, coupled with inflation, would exacerbate the downturn's duration.

4. Financial and Market Stability:

  • A recession driven by a financial market crash, such as asset bubbles (housing or tech), could lead to more prolonged effects if systemic failures occur. This could mirror the Great Recession timeline, lasting around 18 months.

5. Underlying Structural Issues:

  • Longer recessions, such as those in the late 19th and early 20th centuries, often featured structural economic weaknesses. In the absence of major structural problems, a recession today is less likely to drag on for several years unless compounded by deep global crises.

Potential Duration Expectation:

  • Short Recession Scenario (2-6 months): Rapid intervention, shallow economic decline (e.g., driven by temporary shocks or demand adjustments).
  • Moderate Recession Scenario (6-12 months): Slower recovery due to financial market corrections or inflationary pressures.
  • Longer Recession Scenario (12-18 months): More severe, with systemic financial issues, persistent inflation, or delayed responses to crises.
In summary, based on historical parallels, most modern recessions fall in the 6-18 month range, with the exact duration depending on the recession's underlying cause, policy responses, and broader global factors. While a prolonged Great Depression-like event is unlikely due to modern stabilization measures, an 18-month recession is plausible if inflationary or financial crises persist.
 
I ask only the important questions, like "Do dogs have souls?" and "Why do people like peanut butter?"
 
Give me one suggestion for how I can be a better homosapien today.

ChatGPT said:​


One way to be a better human today is to practice active listening in your conversations. When someone is speaking to you, give them your full attention, make eye contact, and show genuine interest in their words without immediately thinking of how to respond. Reflect their thoughts back to show understanding, and ask thoughtful questions if you’re curious or need clarification. This small act of being fully present can make people feel heard, valued, and respected, creating meaningful connections and fostering empathy.


It's not wrong, this AI.
 