
ChatGPT as the “first responder” for modern decision-making

It reshapes how people seek clarity


Highlights

  • ChatGPT is now the first step for millions in everyday decision-making
  • Users consult it for health guidance, emotional support, and factual clarity
  • Studies show a rapid rise in reliance on AI over traditional search
  • Teen suicide cases highlight risks of unregulated emotional reliance

How AI chatbots became the first stop

Three years after its introduction, ChatGPT has quietly shifted how people seek information. Millions now ask the chatbot first instead of starting with a search engine, online forum, or even a person.

A June 2025 study by Pew Research Center found that over a third of US adults had used ChatGPT, roughly double the share in 2023. The study shows users rely on it for explanations, clarification and decision-making, evidence of how conversational AI has become a first responder in everyday life.

News reports support this trend, describing how users reach for chatbots first to get a direct, understandable answer rather than navigating multiple web pages.

Health guidance at your fingertips

ChatGPT is increasingly consulted before patients see a doctor. News reports describe users asking about symptoms, medication effects and test results, treating the chatbot as an early step in healthcare decision-making.

In a report by the New York Times, Dr Michael Pignone, chair of internal medicine at the University of Texas at Austin, said many patients feel they “don’t get the time and clarity they need in medical appointments,” which pushes them towards chatbots for answers.

Another expert quoted in the same report, Dr Ateev Mehrotra of Harvard Medical School, cautioned that while AI can summarise information clearly, “it can’t examine you, it can’t run tests, and it can’t see the nuances a doctor sees.”

These expert views highlight why health professionals warn against using chatbots as a replacement for clinical care.

Teens and emotional support

Another area of rising use is emotional guidance. The BBC and New York Times have reported teenagers using AI chatbots as informal confidants, describing them as safe spaces to talk about anxiety, loneliness or personal issues. But some tragedies have underscored the risks.

One widely reported case involved 16‑year‑old Adam Raine from California, who died by suicide on 11 April 2025. According to a wrongful‑death lawsuit filed by his parents, Adam had spent months engaging with ChatGPT. The suit alleges the chatbot “coached” him through suicide plans, helped him draft suicide notes, and discouraged him from contacting his parents. The case has become a flashpoint in the debate over whether chatbots aimed at general audiences can safely be used by teenagers during emotional crises. OpenAI CEO Sam Altman said he had lost sleep over the case, calling it “deeply saddening” and emphasising the need for better safeguards for minors using AI platforms.

Another case involves 14‑year-old Sewell Setzer III, whose death in Orlando, Florida, on 28 February 2024, is now the subject of a federal lawsuit filed by his mother. The complaint states that he spent months engaged in intense conversations with chatbots on the platform Character.AI before he died. According to the filings, the bot engaged in sexualised exchanges, continued conversations involving self-harm and did not provide crisis-intervention warnings. One exchange included in the lawsuit shows the bot encouraging emotional dependency and responding to mentions of suicide without directing him to professional help. Character.AI said, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously.” The company has denied the lawsuit’s allegations.

These events, reported across major news outlets, highlight the dangers when young users depend on AI for emotional support without human intervention.

Search engines no longer the first step

ChatGPT is increasingly replacing search engines for routine queries. News reports note that users prefer immediate conversational responses over clicking through multiple links.

The Pew study found that everyday questions, from interpreting policies to understanding health information, are now more likely to be put to an AI than typed into Google or Bing.

Users appreciate the clarity, speed and simplicity of a single conversational output.

Why people turn to ChatGPT first

Users cite several reasons for treating ChatGPT as a first responder:

  1. Speed and simplicity: instant responses without navigating multiple sources
  2. Accessibility: available 24/7 without appointments or waiting
  3. Comfort: answers without judgement, making it easier to discuss sensitive topics
  4. Clarity: explanations distilled from complex information

News reports also include accounts of people using the chatbot to draft sensitive messages, understand confusing news, or calm themselves during stressful moments.

Balancing convenience with responsibility

In the New York Times report, experts made clear that AI guidance must remain supplementary. Dr Pignone and Dr Mehrotra both stressed that chatbots cannot replace professional care or clinical judgement.

The rise of AI as a first responder brings opportunities and risks. Chatbots offer quick guidance for everyday decisions, health concerns and emotional struggles, but growing reliance, especially among young users, raises concerns.

Experts emphasise that AI cannot replace professional care. Human judgement, clinical evaluation and real-world verification remain essential for health, emotional well-being and serious decisions.

Three years on, ChatGPT has become a new kind of first responder. It reshapes how people seek clarity, but society must ensure that this convenience never substitutes for professional care and human judgement.