Ria Bhatia

With India emerging as the world’s ChatGPT capital, more and more people are turning to AI for answers to health-related queries. But how reliable is the prognosis ChatGPT delivers?


Artificial Intelligence (AI) has a doctor problem. India is now the world’s ChatGPT capital: 36 per cent of the population use AI applications daily, more than twice the global average of 17 per cent. OpenAI reports that 70 per cent of conversations are personal, and 49 per cent are questions. No wonder, then, that health queries, from midnight rashes to anxiety spirals, are finding their way into the chatbox.

In 2023, a survey revealed that 78 per cent of respondents were willing to use ChatGPT for self-diagnosis. By 2024, one in every six adults with internet access admitted to turning to AI chatbots at least once a month for medical advice.

ChatGPT medical advice is not just a digital shortcut; it signals how Indians are renegotiating healthcare itself. Doctors remain revered, but soaring medical costs, stigma associated with certain medical conditions, and endless waiting rooms make AI feel like an alternative for some.

Why ChatGPT medical advice is booming in India 

Once dismissed as a novelty, ChatGPT medical advice is becoming a first stop for self-diagnosis, decoding reports, and cutting through jargon. A 2025 study found that 39 per cent of individuals who haven’t yet used ChatGPT for health would consider doing so in the next six months.

India is now the world’s ChatGPT capital

The shift is telling in India, where doctors still hold a venerated status and human interaction has long defined care. Medical inflation in India is rising faster than the global average; nearly 400 million Indians have no medical insurance, and consultation costs start at ₹500 to ₹1,000 in cities. In rural areas, nominal fees are offset by poor infrastructure, a shortage of doctors, and a lack of awareness. In this gap, AI health advice is less a preference than a fallback, especially in a system that feels inaccessible.

“AI CAN POINT YOU IN A DIRECTION THAT FEELS PERSUASIVE, BUT IT DOESN’T MEAN IT’S ACCURATE” – Deepti Chandy

Dr Smeet Patel, director of Mayflower Women’s Hospital, Ahmedabad, says that speed is its biggest driver: “Medical anxiety doesn’t follow clinical hours, making ChatGPT’s 24x7 availability a big draw.” Celebrity dermatologist Dr Jaishree Sharad adds that patients appreciate how AI explains symptoms in plain language: “When something like a rash, hair fall, or an itchy patch pops up suddenly at midnight, waiting hours or days for a doctor’s appointment feels frustrating. Plus, with skin and hair problems often being personal and sensitive, many find it comfortable to ask AI first. And because these tools often explain in simple, everyday words instead of medical jargon, patients find it easier to understand what might be going on.”

Convenience and privacy matter too. In a culture where reproductive health and mental illnesses are still hushed, asking a chatbot about contraception or depression might feel safer than risking judgment. Psychologist Sanam Devidasani notes, “It gives you the illusion of a connection without the risk that comes with building a real connection with someone.”

A 2025 study found that 39 per cent of individuals who haven’t yet used ChatGPT for health would consider doing so in the next six months. Photograph: Unsplash

Validation is yet another hook. Functional nutritionist Mugdha Pradhan observes that the tone itself is persuasive: “The answers tend to be reassuring and agreeable; they rarely tell you where you’re messing up.” Reassurance may feel good, but it is also what makes people mistake confidence for credibility. In India, where medical access is already fragile, that kind of easy validation can be its own risk.

The risks of ChatGPT medical advice

The flip side? ChatGPT can be dangerously wrong. From a California teenager’s suicide to a 60-year-old man’s transient ischemic attack dismissed as “vision problems” to an 83 per cent misdiagnosis rate in pediatric cases, the pattern is clear: confidence without competence. Yet people keep turning to it.

Therapist Deepti Chandy recalls a client convinced she had a personality disorder because ChatGPT suggested it. “She was distressed and already wanted to structure her sessions around that assumption—what framework we would use, what treatment she should follow,” she says. “This is the risk—AI can point you in a direction that feels persuasive, but it doesn’t mean it’s accurate. Without professional evaluation, you may end up following the wrong course of treatment, and that can have very harmful consequences.” 

“MEDICAL ANXIETY DOESN’T FOLLOW CLINICAL HOURS, MAKING CHATGPT’S 24X7 AVAILABILITY A BIG DRAW” –– Dr Smeet Patel

Psychologist Dhara Ghuntla agrees, noting that while AI may sound comforting, it often delivers dangerously broad diagnoses, ranging from the trivial to the life-threatening. “This can fuel health anxiety, obsessive symptom-checking, and unnecessary medical tests,” she says. For women especially, private, stigma-free answers can also delay urgent care.

As per reports, medical inflation in India is rising faster than the global average

Pradhan highlights another flaw: false authority. “Not just ChatGPT; I’ve even tested Gemini for my own blood tests. While it’s good at reading large sets of data and spotting patterns, it completely missed the mark on the actual diagnosis. It creates an illusion of authority by throwing in citations, but they’re often false. Public models like ChatGPT pull information from Reddit, blogs, or random opinions, none of which have real clinical significance.”

The risks compound when AI in healthcare treats symptoms in isolation. Patel notes, “A woman reports pelvic pain, and AI responds, ‘Try a hot water bottle’, when she is actually doubled over in distress with a potentially ruptured ovarian cyst.” He cites cases where a miscarriage was wrongly suggested, or an ectopic pregnancy dismissed as ‘normal spotting’.

“MEDICAL PROFESSIONALS HAVE VAST, LEARNED EXPERIENCE, EMPATHY, AND CONTEXTUAL DECISION-MAKING THAT NO ALGORITHM CAN FULLY REPLICATE” –– Dr Vivek Bande

Hormonal nuance is another blind spot. Patel says fatigue often gets brushed off as stress when it may be thyroid disease, while over-the-counter recommendations ignore interactions with birth control, supplements, or Ayurvedic medicine. Worse, these tools are trained on data from the West, overlooking Indian diets, cultural contexts, and symptom patterns altogether.

Functional medicine expert Annie Kanwar shares a recent case in which a young professional was told by ChatGPT that her bloating was ‘likely IBS’. Following its fibre and probiotic advice, her pain worsened until testing revealed small intestinal bacterial overgrowth (SIBO). “With a tailored treatment plan including targeted antimicrobials and dietary changes, she improved in weeks,” explains Kanwar.

Without professional evaluation, you may end up following the wrong course of treatment, with very harmful consequences, says Deepti Chandy. Photograph: Dupe

In dermatology, the risks are equally stark. Sharad shares, “I had a patient with a red, persistent rash who, before their visit, had asked ChatGPT about it and got a list of possible diagnoses, including serious ones like autoimmune diseases and infections, leaving them anxious and fearful. After the consultation, it turned out to be a mild allergic reaction that could be treated easily with topical creams.” The risks don’t stop there: another patient followed ChatGPT’s filler recommendations to the point of losing facial expression entirely.

Celebrity dermatologist Dr Madhuri Agarwal adds, “AI platforms offer guidance based on protocols designed for Western skin, which may not be appropriate for melanin-rich Indian skin.” The result can be chemical burns and irritation. She notes that environmental context also plays a role in dermatological evaluations: “AI systems currently lack the capacity to adequately account for these variable factors that are essential for accurate diagnosis and effective management.”

The rise of AI in healthcare in India: Does it always spell danger? 

Using ChatGPT for medical diagnosis is risky, but the broader role of AI in healthcare in India is harder to dismiss. Earlier this year, Apollo Hospitals allocated 3.5 per cent of its annual digital spend to AI, streamlining tasks for doctors and nurses. A NITI Aayog report projected that AI could add nearly a trillion dollars to India’s economy by 2035, underscoring its growing weight across sectors, healthcare included. From 24x7 assistance to automated systems easing the load on the medical workforce, AI tools are already becoming critical to the sector.

“THIS CAN FUEL HEALTH ANXIETY, OBSESSIVE SYMPTOM-CHECKING, AND UNNECESSARY MEDICAL TESTS” –– Dhara Ghuntla

For the public too, AI can be useful, but only if used carefully. Devidasani says some clients treat ChatGPT “like talking to a journal that talks back,” helping them track moods or structure thoughts. Others feel unsettled by flat, generic responses. “Even the unhelpful moments,” she adds, “lead to great conversations in therapy—about what it really means to feel seen and connected.”

A NITI Aayog report projected that AI could add nearly a trillion dollars to India’s economy by 2035. Photograph: Pexels

Beyond therapy, AI can help patients organise questions, track symptoms, and decode medical terms. Pradhan notes, “If someone has done multiple blood tests and a marker keeps dropping, ChatGPT might be able to highlight that recurring trend, upon assessing all the reports. It’s like putting pieces of a puzzle together, useful for pointing you in a certain direction.” But she is clear, “That’s where AI’s role ends. The interpretation and the clinical decision-making should never be left up to AI, because that requires human judgment.”

Why AI in healthcare can’t replace human professionals

A 2024 study put ChatGPT’s accuracy in providing medical advice at just 49 per cent. Other research found it can dip as low as 20 per cent. Despite this, 61 per cent of users still ask ChatGPT questions that demand clinical expertise. This heavy reliance, however, doesn’t translate into real confidence.

A KPMG report noted that only 46 per cent of people are willing to trust AI systems, a wariness rooted in weak privacy safeguards and the ease with which responses can be swayed by prompts, both of which render these tools unqualified for serious medical decision-making. “Medical professionals have vast, learned experience, empathy, and contextual decision-making that no algorithm can fully replicate,” says Dr Vivek Bande, a Pune-based surgical oncologist. “Doctors can individualise treatment based on overall history, symptoms, and clinical assessment, which ChatGPT cannot.” Devidasani adds that ChatGPT cannot register and recall a patient’s medical history.

A 2024 study put ChatGPT’s accuracy in providing medical advice at just 49 per cent. Other research found it can dip as low as 20 per cent. Photograph: Pexels

Healthcare, especially mental health, is built on human connection. “A professional brings not only training and expertise but also empathy, intuition, and the ability to notice what’s not being said,” says Chandy. “We listen to tone, body language, and pauses. We hold space for people in a way that AI simply cannot. Most importantly, professionals don’t just agree with you—we challenge, we guide, and we offer interventions tailored to your context.”

For decades, Indians have turned to Google for health questions. ChatGPT is riskier because it mirrors a friend or doctor in conversation, coaxing users to trust its tone. But just like Google never replaced doctors, neither will ChatGPT. Technology may evolve, but the core of healthcare is still judgment, empathy, and accountability—qualities no algorithm can match.

