AI didn’t just appear out of thin air. We’ve been living with versions of it for over a decade — it’s quietly powering everything from Netflix recommendations to virtual assistants, acting as a helpful background feature we’ve almost taken for granted.
However, the advent of ChatGPT in 2022 changed the picture. AI is no longer an invisible assistant hidden inside apps. It has a voice. It responds when you talk to it. It can generate content, hold conversations, and even present itself in health contexts in ways that make it seem like a real doctor.
Dr. David Shusterman, a board-certified urologist and chief physician of Modern Urologists in New York City, United States, says that compared to three or four years ago, he sees far more patients with long lists of possible diagnoses found online or through AI tools.
“Sometimes they’ve read ten different explanations for the same symptom, and many of those explanations contradict each other,” he says. “Instead of coming in with a single concern, they are often overwhelmed, worried about five or six possible conditions.
“The Internet can be helpful for education, but without clinical context, it can easily turn into information overload.”
If you rely on AI or algorithm-generated health advice instead of peer-reviewed data or a qualified professional, remember – you’re not talking to a real doctor. These tools do not understand the person behind the screen and are no replacement for a professional, personal assessment.
Shusterman cautions that although AI can summarize information, it can’t examine you, review your full medical history in context, or recognize subtle warning signs during a conversation.
“When one relies exclusively on algorithm-generated advice, important diagnoses may be missed or delayed,” he says.
AI health content often sounds supremely confident, giving the illusion of expertise. This is partly down to a phenomenon called ‘AI hallucination’, in which a model generates information that sounds logical and factual but is entirely invented. Because AI models are tuned for fluency and persuasiveness rather than medical accuracy, the harmful, oversimplified health advice they produce can be hard to unlearn.
“AI-generated information is often written in a very authoritative tone, making it seem like definitive medical guidance,” Shusterman warns. “The issue is that confident language does not guarantee accurate information.
“When people hear something said with great confidence online, it can be difficult to convince them that the situation is actually more nuanced.”
Shusterman says the real danger comes when safety caveats are stripped away and everyone is handed the same generic advice.
“In medicine, the little things matter – age, medications, family history, physical examination findings,” he explains. “A recommendation that is safe for one person may be dangerous for another.
“When complex symptoms are reduced to general advice, you risk overlooking serious conditions that require timely evaluation or specialized treatment.”
‘Quick Fix’ vs. Real Treatment
Another issue to consider is that the internet and social media are full of ‘health hacks’ and supposed ‘miracle’ fixes. Presented as quick, simple solutions – often by people without medical expertise – these claims can delay or needlessly complicate professional health care.
“Good medicine usually involves a plan, follow-up, and consistency,” says Shusterman. “But online content often promotes immediate results. This creates unrealistic expectations, and when people don’t see immediate changes, they sometimes abandon treatments that would actually help them in the long run.”
Although many people turn to AI for quick answers to health concerns, they often feel compelled to double-check what it tells them. That can mean asking the AI follow-up questions or searching elsewhere, which sometimes leads to a merry-go-round of contradictions and second-guessing. Soon, you may have spent hours online and feel more confused and anxious than when you started.
“Sometimes patients spend weeks or months researching symptoms online, and instead of feeling more informed, they feel exhausted and unsure of what to believe,” explains Shusterman. “Eventually, some people delay care because they get stuck in a cycle of reading conflicting opinions.
“This type of decision paralysis can, unfortunately, postpone the medical evaluation that would give them a clear answer.”
Bypassing reliable sources for quick-fire summaries from the digital wild west can easily trigger cyberchondria. This is the digital form of hypochondria – excessive worry about a disease you don’t actually have.
Short, bullet-pointed summaries make it easy to miss reassuring context while spotlighting worrying signs. This combination can lead you to search repeatedly for reassurance, misinterpret normal sensations as symptoms, and feel increasingly anxious.
When AI advice goes wrong – or people trust unqualified sources, misleading visuals, or deepfake experts – it can undermine trust in real providers and the healthcare system as a whole.
Shusterman says this creates confusion and doubt. When people discover that supposedly reliable online information is false, they may begin to doubt all medical guidance – even the advice of actual physicians.
“Trust is an important part of the doctor-patient relationship,” he explains. “Our goal as physicians is to help people navigate information, not to dismiss their curiosity.”
Shusterman’s tips for navigating online health content
Shusterman recommends considering online health information as a starting point, not a diagnosis.
He shares some practical guidance for staying sane online:
- Be wary of content that promises instant cures, oversimplifies complex conditions, or uses fear to drive action.
- Prioritize sources affiliated with recognized medical institutions, peer-reviewed research, or credentialed professionals.
- Use online information to inform questions for your doctor, not to replace a professional evaluation.
“Remember that real health care involves conversations, exams, and personal care,” Shusterman concludes. “Technology can support treatment, but it should never replace the guidance of a qualified professional who understands your specific health situation.”
