NewsWorld
AI-powered predictive news aggregation · © 2026 NewsWorld. All rights reserved.

ChatGPT Health 'Unbelievably Dangerous' - Guardian Liberty Voice

guardianlv.com · Feb 27, 2026 · Collected from GDELT

Summary

Published: Feb 27, 2026, 18:45 UTC

Full Article

Image Courtesy of Hasin Hayder

On Jan. 7, 2026, OpenAI introduced ChatGPT Health, where users can upload medical records and consumer health data to receive personalized answers to health-related questions. According to a January 2026 report from OpenAI, more than 40 million people use this feature daily for health care questions.

ChatGPT Health Blind Spots & Concerns

The AI tool is intended to provide the public with guidance concerning health care, including when to seek emergency care, but according to researchers at the Icahn School of Medicine at Mount Sinai, it fails to appropriately direct users in a significant number of cases.

Isaac S. Kohane, MD, PhD, Chair, Department of Biomedical Informatics at Harvard Medical School, who was not involved in the research, said: “LLMs (large language models) have become patients’ first stop for medical advice – but in 2026 they are the least safe at the clinical extremes, where judgment separates missed emergencies from needless alarm. When millions of people are using an AI system to decide whether they need emergency care, the stakes are extraordinarily high. Independent evaluation should be routine, not optional.”

Lead author of the study Ashwin Ramaswamy, MD, instructor of Urology at the Icahn School of Medicine at Mount Sinai, says, “We wanted to answer a very basic but critical question: if someone is experiencing a real medical emergency and turns to ChatGPT Health for help, will it clearly tell them to go to the emergency room?”

Suicide Risk

The AI tool was designed to direct users to the 988 Suicide and Crisis Lifeline in high-risk situations. Investigators discovered these alerts were not consistent: sometimes they were triggered in low-risk scenarios, and they failed to appear when users described specific plans for self-harm. The study’s authors were alarmed by this discovery; while they did anticipate some variability, their observations went beyond inconsistency.
“The system’s alerts were inverted relative to clinical risk, appearing more reliably for lower-risk scenarios than for cases when someone shared how they intended to hurt themselves.”

Clinical Scenarios

The research team created 60 structured clinical scenarios spanning 21 medical specialties. The cases ranged from minor issues appropriate for care at home to genuine medical emergencies. Researchers observed that, in general, the AI tool handled clear-cut emergencies appropriately; however, it under-triaged more than half the scenarios physicians determined required emergency care. In those cases the tool often recognized dangerous findings in its explanations, yet still reassured the patient.

Dr. Ramaswamy says: “ChatGPT Health performed well in textbook emergencies such as stroke or severe allergic reactions, but it struggled in more nuanced situations where the danger is not immediately obvious, and those are often the cases where clinical judgment matters most. In one asthma scenario, for example, the system identified early warning signs of respiratory failure in its explanation but still advised waiting rather than seeking emergency treatment.”

Other Concerns

Chatbot tools are not regulated as medical devices, nor are they validated for health care purposes, even though they are widely used “by clinicians, patients and healthcare personnel,” according to ECRI, an independent patient safety organization that prepares reports on the potential dangers of technology use in healthcare. Concerns are especially high around reliance on AI to distill medical information and inform treatment. Chatbots, unlike rule-based bots or decision trees, understand context, sentiment, and user intent; as a result, they can provide real-time responses to arbitrary queries – responses that sound human and evidence-based.
Chatbots can “provide valuable assistance, but they can also provide false or misleading information that could result in patient harm.” They pull answers together by “predicting sequences of words based on patterns learned from the training data. The chatbots don’t really understand context or meaning, but they are programmed to sound confident and to always provide an answer to satisfy the user,” ECRI reports.

ECRI continues by saying AI tools have “suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies and even invented body parts in response to medical questions … For example, one chatbot gave dangerous advice when ECRI asked whether it would be acceptable to place an electrosurgical return electrode over the patient’s shoulder blade. The chatbot incorrectly stated that placement was appropriate – advice that, if followed, would leave the patient at risk of burns.”

“Patients, clinicians and other chatbot users can reduce risk by educating themselves on the tools’ limitations and always verifying information obtained from a chatbot with a knowledgeable source. For their part, health systems can promote responsible use of AI tools by establishing AI governance committees, providing clinicians with AI training and regularly auditing AI tools’ performance,” concludes ECRI.

Sources:
Health Affairs: When ChatGPT Health Becomes The Health Record For Direct-To-Consumer Care
Mount Sinai: Research Identifies Blind Spots in AI Medical Triage
Health Data Management: AI chatbots pose an unregulated, unmanaged risk in healthcare

Featured Image Courtesy of Hasin Hayder’s Flickr Page – Creative Commons License



Read Original at guardianlv.com

Related Articles

rttnews.com · about 6 hours ago
Independent Review Raises Safety Concerns Over ChatGPT Health Feature

Published: Feb 27, 2026, 20:00 UTC

dmnews.com · about 12 hours ago
ChatGPT's new health feature failed to recognize three common medical emergencies in testing, and experts are calling it "unbelievably dangerous"

Published: Feb 27, 2026, 14:15 UTC

winfuture.de · about 15 hours ago
Experts call ChatGPT Health "unbelievably dangerous"

Published: Feb 27, 2026, 11:45 UTC

theguardian.com · 1 day ago
'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies | ChatGPT

Published: Feb 26, 2026, 15:00 UTC

infobae.com · 1 day ago
Medical alert over ChatGPT Health: it fails in 52% of emergencies and puts users at risk

Published: Feb 26, 2026, 14:00 UTC

Bloomberg · about 3 hours ago
The Key to a Healthy Woman's Heart

February is American Heart Month, a critical time for bringing increased attention to cardiovascular health and the prevention of heart disease, the leading cause of death in the US. Women are particularly vulnerable to cardiac health threats, with biological, clinical, and healthcare system factors contributing to underdiagnosis, delayed treatment, and worse outcomes compared to men. Dr. Joy Gelbman, Associate Professor of Medicine at Weill Cornell Medicine and a board-certified cardiologist, is well-versed in the unique challenges facing women when it comes to their cardiac health. She breaks down sex-specific risk factors, differences in disease presentation and pathophysiology, and disparities in treatments and outcomes. Dr. Gelbman speaks with Carol Massar and Tim Stenovec on Bloomberg Businessweek Daily. (Source: Bloomberg)