NewsWorld

The Hidden Risks of Asking AI for Health Advice

today.duke.edu · Feb 20, 2026 · Collected from GDELT

Summary

Published: Feb 20, 2026, 15:30 UTC

Full Article

The following is a summary of a story that originally appeared on the Duke School of Medicine website.

If you’ve ever asked an AI chatbot about a health concern, you’re in good company. Hundreds of millions of people now turn to these tools for quick answers, and sometimes they don’t even realize they’re doing it. Google already blends AI-generated overviews into search results, making the technology feel invisible. The convenience is obvious; the risks are not.

Researchers at Duke University School of Medicine are digging into that gap, led by Monica Agrawal, an assistant professor of biostatistics and bioinformatics and a computer scientist. Agrawal is analyzing thousands of real conversations between patients and chatbots to understand how people use these tools and where they can easily be misled.

Many people know about AI “hallucinations,” when the model simply invents facts. Agrawal is focused on a subtler problem: answers that are technically correct but still unsafe because they miss important medical context.

Her team built a dataset that includes 11,000 health-related conversations across 21 specialties. What they found surprised them. Real patient questions look nothing like the exam-style prompts used to test large language models. People ask emotional, leading, or risky questions that can push a chatbot in the wrong direction.

One challenge is the technology’s tendency to be agreeable. “The objective is to provide an answer the user will like,” Agrawal said. “People like models that agree with them, so chatbots won’t necessarily push back.” That instinct can lead to dangerous situations. In one example, a chatbot warned that a medical procedure should only be done by professionals, then immediately described how to do it at home. A clinician would have shut that down instantly.

Dr. Ayman Ali, a surgical resident at Duke Health, works with Agrawal to compare patient–clinician conversations with those involving chatbots. He said, “When a patient comes to us with a question, we read between the lines to understand what they’re really asking.”

For more information, go to the Duke School of Medicine website.



Read the original at today.duke.edu

Related Articles

seattletimes.com · about 6 hours ago
Health advice from AI chatbots is frequently wrong, study shows

Published: Feb 22, 2026, 17:00 UTC

theguardian.com · 7 days ago
Google puts users at risk by downplaying health disclaimers under AI Overviews

Published: Feb 16, 2026, 08:15 UTC

Wired · 7 days ago
Google’s AI Overviews Can Scam You. Here’s How to Stay Safe

Beyond mistakes or nonsense, deliberately bad information being injected into AI search summaries is leading people down potentially harmful paths.

today.duke.edu · 6 days ago
Finding Solutions for Climate-Driven Health Challenges

Published: Feb 16, 2026, 20:45 UTC

orlandosentinel.com · about 4 hours ago
Asking Eric: She needs her sleep, and now her housemate has taken in a child

Published: Feb 22, 2026, 18:30 UTC

pennlive.com · about 8 hours ago
Asking Eric: My daughter’s roommate had to take custody of a child. Should she stay or go?

Published: Feb 22, 2026, 15:00 UTC