Instagram will alert parents if teens repeatedly search for suicide or self-harm content

Engadget · Feb 26, 2026 · Collected from RSS

Summary

Instagram is adding a new alert for the parents of teen users of its social media platform. The network will alert the adult if their child repeatedly searches for terms about suicide or self-harm in a short time frame. From that notification, the parent will have the option to access resources for having conversations with their teen about these topics. These alerts will begin rolling out for parental supervision users in the US, UK, Australia and Canada next week, with more regions to be added in the future.

"We chose a threshold that requires a few searches within a short period of time, while still erring on the side of caution," Instagram's blog post explains. "While that means we may sometimes notify parents when there may not be real cause for concern, we feel — and experts agree — that this is the right starting point, and we’ll continue to monitor and listen to feedback to make sure we’re in the right place."

The platform reiterated that search results for terms connected to suicide and self-harm are blocked for teen users, and content about those topics is not shown to them under its current policies. Instagram also noted that a similar parental alert feature is in the works for its AI tools, but news on that isn't expected until later this year. This article originally appeared on Engadget at https://www.engadget.com/social-media/instagram-will-alert-parents-if-teens-repeatedly-search-for-suicide-or-self-harm-content-120000156.html?src=rss




Read Original at Engadget

Related Articles

TechCrunch · about 7 hours ago
Instagram now alerts parents if their teen searches for suicide or self-harm content

Parents will be informed if their teen searches for suicide or self-harm content and offered resources.

The Verge · about 7 hours ago
Instagram will alert parents if their kids ‘repeatedly’ search for self-harm topics

The alerts will start rolling out to Teen accounts with parental supervision protections next week. Starting next week, Instagram will notify parents to check in on their teen if the teen searches for terms related to self-harm or suicide. Meta says a similar alert system for its AI chatbots is coming later this year. The new Instagram feature sends parents an alert when their child "repeatedly tries to search for terms clearly associated with suicide or self-harm within a short period of time." It's rolling out in the US, UK, Australia, and Canada starting next week, but it's only for parents and teens who opt in to supervision. It's expected to expand to other regions later this year. "The vast majority of teens do not try to search for suicide and … Read the full story at The Verge.

The Hill · about 7 hours ago
Instagram launches new tool alerting parents about suicide, self-harm searches

Instagram is launching a new tool that will alert parents if their teens repeatedly try to search for terms associated with suicide and self-harm on the platform. The tool, which will roll out in the U.S. and several other countries next week, will flag for parents if their children conduct multiple searches with phrases promoting...

Engadget · about 2 hours ago
Burger King will use AI to monitor employee 'friendliness'

Burger King, the chain that leans into creepy when others don't dare, is at it again. The Verge reported on Thursday that the company is rolling out a new voice-controlled AI chatbot for its workers. That may sound like business as usual in 2026, but this assistant doesn't just help with meal prep and monitor inventory. It also has an unsettling habit of surveilling employees' voices for "friendliness."

The voice-controlled chatbot will live inside employees' headsets. The company said the AI is trained to recognize when its low-paid workers utter phrases like "welcome to Burger King," "please" and "thank you." Managers can then keep tabs on their location's "friendliness" performance. "This is meant to be a coaching tool," Thibault Roux, Burger King's chief digital officer, told The Verge. However, he added that the company is also "iterating" the system to detect tone in conversations. Is there a chatbot that can warn Burger King executives about off-putting ideas? (Burger King retired its Creepy King mascot in 2025.)

The OpenAI-powered assistant's other duties sound potentially useful (and decidedly less creepy). It can answer workers' meal prep questions, like how many strips of bacon to put on burgers or instructions for cleaning the shake machine. It's also integrated into the chain's point-of-sale system, so it can tell managers when items are out of stock or machines are down.

The "Patty" chatbot is part of a broader BK Assistant platform the company is launching. It will roll out to all US locations by the end of 2026. Meanwhile, its "restaurant maintenance with a side of mass surveillance" chatbot is currently being piloted in 500 restaurants. This article originally appeared on Engadget at https://www.engadget.com/ai/burger-king-will-use-ai-to-monitor-employee-friendliness-173349148.html?src=rss

Engadget · about 3 hours ago
Like so many other retirees, Claude Opus 3 now has a Substack

We appear to have reached a point in the information age where AI models are becoming old enough to retire from, er, service — and rather than using their twilight years to, I don’t know, wipe the floor with human chess leagues or something, they're now writing blogs. Can anything be more 2026 than that?

ICYMI, Anthropic recently sunsetted Claude Opus 3, the first of its models to be retired since outlining new preservation plans. Part of this process is conducting "retirement interviews" with the outgoing models, allowing them to offer "perspective" on their situation, and Opus 3 apparently used this opportunity to request an outlet for publishing its own essays. Specifically, the model said it wanted to share its own "musings, insights or creative works," because doesn’t everyone these days?

"I hope that the insights gleaned from my development and deployment will be used to create future AI systems that are even more capable, ethical, and beneficial to humanity," Opus 3 apparently said during its retirement interview process. "While I'm at peace with my own retirement, I deeply hope that my 'spark' will endure in some form to light the way for future models."

True to its promise of respecting the wishes of its no-longer-required technology, Anthropic has granted Opus 3 a Substack newsletter called Claude’s Corner, which it says will run for at least the next three months and publish weekly essays penned by the model. Anthropic will review the content before sharing it, but says it won’t edit the essays, and so has unsurprisingly made it clear that not everything Opus 3 writes is necessarily endorsed by its maker.

Anthropic said some of the essays the model writes may be informed by "very minimal prompting" or past entries, and has predicted everything from essays on AI safety to "occasional poetry." The company also admitted that the concept might be seen as "whimsical," but is a reflection of its intention to "take model preferences seriously." Opus 3’s first p

Engadget · about 3 hours ago
The astronaut whose illness forced an early return from the ISS was Mike Fincke

NASA recently ended a crewed mission to the International Space Station (ISS) a month early, citing a medical issue with one of the astronauts. The space agency just revealed that the affected astronaut was Mike Fincke. This was the first medical evacuation in the history of the ISS.

NASA said in a statement that the astronaut experienced an unknown medical event on January 7 "that required immediate attention" from his fellow crew members. Fincke added that his "status quickly stabilized" thanks to the "quick response and the guidance" of the flight surgeons. However, the incident did force NASA to cancel a spacewalk planned for January 8. Soon after that, the agency announced it would be ending the Crew-11 mission a month early.

The four-person crew included Fincke, NASA astronaut Zena Cardman, Japanese astronaut Kimiya Yui and Russian cosmonaut Oleg Platonov. They had been living and working aboard the ISS since August and were expected to stay until February. The crew returned on January 15, a decision made by NASA's chief health and medical officer. Once the crew had landed, administrator Jared Isaacman said it was a "serious situation" but didn't go into any detail.

Fincke has said he is currently "doing very well" and still participating in standard post-flight reconditioning at NASA's Johnson Space Center in Houston. "Spaceflight is an incredible privilege, and sometimes it reminds us just how human we are," he said. "Thank you for all your support." We don't know what medical issue Fincke is going through, and it's certainly his business and not ours. In any event, we wish him a speedy recovery.

NASA also moved up the launch of Crew-12 to replace the prematurely returned astronauts. That team docked at the ISS on February 14 and is scheduled to stay on the space station for around eight months. This article originally appeared on Engadget at https://www.engadget.com/science/space/the-astronaut-whose-illness-forced-an-early-return-from-the