NewsWorld

Apple’s AI Gadgets Don’t Sound Groundbreaking at All

Gizmodo · Feb 23, 2026 · Collected from RSS

Summary

Apple will reportedly focus on computer vision to make AI gadgets that sound a lot like other, existing, AI gadgets.

Full Article

Apple seems to be inching toward AI gadgets, and if a recent report from Bloomberg is any indication, lots of them will have one thing in common: they’ll use “Visual Intelligence.” In case you’re not brushed up on your Apple branding, “Visual Intelligence” is the company’s version of computer vision—an AI feature that gives gadgets “sight,” so to speak. According to Bloomberg, Apple wants Visual Intelligence to be a defining feature across a range of hardware, including a new generation of AirPods with cameras, Apple’s first pair of smart glasses, and even an AI pendant that sounds weirdly reminiscent of the failed Ai Pin made by Humane.

What exactly will computer vision be doing in those gadgets? Well, apparently, the exact same stuff that it does in other gadgets. Per Bloomberg:

“…the most basic applications could involve taking a plate of food and identifying the items and ingredients. More advanced uses include the device giving specific instructions for conducting a task based on what it sees. That might mean upgraded turn-by-turn directions, with the device telling a user to go past a specific landmark — rather than just a certain number of feet. The technology also could remind users to do something when they walk up to a certain object or place.”

If you’re at all familiar with computer vision and how it works in gadgets like smart glasses, you probably read the above and got a little déjà vu. Computer vision is a defining feature of popular smart glasses like the Ray-Ban Meta AI glasses and can be used for quite a few things, like translating text on a food menu, identifying objects in your environment, and giving you instructions for a recipe while you’re cooking. While I’ll concede that the use case for navigation would be novel, Apple seems to be on the exact same track as Meta and other companies squeezing computer vision capabilities into their hardware.
Whether Apple would have any more success in making computer vision—er, Visual Intelligence—work in AI gadgets is anyone’s guess. While computer vision is arguably among the more futuristic and novel features of smart glasses, it’s also one of the least reliable and, oftentimes, the least applicable to your daily use. In my experience using the Ray-Ban Meta AI glasses, computer vision has a habit of getting stuff wrong (you can read my review of the Meta Ray-Ban Display for specific examples), which makes it hard to trust and even more difficult to incorporate into your day-to-day use as a result.

I still think the technology could be great for accessibility purposes, but that’s not exactly what Apple is pitching here. While there’s a chance that Apple is working toward some kind of breakthrough on the computer vision front that would make Visual Intelligence more reliable and useful, it’s done little, so far, to show progress. As Bloomberg notes, the existing Visual Intelligence features inside iOS, for example, rely mostly on OpenAI’s ChatGPT and, in the near future, Google’s Gemini. Models offered by those companies, in my experience, are just as fallible as the rest.

A lot can happen between now and when Apple finally decides to start rolling out its AI-centric hardware (late this year at the soonest), but for now, it would appear that AI gadgets are a bit stuck on how and when computer vision can be used—or at least stuck on making those scenarios feel functional. Apple’s vision for Visual Intelligence may sound a little more useful than OpenAI’s reported smart speaker with a camera, but that’s a pretty low bar.



Related Articles

Engadget · 3 days ago
OpenAI will reportedly release an AI-powered smart speaker in 2027

OpenAI is reportedly hard at work developing a series of AI-powered devices, including smart glasses, a smart speaker and a smart lamp. According to reporting by The Information, the AI company has a team of over 200 employees dedicated to the project. The first product scheduled to be released is reported to be a smart speaker that would include a camera, allowing it to better absorb information about its users and surroundings. According to a person familiar with the project, this would extend to identifying objects on a nearby table, as well as conversations being held in the vicinity of the speaker. The camera will also support a facial recognition feature similar to Apple's Face ID that would enable users to authenticate purchases.

The speaker is expected to retail for between $200 and $300 and ship in early 2027 at the earliest. Reporting indicates the company's AI-powered smart glasses, a space currently dominated by Meta, would not come until 2028. As for the smart lamp, while prototypes have been made, it's unclear whether it will actually be brought to market.

Last year OpenAI acquired ex-Apple designer Jony Ive's startup io Products for $6.5 billion. Ive is considered largely responsible for Apple's design aesthetic, having been involved in designing just about every major Apple device since joining the company in the '90s before his departure in 2019. The acquisition of his AI-focused design firm sets the stage for Ive to lead hardware product development for OpenAI. Since the partnership was forged, there have already been delays due to technical issues, privacy concerns and logistical issues surrounding the computing power necessary to run a mass-produced AI device.

Regardless of the behemoths behind the project, the speaker and other future products may still face consumers reluctant to buy a product that is always listening to and watching its users. This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-reporte

The Verge · 3 days ago
OpenAI’s first ChatGPT gadget could be a smart speaker with a camera

OpenAI's first hardware release will be a smart speaker with a camera that will probably cost between $200 and $300, according to The Information. The device will be able to recognize things like "items on a nearby table or conversations people are having in the vicinity," The Information says, and it will have a Face ID-like facial recognition system so that people can purchase things. OpenAI acquired Jony Ive's hardware company last May in a deal worth nearly $6.5 billion. Details about their hardware products have been trickling out since then, including that the first device won't be a wearable and that it won't be released to customers … Read the full story at The Verge.

The Verge · 3 days ago
Meta will ruin its smart glasses by being Meta

Facial recognition has been a requested feature for smart glasses, but the risks are high. Whenever I write about Meta's Ray-Ban smart glasses, I already know the comments I'm going to get. Cool hardware, but hard pass on anything Meta makes; will wait for someone else to come along. It's hard to imagine that sentiment changing anytime soon after The New York Times reported that Meta mulled launching facial recognition software "during a dynamic political environment" precisely because privacy advocates would be distracted. Smart glasses evangelists often tell me this fear is somewhat overblown. After all, the phone in your pocket also has a camera. The government already uses facial recognition tech, and CCTV feeds are everywher … Read the full story at The Verge.

The Verge · 5 days ago
Meta is reportedly planning to launch a smartwatch this year

Meta is planning to launch a smartwatch with health tracking and AI features later this year, along with an updated version of its Meta Ray-Ban Display smart glasses, The Information reports. The smartwatch would arrive ahead of a pair of mixed reality glasses, code-named Phoenix, that Meta has reportedly delayed until 2027 amidst efforts to streamline the company's AR and MR roadmap. Meta previously scrapped plans for an earlier smartwatch in 2022 due to technical challenges and cost-cutting measures. If the new watch, code-named Malibu 2, comes to fruition, it would intensify competition with Apple, which is rumored to be working on a pa … Read the full story at The Verge.

Gizmodo · 6 days ago
Apple’s AI Pendant Sounds Like a Watered-Down Humane Ai Pin

And its smart glasses sound exactly like Ray-Ban Meta AI glasses clones.

TechCrunch · 6 days ago
Apple is reportedly cooking up a trio of AI wearables

As the AI hardware space heats up, the iPhone maker has multiple smart products in development.