
Gizmodo · Feb 23, 2026
Apple will reportedly focus on computer vision to make AI gadgets that sound a lot like other, existing AI gadgets.
Apple seems to be inching toward AI gadgets, and if a recent report from Bloomberg is any indication, lots of them will have one thing in common: they’ll use “Visual Intelligence.” In case you’re not brushed up on your Apple branding, “Visual Intelligence” is the company’s version of computer vision—an AI feature that gives gadgets “sight,” so to speak. According to Bloomberg, Apple wants Visual Intelligence to be a defining feature across a range of hardware, including a new generation of AirPods with cameras, Apple’s first pair of smart glasses, and even an AI pendant that sounds weirdly reminiscent of Humane’s failed Ai Pin.

What exactly will computer vision be doing in those gadgets? Well, apparently, the exact same stuff that it does in other gadgets. Per Bloomberg:

“…the most basic applications could involve taking a plate of food and identifying the items and ingredients. More advanced uses include the device giving specific instructions for conducting a task based on what it sees. That might mean upgraded turn-by-turn directions, with the device telling a user to go past a specific landmark — rather than just a certain number of feet. The technology also could remind users to do something when they walk up to a certain object or place.”

If you’re at all familiar with computer vision and how it works in gadgets like smart glasses, you probably read the above and got a little déjà vu. Computer vision is a defining feature of popular smart glasses like the Ray-Ban Meta AI glasses, where it’s used for things like translating text on a food menu, identifying objects in your environment, and walking you through a recipe while you cook. While I’ll concede that the navigation use case would be novel, Apple seems to be on the exact same track as Meta and the other companies squeezing computer vision into their hardware.
Whether Apple would have any more success making computer vision—er, Visual Intelligence—work in AI gadgets is anyone’s guess. While computer vision is arguably among the more futuristic and novel features of smart glasses, it’s also one of the least reliable and, oftentimes, the least applicable to daily use. In my experience with the Ray-Ban Meta AI glasses, computer vision has a habit of getting stuff wrong (you can read my review of the Meta Ray-Ban Display for specific examples), which makes it hard to trust and even harder to fold into your day-to-day. I still think the technology could be great for accessibility purposes, but that’s not exactly what Apple is pitching here.

While there’s a chance Apple is working toward some kind of breakthrough on the computer vision front that would make Visual Intelligence more reliable and useful, it has shown little progress so far. As Bloomberg notes, the existing Visual Intelligence features inside iOS, for example, rely mostly on OpenAI’s ChatGPT and, in the near future, Google’s Gemini. In my experience, those companies’ models are just as fallible as the rest.

A lot can happen between now and when Apple finally starts rolling out its AI-centric hardware (late this year at the soonest), but for now, AI gadgets appear a bit stuck on when and how computer vision can be used—or at least stuck on making those scenarios feel functional. Apple’s vision for Visual Intelligence may sound a little more useful than OpenAI’s reported smart speaker with a camera, but that’s a pretty low bar.