Apple Intelligence Can Now Creep on Your iPhone Screen

It wouldn’t be a developer keynote in 2025 without a little AI, right? As underwhelming as Apple Intelligence has been since its rollout in October last year, Apple seems committed to upgrading the experience with pivotal additions like… new Genmojis? Okay, so maybe WWDC 2025 wasn’t a revolutionary year for Apple Intelligence, but there are still some upgrades worth noting, including a new feature that can watch what you’re doing on your phone and take specific actions depending on the scenario.
Visual Intelligence, as Apple calls it, expands multimodal capabilities beyond the Camera app and onto your iPhone screen. “Users can ask ChatGPT questions about what they’re looking at on their screen to learn more, as well as search Google, Etsy, or other supported apps to find similar images and products,” says Apple. “If there’s an object a user is especially interested in, like a lamp, they can highlight it to search for that specific item or similar objects online.”
That doesn’t sound novel by any means, but it does bring Apple Intelligence closer to competitors like Google, which has a Gemini feature that does pretty much the same thing. It also brings Apple Intelligence closer to the Holy Grail of “agentic AI,” which is the tech world’s way of describing AI that can do stuff for you. As ho-hum as multimodal features like Visual Intelligence have become in a very short period of time, they still have the power to actually make the phone experience better, in my opinion.

I think I speak for most people when I say that using your iPhone isn’t quite as simple as it used to be, and there are a few reasons for that. One is that we expect our phones to do a lot more than they used to, which means devices need more features to do all of those things. The problem is that keeping track of those features and finding a home for them in the UI isn’t easy, and the result is software that feels bloated. Agentic AI has the ability to cut through that bloat and get you to the thing you want to do faster. If that means I spend less time entering payment card information or hopping between apps on my phone, then I’m all for it.
This is all theoretical right now since Visual Intelligence was just released, and we can’t say for certain whether it works as promised, but I’m certainly not mad about the idea, even if I’m a little underwhelmed. Visual Intelligence should also use on-device AI, which is great, because sending data from my phone screen anywhere else really wouldn’t be high on my to-do list.
It wasn’t all about Visual Intelligence; Apple also unveiled new AI features like Live Translation in Messages and FaceTime, which translates on the fly while you’re texting or on a call with someone. There were also updates to Genmoji and Image Playground that add further customization and new art styles for generated images and emoji. Additionally, Apple is opening up its on-device foundation model for Apple Intelligence, inviting third-party app developers to design their own AI features.
“App developers will be able to build on Apple Intelligence to bring users new experiences that are intelligent, available when they’re offline, and that protect their privacy, using AI inference that is free of cost,” Apple said in a statement. “For example, an education app can use the on-device model to generate a personalized quiz from a user’s notes, without any cloud API costs, or an outdoors app can add natural language search capabilities that work even when the user is offline.”
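For developers, that pitch translates into a surprisingly small amount of Swift. Below is a minimal sketch of the quiz-from-notes example using the Foundation Models framework Apple announced alongside these features; the names here (`SystemLanguageModel`, `LanguageModelSession`, `respond(to:)`) reflect the announced API, but treat the exact signatures as illustrative until the SDK is final.

```swift
import FoundationModels

// A rough sketch of the quiz-from-notes example Apple describes above.
// Everything runs on-device, so there are no cloud API costs and it
// works offline. API names follow the Foundation Models framework
// announced at WWDC 2025; exact signatures may shift before release.
func generateQuiz(from notes: String) async throws -> String {
    // Bail out gracefully if Apple Intelligence isn't available on this
    // device (unsupported hardware, feature disabled, model not ready).
    guard case .available = SystemLanguageModel.default.availability else {
        return "The on-device model isn't available on this device."
    }

    // A session holds conversational state; the instructions string
    // steers the model's role for every prompt in the session.
    let session = LanguageModelSession(
        instructions: "You are a study assistant. Write concise quiz questions."
    )

    // Prompt the on-device model with the user's notes. No network
    // request is made; inference happens locally.
    let response = try await session.respond(
        to: "Generate three quiz questions from these notes:\n\(notes)"
    )
    return response.content
}
```

Because inference happens entirely on the device, a call like this works even in airplane mode, which is exactly the offline, no-API-cost scenario Apple is pitching to developers.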
Again, that isn’t exactly the flashiest news for Apple Intelligence, but it may be a solid way to expedite the development of new AI features, especially while Apple lags behind in the field of generative AI and large language models. Speaking of lagging behind, one notable absence was Apple’s AI-powered Siri upgrade, though Apple did address the elephant in the room, stating that we would hear more “later this year.” That’s not surprising by any means, but it’s definitely indicative of Apple’s stumbles on the AI front.
This year’s WWDC did little to assuage any concerns you may have about Apple’s AI progress, but it did move the needle just a bit, and that may be enough for most. Despite the industry’s emphasis on AI features, consumers have a decidedly smaller appetite for them, so I doubt this year’s update will stop anyone from running out and buying the latest iPhone.
Anyone who is a part of the Apple Developer Program can use the new Apple Intelligence features today, while the first public beta will be available next month. If you’re not interested in betas or you’re not a developer, you’ll have to wait until the fall to try these new features in full.