Google Gemini Could Be Coming to Apple Intelligence Later This Year, Report Says

Emma Roth and David Pierce of The Verge co-bylined a story published on Wednesday in which they report that Google CEO Sundar Pichai said he hopes Google Gemini will come to Apple Intelligence as an optional model by the end of the year. The comment came while Pichai was on the stand giving testimony in Google’s search monopoly trial.

Pichai noted Apple chief executive Tim Cook told him during a recent meeting that Apple Intelligence would get more support for third-party AI models “later this year.”

“The integration would presumably allow Siri to call on Gemini to answer more complex questions, similar to the integration that Apple launched with OpenAI’s ChatGPT,” Roth and Pierce wrote of the ramifications of a potential Apple-Google deal over Gemini. “Apple senior vice president Craig Federighi hinted at plans to build Gemini into its Apple Intelligence feature last June, when the AI service was first announced.”

I attended the “fireside chat,” held after last year’s WWDC keynote, during which moderator iJustine—whom, incidentally, I interviewed in 2023—asked Federighi and then-AI boss John Giannandrea about Gemini and Apple Intelligence. It was during this discussion that Federighi said it’s Apple’s goal to “enable users ultimately to choose the models they want”—which could be Gemini for a not-insignificant swath of users.

I typically don’t write stories couched around what churns out of the Apple rumor mill, but I do make exceptions for accessibility’s sake. This is one such occasion, as Gemini has supplanted OpenAI’s ChatGPT as my preferred generative AI tool. The Gemini app has become so integral to my digital doings, in fact, that it has earned a permanent place on my iPhone’s Home Screen and on my iMac’s Dock. In my experience, Gemini reliably gives me good information; it does get things wrong and hallucinate, but that’s to be expected of any such tool. Design-wise, I prefer the Gemini user interface to ChatGPT’s. I find the former more humane and dynamic, whereas the latter feels staid and utilitarian.

As to how Gemini does as an assistive technology, I find it has subsumed (albeit not entirely) traditional web searches. Rather than endure umpteen search results in a browser window, which requires a good amount of visual and motor energy, Gemini does the grunt work for me and collates everything into a single space. It’s a shining example of generative AI as accessibility; many disabled people can find conventional Google searches taxing in many respects, especially when doing deep research for essays or other projects. That Gemini exists means conducting said research becomes much more accessible—and expedient. Although many educators lament the proclivity of students nowadays to lean on generative AI for their schoolwork, the reality is that putting one’s weight on something like Gemini isn’t (always) an indicator of laziness or, worse, academic dishonesty. On the contrary, it’s downright shrewd, not to mention empowering, to spot generative AI’s strengths in accessibility and take advantage of them accordingly.

Roth and Pierce note Pichai’s comment lends further credence to rumblings that Gemini is indeed coming to Apple Intelligence sooner rather than later. Their report mentions news from February that “Google” is listed in Apple Intelligence-related code found within an iOS 18.4 beta. As Apple Intelligence currently stands, Siri asks users if they wish to use ChatGPT to answer complex questions that are beyond the virtual assistant’s ken. Presumably, Gemini could one day do that as well, assuming someone has chosen it as their desired third-party model over the default ChatGPT.

News of Pichai’s comment regarding Gemini was first reported by Bloomberg.
