Apple Intelligence in 2026: Has Apple Finally Caught Up to OpenAI?

Eighteen months after a stumbling launch, Apple Intelligence is finally a real product. The on-device privacy story is genuinely differentiated. The frontier capability gap, especially in Siri, remains stubbornly wide.

Admin · 17 April 2026 · 8 min read

When Apple unveiled Apple Intelligence at WWDC 2024, the keynote felt like a company catching the last train out of the station. The features promised — a smarter Siri, generative writing tools, Image Playground, ChatGPT integration — were table stakes by mid-2024 standards, and most of them did not actually ship until iOS 18.4 in spring 2025. The much-hyped Siri rebuild, the one that was supposed to use personal context to answer questions only your phone could know, slipped repeatedly. By the time Mark Gurman wrote in March 2025 that the new Siri had been delayed to 2026, the joke had already written itself.

It is now April 2026. Apple Intelligence has shipped two major releases since that embarrassing slip: the iOS 19 update that arrived in September 2025, and the iOS 19.4 mid-cycle release in March. The upgraded Siri, complete with the personal-context features promised in 2024, is finally in users' hands. The on-device Foundation Models framework introduced at WWDC 2025 is now the basis for a small but real ecosystem of third-party apps. Private Cloud Compute, the bespoke server architecture Apple built to do larger inference without seeing your data, has scaled past its initial growing pains.

So it is fair to ask the question plainly: is Apple still behind OpenAI, Anthropic, and Google? The answer is yes, but the gap is no longer humiliating, and in one specific dimension — privacy-preserving on-device intelligence — Apple is genuinely ahead of everyone.

What actually shipped, and what users notice

The features that landed in iOS 19 are the ones a normal person actually uses. Notification summaries finally stopped misrepresenting BBC headlines after Apple suspended the feature for news apps in early 2025 and reworked it. The writing tools, including Rewrite, Proofread, and Summarize, now work offline on any device with an A17 Pro or later chip and on every Apple Silicon Mac. Image Playground gained a serviceable photorealistic mode in iOS 19.2, partially closing the embarrassing distance between it and Midjourney or DALL-E 3.

The Siri rebuild is the one that matters. The new architecture, finally released in iOS 19, lets Siri reason about content on your screen, pull from your messages and calendar, and chain actions across apps. Ask it to "send the photo I took at the beach last weekend to mum" and it will, without asking you to confirm which photo or which mum. Most reviewers, including Joanna Stern at the WSJ and Federico Viticci at MacStories, called it the first version of Siri that did not feel like a downgrade from a competent human assistant.

It is still not as good as ChatGPT or Claude at open-ended conversation, and Apple knows it. That is what the ChatGPT integration is for, and it is also why the rumours about a Gemini partnership and a possible Anthropic deal will not go away.

The OpenAI deal, and the partners Apple is reportedly courting

The ChatGPT integration that shipped in late 2024 is still running. When Siri determines a query is beyond its capability, it offers to hand off to ChatGPT, with the user's permission, and the request is anonymised before it leaves the device. According to Apple's own disclosures, OpenAI does not retain queries routed through this path and cannot tie them to an Apple ID.

What Apple did not announce, but Gurman reported in May 2025 and again in February 2026, is that the company has been negotiating to add Google's Gemini and Anthropic's Claude as alternative third-party model providers. The Gemini deal was reportedly close to signing in mid-2025 and then stalled, partly over revenue-sharing terms and partly because Apple's antitrust exposure in the Google search case made it nervous about deepening that relationship.

The Anthropic talks are murkier. Bloomberg reported in March 2026 that Apple was in advanced discussions to license Claude for use inside Siri's reasoning layer — not as an alternative provider for the user to pick, but as an internal component invoked by Apple's own routing logic. If true, this would be a significant admission that Apple's frontier model work is not yet competitive at the high end. Apple has not commented. Anthropic has not commented. None of this should be treated as confirmed.

The on-device story is genuinely differentiated

Where Apple is unambiguously ahead is the architecture. The Foundation Models framework, which Apple opened to developers at WWDC 2025, gives any app on a recent iPhone or Mac access to a 3-billion-parameter on-device model that runs entirely on the Neural Engine. There is no API call, no token cost, no data leaving the device. A note-taking app can summarise your journal entries. A fitness app can generate workout descriptions. A small business CRM can draft follow-up emails. None of this is sent to a server.
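Apple documented the developer-facing side of this at WWDC 2025. As a rough sketch of what a third-party app's call looks like, following the API shape Apple presented (the exact names and signatures here are an assumption, and it only runs on Apple Intelligence-capable hardware with a current SDK):

```swift
// Sketch of an on-device Foundation Models call, based on the API shape Apple
// showed at WWDC 2025. Signatures are assumed, not authoritative, and this
// requires an Apple Intelligence-capable device with a current SDK.
import FoundationModels

func summariseEntry(_ entry: String) async throws -> String {
    // A session carries system-style instructions; inference runs on the
    // Neural Engine, so there is no network call and no per-token cost.
    let session = LanguageModelSession(
        instructions: "Summarise the user's journal entry in one sentence."
    )
    let response = try await session.respond(to: entry)
    return response.content
}
```

The same pattern covers the fitness-app and CRM cases above: only the prompt changes, not the on-device execution model.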

This is something OpenAI cannot do. It is something Google can do partially, with Gemini Nano on Pixel devices, but Google's developer story is far less mature and the addressable installed base is a fraction of Apple's. Microsoft has Phi-3 and Phi-4 running on Copilot+ PCs, but the Windows ecosystem is fragmented and the Neural Processing Units across vendors are not consistent.

When Apple needs more capability than the on-device model provides, Private Cloud Compute kicks in. The architecture, audited by independent security researchers and partially open-sourced, is genuinely novel: the servers run a stripped-down Apple-built OS, attest their software state cryptographically, and process requests in a way that even Apple cannot inspect after the fact. The performance has improved noticeably since launch — early Private Cloud Compute requests sometimes took several seconds, and the March 2026 release brought average latency under 800 milliseconds for most prompts.
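The trust flow can be sketched in a few lines. Everything below is a conceptual illustration, not Apple's actual code: the measurement values, the transparency log, and the names are all invented for the example.

```swift
// Conceptual sketch of the Private Cloud Compute trust flow: the client only
// sends a request to a server whose software measurement appears in a public
// transparency log. All names and values here are illustrative.
struct NodeAttestation {
    let softwareMeasurement: String  // hash of the OS image the node claims to run
    let requestPublicKey: String     // key the client encrypts the request to
}

// Published log of audited PCC software releases (illustrative value).
let transparencyLog: Set<String> = ["os-build-2026.03-sha256:ab12"]

func isTrusted(_ attestation: NodeAttestation) -> Bool {
    transparencyLog.contains(attestation.softwareMeasurement)
}

let node = NodeAttestation(
    softwareMeasurement: "os-build-2026.03-sha256:ab12",
    requestPublicKey: "pk-node-7"
)

if isTrusted(node) {
    print("attested: encrypt request to \(node.requestPublicKey)")
} else {
    print("refusing to send request")
}
```

The point of the design is the refusal branch: a server running unaudited software cannot produce a measurement in the log, so the request never leaves the device in a form it could read.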

For users who care about privacy, this is real. For developers building on Apple platforms, it is a meaningful advantage. The argument is not that Apple has the smartest model. The argument is that Apple has the best model that meets a specific privacy bar, and for many use cases that bar matters more than raw capability.

Where Apple is still behind, and visibly so

The frontier capability gap shows up in three places. First, in agentic behaviour: the Siri rebuild can chain actions across apps in narrow ways, but it cannot do the open-ended computer-use tasks that Anthropic's Claude and OpenAI's o-series are starting to handle. Asking Siri to "book me a flight to London for under $400 next Tuesday and add it to my calendar" still requires hand-holding through multiple apps. Asking Claude or ChatGPT to do something similar via their respective computer-use modes works, imperfectly, but it works.

Second, in coding. There is no Apple Intelligence equivalent to GitHub Copilot or Cursor. Apple's own developer tools added some on-device code completion in Xcode 17, but it is far behind even free-tier Cursor, let alone the Claude-powered Cursor Agent that has become the default professional tool.

Third, in long-context reasoning. The on-device Foundation Models framework caps at a context window measured in low tens of thousands of tokens. Frontier closed models are now routinely working at hundreds of thousands or millions of tokens. For most consumer tasks this does not matter; for any kind of research or document-heavy work, it does.
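To make the constraint concrete, here is what document-heavy work looks like under a small context window. The 16,000-token cap and the words-per-token ratio are assumed values for illustration, not Apple's published numbers.

```swift
// Rough illustration of working under a small context window: a long document
// has to be split into chunks and processed in multiple passes. The cap and
// the words-per-token heuristic below are assumptions for the example.
let contextWindowTokens = 16_000   // assumed on-device cap
let wordsPerToken = 0.75           // rough heuristic for English text

func chunks(_ words: [String], window: Int) -> [[String]] {
    let wordsPerChunk = Int(Double(window) * wordsPerToken)   // ~12,000 words
    return stride(from: 0, to: words.count, by: wordsPerChunk).map { start in
        Array(words[start..<min(start + wordsPerChunk, words.count)])
    }
}

// A 100,000-word report does not fit in one pass; it needs nine.
let report = Array(repeating: "word", count: 100_000)
print(chunks(report, window: contextWindowTokens).count)
```

A frontier model with a million-token window would take the whole report in one pass, with no risk of losing cross-chunk context; that is the gap the paragraph above describes.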

What WWDC 2026 has to deliver

The window between now and WWDC, scheduled for early June, is the most consequential six weeks Apple Intelligence has had. Apple needs to ship three things to remain credible.

It needs a meaningful upgrade to the on-device model. The 3-billion-parameter model that powers Foundation Models is almost a year old, an eternity at the current pace of model research. A 7-billion-parameter version, possibly sparse or mixture-of-experts to keep memory usage manageable, would close some of the gap on tasks like summarisation and structured output.

It needs an agentic API. If Apple wants developers to build the next generation of automated assistants on its platform, it needs to give them tool-use primitives that go beyond App Intents. Some kind of Apple-curated protocol in the spirit of the Model Context Protocol (MCP) would be the obvious move.

And it needs to either announce the third-party model partnerships it has been negotiating, or stop letting them leak. The current limbo, where everyone knows Apple is talking to Anthropic and Google but nothing is confirmed, makes Apple look indecisive.
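What a tool-use primitive beyond App Intents might look like, sketched with invented names (the protocol, the calendar tool, and the registry are all hypothetical, not a real Apple API):

```swift
// Hypothetical sketch of an agentic tool-use primitive. Nothing here is a
// real Apple API; the protocol and types are invented to illustrate the idea.
protocol AgentTool {
    var name: String { get }
    var summary: String { get }   // surfaced to the model so it can pick a tool
    func invoke(_ arguments: [String: String]) -> String
}

struct AddCalendarEvent: AgentTool {
    let name = "add_calendar_event"
    let summary = "Add a calendar event given a title and an ISO-8601 date."
    func invoke(_ arguments: [String: String]) -> String {
        guard let title = arguments["title"], let date = arguments["date"] else {
            return "error: missing title or date"
        }
        return "added '\(title)' on \(date)"
    }
}

// The assistant's routing layer resolves a model-chosen tool name and calls it.
let registry: [String: AgentTool] = [AddCalendarEvent().name: AddCalendarEvent()]
let output = registry["add_calendar_event"]?
    .invoke(["title": "Flight to London", "date": "2026-06-09"]) ?? "no such tool"
print(output)
```

The design question for Apple is who curates the registry: App Intents already declares capabilities, but a protocol like this would let the model compose them without each app anticipating the combination.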

The frontier model race is not one Apple was ever going to win on raw capability. Its strengths are different: distribution, integration, privacy, and the patience to play a long game. By the end of 2026, Apple Intelligence will not be the smartest AI on the market, and it does not need to be. It needs to be the AI a billion people trust enough to use without thinking about it. It is closer to that than the WWDC 2024 keynote suggested it would be at this point. It is still a year of execution away from being there.

Admin

Contributing writer at Algea.
