Google built something that looks exactly like Anthropic's Model Context Protocol — except it runs on your phone, not a server. Whether that irony is intentional is between Google and the standards bodies they haven't submitted it to.
AppFunctions, a Jetpack API Google unveiled in February, lets Android apps expose self-describing capabilities that Gemini can discover and execute — no cloud roundtrip required. As Google's own Android Developers Blog puts it: "Mirroring how backend capabilities are declared via MCP cloud servers, AppFunctions provides an on-device solution for Android apps." Matthew McCullough, Google's VP of Product Management for Android Development, wrote the post himself. That's not a press release — that's a product architect staking a claim.
The pattern Google is copying is Anthropic's MCP, the open standard that most major AI providers have adopted for cloud-based tool integration. AppFunctions does the same job on-device: apps declare what they can do, agents discover and call those functions, execution stays local. Google's blog explicitly compares the architecture to WebMCP, a browser-based MCP implementation. The difference is that AppFunctions runs inside the Android security model rather than across network boundaries — which Google frames as a privacy benefit, since user data never leaves the device.
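That declare/discover/execute loop is easier to see in code than in prose. The sketch below models the pattern in plain Kotlin — the type and class names here are invented for illustration and are not the androidx.appfunctions API:

```kotlin
// Illustrative model of the declare -> discover -> execute pattern.
// FunctionSpec and LocalRegistry are invented names for this sketch;
// they are not types from the androidx.appfunctions library.

data class FunctionSpec(
    val name: String,                        // identifier the agent calls
    val description: String,                 // self-description for discovery
    val parameters: Map<String, String>,     // param name -> type hint
    val execute: (Map<String, Any?>) -> Any? // runs locally, no cloud roundtrip
)

class LocalRegistry {
    private val functions = mutableMapOf<String, FunctionSpec>()

    // An app declares a capability it is willing to expose.
    fun declare(spec: FunctionSpec) { functions[spec.name] = spec }

    // An agent discovers what is available on this device.
    fun discover(): List<FunctionSpec> = functions.values.toList()

    // The agent invokes a function; execution stays on-device.
    fun call(name: String, args: Map<String, Any?>): Any? {
        val spec = functions[name] ?: error("No such function: $name")
        return spec.execute(args)
    }
}

fun main() {
    val registry = LocalRegistry()
    registry.declare(FunctionSpec(
        name = "createNote",
        description = "Create a note with the given title",
        parameters = mapOf("title" to "String"),
        execute = { args -> "Created note: ${args["title"]}" }
    ))

    println(registry.discover().map { it.name })                    // [createNote]
    println(registry.call("createNote", mapOf("title" to "Groceries")))
}
```

The structural point survives the simplification: nothing in the loop requires a network boundary, which is exactly the property Google is claiming as the privacy win.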
The developer experience is documented in the Jetpack library changelog: apps register AppFunctions by declaring an intent filter in AndroidManifest.xml with a MIME type and a reference to an XML schema resource describing available functions. The schema is processed at build time by the appfunctions-compiler, and the Jetpack library provides the runtime for agent discovery and execution. Version 1.0.0-alpha01 shipped May 7, 2025 — nearly a year before the Galaxy S26 launch — which means Google has been building this longer than the public conversation about agent standards would suggest.
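Based on the changelog's description — an intent filter with a MIME type plus a reference to an XML schema resource — the manifest registration would take roughly this shape. Every concrete name below (the activity, the MIME type string, the meta-data key, the resource name) is an assumption for illustration, not a value confirmed by Google's documentation:

```xml
<!-- Hypothetical registration following the changelog's description.
     Concrete names here are illustrative, not confirmed values. -->
<activity android:name=".MainActivity">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <data android:mimeType="application/vnd.appfunctions+xml" />
    </intent-filter>
    <meta-data
        android:name="appfunctions_schema"
        android:resource="@xml/app_functions_schema" />
</activity>
```

The interesting design choice is the build-time compiler: because the schema is processed by appfunctions-compiler before the app ships, an agent can trust the declared function signatures without executing any app code to learn them.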
The Samsung Galaxy S26 is the first device with AppFunctions integration, unveiled at Samsung Galaxy Unpacked on February 25, 2026 and released March 11, 2026 at $899.99 for the standard model, $1,099.99 for the Plus, and $1,299.99 for the Ultra. The Galaxy S26 Ultra runs a Snapdragon 8 Elite Gen 5 for Galaxy — Samsung says the NPU is 39 percent more powerful than its predecessor. For UI automation — the fallback path for apps without native AppFunctions integration — WinBuzzer reported launch support for Lyft, Uber, Uber Eats, GrubHub, DoorDash, and Starbucks. Those are the categories where a voice-and-reasoning agent solves a real friction point: ordering food or booking a ride without switching context.
The UI automation layer is the fallback. Apps that haven't integrated AppFunctions can still be operated by Gemini through a generic screen-reading and input framework that runs apps in a virtual window on the device. This is less elegant — it requires the February 2026 security patch on One UI 8.5, only works with personal Google accounts (work and school are excluded), and supports English only — but it means the agent layer doesn't require every app in your life to ship an update before it becomes useful. WinBuzzer reported the screen automation rollout began March 12 in the US and Korea.
The usage limits are tiered by subscription. Free users get five requests per day. AI Plus at $7.99 per month gets 12. AI Pro at $19.99 per month gets 20. AI Ultra, the $249.99-per-month tier, gets a 120-request daily cap according to Google's support page — though Google has not published a standalone price for the screen automation component separate from broader Gemini access. The practical implication: if you're relying on this for anything beyond curiosity, you're in pricing territory that makes the free tier a tease.
The broader rollout is planned for Android 17. Screen automation is tied to Android 16 QPR3 as the enabling release, and Google describes Android 17 as the significant expansion — to more users, more developers, and more device manufacturers.
What Google hasn't shipped is the security architecture. AppFunctions lets apps declare capabilities and agents discover them. What it doesn't include is a policy engine — no per-function access controls, no runtime enforcement layer beyond what the Android permission model already provides. Google is explicit that screen automation sends a notification and hands control back to the user before checkout. That's a safeguard, but it's a product choice, not a security boundary. The question of whether agents should be trusted with app-level access at all — and what enforcement exists when they are — is one Google is shipping around rather than answering. The Android security model is not designed for adversarial local agents; AppFunctions extends it into new territory without a visible solution to that problem.
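To make the gap concrete, here is what a minimal per-function policy layer could look like. This is entirely hypothetical — nothing resembling it ships in AppFunctions today, and every type name is invented for the sketch:

```kotlin
// Hypothetical policy gate illustrating the enforcement layer the
// article notes is absent. None of these types exist in AppFunctions.

enum class Decision { ALLOW, REQUIRE_CONFIRMATION, DENY }

data class PolicyRule(
    val functionPattern: Regex, // which function names the rule covers
    val decision: Decision
)

class PolicyEngine(private val rules: List<PolicyRule>) {
    // First matching rule wins; default-deny when nothing matches,
    // which is the posture the current design lacks.
    fun evaluate(functionName: String): Decision =
        rules.firstOrNull { it.functionPattern.matches(functionName) }
            ?.decision ?: Decision.DENY
}

fun main() {
    val engine = PolicyEngine(listOf(
        PolicyRule(Regex("notes\\..*"), Decision.ALLOW),                 // low-risk
        PolicyRule(Regex("payments\\..*"), Decision.REQUIRE_CONFIRMATION) // sensitive
    ))
    println(engine.evaluate("notes.create"))      // ALLOW
    println(engine.evaluate("payments.checkout")) // REQUIRE_CONFIRMATION
    println(engine.evaluate("contacts.export"))   // DENY (no rule matched)
}
```

Note what even this toy version provides that the shipped product doesn't: a runtime decision point between agent and function, rather than a checkout notification bolted on as a product choice.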
Which brings us back to the irony. Anthropic built MCP as an open standard and watched the industry adopt it. Google is building the on-device equivalent — same pattern, same mental model — but hasn't submitted it to any standards body. AppFunctions is Google-only, tied to the Jetpack library, running inside Google's ecosystem. If the bet is that every major AI platform will eventually need an on-device equivalent of MCP, Google is positioning to own that standard the way Anthropic owns the cloud version. The difference is that Google isn't pretending — the blog post doesn't claim openness. That candor is almost refreshing. Whether it serves developers or Google's own interests is a question the market will answer by whether it adopts AppFunctions or waits for something interoperable.
Screen automation is live now on Galaxy S26 in the US and Korea. Everything else — the broader app library, the Android 17 rollout, the policy layer that isn't there yet — is what's coming.