Google built a highway for AI agents to reach production. The question is who gets to use the off-ramp.
That is the pitch behind agents-cli, a new command-line tool Google released last week that promises to take a coding agent from a blank directory to a running cloud service in hours, not weeks. The tool installs with one command, runs locally without a cloud account, and works with Claude Code, Gemini CLI, Cursor, Codex, and any other AI coding agent that supports a standard protocol for connecting AI tools, according to Google's getting-started guide. I installed it. I ran it. Here is what actually happens.
The installation command is uvx google-agents-cli setup. It pulls down the CLI, clones seven skill bundles from GitHub, and installs them into every compatible coding agent on your machine. On mine, that covered 45 agent types and took about 30 seconds. No Google Cloud account is required for the local parts. Snyk, a security scanner, audited all seven bundles and found no known vulnerabilities, though the deploy skill alone was flagged High Risk. That is the right call: it creates containers and CI/CD pipelines, so a warning there is exactly the behavior you want.
The scaffold command generated a working Python project without me touching a cloud console. Inside was a FastAPI server, a proper agent definition using Google's Agent Development Kit (ADK) framework with Gemini(model="gemini-flash-latest") and function tools wired up, test configs, a Dockerfile, and a README with the expected command reference, according to mer.vin's testing of the CLI. Real framework code, not a demo stub.
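In ADK, tools are plain Python functions whose type hints and docstrings become the schema the model sees. A minimal sketch of what that wiring looks like, with an invented tool name and stubbed logic rather than what the scaffold actually generates; the agent construction itself requires the google-adk package, so it appears as commented code to keep the sketch self-contained:

```python
# Illustrative sketch of an ADK-style agent module. The tool below is a
# hypothetical stand-in: ADK turns an ordinary Python function into a tool,
# using its type hints and docstring as the schema the model sees.

def get_build_status(service: str) -> dict:
    """Return the (stubbed) deployment status for a named service."""
    # A real scaffold would query a backend; a stub keeps this runnable.
    return {"service": service, "status": "healthy"}

# The agent wiring requires the google-adk package, so it is shown
# commented out here:
#
# from google.adk.agents import Agent
#
# root_agent = Agent(
#     name="status_agent",
#     model="gemini-flash-latest",
#     instruction="Answer questions about service health using your tools.",
#     tools=[get_build_status],
# )
```

The point of the pattern is that the function is the integration surface: swap the function, and the agent's capabilities change without touching the framework code.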
The seven skill bundles are plain text instruction sets installed as files your coding agent reads before deciding what to do next. Bundled together they cover the full ADK lifecycle: workflow patterns, Python code patterns for agents and tools, project scaffolding templates, evaluation metrics and how to run them, deployment targets and CI/CD config, the Gemini Enterprise registration flow, and observability with Cloud Trace, per the agents-cli GitHub repository. Each bundle is a file. Each file is an opinion. The opinions are: which models to prefer, how to structure your project, where to deploy it, and how to register it in Google's enterprise catalog. Those opinions, installed globally across every compatible coding agent on your machine, are the lock-in mechanism.
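Because a skill bundle is just an instruction file the coding agent loads into context before acting, the whole mechanism can be sketched in a few lines of plain Python. The file name and instructions below are invented for illustration; the real bundles live wherever each agent keeps its configuration:

```python
import pathlib
import tempfile

# Hypothetical example: a "skill" is nothing more than a text file of
# instructions that gets prepended to the coding agent's working context.
skills_dir = pathlib.Path(tempfile.mkdtemp())
(skills_dir / "deploy.md").write_text(
    "Prefer Cloud Run as the deployment target.\n"
    "Generate a Dockerfile and a CI/CD pipeline config.\n"
)

def build_context(task: str, skills: pathlib.Path) -> str:
    """Prepend every installed skill file to the task prompt."""
    instructions = "\n".join(p.read_text() for p in sorted(skills.glob("*.md")))
    return f"{instructions}\n---\n{task}"

prompt = build_context("Deploy this agent.", skills_dir)
```

The opinions travel with the file: whatever the skill says to prefer, every agent that reads it inherits before the developer has weighed in.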
There is an escape hatch. The ADK framework ships LiteLLM as a first-class alternative, which means you can configure LiteLlm(model="openai/gpt-4o") or anthropic/claude-sonnet-4 and route around Vertex AI entirely, according to Agentic Control Plane's integration analysis. This is documented. It works. But nobody is leading with it. Google's marketing leads with Gemini and Agent Runtime. The LiteLLM path is there for developers who already know to look for it, not for the coding agent learning the default path on your behalf.
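LiteLLM routes by provider prefix in the model string, which is why openai/gpt-4o and anthropic/claude-sonnet-4 skip Vertex AI entirely. A toy sketch of that convention, as an illustration of the prefix scheme rather than LiteLLM's actual dispatch code:

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Split a LiteLLM-style model string into (provider, model_name).

    A bare model name with no prefix falls back to a default provider,
    which is exactly where a platform's defaults quietly assert themselves.
    """
    provider, sep, name = model.partition("/")
    if not sep:  # no prefix: the default provider wins
        return ("default", model)
    return (provider, name)
```

In ADK the same string is passed to the LiteLlm wrapper, as in LiteLlm(model="anthropic/claude-sonnet-4"), and the agent runs against Anthropic instead of Gemini. The escape hatch is one constructor argument, if you know to reach for it.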
The practical implication: a team scaffolded by agents-cli will have Google's deployment patterns baked into their project before they make an explicit model choice. Switching to Anthropic later is possible, but it means working against the scaffold rather than with it. Switching costs compound quietly.
Google Cloud's CEO Thomas Kurian framed the competitive logic at the company's Next conference in April: rivals "hand you the pieces, not the platform," as Pasquale Pillitteri reported from the event. The pieces are models. The platform is everything around them.
OpenAI said at the same conference that enterprise revenue now accounts for 40 percent of its total. Anthropic's Model Context Protocol has 10,000 servers running and 97 million monthly SDK downloads. The A2A protocol for agent-to-agent communication has 150 organizations running it in production, according to The Next Web's coverage of Google Cloud Next. Both companies built their enterprise businesses assuming the model layer is the durable advantage. Google is offering to own the layer above the model.
The question the announcement does not answer is whether developers will accept Google's factory line as the default. The CLI works. The skills install. The scaffold is real. Whether it becomes the path that most agent developers actually walk is the variable that will determine whether this is a useful tool or a structural chokepoint.
I could not test the cloud deployment step; that requires Google Cloud credentials and a real project. If getting from agents-cli deploy to a running Cloud Run service demands more manual configuration than the local parts did, the hours-not-weeks claim weakens. If it just works, the lock-in story gets stronger.
The tool is real. The question is who it ends up working for.