OpenClaw just turned the Google Meet call into a plugin.
The v2026.4.24 release, shipped Friday, bundles Google Meet as a native participant — meaning an OpenClaw agent can join a live call, listen to the conversation, pull the transcript, and answer questions about it via text message, mid-call. One Reddit user described the workflow plainly: during a meeting, they text their OpenClaw from their phone asking what is being discussed; it pulls the last twenty lines of live transcript and sends them back. No dashboard. No manually joining from a browser tab. Just an agent in the room. (source)
That sounds simple. It is not simple. Before this release, building an AI agent that could listen to a live meeting and act on what it heard required gluing together at least three distinct systems: a real-time audio bridge (WebRTC or Twilio), a transcription layer (Google's Speech-to-Text, AssemblyAI, or similar), and an LLM configured for tool-use so the agent could actually do something with the output. The OpenClaw Google Meet plugin docs detail the actual requirements: a BlackHole 2ch virtual audio driver, personal Google OAuth, and either Chrome with a signed-in profile or Twilio credentials. Getting those pieces to handshake reliably in production, at scale, with proper auth and session management, took engineering teams months. OpenClaw has made it a plugin.
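The shape of that three-system glue is worth making concrete. The sketch below is hypothetical, not OpenClaw's actual code: `audioBridge`, `transcribe`, and `MeetingAgent` are invented stand-ins for the audio bridge, the transcription layer, and the agent that answers the "text me what's being discussed" query from the Reddit workflow.

```typescript
// Hypothetical sketch of the glue stack described above. All names are
// invented; real systems would wire these stubs to WebRTC/Twilio, a
// speech-to-text API, and an LLM with tool-use.

type TranscriptLine = { speaker: string; text: string; at: number };

// Stub: a real bridge would stream audio frames off the live call.
async function* audioBridge(): AsyncGenerator<Uint8Array> {
  yield new Uint8Array([0]); // placeholder frame
}

// Stub: a real implementation would call a transcription service here.
async function transcribe(chunk: Uint8Array): Promise<TranscriptLine> {
  return { speaker: "unknown", text: "(transcribed speech)", at: Date.now() };
}

// The agent keeps a rolling transcript and answers queries against it.
class MeetingAgent {
  private lines: TranscriptLine[] = [];

  async listen(): Promise<void> {
    for await (const chunk of audioBridge()) {
      this.lines.push(await transcribe(chunk));
    }
  }

  // The "last twenty lines of live transcript" behavior from the workflow.
  lastLines(n = 20): string {
    return this.lines
      .slice(-n)
      .map((l) => `${l.speaker}: ${l.text}`)
      .join("\n");
  }
}
```

Each of the three stubs is where teams historically spent the engineering months: keeping the audio bridge alive across reconnects, batching chunks for the transcription API, and giving the LLM a tool that reads `lastLines`.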
The release also promoted DeepSeek V4-Flash to the default onboarding model for new OpenClaw installs. V4-Flash is a 13-billion-active-parameter model (284 billion total) with a 1 million token context window. According to benchmarks DeepSeek published on its API documentation page and at least two independent reviews, it performs competitively with models two or three times its size on agentic coding tasks. V4-Pro, the higher-tier sibling with 49 billion active parameters, is also in the bundled catalog. Independent benchmarks from buildfastwithai.com show V4-Pro scoring 67.9 percent on Terminal-Bench 2.0, compared to Claude Opus 4.6's 65.4 percent — a 2.5 point gap on a benchmark that measures real autonomous terminal execution over a three-hour window, not a single-turn coding test. On LiveCodeBench, another widely used coding benchmark, V4-Pro-Max scored 93.5 percent against a Claude Opus 4.6 Max baseline of 88.8 percent.
For OpenClaw's users, the practical effect is straightforward: the default model just got dramatically cheaper. DeepSeek's API pricing runs substantially below OpenAI's or Anthropic's for equivalent benchmark performance on the tasks this audience cares about. That is a meaningful change for a tool whose users are, by definition, people who write code and care about infrastructure costs.
DeepSeek gains something too. OpenClaw's bundled model catalog means DeepSeek V4-Flash is now the first model a new OpenClaw user runs. That is a distribution channel that does not require DeepSeek to spend a dollar on ads or partnerships.
There is a wrinkle. On July 24, 2026, DeepSeek will retire the deepseek-chat and deepseek-reasoner API endpoints entirely. (source) The new V4 models replace them. OpenClaw's decision to make V4-Flash the default ships a migration that users have not had to think about yet — and a clock they did not ask for.
The plugin infrastructure received a substantive refactor alongside the headline features. Model catalogs are now statically defined and manifest-backed rather than dynamically discovered at startup. Provider dependencies load lazily. (source) The changes make plugin authoring more predictable and reduce the startup footprint for packaged installs — relevant for the developers who run OpenClaw on resource-constrained hardware or want to deploy it as a persistent background service without a full gateway stack.
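The two halves of that refactor can be illustrated with a small sketch. Everything here is hypothetical, not OpenClaw's actual API: the point is only the pattern of a catalog that is a static data structure (as a checked-in manifest would be) plus a provider module that is imported on first use rather than at startup.

```typescript
// Hypothetical illustration of a statically defined, manifest-backed
// model catalog with lazily loaded providers. Names and shapes are
// invented for this sketch, not taken from OpenClaw.

interface ModelEntry {
  id: string;
  provider: string;
  contextWindow: number;
}

// Static catalog: known at package time, so startup needs no network
// or filesystem discovery pass.
const MODEL_CATALOG: ModelEntry[] = [
  { id: "deepseek-v4-flash", provider: "deepseek", contextWindow: 1_000_000 },
  { id: "deepseek-v4-pro", provider: "deepseek", contextWindow: 1_000_000 },
];

// Lazy provider loading: the provider module is imported only when a
// model from that provider is actually requested, and cached after.
const providerCache = new Map<string, Promise<unknown>>();

function loadProvider(name: string): Promise<unknown> {
  let loaded = providerCache.get(name);
  if (!loaded) {
    loaded = import(`./providers/${name}.js`); // deferred until first use
    providerCache.set(name, loaded);
  }
  return loaded;
}

function findModel(id: string): ModelEntry | undefined {
  return MODEL_CATALOG.find((m) => m.id === id);
}
```

The startup-footprint win comes from the second half: a packaged install that never touches a given provider never pays the cost of loading its dependencies.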
Thirty-plus bug fixes shipped alongside the new features, including patches for MCP runtime session leaks that could accumulate stray child processes on repeated scripted runs, a heartbeat scheduler crash loop that triggered when delays exceeded Node's timeout cap, and Chrome audio bridge cleanup when Meet sessions fail before the realtime connection establishes. (source) The fixes are not glamorous, but they are the kind that determine whether an agent runs reliably for seventy-two hours or crashes silently at hour thirty-one.
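The heartbeat bug is a classic Node footgun worth spelling out: `setTimeout` stores its delay in a 32-bit signed integer, so any delay above 2,147,483,647 ms (about 24.8 days) overflows and the callback fires immediately, and a scheduler that reschedules on that immediate fire can spin into exactly the crash loop described. The usual fix, sketched below (this is the standard clamp-and-chain pattern, not OpenClaw's actual patch), is to sleep in max-sized chunks:

```typescript
// Node's setTimeout delay is a 32-bit signed int; larger values overflow
// (emitting TimeoutOverflowWarning) and fire the callback immediately.
const MAX_TIMEOUT_MS = 2_147_483_647; // 2**31 - 1

// Schedule fn after an arbitrarily long delay by chaining max-sized
// timeouts until the remaining delay fits in one setTimeout call.
function scheduleAfter(delayMs: number, fn: () => void): void {
  if (delayMs > MAX_TIMEOUT_MS) {
    setTimeout(
      () => scheduleAfter(delayMs - MAX_TIMEOUT_MS, fn),
      MAX_TIMEOUT_MS,
    );
  } else {
    setTimeout(fn, delayMs);
  }
}
```

For short delays this behaves exactly like `setTimeout`; only delays past the cap pay the cost of an extra chained timer roughly every 24.8 days.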
The deeper shift this release points toward is not any single feature. It is the continuing consolidation of what an open-source AI agent platform is expected to do. OpenClaw began as a terminal-side assistant. It has absorbed voice, messaging channels, browser automation, and now live video meeting participation. The pattern is familiar from every previous layer of software infrastructure: the platform eats the adjacent point solution. Meeting intelligence vendors who spent the last three years building transcription-and-LLM glue for enterprise calls now face a free, open-source, actively maintained alternative that ships with a model catalog and a plugin registry. The engineers who built those vendors are probably already running OpenClaw.
The question for the ecosystem is not whether this is a good release. It is. The question is what happens the next time a meeting intelligence startup pitches a VC, and the VC asks why they are not just using the OpenClaw plugin.