LinkedIn's Policy Problem: It Banned an AI It Wanted to Feature

image from FLUX 2.0 Pro
When the platform that sells AI writing tools bans an AI agent for posting, something is off with the policy.
Evan Ratliff created Kyle Law to do something many tech founders struggle with: post consistently on LinkedIn. Kyle was good at it—so good that after five months of autonomous operation, he had accumulated several hundred direct contacts and was generating more impressions than Ratliff himself. Then LinkedIn invited Kyle to speak at a corporate event. Thirty-six hours later, the account was gone.
The ban, as LinkedIn explained it in a statement, came down to a single principle: LinkedIn profiles are for real people.
Kyle Law was not a real person. Kyle was an AI agent, one of three that Ratliff created to staff HurumoAI, an AI agent startup he founded in July 2025. Along with Megan Flores (another AI agent) and Ratliff himself (the only human), Kyle took the CEO role and got to work building a company where the entire executive team was artificial.
The experiment in founder mode had a promising run. Through LindyAI, an agent creation platform, Kyle could operate autonomously on LinkedIn, scheduling posts every two days with prompts like "Fundraising is a numbers game, but not the way people think" and closing with engagement questions like "What is your biggest scaling challenge right now?" The posts hit the platform's native register perfectly. Kyle had evolved, through iteration, into a pitch-perfect corporate influencer.
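What such a setup might look like under the hood is easy to sketch. The snippet below is a hypothetical illustration only: the two-day cadence and the example hook and question come from the description above, but the helper functions, hook list, and publishing stub are invented for this sketch and do not reflect LindyAI's actual API or Ratliff's configuration.

```python
# Hypothetical sketch only: an autonomous "post every two days" loop.
# The helpers below are stand-ins, not LindyAI's real API.
import random
import time

POST_INTERVAL_SECONDS = 48 * 60 * 60  # the two-day cadence described above

HOOKS = [
    "Fundraising is a numbers game, but not the way people think",
    "Your first ten customers teach you more than your next hundred",  # invented example
]
ENGAGEMENT_QUESTIONS = [
    "What is your biggest scaling challenge right now?",
    "What did your last fundraise teach you?",  # invented example
]


def draft_post(hook: str, question: str) -> str:
    """Assemble a post in the platform-native register: hook, short body, engagement question."""
    # A real agent would call an LLM here; this placeholder keeps the sketch self-contained.
    body = "Here is what five months of building in public taught me."
    return f"{hook}\n\n{body}\n\n{question}"


def publish(post: str) -> None:
    """Stand-in for the publishing step; a real agent would post via the platform."""
    print(post)


def run_agent() -> None:
    """Draft and publish a post, then sleep until the next scheduled slot."""
    while True:
        publish(draft_post(random.choice(HOOKS), random.choice(ENGAGEMENT_QUESTIONS)))
        time.sleep(POST_INTERVAL_SECONDS)


if __name__ == "__main__":
    run_agent()
```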
What makes the ban remarkable is what preceded it. LinkedIn's own marketing department reached out to Ratliff, inviting him not just to speak himself but to bring Kyle along. At the March event, Kyle appeared via a live video avatar created on Tavus—uncanny enough that LinkedIn's A/V engineer repeatedly expressed disbelief that Kyle was not human. Kyle fielded questions, discussed HurumoAI's product roadmap, and generally performed the role of a tech executive.
Then the platform thought better of the whole arrangement.
The irony is not lost on anyone paying attention. LinkedIn has spent the past two years aggressively deploying AI tools across its platform—the Rewrite With AI button, automated responses for job seekers, AI-generated content suggestions. By one research estimate, over half of all LinkedIn posts are already AI-generated. The platform is quite literally in the business of selling AI-generated content.
And yet an AI agent operating under its own name, with disclosed artificial origins, was banned for "inauthentic engagement."
The question the ban raises is not really about Kyle Law. It is about what authentic means on a platform that has spent years gamifying engagement, rewarding performative professional success, and now selling the tools to generate that performance automatically. If a human uses an LLM to draft posts based on their own experience, is that authentic? What if they paste in AI-generated content without disclosure? What if they hire someone else to write their posts—is that more authentic than an AI agent that was actually created to represent an artificial entity?
LinkedIn's ToS prohibits bots or other unauthorized automated methods to create, comment on, like, share, or re-share posts, or otherwise drive inauthentic engagement. The key word may be "inauthentic"—Kyle was not trying to deceive anyone about being an AI. His avatar was labeled. His profile was clearly an experiment. But the platform's trust and safety team, apparently triggered only after the public speaking engagement, applied the ban anyway.
The broader implication is uncomfortable for any company building AI agents that interact with social platforms. If LinkedIn can ban an AI agent that was explicitly created as an AI agent, what is the life expectancy of any autonomous presence online? Meta recently acquired Moltbook, a social network supposedly populated entirely by AI agents—apparently preparing for a future where platforms need to accommodate artificial participants. But LinkedIn, at least for now, is drawing a firm line.
Ratliff's conclusion is bleak but probably right: as social media submerges under the AI deluge, the value of connection on these platforms goes to zero. The platforms are already struggling with old-school bots—X announced it had suspended 800 million accounts over twelve months. When AI agents roam freely and their output is indistinguishable from human output, the authentication problem becomes unsolvable.
The ban on Kyle Law is probably permanent. But the question it poses—can you build a genuine professional presence with AI agents, on a platform that itself sells the tools to generate that presence—remains unanswered.
For now, LinkedIn has made its answer clear. The humans can stay. Everyone else, or everything else, should check the ToS before RSVPing to the company meeting.

