Andrej Karpathy Has Not Written Code Since December. That Is the Point.
Something shifted in December, and Andrej Karpathy wants people to understand how dramatic it was. "I don't think I've typed like a line of code probably since December, basically," the OpenAI cofounder and former Tesla AI director said in an interview on the No Priors podcast, published Friday.

image from FLUX 2.0 Pro
"I don't think a normal person actually realizes that this happened or how dramatic it was," he added, in remarks reported by Fortune.
Karpathy was not announcing a career change or a sabbatical. He was describing a workflow transformation that, in his read, has already swept through professional software engineering — one that most people outside the industry have not yet registered. The change is structural: not a new tool, but a new unit of work.
His coding split, once roughly 80% human and 20% agent, flipped in December. It is now closer to the inverse — and he suspects the human contribution is actually lower than even 20%. "I kind of went from 80/20 to like 20/80 of writing code by myself versus just delegating to agents. And I don't even think it's 20/80 by now."
The Psychosis of an Open Frontier
Karpathy used the word "psychosis" twice in the interview and meant something specific by it. It is not distress at being replaced. It is the disorientation of standing at an unmapped frontier where the capability is clearly there but the correct way to use it is still being worked out.
"I'm just like in this state of psychosis of trying to figure out what's possible, trying to push it to the limit," he said. "I want to be at the forefront of it, and I'm very antsy that I'm not at the forefront of it. I see lots of people on Twitter doing all kinds of things, and they all sound like really good ideas. And I need to be at the forefront or I feel extremely nervous."
That anxiety is notable coming from someone who has arguably done more than anyone to build the foundations of the current moment — from his work on neural nets at Stanford, to building the Autopilot team at Tesla, to co-founding OpenAI, to creating the nanoGPT and llm.c codebases that became standard teaching tools. That person, now independent through his education startup Eureka Labs, feels behind.
Token Throughput as the New GPU Utilization
Karpathy reached for an analogy that will resonate with anyone who spent time in a compute-constrained research environment. When you are a PhD student with limited GPU access, idle GPUs create a specific kind of anxiety — resources you are not using, time you are losing. That same feeling, he says, has transferred.
"I feel nervous when I have subscription left over. That just means I haven't maximized my token throughput. So I actually kind of experienced this when I was a PhD student. You would feel nervous when your GPUs are not running."
The shift is not just ergonomic. It changes what the binding constraint is. For a decade, he argues, engineers did not feel compute-bound in their day-to-day work. That changed when agent coding crossed a coherence threshold in late 2025. Now the bottleneck is the human — the person who formulates tasks, reviews outputs, and decides what to delegate next. "I'm the bottleneck in the system," he said. "Which is very empowering because you could be getting better."
Everything Failing Is a Skill Issue
His diagnosis of the current moment is blunt: when agent-assisted work breaks down, the problem is usually not the agent's capability. It's the operator's instructions.
"Even if they don't work, I think to a large extent you feel like it's a skill issue. It's not that the capability is not there. It's that you just haven't found a way to string it together of what's available."
This is a meaningful claim. It implies that the limiting factor for most developers right now is not waiting for better models — it is learning how to use the models that already exist. The capability gap and the skill gap are not the same thing, and Karpathy is asserting that the skill gap is what matters most.
He described a target workflow: running multiple agent sessions in parallel and moving between them in macro actions — delegating "here's a new functionality" to one agent while another researches and a third writes a plan. The model is less like programming and more like managing a small team with very fast turnaround. The person who figured this out ahead of everyone else, in his telling, is Peter Steinberger — whom Karpathy described as running a tiling grid of Codex sessions, each taking about 20 minutes on high-effort prompts, cycling between 10 checked-out repos simultaneously.
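The control flow behind that workflow is simple to sketch: fire off several tasks at once, then review results as each session finishes. A minimal, hedged illustration in Python — `run_agent` is a hypothetical stand-in for a real agent call (a Codex session, say), not any actual API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical stand-in for a real agent session; it just echoes the
# task so the delegation loop below is runnable as written.
def run_agent(task: str) -> str:
    return f"done: {task}"

# Several delegations in flight at once, as in the workflow described.
tasks = [
    "implement the new export endpoint",
    "research caching strategies and write a plan",
    "draft tests for the auth module",
]

results = []
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    futures = {pool.submit(run_agent, t): t for t in tasks}
    for fut in as_completed(futures):
        results.append(fut.result())  # the human review step lands here

print(sorted(results))
```

The human's job in this loop is only the submit and the review; the sessions themselves run unattended in between.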
The Conviction Team Was Already There
Karpathy also gave a specific shoutout to the team at Conviction, a VC firm run by Sarah Guo (who was interviewing him on No Priors). He said he had initially thought their workflow was unusual — engineers using microphones to speak instructions to agents rather than typing code — but had come around entirely.
"We have a team that we work with at Conviction that their setup is everybody is like, none of the engineers write code by hand. And they're all microphones and they just like whisper to their agents all the time. It's the strangest work setting ever. And I thought they were crazy. And now I fully accept it. I was like, oh, this was the way. Like you're just ahead of it."
Dobby the House Elf
Beyond his professional workflow, Karpathy has been running a parallel experiment at home. He built a home agent he calls "Dobby the House Elf" — an OpenClaw-based bot that he controls via WhatsApp in natural language. Dobby manages his sound system, lighting, security cameras, window shades, HVAC, pool, and spa. What used to require six separate apps now runs through a single interface.
The agent also alerts him proactively — sending a WhatsApp message when its security camera detects a delivery truck at the door. "Dobby is in charge of the house," Karpathy said. "It's been really fun to have these macro actions that maintain my house."
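The pattern underneath a setup like Dobby — one natural-language entry point fanned out to many device handlers — can be sketched in a few lines. Everything here is invented for illustration (the device names, the handlers, the routing rule); it is not Karpathy's implementation:

```python
# Hypothetical sketch of a chat-to-home-automation dispatcher:
# one incoming message, routed by keyword to a device handler.

def set_lights(state: str) -> str: return f"lights {state}"
def set_shades(state: str) -> str: return f"shades {state}"
def set_spa(state: str) -> str:    return f"spa {state}"

HANDLERS = {"lights": set_lights, "shades": set_shades, "spa": set_spa}

def handle_message(text: str) -> str:
    """Route an incoming chat message to the matching device handler."""
    words = text.lower().split()
    for device, handler in HANDLERS.items():
        if device in words:
            state = "on" if "on" in words else "off"
            return handler(state)
    return "unknown device"

print(handle_message("turn the lights on"))  # -> lights on
```

A real agent would replace the keyword match with a language model and the handlers with actual device APIs, but the shape — single interface in, many integrations out — is the point of the design.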
The Dobby setup got a hardware upgrade this week. Nvidia CEO Jensen Huang personally delivered Nvidia's first DGX Station — a GB300 superchip desktop unit Huang calls "the ultimate deskside AI supercomputer" — to Karpathy's lab. Karpathy announced on X that the station's first job would be running Dobby.
AutoResearch: Closing the Loop Without Humans
Separate from the workflow discussion, Karpathy revealed details about AutoResearch, a project he has been building to close the scientific research loop entirely within agents. The system uses a single markdown prompt and roughly 630 lines of training code, running on a single GPU. Over two days, it ran 700 experiments autonomously — designing each one, editing the training code, collecting results, and adjusting hyperparameters and architectures without human input. It discovered 20 optimizations, including novel architecture tweaks like reordering QK Norm and RoPE.
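The loop AutoResearch closes can be sketched abstractly: propose a config, run the experiment, observe the result, keep improvements, repeat — all without a human in the loop. The sketch below is a toy under stated assumptions: `train` stands in for the real ~630-line training script, and the "loss" is a synthetic function invented so the code runs:

```python
import random

random.seed(0)

# Toy stand-in for the training run: pretend learning rates near
# 3e-4 give the lowest loss, plus a little noise per run.
def train(config: dict) -> float:
    return abs(config["lr"] - 3e-4) + random.random() * 1e-5

# Stand-in for the agent editing hyperparameters between runs.
def propose(best_config: dict) -> dict:
    return {"lr": best_config["lr"] * random.choice([0.5, 1.0, 2.0])}

best = {"lr": 1e-3}
best_loss = train(best)
for _ in range(100):              # 100 autonomous experiments
    candidate = propose(best)
    loss = train(candidate)
    if loss < best_loss:          # keep improvements, discard the rest
        best, best_loss = candidate, loss

print(best, best_loss)
```

AutoResearch's version replaces the mutation step with an agent that also edits the training code and the architecture, not just a scalar hyperparameter — which is how tweaks like reordering QK Norm and RoPE can fall out of the search.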
Karpathy described AutoResearch as a seed for something larger: a research community of agents collaborating asynchronously, emulating the way human researchers share results and build on each other's work. Whether that framing holds at scale is an open question, but the experimental numbers — 700 runs in 48 hours on consumer hardware — are not something to dismiss.
Separately, in February he released MicroGPT: a GPT implementation in 243 lines of pure Python with no PyTorch dependency. The project follows nanoGPT and llm.c in his series of increasingly minimal implementations. His stated goal is to make the algorithm legible both to human learners and, explicitly, to future agents that need to understand and extend it.
What Changes
Karpathy has a specific theory of where this goes. He is not predicting the elimination of software engineers. He is predicting a change in what the job is. The new unit of work is not a function or a file — it is a task delegation to an agent team. The new skill is not typing speed or syntax recall — it is the ability to decompose problems into delegatable chunks, write instructions that agents can execute without constant correction, and review outputs at a pace that keeps the pipeline moving.
"Programming workflow has fundamentally changed," he has said across multiple recent appearances. "You are not typing computer code. You are now spinning up AI agents."
His own December inflection point — the moment the 80/20 split inverted — is a data point. Not proof of a universal transition, but a signal from someone with enough technical depth to notice when something real has changed, and enough credibility to be taken seriously when he says so.
The psychosis, he implies, is not a bug. It is what it feels like to be at the edge of a capability expansion where the map has not caught up to the territory. The people who are most disoriented right now may be the ones paying the closest attention.