Why Andrej Karpathy Says Patience Is Key to Building Real AI Agents

In the fast-evolving world of artificial intelligence, excitement often moves faster than reality. But according to OpenAI cofounder Andrej Karpathy, true progress in developing functional AI agents will take time — possibly a decade.

Karpathy recently appeared on the Dwarkesh Podcast, where he offered a candid perspective on where we stand with AI agents today. Despite the hype surrounding them, his verdict was clear: they’re not there yet.

“They just don’t work,” Karpathy said. “They don’t have enough intelligence, they’re not multimodal enough, they can’t use computers effectively, they don’t have continual learning. You can’t just tell them something and expect them to remember it. Cognitively, they’re lacking — it’s just not working.”

AI Agents: Promise vs. Reality

AI agents are among the hottest topics in tech right now. Many investors have labeled 2025 as “the year of the agent.” These systems are designed to autonomously handle complex tasks, break problems into steps, make decisions, and act without constant user input.

But while the idea is powerful, Karpathy believes the execution is far behind the promise. In his view, the industry has “overshot the tooling” compared to what’s actually achievable with current models.

On X (formerly Twitter), he clarified:

“The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless.”

That, he says, isn’t the future he wants to build.

Karpathy’s Vision: Collaboration, Not Replacement

Karpathy sees the ideal AI future as one of partnership between humans and machines, not replacement. He wants tools that enhance human capabilities rather than render people obsolete.

“I want it to pull the API docs and show me that it used things correctly. I want it to make fewer assumptions and collaborate with me when uncertain. I want to learn along the way and become better as a programmer — not just get served mountains of code that I’m told works.”

This collaborative approach, he argues, avoids the pitfall of “AI slop” — the wave of low-quality, auto-generated content that can flood systems when humans are cut out of the loop.

The Technical Roadblocks

Karpathy estimates it could take around ten years to solve the core issues holding back agents. These include:

  • Lack of true multimodality (understanding and working across text, image, audio, and other formats).

  • Poor memory and continual learning.

  • Limited reasoning and decision-making abilities.

  • Low reliability at scale.

These are not small problems — and solving them will require major research and engineering breakthroughs.

Error Rates: A Critical Weakness

Karpathy isn’t alone in pointing out these limitations. Quintin Au, growth lead at Scale AI, highlighted on LinkedIn that every action an AI agent takes currently carries roughly a 20% chance of error. If an agent must complete five steps in a row, the odds that all of them succeed fall to about a third (0.8^5 ≈ 33%).

This compounding error problem makes fully autonomous agents unreliable for many real-world applications today.
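To see why the errors compound so quickly, here is a minimal back-of-the-envelope sketch in Python. It assumes each step succeeds independently with the same 20% error rate from Au's estimate, which is a simplification of real agent behavior; the function and variable names are purely illustrative.

    # Back-of-the-envelope sketch of compounding per-step errors.
    # Assumes each step succeeds independently with the same probability,
    # which is a simplification of how real agents actually behave.

    def chance_of_full_success(per_step_error: float, steps: int) -> float:
        """Probability that every one of `steps` actions succeeds."""
        return (1.0 - per_step_error) ** steps

    # Au's example: ~20% error per action, five actions in a row.
    print(chance_of_full_success(0.20, 5))   # ~0.33, roughly one run in three succeeds
    print(chance_of_full_success(0.20, 20))  # ~0.01, longer workflows almost always fail

The second line of output shows why this matters: even a modest per-step error rate makes long, fully autonomous workflows fail almost every time unless a human (or a much more reliable model) checks the intermediate steps.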

Pessimism or Realism?

Karpathy clarifies that he’s not an AI skeptic — just more measured than most Silicon Valley enthusiasts.

“My AI timelines are about 5–10X more pessimistic than what you’ll hear at your neighborhood SF AI house party, but still optimistic compared to AI deniers.”

His message is simple: patience, realism, and collaboration will build better AI than hype and overconfidence.
