
We Are Near the End of the Exponential

Dario Amodei's new Dwarkesh Patel interview stopped me in my tracks. His private 2017 'Big Blob of Compute' doc predated The Bitter Lesson by two years — and his prediction about coding? We're already there.
[Header image: two intertwining exponential curves, one representing raw AI capability and one economic diffusion, spiraling upward at different rates from a glowing data-center origin point.]

I'm sharing this with everyone at work, and now I'm sharing it with you.

Dwarkesh Patel just dropped a new interview with Dario Amodei, CEO of Anthropic. I also posted about it on X.

"We Are Near the End of the Exponential"

This stopped me in my tracks:

"What has been the most surprising thing is the lack of public recognition of how close we are to the end of the exponential. To me, it is absolutely wild that you have people — within the bubble and outside the bubble — talking about the same tired, old hot-button political issues, when we are near the end of the exponential."

He's not talking about hype. He's talking about the actual capability curve — smart high school student → smart college student → beginning PhD-level work → and in coding, already beyond that. He's frustrated that almost nobody seems to notice.

The Big Blob Came Before The Bitter Lesson

Most people in AI know Rich Sutton's "The Bitter Lesson" (2019). It's practically required reading. The core argument: general methods that leverage computation are ultimately the most effective. Stop trying to be clever. Scale wins.

What blew my mind is that Dario wrote a private document in 2017 called "The Big Blob of Compute Hypothesis" that arrived at essentially the same conclusion — two years earlier, before transformers had taken over the field.

In his own words:

"It wasn't about the scaling of language models in particular. When I wrote it, GPT-1 had just come out. That was one among many things. Back in those days there was robotics. People tried to work on reasoning as a separate thing from language models, and there was scaling of the kind of RL that happened in AlphaGo and in Dota at OpenAI."

"Rich Sutton put out 'The Bitter Lesson' a couple years later. The hypothesis is basically the same. What it says is that all the cleverness, all the techniques, all the 'we need a new method to do something', that doesn't matter very much."

The field was fragmented across robotics, game-playing RL, reasoning systems, and early language models. Dario looked at all of it and said: none of the specifics matter. Just the blob. He listed seven things that actually matter:

  1. Raw compute — how much you have
  2. Quantity of data
  3. Quality and distribution of data — it needs to be broad
  4. Training duration
  5. A scalable objective function — pre-training's next-token prediction is one; RL rewards are another
  6. Normalization and conditioning — keeping the numerics stable so the blob flows in a "laminar" way
  7. Related stability and engineering concerns

Everything else — the clever architectures, the novel techniques — gets eaten by the blob.

"We're Already Almost There for Software Engineering"

This is the part that hit me hardest as someone who works in Cloud AI and talks to engineers every day. Dario lays out a spectrum of milestones that most people conflate:

"About eight or nine months ago, I said the AI model will be writing 90% of the lines of code in three to six months. That happened... But that's actually a very weak criterion. People thought I was saying that we won't need 90% of the software engineers. Those things are worlds apart."

He breaks down the actual progression:

  • 90% of code written by models → Already happened
  • 100% of code written by models → Big difference from 90%
  • 90% of end-to-end SWE tasks (compiling, environments, testing, memos) → Coming fast
  • 100% of today's SWE tasks done by models → Doesn't mean engineers are out of a job
  • 90% less demand for SWEs → Will happen, but further down the spectrum

And Anthropic is living this internally. When Dwarkesh pushed back on whether productivity gains are real or just vibes, Dario didn't mince words:

"We're under an incredible amount of commercial pressure... There is zero time for bullshit. There is zero time for feeling like we're productive when we're not. These tools make us a lot more productive."

"We have engineers at Anthropic who don't write any code."

That's not a prediction. That's a present-tense statement from the CEO of the company that makes Claude.

The Revenue Curve Tells the Story

If you need numbers instead of words:

  • 2023: $0 → $100 million
  • 2024: $100 million → $1 billion
  • 2025: $1 billion → $9-10 billion
  • January 2026 alone: Added another few billion

10x per year. And Dario says the curve hasn't bent yet.
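Taking the figures above at face value (I'm using $9.5 billion as the midpoint of the stated 2025 range, which is my assumption, not a number from the interview), the year-over-year growth factors do work out to roughly 10x:

```python
# Annual revenue figures from the interview, in billions of USD.
# 2025 uses the midpoint of the stated $9-10B range (my assumption).
revenue = {2023: 0.1, 2024: 1.0, 2025: 9.5}

years = sorted(revenue)
for prev, curr in zip(years, years[1:]):
    factor = revenue[curr] / revenue[prev]
    print(f"{prev} -> {curr}: {factor:.1f}x")  # 10.0x, then 9.5x
```

A curve that multiplies by ~10 annually is far steeper than typical enterprise-software growth, which is the point Dario is making.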

Country of Geniuses in a Data Center

When Dwarkesh asked for a concrete timeline on reaching what Dario calls "a country of geniuses in a data center" — AI systems that match or exceed Nobel Prize winners across domains — Dario put it at one to three years:

"I have a strong view — 99%, 95% — that all this will happen in 10 years. I think that's just a super safe bet. I have a hunch — this is more like a 50/50 thing — that it's going to be more like one to two, maybe one to three."

Not ten years. Not five. One to three. And on coding specifically, he thinks we'll be at end-to-end automation in one to two years. "There's no way we will not be there in ten years."

Two Exponentials, Not One

This might be the most useful mental model in the entire interview. Dario describes two curves happening simultaneously:

"I think everything we've seen so far is compatible with the idea that there's one fast exponential that's the capability of the model. Then there's another fast exponential that's downstream of that, which is the diffusion of the model into the economy. Not instant, not slow, much faster than any previous technology, but it has its limits. When I look inside Anthropic, when I look at our customers: fast adoption, but not infinitely fast."

This is the framing I've been missing. It's not "AI will change everything overnight" and it's not "AI is overhyped." It's two fast exponentials — capability and adoption — running at different speeds. The capability curve is screaming ahead. The adoption curve is chasing it, faster than any technology before it, but still bound by reality: legal reviews, security compliance, change management, explaining to the person two levels below you why this matters.

If you're in enterprise, you're living on that second curve right now. And if you're not actively working to close the gap between those two exponentials, you're falling behind whether you feel it or not.
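A minimal sketch of that two-curve framing. The growth rates below are invented for illustration, not numbers from the interview; the only point is that two exponentials with different rates produce a gap that itself grows exponentially:

```python
# Toy model: two exponentials growing at different rates.
# Both rates are illustrative assumptions, not figures from the interview.
CAPABILITY_RATE = 3.0  # capability multiplies 3x per year (assumed)
DIFFUSION_RATE = 2.0   # adoption multiplies 2x per year (assumed)

capability, diffusion = 1.0, 1.0
for year in range(1, 6):
    capability *= CAPABILITY_RATE
    diffusion *= DIFFUSION_RATE
    gap = capability / diffusion  # ratio grows as (3/2)**year
    print(f"year {year}: capability={capability:>6.1f} "
          f"adoption={diffusion:>5.1f} gap={gap:.2f}x")
```

Both curves are "fast" in absolute terms, yet the ratio between them, the gap an organization has to close, keeps compounding. That matches Dario's observation that adoption is faster than any previous technology and still lags capability.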

Why I'm Sharing This With Everyone at Work

I'm a Principal Architect for Cloud AI at GE Aerospace. I talk to engineers and architects every day about what these models can and can't do. The gap between what's actually happening inside companies like Anthropic and what most people in enterprise tech think is happening is enormous.

We're not preparing for a future disruption. We're in the middle of it. The exponential is ending, and most people haven't looked up from their desks to notice.

If you're in tech and you haven't watched this interview, block out two hours this weekend. It's the most important conversation happening right now about where all of this is going.

Full interview on YouTube | Dwarkesh Podcast transcript | My X post