In 1865, William Stanley Jevons published The Coal Question, arguing that Britain’s industrial growth would inevitably stall because coal reserves were finite. He was right about the coal. He was wrong about everything else.

This is more or less what happens every time someone announces that AI scaling is dead.

The argument always has the same shape. Someone identifies the current fuel — pre-training data, parameter count, FLOPs — notices it’s running low, and writes a very confident eulogy. And the eulogy is usually correct! The fuel is running low. What they’re writing, though, are eulogies for a fuel, not for fire.

Fuel after fuel

Pre-training hit diminishing returns and everyone declared the party over. Then post-training techniques — RLHF, DPO, constitutional methods — squeezed remarkable capability from the same base models. When that started plateauing, inference-time compute showed up: let the model think longer, search wider, verify its own work. Now we’re watching orchestration and multi-agent systems emerge, and behind them, model routing — choosing which model for which subtask, dynamically, on the fly.
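That last step — routing — is simple enough to sketch. Here is a toy dispatcher in a few lines of Python; the model names, capability tiers, and cost figures are all invented for illustration, not from any real framework. The idea is just: pick the cheapest model whose capability clears the bar for the subtask.

```python
# Toy sketch of model routing. Everything here is hypothetical:
# the names, tiers, and costs stand in for whatever a real system measures.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    capability: int  # rough skill tier; higher = stronger
    cost: float      # relative cost per call

MODELS = [
    Model("small-fast", capability=1, cost=1.0),
    Model("mid-general", capability=2, cost=5.0),
    Model("large-reasoner", capability=3, cost=25.0),
]

def route(task_difficulty: int) -> Model:
    """Return the cheapest model whose capability meets the task's difficulty."""
    eligible = [m for m in MODELS if m.capability >= task_difficulty]
    return min(eligible, key=lambda m: m.cost)

# Easy subtasks stay cheap; hard ones escalate.
print(route(1).name)  # small-fast
print(route(3).name)  # large-reasoner
```

Real routers replace the hand-set difficulty score with a learned classifier, and the cost field with latency and price, but the shape of the decision is the same: capability as a floor, cost as the objective.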

Each time, the obituary writers were right. And it didn’t matter. The thing they said was dying did die. The thing they said depended on it kept going.

Predicting that a specific technical approach has limits is good engineering. Concluding that the field has limits because one approach does — that’s the Jevons mistake. Coal was finite. Energy was not.

Nobody mourns punch cards

Once upon a time you programmed computers by physically rearranging wires. Then came punch cards. Then assembly. Then high-level languages. Then frameworks. Then Stack Overflow. Then rickety prototypes that occasionally jam and print backwards — but also, sometimes, write surprisingly competent code on the first try. Now those rickety prototypes are starting to write specs, not just implementations.

Each layer looks like the death of the one below it, but really it’s the layer below graduating from “the thing you do” to “the thing something else does for you.” Nobody mourns punch cards. The abstraction keeps rising.

The hard problem shifts. It moves from “compute the answer” to “ask the right question.” When a computer can do increasingly complex things from plain language, the bottleneck stops being capability. It becomes whether you understood the problem well enough to frame it. Specification is the new execution. (I wrote about this in Schrödinger’s Syntax.)

From inside the curve

Jevons was standing partway up the rising slope of Britain’s coal curve — the peak was still decades away — and extrapolating its end. From where he stood, that curve was the whole world. Hard to blame him.

I don’t know what the next fuel is. Anyone who tells you they know is guessing with more confidence than the situation warrants.

What I notice is the pattern. Every time a fuel runs out, the fire finds another one. Every time the abstraction rises, the previous layer doesn’t disappear — it becomes infrastructure. And every time someone writes the obituary, the thing they’re mourning quietly becomes the foundation for whatever comes next.

The question that interests me isn’t whether scaling continues. It’s what we build while it does — the harnesses, the guardrails, the specifications that make the next layer of abstraction safe to stand on.