Schrödinger’s Syntax
English is the new coding language, they said. What could go wrong, they said.
When I ask you to “draw a bow”, do I mean for you to:
- Pull back a longbow
- Play a violin
- Sketch a ribbon
- Select a bow card from a deck of cards
- Illustrate an actor bowing
- Depict the bow of a ship drawing water
English — and to be fair, every human language I speak — is terribly unspecific. Take a word and look up a “strong” antonym for it on thesaurus.com. Often your original word won’t even appear as an antonym of its own antonym.
Engineers didn’t spend the last seventy years building progressively better programming languages just for fun. Okay, maybe a little for fun. But primarily to make writing instructions for computers safer, easier, and more deterministic.
First raw machine code — engineers as manual compilers, holding the physical architecture of the machine in their heads. Then compiled languages — we wrote the how in human-readable syntax and the compiler handled the hardware. Then declarative contracts — a .proto file, a SQL query, the what instead of the how. At every step, we moved up an abstraction layer.
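That jump from the how to the what is easy to see side by side. A minimal sketch using Python’s built-in sqlite3 module and made-up data (the table and values are illustrative, not from the original text):

```python
import sqlite3

# In-memory database with a toy table (hypothetical example data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 40.0), (2, 125.0), (3, 99.5)])

# The "how": fetch every row and filter by hand, step by step.
big_orders_imperative = []
for row in conn.execute("SELECT id, total FROM orders"):
    if row[1] > 100:
        big_orders_imperative.append(row)

# The "what": declare the result we want; the engine decides how to get it.
big_orders_declarative = conn.execute(
    "SELECT id, total FROM orders WHERE total > 100"
).fetchall()

assert big_orders_imperative == big_orders_declarative
```

Same intent, one abstraction layer apart: the loop spells out the procedure, the WHERE clause states the contract.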
Most engineers don’t handle ones and zeros anymore. We don’t write bytecode or assembly. Those who do work on chip design, compilers, or very specialized infrastructure. The rest moved up an abstraction layer. That’s not job loss. That’s job evolution.
Now we’re at the next layer. AI translates human intent into code, the same way a C++ compiler translates logic into machine code. It is not magic. It is the next compiler.
And this compiler takes English as input.
The problem is that English is terribly imprecise. That’s not a failure of the language. It was never meant to be Turing-complete. It was meant for communication between emotional actors — humans. Human literature is full of authors poking fun at the vagueness of human language:
“Draw me a sheep.” “No, not from a deck of cards — with pen and paper.” “No, no — that sheep is too sickly.” “No, no — that sheep is too old.” “No, no — that sheep has too many horns.”
“What is the airspeed velocity of an unladen swallow?” / “European or African?”
“Begin at the beginning, and go on till you come to the end: then stop.” (technically correct, practically useless)
Accurate specification of intent has always been the hard part of engineering. We hid it behind layers of human interpretation — organizations, sprints, code review, the entire SDLC itself. Those layers were imperfect proxies for “did we specify what we actually meant?” The companies that scaled learned this the hard way — design docs at Amazon, readability reviews at Google, chaos engineering at Netflix — parallel evolution toward the same conclusion: make intent explicit.
AI doesn’t create the specification problem. It strips away the scaffolding that was masking it.
Spec carefully. The compiler is listening. It does not do context.
Related reading:
- Don’t Hate the Agent, Hate the Process — on what happens when imprecise specs meet execution
- Stop Asking Einstein to Run Your Datacenters — on making sure your intent reaches the right tool