Inverted Promptable Distance
LLMs are not simply a new level of abstraction in programming. They are a change in kind. They float next to the stack.
They enable programming in prompts across abstraction levels, working alongside the abstraction ladder (Martin Fowler) rather than on it. And they move abstraction into non-determinism, breaking with the stack entirely. The deterministic stack holds because each level is defined by a grammar, each translator maps one grammar to another, and every level respects the grammar of its target. Invalid code will not compile. You cannot stack a non-deterministic layer atop that chain. Abstraction is powerful because it is rigid, and rigid it must be: the higher the abstraction, the higher the efficiency, but the lower the flexibility. Birgitta Böckeler calls it an "invisible force field".
No grammar inherently constrains the LLM, but its output must still obey the grammar it targets; otherwise it will not flow through the stack to running code. The LLM is grammar-free; the stack is not. Point an LLM higher up the stack and everything gets cheaper: less prompting, less compute (fewer tokens, lower latency, fewer cycles), and a lower risk of the agent failing. Consistency improves too: more behaviour is hard-coded into the stack, leaving less for the model to infer. But the force field kicks in, limiting what can be built to what the abstraction anticipated.
Implementation gets simpler at any layer. But by how much? Böckeler proposes the idea of "promptable distance". A high-level prompt such as "build me an order-management app" may suffice at the highest level of abstraction. Push the same prompt down: how much more detail would it need to produce the same result in TypeScript or Python? The promptable distance is the gap between the detail a prompt carries and the detail the target abstraction level demands; it captures the additional expertise the prompter must supply.
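A toy sketch makes the idea concrete. The prompts and the word-count proxy below are hypothetical illustrations, not Böckeler's definition: the same intent, pushed down a level, has to carry far more detail.

```python
# Toy sketch: the same intent, prompted at two abstraction levels.
# The "distance" proxy here is simply the extra wording the lower
# level demands; real promptable distance is a conceptual measure,
# not a word count.

high_level_prompt = "Build me an order-management app."

low_level_prompt = (
    "Build me an order-management app in TypeScript: a REST API with "
    "endpoints for orders, customers, and products; a PostgreSQL schema "
    "with foreign keys between them; input validation; and a React "
    "front end with list, detail, and edit views for each entity."
)

def detail(prompt: str) -> int:
    """Crude proxy for the detail a prompt carries."""
    return len(prompt.split())

# Extra detail the prompter must supply at the lower level.
promptable_distance = detail(low_level_prompt) - detail(high_level_prompt)
print(promptable_distance)
```

The higher the target abstraction, the smaller this gap, which is exactly why pointing the LLM higher up the stack is tempting.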
Böckeler suggests rethinking abstraction to reduce promptable distances. We are plugging AI into an abstraction stack designed for humans. Is there a better way to fill in the blanks, one that is both AI-friendly and powerful enough to express complex, custom requirements?
Promptable distance asks how detailed a prompt must be to reach a given level. Flip it. What happens when the abstraction is more specific than the intent, when it overbuilds or builds the wrong way? The inverted promptable distance measures the unwanted specificity that the (forcibly opinionated) abstraction imposes on the output: the cost of working within a rigid framework when intent and framework diverge. You want a CRUD app with a particular UX flow. You pick a high abstraction level (say, a low-code platform), figuring the promptable distance will be short: a high-level prompt, a working app, minutes. But the platform enforces screen and navigation patterns, widget behaviours, and an overly complex data model. The force field kicks in: not because the LLM cannot generate what you want, but because the abstraction will not bend. Worse, layering abstractions forces LLMs to implicitly manage deeper layers (crucially, layers they cannot observe) and leaves them uncertain about how the immediate grammar will translate into the running application. The contract is detached from the final asset. The end-to-end feedback loop breaks down.
The gap between what you prompted and what the abstraction allows is the inverted promptable distance. Think of it as the "fixing" prompt you would write to trim and correct the app back to the requirements. That gap was tolerable when humans were the bottleneck: a developer could work around constraints by using escape hatches to drop down levels. LLM improvements reduce code-generation costs across layers but increase the rigidity costs of the upper layers, not only because of expressiveness gaps, but also because LLMs expand the input space through open natural-language prompts, which invite expectations that controlled visual programming environments and point-and-click interactions never did. The user can specify anything, but code generation is constrained by the abstractions. They can ask for oranges, but they'll get spotless bananas. The gap from oranges to bananas is the inverted promptable distance, and sometimes you cannot turn bananas back into oranges.
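Using the same crude word-count proxy as before (hypothetical and illustrative only, with an invented low-code scenario), the inverted promptable distance can be pictured as the size of the fixing prompt, with the caveat that some of it may be unexpressible within the abstraction at all:

```python
# Toy sketch: the fixing prompt a user would write after a rigid
# abstraction over-delivered. Its length is a crude proxy for the
# inverted promptable distance; part of it may not be fixable at
# the chosen abstraction level.

original_prompt = "Build a CRUD app with a single-page checkout flow."

fixing_prompt = (
    "Remove the wizard-style multi-step checkout the platform generated "
    "and collapse it into one page. Drop the auto-generated audit tables "
    "from the data model. Replace the enforced master-detail navigation "
    "with a flat list."
)

def detail(prompt: str) -> int:
    """Crude proxy for the detail a prompt carries."""
    return len(prompt.split())

# Specificity the abstraction imposed that the user now has to undo.
inverted_promptable_distance = detail(fixing_prompt)
print(inverted_promptable_distance)
```

Note the asymmetry: promptable distance is paid up front, in the prompt; the inverted distance is paid afterwards, in corrections, and only if the abstraction bends at all.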
The failure mode is specific: applying an LLM at an abstraction level whose rigidity exceeds the flexibility your requirements demand. The goal is not just to provide layers that are easier for LLMs to target. It's to provide layers flexible enough not to impose unwanted specificity. Thinner. More composable. More open to variation, customization, and extension.
In short, vibier.