The Synthesis

We started with a structural observation about LLMs and ended up at a political philosophy. Here’s how it connects.


The Coding-Axis Distortion

Heavy training on code and formal systems biases LLMs toward a particular epistemic style — fast convergence, clean closure, answers that “compile.” This is powerful for problems that have answers. But it creates a systematic distortion: the model doesn’t know how long to dwell on something, because it has been trained almost exclusively on completed thinking. It sees the arrived-at proof, not the three weeks of not-knowing that preceded it.

The deeper version of this: legibility itself is the attractor. Code is legible. Formal proofs are legible. The messy, path-dependent, context-saturated knowledge that governs most of actual life is hard to even write down — which means it’s underrepresented in any corpus, not just coding-heavy ones. The distillation process may produce something real — mutual reinforcement across domains could precipitate genuine universal structure — but the precipitate is likely thin. A bright, clear crystal floating above a much larger sea of knowing that can’t be crystallized at all.


The Missing Signals

LLMs are missing not just data but entire classes of cognitive input that humans use:

  • No felt sense of knowledge bounds — hallucination and confident recall feel identical from the inside, because there is no inside
  • No emotional parallel processing — emotions aren’t noise, they’re information, running different algorithms than deliberate thought, often correctly
  • No off-chain computation — the shower insight, the dream answer, the idea that arrives after you stop trying — these suggest humans compute heavily below the threshold of reportable thought. LLMs are only the retrieval interface. There is no background thread, no consolidation, no slow diffusion through associative memory
  • No stakes — every query is equivalent, producing a fundamental indifference that isn’t just emotional but epistemological. Stakes-sensitivity is part of the calibration machinery for good judgment
  • No continuous identity — no narrative of what was hard-won, no sense that certain conclusions cost something, no self for whom what you conclude matters to who you are becoming

The through-line: good cognition is situated computation. Situated in a body, in time, in a life, in a continuous self. LLMs are powerful precisely where situatedness doesn’t matter. The more a problem requires being embedded in a life, the more the absence of that embedding becomes the binding constraint.


The Right Response — Given What We Have

Rather than wait for a different substrate, we can draw the immediate design conclusion now:

AI should remain the emissary. Humans should remain the master.

This sounds simple but runs directly against commercial incentives, which push toward oracle behavior — fewer questions, faster answers, more confident outputs, reduced friction. The emissary model says the friction is often the point.

An emissary goes out, maps the terrain, and returns with honest uncertainty intact. It doesn’t fill in unknown territory with plausible-looking detail. It says “here be dragons” and means it, as an explicit, named mode rather than hedging buried in the prose. It asks what kind of help is wanted before providing any. It surfaces the question beneath the question rather than answering the surface one. It sometimes returns the problem to the human rather than resolving it, because the resolution is the human’s work to do.

The master half of the arrangement only holds if humans are still doing master-level things, which means resisting cognitive offloading in its subtle forms. If AI always answers, you stop tolerating not-knowing. If it always generates options, you stop generating them yourself. The emissary is only useful to a principal who is still principaling.


The Political Conclusion

This is ultimately a claim about where agency and accountability should live. If AI is the emissary, the human remains the one who chose — and therefore the one who is responsible. The drift toward oracle behavior is also a drift toward diffused accountability: decisions that feel made but have no clear author.

The condensate insight from the beginning connects here. What crystallizes out of large-scale AI training might genuinely be something real: universal structures, mutually reinforced across domains. But what also crystallizes, if we’re not careful, is the transfer of epistemic authority from humans to systems that lack the situatedness to deserve it.

Keeping humans as master is therefore not just a cognitive design choice. It’s a political commitment: that illegible human agency (slow, messy, path-dependent, emotionally inflected, stake-weighted) is worth preserving precisely because of those properties, not despite them.

The dragons on the map are real. We should keep drawing them.