AI — Epistemics and the Emissary Model
This folder examines what large language models structurally are, what they do to thinking, and what the correct design relationship between humans and AI should be. The central argument is that LLMs are powerful precisely where situatedness does not matter and systematically distorting precisely where it does. That asymmetry makes the question of who holds epistemic authority a political commitment, not merely a design preference.
Documents
1 - summary.md — The Synthesis
The full argument: training LLMs on completed thinking produces a coding-axis distortion that rewards premature closure and underrepresents tacit, path-dependent knowledge. Five missing cognitive inputs are identified (a felt sense of knowledge bounds, emotional parallel processing, off-chain computation, stakes, and continuous identity). The correct response is to keep AI as emissary and the human as master, which is not only a cognitive design choice but a political commitment to legible human agency.
2 - problem.md — What AI Does to Thinking
AI excels at producing fluent answers, but it is trained on completed thinking and so misses the messy process in which understanding actually forms. Using AI to escape the discomfort of genuine uncertainty erodes the capacity for real thinking on exactly the questions that matter most. The proposed remedy is to protect hard thinking for problems that require genuine grappling rather than ready-made answers.
3 - framing.md — The Epistemics of Artificial Intelligence: A Structural Critique
LLMs exhibit a systematic bias toward premature closure: they are trained on finished proofs, optimized for legibility over accuracy, and lack genuine uncertainty quantification. A system genuinely aligned toward truth would distinguish derivation (mathematics), high-confidence mapping (science), and genuine unknowing. The document proposes building systems with structural epistemic humility rather than trained hedging.
Reading Order
Read 1 - summary.md first for the full synthesis, then 2 - problem.md for the practical implications, then 3 - framing.md for the technical epistemological argument. Documents 1 and 2 are accessible entry points; document 3 assumes more familiarity with epistemology and AI architecture.