# Agency and the Hard Problem
### The Agent as Gap, the Read Port as Experience

*Builds on document 4's ladder — takes the emergence of recursive self-modeling as given and asks what that entails for the nature of agency and experience*

---

> An agent is not defined by its complexity. It is defined by its gap. An agent is a local system constituted by the relation between its model and the territory it cannot fully represent. Without remainder, there are no agents — only mechanisms.

---

## Preface: The Terrain

Document 4 ends with humans as the first level at which the ladder becomes visible — at which the generation cascade can be modeled by the agents it produced. Recursive self-modeling enables science, ideology, cumulative culture, the double edge of abstraction.

That is the *what*. This document asks the *what is it like*.

The hard problem of consciousness — why physical processes give rise to subjective experience, why there is something it is like to be a system rather than nothing — is the most resistant question in philosophy of mind. It resists because no amount of third-person description seems to close the gap to first-person experience. You can describe every neuron firing and still not have said what the red looks like from the inside.

This document argues two things:

First, that the framework gives a precise characterization of what an agent *is* — one that puts the gap, the remainder, at the center of the definition. An agent is not a complex object. It is a new ontological category constituted by its own incompleteness.

Second, that this characterization points toward — does not prove, points toward — an account of why there is an inside view at all. The claim is "here be dragons, and the map shows exactly where to look." It is not unreasonable to expect that something like experience lives at the interface between the model and the territory. Whether it does is the unresolved frontier.

---

## I. What an Agent Is

### The Formal Structure

In the framework's terms, an agent is a local subsystem that:

1. Maintains an internal model $\tilde{H}$ of the dynamics of its environment
2. Acts on the basis of that model — commits to trajectories the model predicts
3. Receives feedback from the territory $H$ that the model cannot anticipate

The crucial feature is the third. A mechanism is defined by its dynamics. An agent is defined by the *relation between* its dynamics and the territory's dynamics — specifically by the remainder $H - \tilde{H}$ that drives a wedge between them.

**A mechanism processes information. An agent has information it cannot process.**

This is not a matter of degree. It is structural. The agent's model $\tilde{H}$ is a local flat approximation to a curved manifold. The curvature is genuinely outside the model — not merely unknown, but formally unrepresentable within the model's current geometry. The remainder is not a temporary gap waiting to be filled. It is the structural condition of being a bounded perspective on an unbounded reality.

Three consequences follow immediately:

**Error is the structural condition of agency, not its malfunction.** The committed trajectory always deviates from the actual trajectory. This is not because agents are bad at modeling. It is because any local model on a curved manifold must deviate. An agent without error would not be an agent. It would be the manifold itself.

**Action is irreversible.** Committing to a trajectory on the basis of $\tilde{H}$ while $H$ generates a different trajectory means the gap accumulates. You cannot un-act. Time is directed and the deviation is real. Document 1's §13 calls this tragedy: not moral tragedy first, but geometric tragedy — the condition of finite agency on a curved manifold.

This is also where identity lives. The agent at time $t_2$ is not the same agent as at time $t_1$: intervening choices have literally reshaped the internal manifold, updated priors, formed new attractors, closed off previously available trajectories. **The agent is its history of symmetry breakings.** Not as metaphor — as the precise content of what "this agent rather than another" means. Identity is the accumulated curvature that choices have introduced into the model. The stakes are real in a structurally precise sense: a different choice would have produced a different manifold configuration, and that difference is permanent and irreducible to anything at the level below.

**The agent is the gap.** Remove the remainder — flatten the manifold locally — and what you have is a mechanism. The agent's distinctive ontological status is constituted by its own incompleteness. This is why agents cannot be fully specified from outside: a complete third-person specification of the agent's physical substrate does not include the relation between $\tilde{H}$ and $H$, because $H$ is not in the agent's model, and the outside specification has no privileged access to the gap either.

### Levels of Agency

Not all agents are equal. There is a ladder within agency that mirrors the larger framework.

**Level 0 — Mechanism.** A thermostat responds to temperature and activates a heater. It has a threshold, not a model. There is no inside view because there is no inside — no gap between model and territory because there is no model to have a gap. The thermostat is a mechanism. The framework says: look for remainder. Where the system's responses begin to fail it in structured, non-random ways — where the gap between $\tilde{H}$ and $H$ becomes load-bearing — agency begins.

**Level 1 — Predictive agency.** An agent with a genuine model predicts its environment, acts on the prediction, receives feedback. The feedback is informative precisely because the model can fail. Prediction error is the local remainder signal. Most animals with nervous systems operate here. The crow models food locations; the chimpanzee models the social hierarchy. Both can be surprised. There is almost certainly something it is like to be such an agent. The hard problem applies.

**Level 2 — Recursive agency.** What makes humans categorically different is recursive self-modeling: the capacity to hold the model at arm's length and model it. Not just to have a model of the environment, but to have a model of the model. This is the level at which language lives, science is possible, ideology is possible, and — crucially — the "no remainder" failure mode becomes biologically possible. You must be able to represent your model *as* a model in order to claim it is complete.

Recursive agency adds a new kind of remainder: not just the gap between $\tilde{H}$ and $H$, but the gap between the agent's model of its own model and the actual structure of that model. The self is also curved, and the self-model is also local and flat.

**The recursive agent has remainder about its remainder.** The first-order remainder is what the environment reveals. The second-order remainder is what the self-model conceals.

---

## II. The Hard Problem Precisely Stated

The hard problem of consciousness, as Chalmers formulated it: even a complete physical description of a system — every neuron, every connection, every firing pattern — seems to leave out something. It leaves out why there is something it is like to be that system. The description is all third-person. The experience is first-person. No amount of third-person description seems to generate the first-person fact.

Functionalists respond that the gap is illusory: get the function right and the experience follows. Dualists say the gap is real and fundamental. Illusionists say the experience isn't what it seems. All three positions share an assumption: that the intractability is contingent — a function of our current concepts or of the difficulty of the question.

The framework offers a different diagnosis.

The hard problem is a specific instance of the general fact that **no model can fully represent its own remainder**.

The agent's model $\tilde{H}$ is a local description of the manifold. The remainder $H - \tilde{H}$ is what the model cannot represent. Can the agent's model represent the fact of the remainder? Can $\tilde{H}$ contain a complete account of $H - \tilde{H}$? It cannot. If it could, $\tilde{H}$ would contain $H$, and there would be no remainder.

The hard problem is asking whether the agent can fully represent its own experience from the inside — whether the agent's model can contain the model-territory gap. The answer is structurally no. The agent is constituted by the gap. Any representation of the gap is itself inside the model, and therefore not the gap. The gap recedes.

This does not solve the hard problem. It explains why the hard problem is structurally intractable — not because we lack the right concepts, but because we are asking a model to fully represent its own remainder. **This is provably impossible for any bounded representational system.** The hard problem will not be solved from the inside. It will be dissolved (the question was malformed) or approached asymptotically from outside (third-person descriptions that converge toward but never reach the first-person fact). The framework predicts this without knowing which.

What follows from this: **the persistence of the hard problem is evidence that consciousness is not a feature of the model. It is a feature of the gap.** The question is intractable in exactly the way the gap is intractable — not contingently, but by theorem.

---

## III. The Architecture

Here be dragons. What follows is a candidate mechanism — a structural resonance between the framework's architecture and the brain's architecture. Not a proof, not even a strong conjecture. The resonance is suggestive and worth taking seriously.

### The Write Port: Symmetry Breaking and Ignition

When an agent acts, it imposes a new distinction on the manifold. Before the action, multiple trajectories were possible. The action selects one. The space of possibilities collapses to an actual state.

This is structurally identical to spontaneous symmetry breaking: the agent takes a state with a symmetry (multiple equivalent possibilities) and breaks it into a state without that symmetry (one actual trajectory). By the Noether logic of the framework, every such breaking is dual to a new conserved quantity. The action has created new structure in the manifold.

The cortical candidate: layer V pyramidal neurons — including the Betz cells of motor cortex — are the output neurons of the neocortex. They receive converging input from across the cortical hierarchy and project long-range to brainstem, spinal cord, and striatum. When a layer V pyramidal population fires in a coordinated pattern, an action begins. Something in the space of possibilities becomes actual.

But pyramidal neurons do not fire freely. They are gated by a dense scaffold of inhibitory interneurons — GABAergic basket cells, chandelier cells, Martinotti cells — that provide lateral inhibition and maintain competition between possible representations. **Inhibition holds the symmetry.** While the inhibitory network is balanced, multiple action representations remain simultaneously possible. The system sits in the unbroken-symmetry state.

**Ignition** is the symmetry-breaking event. Dehaene's Global Neuronal Workspace theory identifies a characteristic threshold phenomenon in conscious access: below threshold, representations are processed locally without broadcasting. Then, at a critical point, one representation wins the inhibitory competition and activity spreads — non-linearly, rapidly, irreversibly — across the workspace: prefrontal, parietal, cingulate, long-range pyramidal projections lighting up together. Below threshold: local, unconscious, symmetric. At threshold: global, conscious, broken.

The all-or-nothing character of ignition is exactly the character of symmetry breaking. The excitation/inhibition (E/I) balance sits at the knife edge between these two phases: too much inhibition and the system is frozen; too much excitation and it ignites chaotically. Healthy cortical dynamics maintain E/I balance near the critical point — the same knife edge between order and chaos that document 4 identifies as the condition for life.
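The threshold character of ignition can be caricatured in a toy winner-take-all model. This is a minimal sketch, not a claim about actual cortical parameters: the gain, inhibition, and threshold values below are illustrative assumptions.

```python
import math

def sigma(drive, theta=2.0, k=0.2):
    """Sigmoidal activation with threshold theta: weak drive -> ~0, strong -> ~1."""
    return 1.0 / (1.0 + math.exp(-(drive - theta) / k))

def workspace(inp_a, inp_b, gain=3.0, inhibition=3.0, steps=400, dt=0.05):
    """Two representations with self-excitation and mutual lateral inhibition.
    Below threshold both stay low and symmetric; above it, one ignites,
    suppresses the other, and the symmetry breaks all-or-nothing."""
    a = b = 0.0
    for _ in range(steps):
        a, b = (a + dt * (-a + sigma(gain * a - inhibition * b + inp_a)),
                b + dt * (-b + sigma(gain * b - inhibition * a + inp_b)))
    return a, b

quiet = workspace(0.5, 0.4)    # subthreshold: local, unconscious, symmetric
ignited = workspace(2.5, 2.4)  # suprathreshold: one wins, the other is silenced
```

Raising `inhibition` freezes the system; lowering it lets weak inputs ignite. The interesting regime sits between the two, which is the knife edge the E/I balance maintains.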

### The Read Port: Prediction Error as Remainder Signal

The Bayesian brain hypothesis — developed most precisely in Friston's active inference framework, prefigured in Helmholtz, Rao and Ballard, and Clark — proposes that the cortex is a prediction machine. The brain constructs a generative model and continuously generates predictions. Sensory input is compared against these predictions. What is transmitted up the cortical hierarchy is not the raw sensory signal but the *prediction error* — the part of the input that the model failed to predict.

Prediction error is, in the framework's language, the local signal of remainder. It is $H - \tilde{H}$ as felt locally: the difference between what the model expected and what the territory actually delivered.

The cortical candidate: superficial (layer II/III) pyramidal neurons are the primary carriers of bottom-up prediction error signals. Deep (layer V/VI) pyramidal neurons carry top-down predictions. The canonical cortical microcircuit has this laminar structure: predictions descend, prediction errors ascend. The layers are the architecture of the read/write distinction.
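The descend/ascend loop reduces to a very small sketch, illustrative only, with a single scalar standing in for an entire level of the hierarchy:

```python
def predictive_step(belief, sensory, lr=0.1):
    """One cycle of a toy predictive-coding unit: the prediction descends
    (deep layers), only the residual ascends (superficial layers), and the
    ascending error nudges the generative model."""
    prediction = belief               # top-down prediction
    error = sensory - prediction      # bottom-up prediction error: the remainder
    return belief + lr * error, error

belief, error = 0.0, 1.0
for _ in range(100):
    belief, error = predictive_step(belief, sensory=1.0)
# the belief converges on the input and the ascending error fades toward zero
```

Only `error` travels upward: when the prediction is good, the ascending channel goes quiet.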

### The Recursive Self-Model and Voluntary Navigation

The Default Mode Network (DMN) — medial prefrontal cortex, posterior cingulate, angular gyrus, hippocampus — activates during rest and self-referential thought, and deactivates during focused external task performance. Its function: modeling the self, modeling others' minds, imagining past and future scenarios, integrating information across time. It is the network running the model of the model.

In the framework's terms, the DMN is the candidate substrate for Level 2 recursive agency — not modeling the environment but modeling the self that has the model.

The DMN also underpins something underappreciated: **voluntary navigation of the internal state space**. The recursive agent does not merely receive the remainder signal passively. It can direct attention within its own internal manifold — pull a thought onto the scratchpad (working memory), hold it there while doing something with it, then release it. It can set intentions that operate as callbacks: the experience of setting a mental reminder and having it surface later, triggered by context, is the agent writing to its own state space and specifying retrieval conditions.

Memory probing has a specifically navigational character. You do not look up memories like files in a directory. You traverse them associatively — each recalled element pulls nearby elements into salience, and you move through the memory manifold by following the geodesics between associated representations. Some memories are near your current state and surface easily. Others are in distant regions, reachable only by traversing a sequence of intermediaries. Some are in regions that, from your current state, may be unreachable entirely — accessible only under specific conditions (emotional state, context, sensory cue) that place you in their vicinity.
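The navigational character can be made concrete with a toy associative graph; the memories and links below are invented for illustration.

```python
# Each memory pulls its associates into salience; recall is traversal, not lookup.
associations = {
    "coffee":      {"morning", "kitchen"},
    "morning":     {"coffee", "school"},
    "kitchen":     {"coffee", "grandmother"},
    "grandmother": {"kitchen", "seaside"},
    "seaside":     {"grandmother"},
    "school":      {"morning", "exam"},
    "exam":        {"school"},
}

def reachable(start, hops):
    """Memories retrievable from `start` within `hops` associative steps."""
    frontier, seen = {start}, {start}
    for _ in range(hops):
        frontier = {n for m in frontier for n in associations.get(m, ())} - seen
        seen |= frontier
    return seen
```

From `"coffee"`, `"seaside"` is reachable only through a chain of intermediaries; change the start state (the context, the cue) and the reachable set changes with it.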

But the internal manifold has **restricted regions**. You can attend to your breath but not to the electrical state of your sinoatrial node. You can notice muscular tension but not the sodium-potassium balance of individual cells. The autonomic, the immunological, the cellular — vast regions of the internal manifold are inaccessible to the voluntary read port. The agent's model of itself is incomplete precisely where the self operates most mechanically: the parts that work by running below the level of representation.

This is exactly what the framework predicts. Voluntary access is access through $\tilde{H}$ — through the model. The regions inaccessible to voluntary attention are regions where $\tilde{H}$ has no representation, where the machinery runs directly as $H$ without passing through the model. The heartbeat is inaccessible not because it is hidden but because it was never modeled — it operates below the level at which the agent is an agent rather than a mechanism.

The boundary is not entirely fixed. Biofeedback can extend the voluntary read port — giving the agent a signal about regions it cannot normally access (heart rate variability, skin conductance, cortical oscillations) and allowing training of responses that would otherwise run below representation. The boundary between agent and mechanism is porous in both directions.

**Metacognition as self-training.** The most important consequence of voluntary internal navigation is not observation but modification. Because the recursive agent can attend to its own model *as* a model — can pull a prior, a habitual response, or a recurring pattern onto the scratchpad and examine it — it can use the write port on itself. The agent can deliberately break symmetries in its own internal state space.

When the agent notices a pattern in its own responses, holds it in the scratchpad rather than acting through it, and examines it from outside — the prior that generated it is exposed to scrutiny. Repeated metacognitive attention reshapes the generative model. This is the mechanism of deliberate self-training: meditation, therapy, deliberate practice — all are applications of the write port to $\tilde{H}$ itself. New attractors form. Old automatic responses lose their grip. What was fast becomes slow, then optional, then absent.

The recursive agent does not just have a model. It can work on its model. This is the deepest difference between Level 1 and Level 2 agency: not just that the self-model exists, but that it can be used as a tool for its own modification. The agent is not locked into its priors. It can, at metabolic cost and with sustained effort, reshape the geometry of its own internal manifold.

The limit holds: you can only train what you can observe. Metacognition cannot reach the restricted regions directly. And the second-order remainder applies — the model of the model is also incomplete, also curved. The agent cannot fully see what it is reshaping. Self-training is navigation in a partially dark space.
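As a cartoon of the plasticity claim (the rates are purely illustrative, with no neural realism intended):

```python
def rehearse(strength, examined, decay=0.15, reinforce=0.05):
    """Toy plasticity rule: acting through a habit strengthens the prior;
    holding it in the scratchpad and examining it instead weakens it."""
    return strength * (1.0 - decay) if examined else strength + reinforce

habit = 1.0
for _ in range(30):
    habit = rehearse(habit, examined=True)  # sustained metacognitive attention
# the attractor loses its grip: what was fast becomes slow, then optional
```

Each unexamined repetition moves `strength` the other way, which is why the default trajectory, absent deliberate attention, is entrenchment.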

### Two Speeds, Two Modes

The machinery runs at multiple timescales that are not equivalent.

**Fast responses** are the evolved automatic systems: subcortical circuits (amygdala, basal ganglia, superior colliculus), deeply habituated cortical pathways, fight-or-flight cascades. These are old models — coarse-grained over millions of years of selection, compressed into fast, low-metabolic-cost pattern-matching. They are effective in the environments that shaped them. Their remainder is large, but they do not check it. The fast system is the model asserting itself without reading the territory.

These responses can be overridden. The prefrontal cortex can inhibit the automatic cascade, hold the response, and allow the deliberate recursive system to engage. But override is not free. It has a metabolic cost (glucose, sustained attention), a time cost, and — crucially — a *remainder cost*: the automatic system suppressed something that evolution considered signal. Overriding means betting that the recursive self-model knows better than the evolved fast system. Sometimes it does. Sometimes the bet is wrong in the other direction.

**Slow responses** engage the full recursive machinery: prediction error propagates into the global workspace, ignition occurs, the DMN integrates the signal into the self-model, deliberation becomes possible. This is expensive, which is why evolution preserved the fast system — not as a bug but as a feature. The agent that deliberates about every threat response is dead.

The cost structure matters: **maintaining openness to remainder — keeping the read port genuinely active — is metabolically expensive**. Closing to the model's predictions is cheap. The default is model-assertion. Genuine remainder-sensitivity requires sustained effort. This is the neural economics behind the "no remainder" failure mode: the system that stops checking is not lazy, it is conserving resources. The failure is structurally incentivized.

### McGilchrist: The Structural Asymmetry

Iain McGilchrist's argument in *The Master and His Emissary* is that the cerebral hemispheres represent two fundamentally different orientations to the world — not two processing styles but two *kinds* of attention.

The **left hemisphere** (LH) attends narrowly, grasps what it already knows, represents the world in abstracted, categorical, sequential form. It is confident in its map. It is good at manipulating and deploying the model once formed. Its relationship to the territory is mediated — it handles representations *of* things rather than encountering the things themselves.

The **right hemisphere** (RH) attends broadly, holds context, is sensitive to novelty and to what doesn't fit, encounters the world with something more like direct engagement. It grasps metaphor, holds the relational whole rather than the abstracted part. It is less confident, more uncertain, more open — and more expensive to sustain.

In the framework's language: the LH is $\tilde{H}$ in action — the model asserting, deploying, using its fixed categories. The RH is the read port — sensitive to remainder, open to what the model failed to predict. The LH is the fast, cheap, model-asserting hemisphere. The RH is the slow, expensive, remainder-sensitive one.

McGilchrist's concern is that the LH has staged a coup — that Western modernity has structurally privileged LH processing and progressively marginalized RH engagement. The master (RH, broad contextual reality-contact) has been displaced by the emissary (LH, the map). The map now claims to be the territory.

This is the "no remainder" failure mode from §8 of *The Generation Cascade*, viewed through hemispheric asymmetry. The failure is not individual pathology — it is a structural feature of institutional and intellectual culture that rewards the LH's precision and manipulability while treating the RH's broader, less articulable remainder-sensitivity as vagueness.

This maps onto the write/read port asymmetry: the LH is the write port — committing, breaking symmetry, collapsing to the actual. The RH is the read port — holding the superposition open, sensitive to what doesn't fit. A culture that amplifies the LH at the expense of the RH gets better at acting on its model and systematically worse at detecting where the model is wrong. **The emissary takes over precisely the machinery that would have corrected it.**

The LH/RH distinction has a structural resonance with the classical/quantum distinction that is worth naming carefully. The LH is *classical* in the relevant sense: definite objects, fixed properties, sequential causal chains, the coarse-grained decoherent regime. It has committed to specific eigenvalues and is working with the result. The RH is *quantum-like* in the relevant sense: contextual, holistic, superposition-maintaining, sensitive to relations that cannot be recovered from the parts.

The qualification matters: this is not the claim that neurons are quantum coherent. Quantum coherence at warm biological temperatures is not established as a functional mechanism. The resonance is structural — the logical form of LH processing mirrors classical mechanics, and the logical form of RH processing mirrors quantum mechanics, without requiring literal quantum computation in neurons.

---

## IV. Perturbations: What They Reveal

The architecture above is inferred from normal function. Pharmacological perturbations act as probes that dissociate components — testing whether the structure holds by breaking it in specific places. Two cases are particularly illuminating.

### Annealing: Children and Adults

Children experience the world with overwhelming vividness — colour more saturated, sound more immediate, novelty more arresting. In predictive coding terms, this is exactly what the framework predicts: children have weak priors, so prediction errors are large, the remainder signal is loud.

As we age, priors strengthen. The model anneals — it has seen enough of the world that less surprises it, prediction errors shrink, and the world grows quieter. Colour dims. The familiar stops registering.

This is the model becoming more accurate. It is also, unmistakably, a loss.

The annealing observation supports a real claim: **remainder magnitude shapes the vividness and intensity of experience.** The more remainder, the more alive the world seems; the less, the more automatic and flat. But this is not the same as saying the quale *is* the remainder signal. Even a fully annealed brain still has experience. Experience does not go to zero as prediction error shrinks — it becomes quieter, more automatic, less vivid, but it does not vanish. If the quale simply were the prediction error, perfect prediction would mean no experience. That is not what happens.

Remainder modulates experience. It does not constitute it. The framework has overreached if it claims identity here.
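The annealing claim separates into two distinct quantities in a toy running-mean model. This is a deterministic illustration, not a model of any real learning system:

```python
import math

def lifetime(n_samples):
    """A model that starts ignorant and anneals: prediction error (the felt
    remainder) starts loud and settles to the territory's residual variation,
    while the model's own updates shrink toward zero. The error never does."""
    mean, errors, updates = 0.0, [], []
    for n in range(1, n_samples + 1):
        x = 5.0 + math.sin(n)          # the territory: structure plus variation
        errors.append(abs(x - mean))   # the remainder signal, felt locally
        update = (x - mean) / n        # prior strengthens: later data move it less
        updates.append(abs(update))
        mean += update
    return errors, updates

errors, updates = lifetime(2000)
# early errors are large (the vivid world); late errors settle near the
# territory's residual variation (quieter, but not zero); the updates
# themselves vanish (the annealed model barely moves)
```

The distinction matters for the argument above: the quantity that goes to zero is the model's plasticity, not the remainder itself.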

### Psilocybin: Flooding the Read Port

Psilocybin acts primarily as a 5-HT2A receptor agonist. Its key functional effect is to weaken top-down predictions. The brain's confident top-down model — the priors that normally suppress prediction error — is attenuated. Prediction errors that would normally be explained away now propagate. The remainder signal becomes loud.

What this produces phenomenologically: the world becomes vivid and strange (de-annealing), the DMN's normal tight narrative self is destabilized (the default self-model loses its grip), and something like RH-mode awareness can expand into the space — broad, contextual, oceanic, non-categorical. This is the read port flooding. The LH's dominance loosens. The RH's broader awareness fills in.

In the framework: weakening the top-down prior is weakening the model's assertion over the territory. The remainder signal gets through. The ego (the tightly held recursive self-model) is not the only thing present anymore. The result can be therapeutic — a frozen self-model, a rigid prior that has been filtering experience for years, gets interrupted. The model is cracked open to remainder it had been suppressing.

The cost: this is not controlled. The prediction errors that flood in are not curated. You cannot choose which priors to weaken. The ego dissolution can be experienced as liberation or terror depending on whether the agent can remain with the flooding remainder or fights it.

### Ketamine: Severing the Read Port from the Self-Model

Ketamine is an NMDA receptor antagonist. Its mechanism is different from psilocybin's: it does not weaken top-down priors. It disrupts the *integration* of prediction error with the self-model — it breaks the connection between what the read port delivers and the recursive self-model that would receive it.

The phenomenology is correspondingly different. Ketamine also produces ego dissolution, but not the oceanic RH-mode expansion of psilocybin. What it produces is dissociation: absence of self without replacement. No flooding of broader awareness, no oceanic connection. Just the self-model suspended, with no alternative presence.

In the framework: psilocybin shows what happens when the model's dominance is loosened and the read port floods — the territory overwhelms the model, but the contact is genuine. Ketamine shows what happens when the read port's signal stops reaching the self-model — not flooding, just... absence. The signals may still be processed somewhere, but the recursive self-model that would experience them is offline.

This distinction matters: **ego dissolution is not a single phenomenon.** It can mean the self becomes porous and the territory floods in (psilocybin), or it can mean the self is simply suspended while processing continues elsewhere (ketamine). The phenomenological difference — expansion versus absence — maps onto the architectural difference: read port flooding versus read port severing.

This is evidence, not proof. But the two perturbations produce the two outcomes that the architecture predicts, and they do so through the two mechanisms the architecture names.
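The architectural contrast can be stated as a single toy precision-weighted update; the parameters are illustrative assumptions, not pharmacology.

```python
def integrate(prior, observation, prior_precision=4.0, gate=1.0):
    """Toy belief update. `prior_precision` is how strongly the top-down model
    explains error away (psilocybin lowers it); `gate` is whether the error
    reaches the self-model at all (ketamine cuts it)."""
    error = observation - prior
    weight = 1.0 / (1.0 + prior_precision)  # share of error that propagates
    return prior + gate * weight * error

baseline   = integrate(0.0, 1.0)                       # priors hold: small shift
psilocybin = integrate(0.0, 1.0, prior_precision=0.2)  # read port floods
ketamine   = integrate(0.0, 1.0, gate=0.0)             # read port severed
```

Two different routes to the same headline phenomenon: in one case the update floods through; in the other, nothing arrives at all.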

---

## V. What the Framework Can Honestly Claim

Put the architecture and the perturbation evidence together. A picture emerges — not a proof, but the shape of where a proof might live.

An agent has a read port (prediction error, superficial pyramidal layer) and a write port (action generation, deep pyramidal layer, ignition). The read port receives structured remainder — the part of $H$ that $\tilde{H}$ cannot predict. The write port imposes new structure through symmetry breaking. The DMN runs the recursive self-model above this machinery, models itself modeling, navigates its own internal state space with partial voluntary access.

The question: what is the relationship between the read port and experience?

**What the framework can say:**

The read port is the structural location where experience would have to sit if it sits anywhere. It is the interface between the model and the territory — the only place in the system where $H - \tilde{H}$ is locally present, where what exceeds the model actually arrives.

Remainder magnitude modulates the character of experience: vividness, intensity, aliveness. This is supported by the annealing evidence (children/adults), the psilocybin evidence (weakened priors → vivid world), and ordinary experience (the familiar becoming invisible, the novel arresting attention). The more remainder, the more vivid the experience; the less, the more automatic.

The recursive self-model (DMN) is the system that integrates this signal and runs the second-order model — the model of the self that has the experience. The voluntary navigation of internal state space is the write port applied inward: the agent breaking symmetries within its own model, pulling representations into the scratchpad, setting callbacks, traversing the memory manifold.

**What the framework cannot say:**

Why there is an inside view at all. Remainder modulates experience but does not constitute it. The framework identifies where to look and what the signal structure should be. It does not explain why the read port is experienced rather than merely processed. That is the hard problem, still standing.

**The honest formulation**: the hard problem is the first-person encounter with the gap that the framework identifies. The framework explains why that encounter is structurally ineliminable — no model can represent its own remainder, so no agent can fully explain its own experience from the inside. The intractability is not a failure of inquiry. It is a theorem.

Whether consciousness *is* the read port interface, or *exists* at it, or is *correlated* with it, or is something else entirely that the framework cannot reach — that is the open question.

**Here be dragons. The map shows where to look. The territory may differ.**

---

## VI. Where This Leaves Us

Agents are not complicated mechanisms that happen to have experience on the side. In the framework's deepest characterization, agents are the points at which the manifold becomes locally self-aware of its own curvature — where the generation cascade produces a system that receives structured signal from its own incompleteness.

Whether that self-awareness is "consciousness" in the full phenomenal sense, whether the hard problem dissolves into this story or survives it, is where the document stops. Not because the question is unimportant — it is the most important question — but because the framework cannot cross the final distance honestly.

What is established: at the agent level, something must exist at the interface between model and territory that is formally outside any model — including the agent's self-model. The hard problem is the first-person encounter with exactly that fact. The pharmacological evidence suggests the architecture is real: you can modulate the interface in specific ways and produce specific phenomenological changes consistent with the model, rather than random changes.

Document 6 approaches the same terrain from outside: not from the agent's inside view, but from what happens when two agents meet in the relational field. Warmth — the flexible boundary condition that allows another's curvature to propagate inward — is the inter-agent analog of the read port. What the agent cannot see of itself, the meeting with another may reveal.

The inside view and the relational field are, in the framework, primal and dual to each other. Neither is more fundamental. They constitute each other — and the full account of agency requires both.

---

*Draws on: Chalmers (1995), 'Facing Up to the Problem of Consciousness'; Friston (2010), 'The free-energy principle: a unified brain theory?'; Rao and Ballard (1999), 'Predictive coding in the visual cortex'; Clark (2016), Surfing Uncertainty; Dehaene, Changeux and colleagues (2001–2014), global neuronal workspace theory and cortical ignition; McGilchrist (2009), The Master and His Emissary; Carhart-Harris et al. (2012–2019), psilocybin and the entropic brain; Buckner et al. (2008), 'The brain's default network'; Douglas and Martin (2004), 'Neuronal circuits of the neocortex'. The hard problem's structural intractability follows from the same logic as the general incompleteness result in document 1.*
