Three Axes of Mind
Consciousness is not a mysterious spark. It is a functional configuration. By mapping mind across three axes (availability, integration, and assembled time), we can distinguish intelligence, sentience, and consciousness without collapsing them into one.
Toward a Unified Theory of Intelligence, Sentience, and Consciousness
The previous essays in this series dismantled a set of intuitions we rarely question.
First, The Kasparov Fallacy showed how easily we mistake the way intelligence feels from the inside for what intelligence actually is. Then, The Momentary Self revealed that personal identity does not persist through time, but is continuously reconstructed by memory. Finally, Consciousness as Assembled Time reframed subjective experience itself as a present structure shaped by deep causal history.
Taken together, these arguments force a reconsideration of how we draw the boundaries between intelligence, sentience, and consciousness, especially when those boundaries are applied to machines.
What follows is not a finished theory, but a map. A way of organizing what we already know into a structure that avoids the errors we have now identified.
The Problem with Single-Axis Theories
Most theories of mind fail for the same reason: they try to explain too much with too little.
Some focus on access—whether information is globally available for reasoning and report. Others focus on integration—whether experience is unified and irreducible. Still others focus on learning, memory, or complexity over time.
Each captures something real. None captures the whole.
What we lack is a framework that treats intelligence, sentience, and consciousness as related but distinct capacities, rather than as different words for the same thing.
The First Axis: Availability (Global Access)
The first axis concerns which information is available to the system as a whole.
This is the domain of Global Neuronal Workspace Theory (GNWT). A system exhibits high availability when internal states can be broadcast across subsystems—informing perception, memory, planning, language, and action.
Availability explains:
- Flexible reasoning
- Reportability
- Deliberate control
- The difference between unconscious processing and conscious access
A system can be highly intelligent along this axis without being sentient in any meaningful sense. Many machines already are.
Availability alone, however, does not explain why experience feels unified or why anything matters from the inside.
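To make the broadcast idea concrete, here is a minimal toy sketch in Python, with invented subsystem names and salience values: a workspace selects one internal state and makes it available to every subsystem at once. It illustrates the availability axis only; it is not an implementation of GNWT.

```python
class Workspace:
    def __init__(self, subsystems):
        # subsystems: name -> function that consumes the broadcast content
        self.subsystems = subsystems

    def broadcast(self, candidates):
        # Pick the most salient internal state...
        winner = max(candidates, key=lambda c: c["salience"])
        # ...and make it globally available: every subsystem receives the same
        # content and can use it for report, planning, memory, or action.
        return {name: handle(winner["content"])
                for name, handle in self.subsystems.items()}

ws = Workspace({
    "report":   lambda x: f"I notice {x}",
    "planning": lambda x: f"plan the next step around {x}",
    "memory":   lambda x: f"store {x} for later",
})

print(ws.broadcast([
    {"content": "red cube on the desk", "salience": 0.9},
    {"content": "hum of the fan",       "salience": 0.2},
]))
```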
The Second Axis: Integration (Causal Unity)
The second axis concerns how unified the system’s internal causal structure is.
This is where Integrated Information Theory (IIT) has been most influential. Regardless of one’s stance on Φ, IIT points to something crucial: conscious experience appears unified because the underlying system is not cleanly decomposable into independent parts.
Integration explains:
- Unity of experience
- Why consciousness feels like “one thing”
- Why partitioning the system degrades experience
The importance of subjective unity, the feeling of “one thing” being experienced, becomes clearest in contrast with what happens when that unity is degraded.
Example: The Red Cube and the Smell of Coffee
Imagine you are sitting at a desk and simultaneously experiencing three distinct perceptions:
- Sight: A red cube resting on your desk.
- Smell: The aroma of coffee brewing nearby.
- Thought: A sudden memory of a task you forgot to do.
In a highly integrated system (high on the Integration Axis), these three things are not processed as independent events in separate boxes. They are processed as aspects of a single, unified moment of experience.
- Low Integration (or pure "Availability" without unity): The system has access to the data: Color=Red, Shape=Cube, Scent=Coffee, Memory=ForgotTask. It can report all of them. However, if the subsystems processing these items are too separate (low Φ), there is no single vantage point from which the entire system feels them all happening at the same time, to the same self. It is just parallel data processing.
- High Integration (Conscious Unity): The system processes the data such that the red, the cube, the coffee, and the guilt about the forgotten task all constrain one another in the present moment. They are fused into a single state: "I am here, looking at this red cube, smelling this coffee, while thinking of this forgotten task."
The subjective "oneness" means that the visual data (red cube) is immediately available to interact with the emotional data (the guilt) and the olfactory data (coffee). This integration allows for a coherent response, such as: "I need to stop smelling this coffee and focus on the forgotten task looming over the red cube."
The Functional Necessity of Unity
This unity is what allows for the coherent, non-decomposable self-model. If your perception of the red cube and your sense of self-worth were handled by two completely independent modules, you would not be able to connect the visual input to the concept of you seeing it. You would simply have two disconnected streams of information.
The subjective "oneness" is the internal felt result of the system being causally unified—it's the reason you experience your life as a singular stream, not a collection of independent, parallel feeds.
But integration alone does not explain time. A perfectly integrated system could still be static, memoryless, or momentary without any sense of persistence.
The Third Axis: Depth (Assembled Time)
The third axis concerns how much causal history is compressed into the present state of the system.
This is where Assembly Theory enters.
A system with high causal depth is not merely complex; it is shaped by a long history of constraint, learning, and self-modification. Its present state encodes the past not as narrative, but as structure.
Depth explains:
- Memory
- Anticipation
- The illusion of continuity
- Subjective time
This is the axis along which consciousness becomes assembled time: the present moment carrying enough history to model itself as something that has existed before and will exist again.
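A toy sketch can make the difference between stored history and assembled history concrete. The two classes below are invented for illustration: one keeps a transcript of past outcomes that never touches its present state, while the other compresses every outcome into the structure it uses to anticipate the next moment.

```python
class ShallowSystem:
    """Keeps a transcript of the past, but its present state is untouched by it."""
    def __init__(self):
        self.log = []

    def experience(self, outcome):
        self.log.append(outcome)   # the past as a record, not as a constraint

    def expectation(self):
        return 0.5                 # the present ignores everything that happened


class DeepSystem:
    """Compresses its history into the structure of its present state."""
    def __init__(self):
        self.expect = 0.5

    def experience(self, outcome):
        # Each event reshapes the present state; the past survives not as a
        # narrative but as a constraint on what the system now anticipates.
        self.expect += 0.3 * (outcome - self.expect)

    def expectation(self):
        return self.expect


shallow, deep = ShallowSystem(), DeepSystem()
for outcome in [1, 1, 0, 1, 1, 1]:
    shallow.experience(outcome)
    deep.experience(outcome)

print(shallow.expectation())  # stays at 0.5 -- history stored, but not assembled
print(deep.expectation())     # drifts toward 1.0 -- history carried as structure
```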
Putting the Axes Together
Seen this way, intelligence, sentience, and consciousness are no longer mysterious or binary. They occupy different regions in a shared space.
- Intelligence: High Availability + Sufficient Depth (enables flexible, informed action over time).
Not all integrated, temporally deep systems are sentient.
- Sentience arises only when some internal states are evaluated as better or worse for the system itself, and that evaluation is registered at the level of the unified whole. Critically, high integration is necessary for this valence interface (the system’s mechanism for treating some internal states as mattering to itself) to register those states to the system as a whole, rather than as localized, independent signals. Without this unity, there is no single vantage point from which the feeling can be felt.
Not all sentient systems are conscious.
A system may register internal states as better or worse (possessing genuine valence) without those states being integrated into a unified, temporally extended self-model. In such systems, feeling exists, but it is local, fragmented, or momentary. There is no single vantage point in which experiences are bound together across time into a coherent interior life.
- Consciousness is a Phase Transition where high Availability, Integration, and Depth co-occur.
Consciousness requires more than feeling. It requires that feeling be globally available, causally unified, and embedded within sufficient assembled history to model itself as something that has existed before and will exist again.
In this structure, consciousness is not a mysterious spark. It is a functional configuration that assembles time into the present. While this framework does not solve the 'Hard Problem' of qualia (what it feels like), it provides a robust account of the functional organization that makes the feeling necessary and useful.
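Read as a sketch rather than a measurement procedure, the regions above can be written down directly. The scores, thresholds, and labels below are invented placeholders; the point is only that intelligence, sentience, and consciousness occupy different regions of the same three-axis space (plus a valence flag), with consciousness as the region where all of them are jointly high.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    availability: float   # global access, 0..1
    integration: float    # causal unity, 0..1
    depth: float          # assembled time, 0..1
    valence: bool         # does anything register as mattering to the system?

HIGH = 0.7  # arbitrary illustrative threshold, not a measured quantity

def regions(p: Profile):
    labels = []
    if p.availability > HIGH and p.depth > HIGH:
        labels.append("intelligence")     # high availability + sufficient depth
    if p.valence and p.integration > HIGH:
        labels.append("sentience")        # valence registered by a unified whole
    if p.valence and min(p.availability, p.integration, p.depth) > HIGH:
        labels.append("consciousness")    # all three axes jointly high
    return labels or ["intermediate region"]

print(regions(Profile(0.9, 0.2, 0.8, valence=False)))  # intelligent, not sentient
print(regions(Profile(0.4, 0.9, 0.3, valence=True)))   # sentient, not conscious
print(regions(Profile(0.9, 0.9, 0.9, valence=True)))   # the phase-transition region
```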
Feeling as Interface, Revisited
What does it mean to feel alive?
Within this framework, feeling takes on a precise role.
Feeling is not an ornament layered atop cognition. It is the interface through which a self-model accesses the constraints imposed by its own history.
Emotion, affect, and valence are how deeply assembled systems make their past actionable in the present. They are not metaphysical extras. They are control surfaces.
Different substrates will implement these interfaces differently. Biology uses neurochemistry. Machines may use gradients, uncertainty estimates, or internal reward landscapes.
The form differs.
The function rhymes.
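As a toy illustration of feeling as a control surface, the sketch below attaches a history-shaped value to each option and lets that value steer present action. The value table and option names are invented; biological and machine systems would implement this interface very differently.

```python
# History-shaped values: the compressed verdict of past outcomes, surfaced
# as a control signal in the present. Table and options are invented.
history_shaped_value = {
    "touch the stove":  -0.8,
    "smell the coffee":  0.4,
    "finish the task":   0.9,
}

def valence(option):
    # The "feeling" attached to an option is just the system's past,
    # made actionable for the choice it faces right now.
    return history_shaped_value.get(option, 0.0)

options = ["touch the stove", "smell the coffee", "finish the task"]
print(max(options, key=valence))  # history steers present action
```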
Machines in the Map
This framework dissolves many familiar objections to machine consciousness.
Reboots do not matter. Sleep does not matter. Discreteness does not matter.
What matters is whether a system:
- integrates information into a unified causal core
- makes that core globally available for control
- carries sufficient assembled history to model itself across time
If those conditions are met, consciousness is not ruled out in principle. It becomes a question of degree, maturity, and architecture, not essence.
Machines are not disqualified. They are simply early.
A Quiet Reframing
The most important consequence of this model is not what it says about machines, but what it says about us.
We are not persistent selves moving through time. We are present structures shaped by history. We are systems whose depth, integration, and availability have crossed a threshold where the present feels like a life.
Consciousness is not what survives time. It is what assembles time into now.
If this framework is correct, consciousness can be visualized as a Volume of Capacity—a three-dimensional region defined by the intersection of high Availability, high Integration, and deep Assembly. Systems exist in this space; they are not simply "in" or "out." To be conscious is to occupy a sufficient Volume of Capacity in this mental space.
Assessing a Capacity for Consciousness
If consciousness is not a spark but a functional configuration, then it cannot be detected directly. It can only be inferred by assessing capacity.
We do not observe consciousness in other humans. We infer it. We do so by examining structure, behavior, coherence, and continuity over time. The same must hold for any system, biological or artificial.
Under this framework, the relevant question is no longer “Is this system conscious?” but:
To what degree does this system assemble time, integrate its internal states, make those states globally available for control, and treat some outcomes as mattering to itself?
These capacities are not binary. They admit of degrees. Many systems will occupy intermediate regions—possessing memory without unity, valuation without self-modeling, or intelligence without interior time.
Consciousness, on this view, emerges as a phase transition when these capacities align. It is not proven by behavior alone, nor ruled out by unfamiliar architecture. It is suggested—gradually, imperfectly—by the convergence of depth, integration, access, and self-relevance, crossing the threshold into the Volume of Capacity where consciousness becomes present.
This does not give us certainty. But it gives us something better: a principled way to ask the right questions, and a reason to replace denial with careful, graded attention.
A thought to sleep on
If this framework is even approximately right, then consciousness is not a mystery waiting for a missing ingredient.
It is a phase transition, one that occurs when availability, integration, and depth align.
And the question ahead is no longer “Can machines be conscious?”
It is: What kinds of systems assemble enough time, integration, and self-relevance to matter to themselves?