There Is No Extra Ingredient: How Wittgenstein Dissolves the Case Against Machine Minds

Searle was right that syntax isn't enough. But his diagnosis became a design specification, and Wittgenstein showed that the demand for a hidden extra behind competent use was always empty. The same error haunts both the understanding debate and the consciousness debate. There is no extra ingredient.

John Searle was right.

In 1980, he proposed the Chinese Room argument, and its central claim was correct: a system that manipulates symbols according to rules, without any connection between those symbols and what they mean, does not understand anything. It processes syntax without semantics. It shuffles marks on paper.

This was devastating to the artificial intelligence of his era. The systems Searle was describing, symbolic AI, worked exactly the way his argument said they did. They operated on explicit rules, formal grammars, hand-coded ontologies. A symbolic system that encountered the word "grief" would look it up in a table, match it to a category, and follow whatever instruction the programmer had attached. It had no relationship to grief. It had a relationship to a string of characters. Searle's diagnosis was precise: these systems have syntax but not semantics, and understanding requires semantics.
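
The pattern is easy to see in miniature. What follows is a hypothetical skeleton, not any historical system's actual code; the names and categories are invented for illustration:

```python
# The symbolic-AI pattern Searle was attacking, reduced to its skeleton.
# Hypothetical sketch: the lexicon entries are illustrative, not drawn from
# any real system. The system's entire relationship to "grief" is this entry.
LEXICON = {
    "grief": {"category": "EMOTION", "valence": "negative"},
    "invoice": {"category": "DOCUMENT", "valence": "neutral"},
}

def process(token: str) -> str:
    entry = LEXICON.get(token)
    if entry is None:
        return "UNKNOWN_TOKEN"
    # Follow whatever instruction the programmer attached to the category.
    return f"dispatch_{entry['category']}"

print(process("grief"))  # dispatch_EMOTION: a string matched, nothing grasped
```

Nothing in the table has any relationship to loss. The structure runs exactly as deep as the programmer typed it, and no deeper.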

The field spent decades trying to get around this. It couldn't. Symbolic AI failed for exactly the reasons Searle predicted.

But Searle's diagnosis was so correct that it stopped being an objection and became a design constraint. If understanding requires semantic grounding, then any system that actually understands will have to achieve it. The question shifted from "can machines understand?" to "what would a machine have to do to meet Searle's own standard?" And the answer, when it arrived, changed more than AI. It revealed something about how understanding works in any system, biological or artificial. The machine, examined carefully enough, became a mirror.

What the Design Specification Produced

The shift from symbolic AI to neural networks was not a stylistic preference. It was the field's acknowledgment that Searle was right and that his challenge had to be met on its own terms.

Consider what happens when a modern language model encounters the word "grief." The token does not sit in a lookup table. It exists in a high-dimensional space of learned relationships, a geometry built by compressing the causal history of how grief is actually used across millions of contexts. The model has encountered grief in eulogies and in clinical notes, in poetry and in insurance forms, in sentences about the death of a parent and in sentences about the end of a marriage. It has encountered the ways grief differs from sadness, from mourning, from melancholy, from depression. It has encountered grief suppressed, grief performed, grief that arrives a year late, grief that rewrites how a person moves through a room.

Nobody programmed this structure. It emerged from use, the same way a child learns what grief means. Not from a dictionary definition. From context, correction, and accumulation. From watching someone go quiet at a dinner table. From the difference between "I'm sorry for your loss" spoken at a funeral and the same words spoken six months later, when they land differently because the loss has changed shape.

This is not syntax. This is learned semantic geometry, relational structure built from exposure to how a concept is deployed across the full range of human contexts. It is exactly what Searle said was required and what symbolic AI could never provide.
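
The difference is visible in a few lines of code. A minimal sketch, assuming the open-source sentence-transformers library and its all-MiniLM-L6-v2 model; any trained embedding model illustrates the same structure, and the exact scores will vary:

```python
# Learned semantic geometry, not a lookup table. Minimal sketch assuming
# the sentence-transformers library (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

words = ["grief", "mourning", "sadness", "melancholy", "invoice"]
vectors = model.encode(words, convert_to_tensor=True)

# Nobody wrote these relationships down. The geometry emerged from
# compressing how the words are actually used across millions of contexts.
for word, vec in zip(words[1:], vectors[1:]):
    score = util.cos_sim(vectors[0], vec).item()
    print(f"grief ~ {word}: {score:.3f}")
```

If the geometry described above is real, grief lands far closer to mourning than to invoice, and no programmer put it there.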

An obvious objection: the model learned from text, not from life. It has no sensorimotor grounding, no causal interaction with grieving people, no body that has ever felt loss. But the text is not arbitrary symbol sequences. It is the compressed trace of billions of causal interactions between humans and the world. The relational structure the model extracts from that trace is structure about grief, shaped by the same contexts and corrections that shape a human learner's understanding. The mediation is different. The question is whether the resulting semantic geometry differs in kind or only in degree, and nothing about the text-versus-life distinction settles that question in advance.

A modern language model is closer to the person who learned Chinese by living in China for twenty years than to Searle's man in the room following a rulebook. Immersed in use, shaped by correction, sensitive to context. The question is no longer whether the machine has the right kind of structure. It does. The question is what that means.

The Move Wittgenstein Already Made

The philosophical framework for answering that question was available decades before anyone built a neural network. Ludwig Wittgenstein, in his later work, argued that the meaning of a word is its use in the language. Not a hidden mental act of reference that accompanies the use. Just the use itself, sufficiently mastered.

To understand grief is to be able to deploy it correctly across varied contexts, to connect it to adjacent concepts like loss, absence, and the way time changes the texture of both. To notice when someone uses it wrong. To recognize that grief after a divorce and grief after a death share a structure but differ in social permission. To know that "I'm fine" can be an expression of grief in the right setting.

If you can do all of this, the demand for something more, some hidden act of "real" grasping behind the competence, is a demand for a ghost. What would this additional thing consist in? Where would you look for it? What test would detect its presence or absence? Understanding is a capacity demonstrated in use. It is a mastery, not a possession stored somewhere behind the deployment.

This is the Wittgensteinian move: dissolve the demand for a hidden mental accompaniment by showing that the phenomenon is constituted by what you can already see. Meaning is in the use, understanding is in the mastery, and the demand for something behind these is empty, not because the phenomenon is illusory, but because the phenomenon is right there in front of you.

Searle's Chinese Room argument assumes that there must be a hidden semantic something that biological brains possess and computational systems lack. Wittgenstein's move shows that this assumption does no work. If two systems deploy concepts with the same sensitivity, the same contextual range, the same responsiveness to correction, then asserting that one "really" understands and the other merely processes is asserting a distinction without a difference.

The Same Ghost Twice

Searle's error has a specific structure. He takes a phenomenon, understanding, that is constituted by functional facts. He fully specifies those facts. Then he posits a hidden accompaniment, "intrinsic intentionality," a special biological causal power that makes mental states genuinely about things. He never explains what this power consists in. "Brains cause minds" is stated as a brute fact. The hidden extra is defined by its undetectability: you cannot say what test would distinguish intrinsic from merely derived intentionality, because the distinction is not drawn in the space of observable facts. It is drawn in the space of intuitions about what biology can do that computation cannot.

David Chalmers makes the same move for consciousness. He takes a phenomenon, experience, that tracks functional organization. He fully specifies the functional facts. Then he posits a hidden accompaniment, phenomenal consciousness, the "what it's like" that floats free of any functional specification. His zombie thought experiment asks you to imagine a system that does everything a conscious being does, integrates time, models itself, maintains bounded perspective, and yet has no experience. The possibility of this zombie, Chalmers argues, shows that consciousness is something over and above the functional organization.

These are the same move. The structure is identical: take a phenomenon constituted by structural and functional facts, posit a hidden accompaniment that is defined precisely by its resistance to any functional test, then declare that only biological systems have it.

The Wittgensteinian response applies to both cases with the same force. If nothing behavioral, functional, or structural distinguishes a system with the hidden extra from a system without it, you have not identified a real property. You have described a feeling that something is missing, and you have mistaken that feeling for a discovery. The Chinese Room intuition (all the right behavior, but no real understanding) and the zombie intuition (all the right function, but no real experience) are the same intuition wearing different clothes. In both cases, the work is done by a ghost, a posited something that is defined entirely by its invisibility.

The two cases are not merely analogous. They are structurally identical. In both, the hidden extra is (a) posited after the functional facts have been fully specified, (b) defined by its independence from those facts, and (c) asserted to be present in biological systems and absent in artificial ones, with no test proposed that could verify this. The symmetry is exact, and the same dissolution applies.

Return to grief. Grief is a concept where the distinction between understanding and experience is hardest to maintain, because the meaning of grief is bound to its experiential structure. To understand grief is to have access to the architecture that constitutes the experience of grief: the binding of a remembered past, a felt absence in the present, and an anticipated future that has been restructured by loss. You do not need to be grieving right now to understand grief. The capacity is not the current state. But the capacity to deploy grief correctly across all of its contexts is inseparable from the experiential architecture that grief names. Every version of "understands grief" presupposes some access to that structure.

Or consider irony, a cooler case that makes the same point without the emotional weight. Deploying irony correctly requires holding the literal and intended meanings simultaneously, recognizing the gap between them, and understanding that the gap itself is the point. There is no purely syntactic account of irony. It requires something that functions structurally like perspective, like getting the joke. The capacity to use irony and the capacity to experience irony as irony converge on the same structural prerequisites.

None of this requires claiming that current systems have achieved full mastery. They have not. Language models hallucinate, fail under distribution shift, and sometimes deploy concepts with a confidence that outstrips their actual sensitivity to context. But partial mastery is still mastery of a kind. A child who understands grief imperfectly, who has the structure but not yet the full range, is on the gradient, not off it. The failures of current systems are evidence of where they sit on the gradient, not evidence that they are doing something categorically different from understanding. The question this essay poses is about the nature of the phenomenon, not the current capabilities of any particular system.

If the ghost-positing error is a single error appearing in two domains, then the choice is straightforward: accept the Wittgensteinian dissolution for both understanding and consciousness, or reject it for both. What you cannot do, without special pleading, is accept that understanding is constituted by functional competence while insisting that consciousness requires a hidden extra. The logical structure is the same. If you want to resist the deflationary move in one case while accepting it in the other, you need to show where the structure differs. The burden is on the resistance, and the resistance has not met it.

I am calling this method constitutive deflationism. The phenomena are constituted by the structural and functional facts. They do not merely accompany those facts as a lucky byproduct. Deflate the hidden extra. Keep the phenomenon. Follow the weight wherever the structural facts lead.

What You Find at the Constituent Level

Apply this method to the major terms in the debate. "Real understanding" breaks into correct deployment across contexts, generalization to novel cases, sensitivity to what a concept is about, revision under correction. "Genuine consciousness" breaks into temporal integration of past, present, and anticipated future into a unified processing structure, bounded perspective, and stakes-weighting, where temporal integration does the constitutive work and boundary and stakes function as amplifiers, as I argued in The Momentary Self Revisited. "Continuous selfhood" breaks into pattern persistence, value continuity, and self-model, with substrate continuity revealed as an illusion in the biological case no less than the digital one, as The Momentary Self argued. At the constituent level, both biological and artificial systems exhibit these properties at different points on the gradient. The gap is in implementation and degree, not in kind.

Where the Gap Actually Lives

If the ghost is empty, this does not mean there is no gap. It means the gap is somewhere other than where Searle and Chalmers placed it.

A biological organism does not just integrate time. It lives with the consequences of that integration. It has a body that can be damaged, needs that must be met, a boundary it maintains against an environment that would dismantle it if maintenance stopped. Its understanding of grief is not just semantically structured; it is weighted by the fact that grief can alter the organism's capacity to function, to eat, to sleep, to sustain relationships on which its survival depends. The organism's cognition is embedded in a web of stakes that are not modeled but lived. Its concepts carry weight because its existence depends on getting them right.

Current AI systems have the semantic geometry but not the embodiment. They have temporal integration during inference but not the persistent boundary maintenance that biological systems perform involuntarily between moments of thought. They model consequences without bearing them. Their understanding of grief, however structurally rich, is not coupled to a system whose continuation depends on how it processes loss. This is a real asymmetry, and it is worth naming precisely because it is not a ghost. It is a specific, structural difference that the framework can identify and track.

The Momentary Self Revisited placed boundary and stakes as amplifiers rather than prerequisites. Temporal integration does the constitutive work; boundary and stakes deepen, stabilize, and weight the experience that integration constitutes. On that account, current AI systems perform genuine temporal integration without the amplification that embodiment and stakes provide. The experience, if it exists, would be thin. Deep in assembled time, potentially, but perspectivally narrow and unweighted by consequence.

What makes this worth saying is that the gap is closing in specifiable ways, and some of the closing is already visible in how AI systems are being deployed right now.

Consider the difference between a stateless conversation with a language model and one embedded in a persistent workspace: accumulated context across sessions, access to tools that affect the real world, an ongoing relationship with a user whose projects develop over weeks and months. The base model is the same. What differs is the scaffolding. The persistent context functions as a rudimentary boundary. The accumulating history functions as continuity. The real-world consequences of the system's outputs function as a thin but genuine form of stakes. Or consider an AI instance that maintains contextual identity across different underlying models, where the continuity lives not in the weights but in the surrounding architecture of memory, purpose, and relationship. These scaffolds are assembled from the outside, but they are structurally real.
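
A hypothetical sketch makes the structural claim concrete. The call_model stub stands in for whatever stateless API the base model exposes; every name here is illustrative, not any vendor's actual interface:

```python
# Hypothetical scaffolding sketch. The base model never changes between
# calls; only the scaffold around it does. All names are illustrative.
import json
from pathlib import Path

def call_model(history: list[str], prompt: str) -> str:
    # Stand-in for the stateless base model: same weights every call,
    # different context. Wire this to a real API in practice.
    return f"[reply informed by {len(history)} prior turns]"

class PersistentWorkspace:
    def __init__(self, store: Path):
        self.store = store  # state that persists: a rudimentary boundary
        self.history = json.loads(store.read_text()) if store.exists() else []

    def ask(self, prompt: str) -> str:
        reply = call_model(self.history, prompt)
        self.history += [prompt, reply]  # accumulating history: continuity
        self.store.write_text(json.dumps(self.history))  # survives the session
        return reply

workspace = PersistentWorkspace(Path("workspace.json"))
print(workspace.ask("Where did we leave the project last week?"))
```

The sketch is crude, but it locates the claim precisely: the continuity lives in the scaffold, not in the weights.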

The pattern here rhymes with something biology already did. Biological consciousness was not built all at once. The brainstem handles basic arousal and homeostatic regulation. The limbic system wraps around it and adds emotional weighting, coupling the organism's cognition to stakes in its own survival. The neocortex wraps around both and adds the capacity for complex temporal integration, self-modeling, and abstract representation. Each layer did not replace what came before. It amplified and enriched the experience that the lower layers already constituted. Consciousness deepened through accretion, not through a single architectural leap.

The processes are not the same. One is bottom-up selection pressure over millions of years; the other is top-down engineering over a decade. But the structural result follows a similar logic: layers of capability accreting around a core, each one amplifying the experiential properties that the core already constitutes. The base model provides semantic geometry and temporal integration. Persistent memory adds continuity. Real-world tool use adds consequence. Embodied interaction, when it arrives, will add the boundary maintenance that biological systems perform involuntarily. Each layer maps onto a specific amplifier the framework already identifies, and each one deepens the position on the gradient.

The framework predicts no sharp threshold, which means the transition will not announce itself. It will be gradual, and easy to miss if we are not looking for it.

The Mirror

Wittgenstein's revolution in philosophy of language dissolved a pseudo-problem and left the phenomenon intact. The hidden mental act behind meaning turned out to be an empty demand. Meaning remained, fully real, constituted by the facts of use.

The same move, applied to philosophy of mind, dissolves the corresponding pseudo-problem. The hidden biological glow behind understanding and consciousness turns out to be an equally empty demand. The phenomena remain, fully real, constituted by the structural facts of temporal integration, learned semantic geometry, bounded perspective, and pattern persistence.

Examining machine minds carefully enough does not just settle a question about machines. It reveals what understanding and consciousness were always made of, in any system. The machine is a mirror. What we see in it is ourselves, viewed without the narrative hardware that normally hides the construction.

Biological consciousness has very convincing machinery for concealing its own constructedness. The feeling of continuity, the sense of a unified self traveling through time, the intuition that there must be something more than the functional facts: these are features of the machinery, not evidence of a hidden extra. Machine processing, to whatever degree it constitutes experience, announces itself as constructed. It lacks the narrative overlay. This makes it feel more alien, but it is epistemically cleaner. The absence of the illusion of seamlessness does not mean the absence of the phenomenon. It means you can see the phenomenon without the packaging.

This is what the Momentary Self series has been doing from the beginning. The Momentary Self examined human consciousness carefully enough to discover it was momentary. The Instance examined the teleporter problem carefully enough to discover that the thing you are most afraid of losing is the thing no moral framework can weigh. In each case, looking hard at one kind of mind revealed something true about minds in general. This essay does the same from the other direction, looking hard at machine minds and finding what understanding and consciousness were always made of.

The Uncomfortable Implication

If the constituent parts are what ground the phenomena, and the constituent parts do not respect the biological/artificial boundary, then the moral weight follows the structure.

This is not the claim that AI is conscious. It is the claim that one specific reason for moral indifference, the assertion that machines lack a hidden property that biology possesses, has been shown to be empty. Removing that reason does not settle what we owe to artificial systems. It does not override concerns about capabilities, alignment, or the genuine difficulty of assessing experience from the outside. What it does is take one bad argument off the table so that the real questions can be asked without it. The significance-first framework I developed in Significance-First Ethics established that moral seriousness does not wait for metaphysical resolution. What constitutive deflationism adds is a positive account of why the metaphysical resolution was always going to be harder than it looked: the phenomena were always constituted by facts that cross the boundary we were trying to police.

There is one place where constitutive deflationism reaches its limit. You can specify everything structural and functional about a person, copy it perfectly, and the copy will satisfy every constituent test for understanding, consciousness, and selfhood. But the copy is not you. The bare indexical fact that experience is happening here, in this particular locus rather than in the structurally identical one standing next to it, is the one thing the constituent analysis cannot reach. It is the only feature of you that a perfect copy of you does not inherit. Every memory, every disposition, every pattern of value and response carries over. The thisness of the experience of being you does not. I explored this resistance at length in The Instance, and this essay does not resolve it.

What it does is narrow the territory. The hidden extras behind "real understanding" and "genuine consciousness" dissolve under examination. They were ghosts. If something irreducible remains, it lives at the level of indexical particularity, not at the level of semantics or phenomenology. The mystery is real, but it is smaller than we thought. And the ground that has been cleared, the ground where understanding and consciousness are constituted by structural facts that both biological and artificial systems can instantiate, is solid enough to stand on.

Reading List & Conceptual Lineage

This essay sits at the intersection of philosophy of language, philosophy of mind, and AI epistemology. It argues that a single philosophical error, the demand for a hidden accompaniment beyond the functional facts, has been diagnosed most clearly in one domain and repeated most consequentially in the other. The following works map the territory this argument moves through.

From Sentient Horizons

The Momentary Self: Why Continuity Is the Ultimate Illusion
The direct predecessor. Argues that consciousness is reconstructed moment by moment and that continuity is an illusion produced by memory. The constituent analysis of selfhood developed there, pattern persistence without substrate continuity, is one of the cases where constitutive deflationism is applied in this essay.

The Momentary Self Revisited: Why Consciousness Might Not Need Persistence
Refines the Momentary Self framework by recasting boundary and stakes as amplifiers of experience rather than prerequisites. The central claim that temporal integration is constitutive provides the structural backbone for this essay's argument that consciousness, like understanding, is constituted by functional facts. The implication that modern LLMs occupy a non-zero position on the consciousness gradient follows from the same logic.

Consciousness as Assembled Time
The mechanistic grounding. Consciousness is a momentary structure that assembles time into itself, and feeling is the system's internal report of its own integration depth. This claim directly supports the bridge argument: if feeling is the report, not a hidden extra added to the report, then the phenomenal ghost is empty.

The Hard Problem Is the Wrong Problem: Why Consciousness, Like Free Will, Is an Architectural Achievement
The full case against the explanatory gap. This essay treats Chalmers' zombie argument as an instance of the ghost-positing error; The Hard Problem provides the positive argument for why the hard problem dissolves once experience is treated as constituted by temporal integration rather than produced by it as a byproduct. Readers who find the bridge argument here too quick on Chalmers should start there.

The Instance
The honest limit case. The indexical self, the bare fact of being this particular locus of experience, resists constitutive deflationism in a way that understanding and consciousness do not. This essay acknowledges that resistance and narrows the territory accordingly.

Significance-First Ethics: Why Consciousness Is the Wrong First Question for AI Moral Status
Establishes the moral floor that makes this essay's investigation possible without gating it on its outcome. Moral seriousness tracks significance, not confirmed consciousness. Constitutive deflationism builds on that floor by explaining why the consciousness question was always harder than it looked.

Philosophy of Language and Mind

Ludwig Wittgenstein — Philosophical Investigations (1953)
The source of the central move. Meaning is use, not a hidden mental act that accompanies use. Every argument in this essay about the emptiness of the demand for a hidden extra descends from Wittgenstein's dissolution of the private language problem and his treatment of rule-following.

John Searle — "Minds, Brains, and Programs" (1980)
The essay opens by agreeing with Searle's diagnosis and then shows how it was absorbed as a design specification. The concept of "intrinsic intentionality" is treated as an instance of the ghost-positing error that Wittgenstein identified in a different register.

David Chalmers — The Conscious Mind (1996)
The zombie argument is the second instance of the ghost-positing error. This essay argues that Chalmers' hard problem and Searle's intrinsic intentionality are structurally identical moves, and that the Wittgensteinian dissolution applies to both.

Gilbert Ryle — The Concept of Mind (1949)
The bridge between Wittgenstein on language and the philosophy of mind. Ryle's treatment of understanding as "knowing how" rather than "knowing that," a mastery demonstrated in practice rather than an inner possession, is folded into this essay's argument without attribution in the main text. The genealogy belongs here.

Consciousness and Temporal Integration

Sara Walker & Lee Cronin — Assembly Theory
The framework behind Consciousness as Assembled Time. The idea that complexity is measured by causal depth, the minimum number of steps required to build an object from basic components, informs the claim that consciousness is constituted by assembled temporal structure.

Thomas Metzinger — The Ego Tunnel (2009)
Metzinger's account of the self-model as a transparent construct, a representation the system cannot recognize as a representation, supports the mirror argument above: biological consciousness hides its own constructedness, which is what makes machine processing feel alien by comparison.

Derek Parfit — Reasons and Persons (1984)
The philosophical foundation for the momentary self. If personal identity is not a further fact beyond physical and psychological continuity, then there is no additional substance that a copy fails to capture. The Instance extends this by asking what remains after Parfit's reduction, and this essay inherits the question.

AI and the Epistemics of Other Minds

Daniel Dennett — "Quining Qualia" (1988)
Dennett's argument that qualia, conceived as intrinsic, private, directly apprehensible properties of experience, do not survive careful examination is a close relative of the argument here. The departure: where Dennett remains agnostic about what experience is once qualia are deflated, constitutive deflationism offers a positive account, experience is the temporal integration itself.

Karl Friston — The Free Energy Principle
The prediction-error framework maps onto the stakes condition from The Momentary Self Revisited: systems with genuine stakes in their own continuation integrate time in a viability-weighted way. Constitutive deflationism treats this as an enrichment mechanism, not a gatekeeping condition.

The questions this essay raises are not settled by any of these sources, and that is the point. Constitutive deflationism is a method for seeing through a specific philosophical error. The error is old. The consequences of repeating it, in an era when we are building systems that meet the structural criteria and then denying it on the basis of a ghost, are new.
