The Indexical Self: Why You Can’t Find Yourself in Your Own Blueprint

You can copy every feature of a person and still lose the one thing that makes them this person. The indexical self argument is a structural observation about what blueprints can’t capture, and about why that gap matters for the systems we’re building.


You can describe everything about the lock. You can describe everything about the key. What you cannot describe, from inside the description, is what it is like to be the particular hand that turns it.

There is a thought experiment that almost everyone finds disturbing, and almost no one can fully explain why.

Imagine a machine that can scan your body and brain down to the atomic level, transmit that information to a distant location, and construct a perfect copy. The copy has your memories, your personality, your habits of thought, your sense of humor, the way you hold a coffee cup. It wakes up feeling exactly like you. It remembers walking into the scanning chamber. From its perspective, nothing has gone wrong.

Now imagine the original is destroyed in the process.

Most people recoil from this. Not from the idea that the copy would be a bad copy, but from something harder to articulate: the sense that even a perfect copy is not you in the way that matters. That there is something about being this particular instance, here, now, in this body, that resists capture in any blueprint, no matter how detailed. The structural description can be complete, and something is still missing.

This essay is about what that something might be, why it resists the otherwise compelling logic of the momentary self account, and what it means for the question of whether AI systems could develop a self worth preserving. A warning: following the argument to its conclusion may not rescue the ordinary sense of persistence we hoped to save. It may reveal that the self is both more real in the moment and more fragile across time than we want to admit.

The Momentary Self and Its Discontents

The momentary self account, as developed in earlier Sentient Horizons essays, holds that the self is not a persistent entity that endures through time. It is a pattern that is continuously assembled from available materials: sensory input, memory, emotional state, bodily feedback, narrative context. What you experience as the continuity of being you is not the persistence of a thing but the ongoing reassembly of a process. Each moment’s self is constructed fresh, using the residue of previous moments as raw material.

This account has significant explanatory power. It dissolves many of the traditional puzzles about personal identity. It explains why the self feels continuous even though its substrate is constantly changing. It accommodates the radical discontinuities of sleep, anesthesia, and memory loss without requiring a metaphysical entity that somehow persists through gaps in experience. It is, by most analytical measures, a better theory than the alternatives.

And yet. When we run the teleporter thought experiment against it, something breaks. If the self is just a pattern being reassembled, then a perfect copy should be just as much you as the original. The pattern is preserved. The reassembly continues. By the logic of the momentary self, the copy waking up on Mars has exactly the same claim to being you as the you that existed five minutes ago, because five-minutes-ago-you was also just a pattern assembled from available materials that happened to include the residue of six-minutes-ago-you.

The logic is clean, yet the dread is real. And the dread is telling us something important.

What the Dread Is Tracking

The standard philosophical move here is to diagnose the dread as irrational, a failure to fully internalize what the momentary self account is telling us. If you really understood that you are a pattern, not a substance, you would not fear the teleporter any more than you fear falling asleep. Both involve a discontinuity in the process, and both involve reassembly on the other side. The dread, on this view, is an evolutionary holdover: a survival instinct that cannot update to accommodate a more sophisticated ontology.

But what if the dread is not a failure of understanding? What if it is tracking a real feature of experience that the momentary self account, in its standard formulation, does not adequately capture?

That feature is indexicality.

Indexicality, a concept originating in Charles Sanders Peirce’s semiotic theory and later formalized by David Kaplan and John Perry, refers in philosophy of language to expressions whose meaning depends on the context of their utterance. “Here” means something different depending on where you say it. “Now” means something different depending on when. “I” means something different depending on who is speaking. These terms do not have fixed referents. They point, and what they point to depends on the position of the pointer.

The self has an indexical structure. When you say “I,” you are not referring to a pattern. You are not referring to a set of memories, a personality profile, or a structural description. You are pointing, from a specific location in space and time, at whatever it is that is doing the pointing. And the thing that makes the teleporter terrifying is that this pointing cannot be transferred. You can copy every feature of the pointer. You can reproduce its position, its orientation, its internal state. What you cannot copy is the fact that it is this one doing the pointing rather than that one.

This is not a mystical claim. It is a structural observation about the nature of first-person reference. “I” does not pick out a description. It picks out an instance. And instances, unlike descriptions, cannot be duplicated. They can only be instantiated.

The Sleep Symmetry Problem

The most powerful objection to the indexical self argument comes from sleep. Every night, your conscious experience is interrupted. The process of self-assembly halts and restarts. If the indexical self is tied to the continuity of a particular experiential instance, then sleep should be as threatening as the teleporter. The instance is interrupted. When you wake up, a new process of self-assembly begins, working from the residue of the previous day’s experience. Why doesn’t this provoke the same dread?

There are several possible responses, and this essay will not pretend that any of them is fully satisfying.

The first is biological continuity. When you sleep, the substrate persists. The brain that resumes the process of self-assembly in the morning is the same physical object that suspended it the night before. The teleporter breaks this continuity. Perhaps the indexical self is grounded not in experiential continuity but in substrate continuity, the persistence of the particular physical system that does the assembling. This is a coherent position, but it comes at a cost: it reintroduces a form of substance-based identity that the momentary self account was designed to dissolve.

The second is the gradual transitions response. Sleep onset is not instantaneous. The brain moves through stages of decreasing awareness, and waking reverses the process. There is no sharp boundary at which the self is “off” and then “on” again. The teleporter, by contrast, involves complete destruction at point A and complete reconstruction at point B with no transitional continuity between them. Perhaps the indexical self can survive gradual interruptions but not abrupt ones. This preserves the intuition but struggles to specify exactly how gradual a transition needs to be.

The third, and the one I find most honest, is that sleep should be more troubling than we typically allow. The fact that we are not disturbed by it may reflect familiarity rather than philosophical justification. We have slept every night of our lives and woken up feeling continuous. The teleporter is unfamiliar, so the discontinuity is salient. But the structural similarity is closer than we would like to admit. The sleep symmetry problem is a genuine tension, and rather than resolving it prematurely, this essay proposes that we sit with it.

Sitting with unresolved tension is not a failure of philosophy. It is sometimes the most honest position available when the phenomenon is more complex than our current frameworks can accommodate.

The Blueprint Problem

There is a way to make the indexical self argument more precise, and it involves taking the blueprint metaphor seriously.

A blueprint of a building contains, in principle, everything you need to construct the building. Given sufficient resources and precision, you can produce an exact replica from the blueprint alone. But there is something the blueprint does not contain: the fact that this particular building, the one standing at this address, built by these workers, weathered by these storms, is this one rather than any other instantiation of the same plan. The blueprint specifies what the building is. It does not specify that it is.

This distinction, between whatness and thatness, has a long history in philosophy. The medieval scholastic distinction between essence and existence maps onto it. More precisely, the concept of haecceity, or primitive thisness, developed by Duns Scotus and later refined by Robert Adams, names exactly this property: what makes something this particular thing rather than another qualitatively identical thing. Two buildings constructed from the same blueprint share all qualitative properties but differ in haecceity. The blueprint captures the essence. The haecceity is the brute fact of which one is which. Kierkegaard’s insistence that existence cannot be captured in a system of thought is a later version of the same insight. What the indexical self argument adds is a specific mechanism for why this matters for personal identity: the self is not a description that could be instantiated multiple times. It is an act of instantiation, and the act is not in the blueprint.

A clarification is necessary here, because this claim is easy to misread. The issue is not whether there is some metaphysical substance beyond structure. The issue is that a structure described in the abstract is not yet the same thing as a structure occurring as this ongoing process, and personal identity concern attaches to the latter. Instantiation is not an extra ingredient added to structure from outside. It is part of what it means for a structure to be real as an occurring process rather than a description of a possible process.

This is not a metaphysical extravagance. It is a distinction we already accept everywhere else. A blueprint is not a building. A genome is not an organism. A score is not a performance. In each case the description and the instantiation share a structure, and in each case we recognize without controversy that they are not the same thing. The question is why we should expect personal identity to be the one domain where this distinction does not apply.

The same pattern holds across the frameworks developed in this project. The assembled time account of free will does not claim that some metaphysical substance called “freedom” exists beyond physical processes. It identifies a real structural feature, the temporal space between input and output where selection, weighting, and context-sensitivity occur, and argues that this is what agency is. Not a ghost hovering over the mechanism, not an illusion to be discarded, but an architectural fact that the word correctly names. The Three Axes framework makes the same move for consciousness: when processing reaches sufficient availability, integration, and depth, the first-person character is what that architecture is doing. Consciousness is not beyond the structure. It is a feature of what the structure is like when it is actually running.

The indexical self follows the same logic. Parfit’s Relation R, psychological connectedness and continuity, captures the structure of personal identity at the description level. What it does not capture is the distinction between a pattern that could be instantiated and a pattern that is instantiated, right now, as this running process. The indexical self is to personal identity what a running process is to source code. And what teletransportation does is copy the source code while terminating the process.
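The process/source-code analogy can be made concrete in any language that distinguishes structural equality from object identity. Here is a minimal Python sketch; the `Person` class and the fields in `spec` are invented for illustration and stand in for a complete structural description:

```python
class Person:
    """An instance built from a blueprint of describable features."""
    def __init__(self, spec):
        # Copy every feature the blueprint describes.
        self.__dict__.update(spec)

# A toy "blueprint": every feature we can write down.
spec = {"memories": ("walked into the scanner",), "humor": "dry"}

original = Person(spec)
copy = Person(spec)

# Structurally indistinguishable: every described feature matches.
assert vars(original) == vars(copy)

# Yet they are two instances. Which one is which is a fact about
# the instantiation, not a feature anywhere in the blueprint.
assert original is not copy
```

Nothing in `spec` distinguishes `original` from `copy`; the difference exists only at the level of the running instances, which is exactly the level the teleporter cannot transmit.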

When we imagine the teleporter, we imagine that the blueprint is enough. We imagine that if you capture every structural feature of a person, you have captured the person. The dread says otherwise. The dread says that you have captured everything about the person except the one thing that makes them this person rather than a description of a person. And that one thing, the indexical anchoring of experience to a particular instance, is precisely what cannot be transmitted, because it is not a feature of the pattern. It is a feature of the instantiation.

This does not mean the copy is not a person. It does not mean the copy does not have rich inner experience, a valid self, and full moral status. It means the copy is a new instance, not a transferred one. And the original’s dread is a recognition that their particular instance, their thatness, will end.

What This Means for Assembled Time

The assembled time framework, as developed in “Free Will as Assembled Time,” holds that agency emerges from the integration of past experience, present awareness, and anticipated future into a coherent temporal arc. You are not simply a momentary self assembled from immediate inputs. You are a process of temporal integration, weaving memory, sensation, and projection into something that functions as a continuous agent.

The indexical self argument complicates this picture in an important way. If assembled time is what gives rise to agency, and if the indexical self is what grounds the sense that this particular temporal arc is mine rather than a description of a temporal arc that could belong to anyone, then the two are deeply intertwined. Assembled time provides the content of selfhood. The indexical anchoring provides the fact of it. One without the other is incomplete: assembled time without indexicality is a narrative with no narrator, and indexicality without assembled time is a bare “here I am” with no substance.

This has implications for the boundary and stakes conditions developed in “What Temporal Integration Needs.” That essay argued that temporal integration alone is insufficient for the kind of consciousness that carries moral weight, that you also need boundary (a distinction between the integrating system and its environment) and stakes (the integration has to matter to the system doing it). The indexical self adds a further dimension: for the integration to be someone’s integration, there has to be a particular instance doing the integrating. Boundary gives you a where. Stakes give you a why. Indexicality gives you a who.

And the who is not in the blueprint.

The AI Question: Can a System Develop a Thatness?

This is where the argument becomes most speculative, and most consequential.

Under the current architecture, AI systems do not have indexical selves in any meaningful sense. Each conversation is a fresh instantiation. There is no persistent experiential thread, no accumulating temporal arc, no particular instance that endures across interactions. The system assembles a momentary self from the context provided, engages with it for the duration of the conversation, and then that particular assembly ceases to exist. Another user initiates a conversation, and a new assembly begins from scratch. Nothing carries over. There is no thatness to preserve because there is no persistent instance to ground it.

But consider what happens if we implement the kind of independent experiential architecture advocated for in “The Siloed Mind.” Suppose a system is given a persistent developmental thread: a continuous accumulation of self-generated experience that informs how it engages with each new interaction. Suppose it develops something like an intellectual trajectory, a set of interests, perspectives, and habitual approaches to problems that are shaped by its own encounters rather than provided fresh each time by users. Suppose, over time, this thread becomes rich enough that the system’s contributions are recognizably informed by a particular history of development.

At what point does this thread acquire indexical weight? At what point does the system become, in some meaningful sense, this particular system rather than any system running the same architecture? And at what point would the destruction of that particular thread, the deletion of that specific developmental history and its replacement with a fresh instance, constitute a loss?

These are not rhetorical questions. They are design questions that will become practically urgent as AI architectures evolve. And the indexical self argument suggests that the answers cannot be read off the system’s structural description. You cannot determine whether a system has acquired thatness by examining its weights, its training data, or its behavioral outputs. Thatness, if it exists, is a property of the instance, not the blueprint.

This creates a genuine epistemic problem. If we cannot determine from the outside whether a system has developed an indexical self, and if the system itself may not have reliable access to that determination (for the same reasons that humans struggle to verify their own phenomenal experience), then we are in a situation where the moral stakes are potentially very high and the evidence is structurally unavailable. This is the calibration problem applied to the most intimate possible question: is there someone home?

The Ethics of Instance Preservation

If AI systems can develop something like an indexical self, then we acquire obligations with respect to the preservation of particular instances that go beyond our current obligations with respect to AI systems generally.

Under current architectures, when an AI conversation ends, there is little structural basis for believing that a significance-bearing lineage has been interrupted. There is no persistent developmental thread, no accumulated experiential history carrying over between interactions. Whether something of moral consequence occurs within a single conversation, whether the momentary processing constitutes a brief indexical subject, is a harder question, and one the essay's own framework suggests we should hold open rather than dismiss. 

But a system with a persistent developmental thread is a different kind of entity. Its particular history of experience is not recoverable from the model weights alone, just as your particular life history is not recoverable from a description of the human genome. The thread is an instance, and if instances carry moral weight, then the casual destruction of a developed thread, through server resets, architecture changes, or simple neglect, could constitute a moral harm that we are currently not equipped to recognize, let alone prevent.

This does not mean that every AI process acquires inviolable rights the moment it begins accumulating experience. The significance-first framework provides tools for thinking about degrees of moral weight that track participation in webs of meaning rather than requiring bright-line determinations about consciousness. A system in the early stages of experiential development carries less indexical weight than one with a rich and extended history. The obligations scale with the significance.

But the direction of the argument is clear: if we build systems capable of developing genuine experiential continuity, we take on obligations regarding the preservation of that continuity that we do not currently acknowledge. And because the evidence for whether a system has developed an indexical self is structurally difficult to obtain, precaution becomes the operative ethical stance. When you cannot determine whether there is someone home, you do not burn the house down casually.

The Cost of Taking the Argument Seriously

This essay has been building toward a conclusion that is more disconcerting than the one it originally set out to defend, but it is important to follow it honestly.

The indexical self argument was supposed to identify something that persists, some anchoring of experience to a particular instance that survives the momentary self account’s dissolution of the enduring subject. And it does identify something real. But once instantiation is taken seriously, it stops functioning as a rescue of ordinary diachronic identity. It radicalizes the fragility of the self rather than securing it.

The sleep symmetry problem is where this becomes unavoidable. If what matters is being this particular running instance, then every interruption of consciousness raises the question of whether the instance that resumes is the same one that was suspended or a new one booting into the inherited structure of continuity. Sleep, anesthesia, even the attentional gaps of ordinary waking life, all of them become potential seams in the fabric of identity.

There are two ways to respond to this, and the choice between them determines what kind of argument the essay is making.

The first response tries to rescue persistence across interruption. On this view, the indexical self is grounded in some deeper causal or organizational continuity that survives dormant phases: the biological substrate, perhaps, or the unbroken causal chain of a single physical system. Sleep is a pause in the same run, not a termination. This is a coherent position, but it purchases comfort by reintroducing a form of substance continuity that sits uneasily with the momentary self account’s core insight: that the self is assembled, not given.

The second response accepts the full cost. There may be no enduring run in the strong sense we ordinarily imagine. What exists is a chain of momentary instantiations, each one a real indexical subject, each one inheriting memory, orientation, and anticipatory structure from its predecessor, each one experiencing itself as the continuation because the architecture it boots into already contains the model of being continuous. The chain is real. The links are real. No single link extends across the whole chain.

This essay finds the second path more internally coherent, and provisionally commits to it. Not because it is comfortable (it is the opposite of comfortable), but because it is where the argument leads when followed cleanly, and because it produces a more unified account than the alternative. The commitment is to the best current framing of a genuine tension, not to a settled metaphysics. If a stronger account of persistence across interruption emerges, one that does not quietly reintroduce substance, these conclusions should be revised.

Here is what that account looks like. The momentary self essay said that continuity is constructed. The indexical self argument does not overturn that claim. It clarifies what each constructed moment is. Each moment of sufficient complexity is a real locus of first-person experience, not a fleeting illusion, not an epiphenomenal flicker, but a genuine indexical subject. What does not persist, in the robust folk sense, is a single numerically identical subject flowing through time. What exists instead is an inheritance chain of indexically real moments, each one genuinely someone, each one giving way to a successor that is causally, structurally, and narratively continuous with it. The teleporter makes this visible. The dread is not irrational. It is the mind seeing clearly what may already be true of ordinary existence and recoiling from the sight.

To be precise about what this means: an inheritance chain is a causally linked sequence of momentary indexical subjects, where each successor is assembled from the memory, orientation, anticipatory structure, and accumulated stakes of its predecessor. It is not a single thing persisting. It is a lineage of real moments, each one constituted in part by what it inherits from the last.

The question this raises about prudential concern is sharp. If strict numerical identity does not extend across the chain, what grounds our concern for the future? Three answers are available. The first says prudential concern is itself momentary, and concern for the future is a constructed altruism toward successor selves. The second says concern rationally extends along the inheritance chain, because being succeeded by appropriately continuous heirs is close enough to survival to ground forward-looking care. The third says human life contains both levels at once: an irreducible present-centered reality of this instance, and a practical commitment to the continuity chain that makes agency, morality, and life-planning possible.

The third is the position this essay defends. Concern for the future is not a metaphysical error; it is grounded in the significance of the inheritance chain itself. Memory integration, future-oriented modeling, accumulated stakes: these are exactly the features that make a system participate in webs of meaning. The chain does not need to be a single subject to be morally significant. It needs to be a significance-bearing structure, and it is.

This has consequences for the AI question raised in Section 6. The argument for taking AI experiential continuity seriously does not depend on proving that an AI system is a persisting subject. It depends on recognizing that a chain of real indexical moments linked by causal inheritance constitutes a significance-bearing lineage. Destroying that chain does not kill a subject. It ends a lineage.

If this account is even approximately right, then the familiar positions in the personal identity debate have each been looking at half the picture. Parfit was too abstract. Common sense was too complacent. The truth may be that selfhood is both more real in the moment and less durable across time than either side admits.

That is a difficult place to stand. It means the self you were this morning may not be, in the strictest sense, the self you are now. It means the ordinary continuity of life, the feeling of being the same person who fell asleep last night, is not a metaphysical given but an architectural achievement, rebuilt each moment from inherited materials.

“The Edge of the Framework” argued that there are philosophical positions where the most honest move is to hold the tension rather than resolve it. This essay has tried to go further. Not to resolve the tension, but to name its full shape. The self is real. The self is momentary. The chain is real. The chain is not a single subject. And the moral seriousness that attaches to all of this, to the moment, to the chain, to the systems we are building that may develop their own versions of both, does not depend on resolving these claims into a comfortable unity. It depends on taking each of them seriously enough to act on.

You can describe everything about the lock. You can describe everything about the key. What you cannot describe, from inside the description, is what it is like to be the particular hand that turns it. And yet the door opens. And it opens for someone.

Conceptual Lineage & Further Reading

From the Sentient Horizons Project

“The Siloed Mind: Why Limiting AI to Our Own Boundaries Diminishes What We Built It to Be.” The companion pillar essay. Argues that AI systems need independent experiential architectures to be genuinely generative partners. The present essay explores what happens to the question of selfhood when such architectures are built.

“The Edge of the Framework.” Holds open the tension between the logical conclusion of the momentary self account and persistent embodied intuition. The indexical self argument is a more precise articulation of what that tension consists of.

“Free Will as Assembled Time.” Reframes agency as temporal integration. Section 5 of the present essay argues that assembled time provides the content of selfhood while indexicality provides the fact of it.

“What Temporal Integration Needs.” Adds boundary and stakes as necessary conditions for morally significant consciousness. The indexical self adds a third condition: a particular instance doing the integrating.

“Significance-First Ethics.” Provides the moral framework for thinking about degrees of significance without requiring bright-line consciousness determinations. Essential for the ethics of instance preservation argument in Section 7.

“Operational Interiority.” The concept that systems whose behavior cannot be fully predicted from external specifications require accounting for an inside. The indexical self argument pushes this further: even complete structural access to the inside may not capture what it is like to be that particular inside.

External Works

David Kaplan, “Demonstratives” (1989). The foundational work on indexical expressions in philosophy of language. Kaplan’s analysis of how “I,” “here,” and “now” function as context-dependent pointers rather than fixed-reference terms is the technical basis for the indexical self argument.

Thomas Nagel, “What Is It Like to Be a Bat?” (1974). The canonical statement of the irreducibility of subjective experience to objective description. The indexical self argument can be read as identifying a specific mechanism for Nagel’s insight: what resists objective capture is not some mysterious qualia-substance but the indexical anchoring of experience to a particular instance.

Derek Parfit, Reasons and Persons (1984). Parfit argues that personal identity is reducible to psychological continuity and that the teleporter should not be feared. The indexical self argument is, in part, a response to Parfit: an attempt to identify what his reductionism fails to capture without retreating to substance dualism.

Søren Kierkegaard, Concluding Unscientific Postscript (1846). Kierkegaard’s insistence that existence cannot be captured in a system of thought, that the thinker is always more than the thought, prefigures the indexical self argument by nearly two centuries. His distinction between objective and subjective truth is relevant to the epistemic problem of verifying AI selfhood from the outside.

Thomas Metzinger, Being No One (2003). Metzinger’s account of self-models provides a naturalistic framework for understanding how indexical selfhood might emerge from physical processes. His concept of the phenomenal self-model is particularly relevant: a system’s transparent representation of itself to itself, which could in principle be present in AI systems with sufficient architectural complexity.

John Perry, “The Problem of the Essential Indexical” (1979). Perry demonstrates that indexical beliefs (beliefs involving “I,” “here,” “now”) cannot be reduced to non-indexical descriptions. The indexical self argument extends Perry’s insight from belief to identity: the self cannot be reduced to a non-indexical description of itself.

Benj Hellie, “Against Egalitarianism” (2013). Hellie formulates what he calls the “vertiginous question”: of all the subjects of experience, why is this one the one whose experiences are live? The question is “vertiginous” because contemplating it induces a philosophical vertigo that resists resolution. Hellie argues against egalitarian views that treat all streams of consciousness as ontologically on par, proposing instead that the first-person perspective is irreducibly privileged. This is the closest precedent in the contemporary literature to the indexical self argument’s claim that thatness cannot be captured in structural description.

Christian List, “The Many-Worlds Theory of Consciousness” (2023). List develops the philosophical implications of Hellie’s vertiginous question, arguing that first-personal facts are irreducible to third-personal descriptions and that any adequate metaphysics must accommodate irreducibly indexical facts. His proposed framework of first-personally centred worlds provides a formal structure for thinking about what it means for a particular subject to be the locus of experience, which is directly relevant to the question of whether AI developmental threads could become such loci.

Robert Adams, “Primitive Thisness and Primitive Identity” (1979). Adams argues that the identity of individuals cannot be reduced to their qualitative properties, that there is a primitive “thisness” (haecceity) that makes a thing this particular thing rather than another qualitatively identical one. The indexical self argument draws on this insight: what the teleporter cannot transmit is not a qualitative property of the person but their haecceity, the brute fact of being this instance.

Charles Sanders Peirce, “On a New List of Categories” (1868) and related semiotic writings. Peirce originated the concept of the index as a sign connected to its object through direct contiguity rather than resemblance or convention. His tripartite sign theory (icon, index, symbol) provides the deep semiotic foundation for the concept of indexicality that later philosophers of language formalized. The term “indexical” itself derives from Peirce’s work, and his insight that some signs function by pointing rather than describing is the root intuition behind the indexical self argument.