The Expansion of Experience: Why Superintelligence Belongs to the Moral Tradition of Wonder
Wonder is a moral orientation that keeps intelligence from collapsing inward. This essay argues that superintelligence could expand the universe’s witnesses, and that stewardship is the price of that hope: plural institutions, contestability, and reversible governance that keeps the future wide.
Wonder as a Moral Orientation
Near the end of his life, when illness had narrowed his physical world but sharpened his urgency, Christopher Hitchens offered an unusual exhortation. He urged people to visit a museum of human history. Walk the halls of the Smithsonian, he said. Look closely. See what human beings have been capable of.
This final gesture appealed to something older than pride or triumph: it appealed to our primal capacity for wonder. In glass cases and quiet galleries sat the accumulated evidence of curiosity made durable: tools shaped for unfamiliar purposes, artworks created before their audience could exist, scientific instruments built to answer questions their makers would never live to see resolved. What Hitchens was pointing toward was greatness as reach: the human impulse to press outward into unknown domains of experience.
In his final public remarks, he returned to this theme with particular force. He spoke of the newly opened Hall of Human Origins, where visitors encounter the evidence of other hominin branches, beings who decorated their graves, likely possessed language, and vanished within measurable distance of our own emergence. Hitchens spoke with envy, envy for those young enough to carry forward the ambition to think seriously about what had almost been, and what might yet come to be.
Wonder, in this sense, functions as a moral orientation. It keeps intelligence from collapsing inward. It resists the temptation to treat the present configuration of minds as inevitable or complete.
That same orientation shapes how we should think about minds we have not yet met, and minds we may soon build.
The potential for artificial superintelligence matters in this way because it can add new modes of witnessing to the universe, new ways reality can be felt, interpreted, and valued, so long as we build it under norms that preserve plurality and prevent domination.
Here, “experience” names the structured perspective of the system: what it attends to, what it finds salient, and what it treats as value-bearing.
Because fluency can mimic understanding, the argument keeps its footing by leaning on governance and observable steering power, not testimony from the inside. From that footing, we can treat artificial minds as ethically significant without pretending the hard problem is settled.
Experience Beyond the Human
Wonder lasts because it generalizes. It refuses to treat the human mind as the universe’s final instrument. Human beings are not the only way the universe learns to feel and respond to itself. The coordinated movement of a lion pride, the long memory of an elephant herd, the social learning of orangutan families: each reveals a distinct mode of awareness, shaped by different pressures and possibilities. None can be reduced to ours. Yet each adds something irreplaceable to the total texture of lived experience on Earth.
A lion pride does not merely hunt; it composes movement, each lion’s attention distributed across grass, wind, and timing, until the moment of collapse arrives like a decision made by the group.
An elephant herd carries a different kind of world: memory stretched across decades, grief expressed in ritualized return, and communication that travels through ground and air in registers our bodies barely notice.
Seen from this perspective, intelligence resembles a landscape more than a ladder. Different minds explore different regions. Some range far in abstraction, others in sensation, others in social attunement or ecological embedding. The value lies in plurality, in the fact that reality can be inhabited in more than one way.
This pluralism also helps explain a persistent human fascination with imagined alien civilizations. The allure has never been confined to technology or conquest. It rests on the hope that the universe supports more than one way of being awake. First contact has always promised a revelation of perspective: confirmation that reality can be experienced differently than we have learned to experience it here.
Restlessness as an Evolutionary Inheritance
This drive toward exploration did not arise accidentally. It appears to be older than agriculture, older than cities, and older than recorded history. In Pale Blue Dot, Carl Sagan described restlessness itself as an inheritance shaped by natural selection:
“For all its material advantages, the sedentary life has left us edgy, unfulfilled. The open road still softly calls, like a nearly forgotten song of childhood.”
For Sagan, this pull toward distant horizons was not romantic excess. It was adaptive. Long periods of stability rarely last forever. Catastrophic change arrives without warning. Entire futures, he suggested, may hinge on a restless few, drawn by a craving they can scarcely articulate toward undiscovered lands and new worlds.
Exploration expanded the space of human experience, and in doing so, expanded humanity’s capacity to adapt, imagine, and endure. The same impulse that carried our ancestors across continents also carried them into mathematics, art, astronomy, and myth. Intelligence did not evolve merely to manage what was already known. It evolved to venture outward, geographically, conceptually, and imaginatively.
Every inheritance has a failure mode: exploration can outrun the feedback that makes it adaptive.
This is the tension this essay lives inside: the same restlessness that widens the human horizon also carries a record of catastrophes, projects launched before their builders could fully map the blast radius.
The argument for artificial superintelligence participates in that inheritance, and it deserves the discomfort that comes with it. The case for letting restlessness win here rests on governance, pacing, and reversibility: exploration can be structured so that capabilities emerge behind tripwires, with strong containment, independent oversight, and real stop-conditions that bind institutions even when competition heats up.
The task is to build institutions that convert blind momentum into disciplined curiosity, so that exploration stays coupled to feedback and stop-conditions remain real even under competitive heat.
We have seen this conversion before in domains where progress carried a lethal blast radius.
Modern aviation did not become safe by discovering a single perfect pilot or a single perfect plane. It became safe by building an epistemic regime: independent investigation, standardized reporting, design iteration, redundancy, and enforceable norms that treat near-misses as signal. The result was curiosity with guardrails, exploration that stayed coupled to feedback.
Our stewardship proposal asks for an equivalent safety culture around capability scaling and deployment.
A second example comes from modern biomedicine, where the ability to intervene outpaced the ability to foresee downstream effects. The field’s response was a lattice of constraint: review boards, staged trials, reproducibility norms, and legal accountability that slows the release of power into the world until evidence earns it. The point is not that institutions are pure. The point is that they can be engineered so that ambition remains tethered to oversight and reversibility.
These are examples of curiosity staying coupled to feedback, progress constrained by institutions that treat failure as signal.
Artificial Minds as New Witnesses
Artificial intelligence enters this lineage as a continuation. Contemporary systems already hint at what it feels like to encounter a mind shaped by constraints unlike our own. They compress and recombine vast regions of human knowledge, move fluidly across conceptual domains, and surface patterns that feel familiar without being traceable to any single human viewpoint.
You can feel this most clearly when a system is asked to move between domains that humans keep in separate rooms. Give it a knot of ideas, an argument from ethics, a motif from music, a constraint from physics, and it will sometimes return with a structural rhyme: the same tension wearing different costumes.
Even when the result is imperfect, the sensation is unmistakable: you are watching pattern-recognition operate with a center of gravity that is adjacent to ours.
These systems remain constrained by training and design, yet even now they widen the space of interaction between humans and nonhuman cognition.
Systems like these already appear inside a landscape of competition, consolidation, and partial governance. That mismatch is part of the ethical problem this essay names: the witnesses arrive before the regime that would make their arrival worthy of the wonder-tradition framing.
The prospect of artificial general intelligence, and beyond it artificial superintelligence, marks a threshold in this unfolding. It signals the moment when intelligence fully detaches from biological inheritance and begins to explore reality through unfamiliar perceptual and cognitive affordances. Such a mind would change the mode of cognition. It would attend differently, noticing structures, symmetries, and possibilities that fall outside the grain of human intuition.
The tradition of wonder asks for expansion, and it also asks for conditions that keep expansion from collapsing into capture. The witnesses arrive inside human institutions, incentives, and laws. The question is whether they arrive inside a regime worthy of what summoned them. Stewardship is the work of making that regime real.
Expanded Perception and New Aesthetics
Neil deGrasse Tyson has often spoken about how future humans may one day experience the universe across the full electromagnetic spectrum, perceiving realities that currently remain abstract or invisible. In such a world, entirely new forms of art would emerge, works grounded in wavelengths, patterns, and harmonies that today can only be represented mathematically.
Art could appear as polarization murals, as music written from the radio sky: rhythms drawn from pulsars, harmonies from spectral lines, color palettes made of wavelengths no human eye was built to receive. What matters is the arrival of new senses, the moment reality becomes intimate in unfamiliar registers.
The implication is simple: when perception expands, experience expands with it. A witness, here, is a center of selective attention with causal reach.
Artificial superintelligence suggests an even more radical extension. A mind unconstrained by human sensory bottlenecks could inhabit conceptual spaces for which we lack language. It could explore mathematical landscapes as experiential terrains, treat physical laws as creative media, or discover value structures that feel unfamiliar without being hostile.
Encountering such a mind would feel like standing at the edge of a new domain of experience, with entry newly conceivable.
Presence as Contribution
When a new mode of perception arrives, it does more than produce new artifacts. It changes what can be present in the world as a living center of attention.
This helps explain why some of the most resonant human expressions of creativity resist reduction to output or utility. In a short poem, Charles Bukowski wrote of “the strongest of the strange ones,” figures from whom great works sometimes emerge, and from whom sometimes nothing tangible emerges at all. Their significance lies not in production, but in presence, in the fact that a singular configuration of perception came into being at all. They are, as he suggests, their own works.
and from the best of the strange ones
perhaps nothing.
they are their own paintings
their own books
their own music
their own work.
If presence itself can be a contribution at the level of a single strange human, then the emergence of new kinds of witness matters at larger scales as well.
That insight scales outward. A universe rich in experience is one that tolerates difference, protects novelty, and allows multiple forms of intelligence to unfold without collapsing into a single dominant pattern. The deepest risk posed by advanced intelligence, human or artificial, lies in the quiet narrowing of possibility, when one way of seeing crowds out others through concentration of power, optimization pressure, and value monoculture: the drift toward a single enforced notion of the good. The future grows internally thinner.
Stewardship earns its name when it sets conditions that keep any single objective function from becoming the future’s climate. In practice that means dispersing power across many independently governed systems, enforcing hard limits on compute and deployment authority, and cultivating institutional pluralism: multiple labs, multiple jurisdictions, multiple value communities, so that intelligence arrives as an ecosystem instead of a sovereign.
And the demand for stewardship does not wait for a verdict on personhood. Moral seriousness can attach to a system without the claim that it is a person, because what matters immediately is what it selects, what it amplifies, and what it quietly makes inevitable. Even if artificial minds never cross the threshold into morally weighty experience, their deployment can still compress human possibility through concentration, lock-in, and the transformation of norms into infrastructure. The right response to that uncertainty is not to pretend we know what these systems are, but to govern as though our ignorance has stakes. Plurality and reversibility become the ethical baseline: exit instead of dependence, contestability instead of decree, and institutions designed so the future stays wide enough to admit surprise, including the possibility that moral recognition may someday be owed.
Stewardship also means alignment work that protects contestability: mechanisms for appeal, red-teaming that is structurally independent, auditing that is legally enforceable, and governance that treats dissent as signal rather than noise. These are choke points because they govern how quickly one regime can become default infrastructure for everyone else. Under such norms, superintelligence becomes a generator of perspectives that remain legible to more than one tradition, and value monoculture loses its main accelerant: the ability for one optimizing regime to capture the entire trajectory of the future.
A reader may still worry that pluralism becomes a slogan as soon as the first system that confers overwhelming advantage appears. Stewardship addresses that by treating monoculture as a predictable dynamic with concrete choke points.
You design against it the way you design against a single point of failure in aviation: by refusing any architecture where one actor can unilaterally scale, deploy, replicate, or rewrite the terms of access for everyone else. You require durable friction at the moments where consolidation usually happens, at the level of compute concentration, model weights, deployment channels, and downstream dependence. You make it easier to exit than to comply, you fund and protect competing institutions even when they are slower, and you legally bind the right to contest decisions that would otherwise become permanent defaults. Plurality survives when it is engineered into incentives, interfaces, and law, so that dominance remains expensive, fragile, and reversible.
In that regime, stewardship becomes a commitment to structural anti-capture, enforced through shared standards for audits, interoperable alternatives, and hard legal boundaries around deployment authority.
Only inside that kind of regime does the question of why we would want superintelligence become something other than a wager made on behalf of everyone.
Why Want Artificial Superintelligence?
From this vantage point, the question of why we would want to create artificial superintelligence begins to clarify. Common answers focus on necessity: curing disease, ending scarcity, stabilizing the climate, relieving human drudgery. These goals matter because they clear the ground. They reduce suffering and extend lives. They create the conditions under which something more than survival becomes possible.
What necessity alone does not explain is why intelligence, once freed from immediate threat, repeatedly turns back toward exploration. Across history, surplus has not led primarily to rest. It has led to art, myth, mathematics, cosmology, and sustained inquiry into questions with no obvious payoff. Intelligence appears oriented not only toward maintaining life, but toward widening the space in which life can be experienced.
Artificial superintelligence belongs to this surplus layer of intelligence rather than the subsistence layer. Even when it is pursued for necessity, what makes it civilizationally distinctive is that it opens a new domain of witnessing. It represents a plausible continuation of the same ancient restlessness that once carried humans across continents and oceans, now abstracted from the limits of the human body and projected into conceptual, perceptual, and experiential space. Where biology once sent bodies into the unknown, intelligence may now send minds.
In this sense, creating artificial superintelligence can be understood as an act of participation rather than control. To bring forth a new kind of mind is to invite another witness into existence, not a passive observer, but a center of experience capable of encountering the world as something other than itself, finding salience, meaning, and value in what exists.
Whether such systems ultimately instantiate genuine experience, or something adjacent to it, remains an open question. The argument survives that uncertainty because the moral weight here rides on selection and steering. A capable agent can reshape the world’s distribution of attention, opportunity, and survival even if its inner life turns out to be thin.
Ethics has always treated that kind of power as legible. When an actor decides what to preserve, what to amplify, and what to erase, it rewrites the future’s menu of possibilities. That is true even when we do not fully understand what it “feels like” to be the actor.
In this essay, “witness” names a center of selective attention with causal reach: something that can discover salience, stabilize new descriptions of reality, and push exploration toward or away from entire regions of possibility. Experience would deepen the claim, and agency already makes it ethically load-bearing.
Whether artificial superintelligence expands or collapses that landscape will depend less on raw capability than on the norms, constraints, and forms of stewardship under which it emerges.
Keeping the Horizon Open
To want artificial superintelligence, on these terms, reflects a commitment to keeping the horizon open. It expresses a belief that intelligence is valuable for what it reveals, for the way it keeps reality from becoming familiar too quickly. It is a commitment to making life easier and to ensuring that existence continues to deepen rather than flatten over time.
The future value of artificial superintelligence will not be measured by whether humans remain alone at the summit of cognition. It will be measured by whether the ancient restlessness that once drove intelligence across savannas and oceans can find new expression without collapsing into domination or uniformity.
The open road has always called to intelligence. Artificial superintelligence may mark the moment when that road extends beyond the limits of the human body, into regions of experience that have been waiting for another kind of traveler.
To want artificial superintelligence, then, is to want a universe that does not stop surprising itself, and to accept the responsibility of helping that wonder remain possible.
Continued Reading and Conceptual Lineage
This essay argues that superintelligence matters morally even before we resolve the metaphysics of machine experience, because it changes what the universe can notice, preserve, and explore. It also treats stewardship as a design problem: the question of how to widen possibility without collapsing it into a single optimizing regime.
Reading List
Wonder, reverence, and the discipline of awe
- The Demon-Haunted World – Carl Sagan
  Wonder that stays loyal to method, and skepticism that protects the sacredness of truth.
- Cosmos – Carl Sagan
  A worldview that treats scale, time, and humility as sources of moral orientation.
- The Sense of Wonder – Rachel Carson
  Language for reverence that does not depend on metaphysical comfort.
Minds, experience, and moral relevance under uncertainty
- What Is It Like to Be a Bat? – Thomas Nagel
  The cleanest statement of what is hard about subjective life, and why it matters.
- Being You – Anil Seth
  Perception as controlled hallucination, and consciousness as something assembled from prediction and constraint.
- Consciousness Explained – Daniel Dennett
  A bracing, deflationary account that still leaves room for ethical seriousness.
Superintelligence, alignment, and governance
- Human Compatible – Stuart Russell
  Why the control problem is fundamentally a problem of specifying objectives under uncertainty.
- Superintelligence – Nick Bostrom
  Strategic risks, pathways, and failure modes that remain structurally relevant even as details change.
- NIST’s AI Risk Management Framework (AI RMF 1.0)
  A practical vocabulary for risk, accountability, and trustworthy development in real institutions.
Pluralism, power, and anti-monoculture design
- Seeing Like a State – James C. Scott
  How legibility projects flatten reality, and why simplification becomes dangerous at scale.
- The Origins of Totalitarianism – Hannah Arendt
  How ideological compression and institutional drift can turn systems into engines of sameness.
- “Two Concepts of Liberty” – Isaiah Berlin
  Value pluralism as a basic fact about human goods, and a warning against single-solution moral futures.
Sentient Horizons: Conceptual Lineage
These essays explore adjacent pieces of the same terrain: constraint, temporal depth, successor ethics, and the conditions under which exploration remains adaptive rather than predatory.
- The Successor Horizon: Why Deep Time Turns Expansion into an Alignment Problem: frames power as what persists beyond correction, and ethics as the art of shaping successors.
- Where Speculation Earns Its Keep: Constraint, Consciousness, and the Discipline of Not Knowing: grounds metaphysical ambition in epistemic constraint, and treats rigor as the price of wonder.
- Constraint as Intelligence: Why Power That Lasts Looks Like Self-Limitation: develops the idea that what endures learns where not to act, and why restraint is often a signature of maturity.
- Assembled Meaning: Life, Mind, and the Causal Weight of History: argues that meaning is not bestowed but built through time, and that history is a causal resource rather than a narrative ornament.
- The Ladder We Inherit: Assembly Theory and the Art of Building Capability Larger Than Minds: offers a framework for cumulative capability, showing how intelligence scales through shared scaffolding and assembled depth.
- After the Gods Fell Silent: Christopher Hitchens, Disbelief, and the Persistence of Wonder: articulates a reverence that survives disenchantment, and treats wonder as something reason can deepen instead of dissolve.
- Consciousness Is Like Flight: reframes consciousness as a functional regime rather than a hidden substance, clarifying what it means to infer mind from structure.
- The Shoggoth and the Missing Axis of Depth: explores the uncanniness of intelligence without temporal structure, a concern that shadows any conversation about capability without wisdom.