The High Cost of Moral Efficiency: Compression, Intuition, and the Ethics of Calibration

Moral intuition and inherited narratives help us act under uncertainty—but they become dangerous when scaled without feedback. This essay argues that the ethical problem is not intuition itself but the absence of calibration: the failure to detect when values no longer fit their environment.

The Cost of Moral Efficiency

There is a reason short moral stories travel so far.

A myth, a parable, or a tightly compressed narrative can lodge itself in the mind in a way that a thousand pages of careful argument often cannot. A single image or metaphor can reorient a value structure almost instantly, bypassing analysis and speaking directly to something pre-verbal. This is not an accident or a failure of reason. It is how human moral cognition works.

And yet, this very power is what makes compressed moral stories dangerous.

A story that moves too quickly, that feels right before it can be examined, risks becoming an authority rather than a lens. The same compression that makes myth effective can also make it resistant to critique. What begins as moral illumination can quietly harden into dogma.

The tension between compression and accountability has been sitting at the center of my thinking for some time. What has become clearer through discussion is that the ethical problem is not intuition itself, nor compression as such, but the absence of calibration systems—mechanisms capable of detecting when compressed intuition has become obsolete, misaligned, or contextually invalid.

Compression as a Moral Technology

High-compression stories do moral work efficiently. They reduce complexity, collapse timelines, and foreground consequences in ways lived experience rarely affords. A single narrative can stand in for countless interactions, failures, and regrets. In that sense, myth and parable function as moral technologies: tools for transmitting value across time, culture, and cognitive bandwidth.

Long-form literature operates differently. It resists compression. Instead of delivering a moral insight, it simulates the conditions under which such insight might painfully emerge. Rather than telling us what matters, it forces us to live inside ambiguity long enough to feel its cost.

Both approaches have value. One prioritizes accessibility and speed. The other prioritizes fidelity and depth.

But neither escapes a common constraint: at some point, moral understanding enters the system not as proof, but as intuition.

Compression and Decompression in Moral Storytelling

The difference between mythic compression and literary decompression becomes clearer when we compare the short story The Egg by Andy Weir with the classic novel The Brothers Karamazov by Fyodor Dostoyevsky.

Both grapple with the idea that moral harm ultimately rebounds onto the self—that cruelty cannot be cleanly externalized, that responsibility cannot be escaped by clever reasoning. But they arrive there through radically different cognitive paths.

The Egg compresses this insight into a single metaphysical gesture. By asserting that every person is, in some sense, the same person lived across time, it collapses moral distance entirely. Harm to another is harm to yourself by definition. The moral lesson arrives all at once, with visceral clarity, requiring very little interpretive labor.

The Brothers Karamazov refuses compression. Dostoyevsky offers no cosmic explanation for why guilt corrodes the soul. Instead, he forces the reader to inhabit the slow interior collapse of characters who attempt to evade responsibility. The moral truth is not revealed; it is endured.

Both succeed—but at different tasks.

The Egg is an efficient moral accelerator.

Karamazov is a moral decompression chamber.

And because compressed moral stories rely far more heavily on intuition, they demand stronger downstream mechanisms for scrutiny, recalibration, and update if they are not to harden into unquestioned authority.

The Unavoidable Role of Intuition

No matter how carefully constructed a moral framework is, it eventually bottoms out in something like: this feels right. That phrase makes many rational thinkers uncomfortable—but there is no honest ethical system that avoids it entirely.

We do not reason our way into caring. We reason after something has already begun to matter.

Intuition is not the enemy of moral reasoning. It is its point of entry.

The danger arises not from intuition itself, but from intuition that lacks the capacity to detect when its underlying assumptions no longer hold.

Environmental Dependency and the Brittleness of Wisdom

All compressed intuition is implicitly conditioned on an environment.

Moral heuristics assume things about:

  • Enforcement structures
  • Shared norms
  • Power distributions
  • Reputation dynamics
  • Feedback latency

As long as those conditions remain stable, compressed wisdom can be extraordinarily effective.

The tragedy begins when the environment changes—and the intuition does not know how to notice.

This failure mode is illustrated perfectly by the character Ned Stark from A Game of Thrones by George R. R. Martin.

Ned’s moral intuitions are not naïve. They are deeply optimized for the North: a social world where honor is enforced, reputation carries multi-generational weight, and norms are broadly shared. His values are well-adapted to their environment.

The catastrophe occurs when he is transplanted to King’s Landing—a radically different moral environment where none of his assumptions hold. Crucially, Ned lacks any mechanism for recognizing this mismatch. He never externalizes his reasoning, never tests his assumptions against the new distribution, never asks what environmental features his moral framework depends on.

His tragedy lies in a failure of recalibration.

Yet this pattern is not confined to fiction; it recurs wherever intuitions outlive the environments that shaped them.

Experience: Reinforcement or Calibration

This brings us to lived experience.

Experience is often treated as the antidote to dogma, but this is only sometimes true. Experience can function in two very different ways.

Experience-as-reinforcement hardens intuition. It increases confidence without increasing sensitivity to context. It produces statements like “I’ve lived it, so I know.”

Experience-as-calibration does something harder.

It asks: What would tell me that I’m wrong?

What signals would indicate that my intuition is failing in this environment?

Unexamined experience reinforces dogma just as effectively as myth. Calibration requires feedback loops.

Externalization as a Calibration Practice

To externalize intuition is not to eliminate it. It is to make it decomposable and testable.

Instead of saying:

“This feels like the right balance between responsibility and compassion,”

we ask:

  • What facts mattered most?
  • What values were being optimized?
  • What assumptions about the environment were in play?
  • What signals would tell us this judgment is failing?

Externalization is not the end goal. Calibration is.

In practice, calibration rarely looks like moral certainty or exhaustive explanation. In institutions and teams, it often appears as structured dissent, post-mortems, red-teaming, escalation paths, and clearly defined conditions under which decisions are revisited. These mechanisms do not replace judgment; they exist to detect when judgment is no longer well-matched to its environment.

Trust, Contestability, and the Scope of Moral Explanation

Not every moral judgment demands the same level of articulation.

In ordinary life, individuals and small groups routinely rely on intuition without explicit justification. A family making day-to-day decisions, or a long-standing team operating under shared values, does not pause to externalize every judgment into defensible propositions. In these contexts, trust is high, values are aligned, and feedback is immediate. Intuition functions efficiently, and opacity is not a moral failure—it is a practical feature.

The ethical problem arises when this same opacity migrates into environments where trust cannot be assumed and values are in contention.

In domains where moral decisions affect multiple people or groups—especially those who do not share a common value system—articulating one's reasoning becomes an obligation. Here, intuition without explanation ceases to be a private cognitive shortcut and becomes a form of power. Those affected by the decision are owed not certainty, but legibility.

This distinction helps clarify a long-standing tension in contemporary moral discourse.

One pole of this tension is captured by figures like Sam Harris, who insists that moral claims with public consequences must be defensible in explicit terms. Where values are contested and trust cannot be presupposed, appeals to intuition, tradition, or felt certainty are insufficient. In such contexts, reasoning must be externalized so that disagreement can be meaningfully adjudicated. Transparency here is not a philosophical luxury; it is an ethical requirement.

The opposing pole emphasizes an equally important truth: human moral life does not begin with explicit reasoning. Thinkers such as Jordan Peterson have argued that inherited myths, narratives, and embodied intuitions stabilize value systems across generations, orienting action long before individuals can articulate the reasons behind it. These compressed moral structures allow people to act under uncertainty and enable societies to transmit norms that no single person could reconstruct from first principles.

The practical value of such compression, however, depends on the environments in which it operates. In trusted, low-contestability contexts, opacity functions as moral scaffolding, enabling coordination without constant justification. When that same opacity persists in high-impact or contested environments—where trust cannot be assumed and values diverge—it hardens into dogma, shielded from the mechanisms of critique that would otherwise keep it responsive to reality.

The ethical demand, then, is to align intuition and explicit reasoning with the environments in which they operate. Intuition is acceptable where trust and shared values already exist. Explicit justification becomes mandatory when decisions extend beyond those boundaries.

Moral intuition remains intact; it simply operates within clearly bounded domains.

The Parallel We Can No Longer Ignore

This concern is now central to how we think about artificial intelligence. Turning the same critical lens on AI reveals a failure mode we have long tolerated in ourselves.

What frightens us about AI systems is not that they rely on compressed representations. Humans do the same. What frightens us is decision-making power without any mechanism for detecting when the world the system was trained on has changed.

In machine learning, this failure is called distribution shift: the same failure Ned Stark suffers when the moral assumptions of the North no longer apply in King's Landing.
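To make the parallel concrete, here is a minimal sketch, in Python, of what such a mechanism might look like. The names (CalibratedHeuristic, fit_environment) are invented for illustration and are not any real library's API: a fixed decision rule records the statistics of the environment it was tuned on and abstains when a new input drifts too far from them.

```python
# A toy sketch, not a production method: a fixed decision rule
# ("compressed intuition") wrapped in a monitor that remembers which
# environment the rule was calibrated on.
import numpy as np

class CalibratedHeuristic:  # hypothetical name, for illustration only
    def __init__(self, decide, z_threshold=3.0):
        self.decide = decide              # the fast, compressed rule
        self.z_threshold = z_threshold    # how much drift we tolerate
        self.mean = None
        self.std = None

    def fit_environment(self, X):
        # Record the statistics of the environment the rule assumes.
        self.mean = X.mean(axis=0)
        self.std = X.std(axis=0) + 1e-8   # avoid division by zero

    def __call__(self, x):
        # Out-of-domain detection: how far is this input from the
        # world the rule was tuned for?
        z = np.abs((x - self.mean) / self.std).max()
        if z > self.z_threshold:
            # Uncertainty signaling: abstain rather than extrapolate.
            return None, f"out of distribution (max |z| = {z:.1f})"
        return self.decide(x), "in distribution"

# "The North": the environment the heuristic was optimized for.
rng = np.random.default_rng(0)
north = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))

model = CalibratedHeuristic(decide=lambda x: bool(x.sum() > 0))
model.fit_environment(north)

print(model(np.array([0.2, -0.1, 0.4])))  # familiar input: the rule applies
print(model(np.array([6.0, 5.0, 7.0])))   # "King's Landing": flag the shift
```

Ned Stark fails precisely because his heuristics come with no equivalent of this check: the rule keeps firing confidently on inputs far outside the distribution it was fitted to.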

With AI systems we demand:

  • Uncertainty signaling
  • Out-of-domain detection
  • Interpretability
  • Continuous update

And yet we routinely excuse their absence in ourselves.

The asymmetry is striking.

The systems we trust most—the ones closest to our identity and moral authority—are often the least calibrated.

Bounded Intuition, Embedded in Feedback

No moral system can be made fully explicit without remainder. Judgment is irreducible.

But irreducible does not mean unaccountable.

The alternative to dogma is not perfect formalization. It is the discipline of embedding intuition inside systems that can detect when that intuition fails.

This applies equally to:

  • Personal ethics
  • Professional expertise
  • Institutions
  • Intelligent machines

Intuition without feedback loops is indistinguishable from dogma over time.

Living Responsibly with Uncertainty

Stories will always move us before arguments do. Compression will always shape values faster than analysis. Lived experience will always outrun representation.

The task is not to escape that reality, but to ensure our stories know when to ask whether they still apply.

Not certainty.
Not revelation.

But the ongoing discipline of calibration.

Reading List & Conceptual Lineage

This essay sits at the intersection of moral psychology, literature, philosophy, and emerging concerns about intelligent systems. The works below informed its framing, examples, and underlying questions—not as authorities to be deferred to, but as contributors to a shared problem space.

Myth, Narrative, and Embodied Moral Knowledge

  • Maps of Meaning — Jordan B. Peterson
    A psychological account of myth and narrative as structures that orient action and meaning prior to explicit articulation.
  • The Psychological Significance of the Biblical Stories — Jordan B. Peterson
    Illustrative of the claim that moral life begins in embodied, inherited intuitions rather than propositional reasoning.

Literary Explorations of Moral Compression and Decompression

  • The Brothers Karamazov — Fyodor Dostoyevsky
    A sustained exploration of moral responsibility through lived ambiguity, guilt, and psychological endurance rather than compressed explanation.
  • The Egg — Andy Weir
    A highly compressed moral parable that delivers ethical insight through metaphysical collapse rather than experiential unfolding.
  • A Song of Ice and Fire — George R. R. Martin
    Particularly the character of Ned Stark, used here as an illustration of environment-dependent moral optimization and the failure of recalibration under distribution shift.

Together, these works reflect an ongoing inquiry into how humans construct meaning, make moral judgments under constraint, and remain accountable as the environments—and systems—we inhabit continue to change.
