Where Does Thinking Live? AI, Automation, and the Future of Human Agency

In a world optimized for speed and output, AI forces a deeper question: where does thinking live? Automation can quietly hollow out human agency, or it can be used to cultivate a higher level of thinking, responsibility, and intellectual depth.

Students today can ask an AI system to generate essays, poems, analyses, and arguments in seconds, producing work that once required hours of solitary struggle. Many educators and parents worry that something essential is being lost. The concern is not merely academic integrity, but the deeper purpose of education itself: the formation of a capable, thinking human being.

For generations, the labor of writing has been the labor of thinking. Wrestling with sentences was not just a means of expression; it was the crucible in which ideas were formed. The blank page forced raw engagement; the difficulty was formative, not incidental.

So when AI appears to bypass that struggle, our alarm bells sound:

Are students skipping the very friction that makes them educated?

In many cases, the answer is yes.

But focusing only on the presence of AI risks missing a deeper and more important distinction, one that has already played out, successfully and unsuccessfully, in other high-stakes domains.

The Real Risk: Becoming Passengers

The most serious danger of AI in education is not plagiarism. It is cognitive outsourcing without interior formation, resulting in an erosion of epistemic agency.

When a student uses AI to produce work they cannot explain, defend, or extend, something vital has failed. They have acquired an artifact without acquiring understanding. The machine has replaced, not amplified, their thinking.

This pattern is not new.

We have seen it before with calculators used before number sense forms, with GPS eroding spatial reasoning, and with automation that removes humans from the loop entirely. In each case, the danger is the same:

Humans become passengers in systems they no longer meaningfully operate.

When failure occurs in such systems, we feel helpless, and agency quietly dissolves.

Critics of AI in education are right to fear this outcome. It is already happening. But it is not the only possible trajectory.

Automation Does Not Necessarily Reduce Human Capability

Modern aircraft are among the most automated systems ever built. They manage stability, calculate trajectories, prevent stalls, and continuously optimize performance. If automation inevitably made humans helpless, pilots would have become obsolete long ago.

Instead, the opposite occurred.

Pilots no longer spend cognitive bandwidth on raw mechanical control. They operate at higher levels of abstraction: managing intent, systems interaction, edge cases, and failure modes. Training did not diminish. It intensified.

Modern fighter pilots are not less skilled than early aviators. They are amplified, capable of operating in environments no unaided human could survive.

Crucially, aviation rebuilt human layers on top of automation:

  • interpretability (pilots understand what the systems are doing),
  • oversight (they can override and redirect),
  • and narrative responsibility (decisions are explainable after the fact).

Automation did not remove agency in the modern airplane; it relocated it upward.

Medicine: AI as Partner, Not Oracle

A similar fork is emerging in medicine, though with a complication: unlike an aircraft, which obeys objective physical laws, medicine (like education) involves subjective human values.

Used poorly, diagnostic AI becomes an oracle: a system that outputs answers doctors defer to without understanding. Accountability blurs. Clinical intuition atrophies.

But when used well, AI can expand perception. It can surface patterns across millions of cases, highlight anomalies, and augment human judgment rather than replacing it. The physician remains responsible for contextualizing, questioning, and deciding.

In these cases, the doctor becomes more capable, not less.

Once again, the difference is not the presence of automation, but the role humans are trained to occupy once it exists.

Passenger vs. Amplified Agent

This distinction matters more than any individual technology.

There are two archetypes of human–machine integration:

The Passenger

  • The system acts
  • The human monitors passively
  • Skill erodes
  • Intervention is rare and poorly practiced
  • Failure feels inevitable

The Amplified Agent

  • The system handles speed, complexity, and scale
  • The human handles judgment, intent, and responsibility
  • Skill increases rather than disappears
  • The human remains inside the causal loop
  • Failure is intelligible and actionable

The contrast between driverless cars and fighter jets makes this visible.

One removes the human entirely while the other makes the human vastly more capable.

Same class of technology. Opposite philosophies of design.

Education Is Now at the Same Fork

AI introduces this same choice into the creative and intellectual domain.

The student can use AI to bypass struggle, or they can use it to cultivate more robust epistemic agency.

In this second model, the AI is a sparring partner. The student is forced to adjudicate between competing claims, catch hallucinations, and synthesize machine-generated breadth with human-centered depth. They are not just learning facts; they are learning how to own the process of knowing in an age of automated information.

The writing may be easier; the thinking often becomes harder and more complex.

Ideas must be articulated, defended, revised, and re-owned. The student becomes an operator of concepts, not a consumer of prose.

This is not the elimination of rigor. It is its relocation into a richer, more complex domain.

It is important to note, though, that this agency cannot be summoned from a vacuum. Just as a fighter pilot begins in a basic trainer where they have to master the raw mechanics of flight, a student must first master the raw mechanics of thought—grammar, logic, and basic synthesis—before they can effectively manage a machine that automates them.

You cannot oversee a system whose foundational principles you do not grasp. Intuition is built from the bottom up; without the "raw struggle" of the basics, we lack the mental architecture required to be effective stewards of the collaborative process.

The Gravity of the Passenger State

We must also acknowledge the "path of least resistance." Our economic and social systems often reward the efficiency of the Passenger over the depth of the Agent. In a world that prizes speed and volume, "outsourcing" is incentivized, while "ownership" is treated as an expensive luxury.

Choosing to be an Amplified Agent is therefore not just a pedagogical shift—it is a quiet act of rebellion. It is a commitment to depth in a civilization that is increasingly designed for drift.

The Velocity Trap: Pedagogy vs. Proliferation

We must also address the exhaustion of the educator. One of the greatest barriers to this new model is that AI tools are proliferating faster than our ability to build a relationship with them. By the time a curriculum is designed for one model, the next version has already rendered its specific constraints obsolete.

This creates a "Velocity Trap." Educators feel they must be masters of the software before they can guide their students, but in an age of exponential growth, "tool mastery" is a losing game.

The solution is a shift in the teacher’s role: from Feature Expert to Process Philosopher.

If we focus on the specific buttons and prompts of a single AI, we will always be behind. But if we focus on the stewardship of thought (the ability to interrogate output, the ethics of attribution, and the preservation of epistemic agency), then we are teaching skills that are model-agnostic. We must move from teaching students how to use a tool to teaching them how to exist in a permanent state of principled co-learning with intelligence itself.

The Three Shadows of Automation

Even a well-designed system of "Amplified Agency" faces legitimate friction. To build these systems, we must first answer the critics who see three shadows looming over this transition.

1. The Atrophy of Intuition

The critic argues that "gut feelings" are simply high-speed pattern recognition built through thousands of hours of manual toil. If we rely on AI for 99% of the work, does the physician’s hunch or the writer’s instinct ever actually form? The answer is that struggle must be designed, not just tolerated. We do not remove the weights from the gym simply because we have invented the forklift. To keep intuition sharp, our "thinking with" machines must include deliberate intervals of "thinking without" them, ensuring the mental muscles of the craft are still being torn and rebuilt.

2. The Cognitive Class Divide

There is a darker risk: a new epistemic inequality. If only a few are trained as "Architects" while the rest are conditioned as "Passengers" of proprietary algorithms, we haven't liberated humanity; we’ve merely automated the hierarchy. Epistemic agency must be treated as a universal right, not a luxury tier. Our educational mission must be to democratize the "Agent" training. If we only teach the use of tools without teaching the logic of the tool, we are creating a world of consumers, not creators.

3. Bukowski’s Ghost: The Loss of the "Soul"

Finally, the poet whispers: what of the soul? If a machine smooths out the prose, does it rub away the jagged, inefficient edges that make human art resonate? If we use a "sparring partner" to perfect our logic, do we lose the "grit"? The "soul" of a work is not found in the labor of typing, but in the gravity of intent. The machine can provide the polish, but it cannot provide the why. The "jagged edges" of a Bukowski poem don't come from a lack of tools; they come from a human deciding that the raw truth matters more than the smooth lie. In an AI world, the human remains the sole arbiter of what is "true" and what is "resonant."

The Only Criterion That Matters

Debates about tools miss the point. The question is no longer whether AI is used, but rather:

Can the student explain the idea in their own words, defend it under questioning, extend it to new contexts, and recognize where it might fail?

If yes, education has succeeded. If no, no amount of handwritten struggle would have saved it. However, we must recognize that to arrive at "yes," the student likely needed a foundation of un-automated struggle to build the very vocabulary of their agency.

Education should not be about producing artifacts. Its end goal should always be producing agents capable of operating effectively in an ever-changing world.

Rebuilding the Human Layers

As machines enter more domains of life, we must find ways to foster stewardship of the human-machine relationship at every stage.

Wherever automation takes hold, societies can rebuild the human layers:

  • interpretability: humans can say what the system is doing and why,
  • oversight: humans retain authority to intervene,
  • narrative responsibility: outcomes are owned and explainable within a human context.

A pilot can explain a maneuver.
A doctor can explain a diagnosis.
A student must be able to explain an idea.

That is the through-line.

Rigor Was Never About Difficulty Alone

At its heart, the fear surrounding AI in education is not about technology. It is about formation.

Parents and teachers alike sense that something essential is at stake: whether young people will grow into adults who can think for themselves, tolerate uncertainty, and take responsibility for their beliefs in a complex world. That concern is justified.

But formation has never depended on isolation alone. It has always depended on challenge, accountability, and the demand to stand behind one’s understanding. AI does not remove that demand unless we allow it to.

Used poorly, these tools will make thinking feel optional. Used well, they can expose students to more perspectives, more tension, and more responsibility than solitary struggle ever could.

The task before us is not to shield students from powerful tools, but to ensure they become the kind of people who can learn how to wield power without losing themselves in the process.

Further Exploration on Sentient Horizons

If this essay sparked a curiosity about the intersection of mind, machine, and the structures of reality, you may find these previous entries relevant:

Foundations & Inspirations

For those looking to go deeper into the "Passenger vs. Agent" dilemma and the nature of human grit: