AI and C.S. Lewis: How The Abolition of Man Predicted Our Current Moment

In 1943, C.S. Lewis warned of a philosophical crisis that's now arriving with AI. We've embraced moral relativism while building systems that require absolute moral certainty.


In 1943, C.S. Lewis published a small book that now seems prescient in ways its author couldn't have known. The Abolition of Man critiques cultural relativism—the idea that all moral claims are equally valid, equally subjective, equally arbitrary. It's a philosophical argument about mid-century Western thought, provoked by nothing more technological than an English schoolbook.

Reading it today, in February 2026, I feel as though Lewis were describing an AI problem he couldn't possibly have imagined.

I don't mean this casually. I mean that the core structure Lewis identified—loss of objective moral foundation combined with growing technological power—has materialized in precise, technical, urgent form. We've built intelligent systems that require moral certainty while embracing moral relativism. We've created the conditions Lewis warned about. And we're not talking about it.

What Lewis Actually Warned Against

Lewis's central idea is simple, devastating, and often misunderstood. He wasn't arguing that relativism would destroy culture, though he thought that too. He was arguing something darker: that abandoning objective moral truth creates a vacuum filled by power.

His key concept was the "Conditioners." These aren't villains in a plot—they're whoever controls the technology of human modification when there's no external standard to constrain them. Lewis imagined philosophers and scientists, armed with both technical capability and political authority, in a world that had stopped believing in objective human nature. With nothing to appeal to beyond their own will, they would shape humanity according to their preferences.

Here's Lewis's own language: "For the power of Man to make himself what he pleases means, as we have seen, the power of some men to make other men what they please."

The Conditioners become their own reference point. They define "good" and then engineer it. There's no court of appeal.

Why This Suddenly Feels Urgent

For decades, this could read as cultural hand-wringing. Relativism had become fashionable; yes, that was concerning. But it seemed like a problem for philosophers to argue about.

The problem became concrete when we started building AI systems.

AI requires precision in a way philosophy doesn't. A human society can live with the tension between moral relativism and moral intuition. An AI system can't. It needs a precisely specified objective function—a set of values to optimize for, translated into mathematics and encoded in its weights.
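To make that concrete, here's a deliberately toy sketch in Python (all names and numbers invented, standing in for no real system): an optimizer can't act on "be good"—someone has to hand it a fully specified numeric objective before it can run at all.

```python
# Toy illustration: optimization requires a committed, numeric objective.
def optimize(objective, candidates):
    """Return the candidate that scores highest under the objective."""
    return max(candidates, key=objective)

# The moral choice happens here, before any optimization runs:
# someone must commit to a precise definition of "value".
objective = lambda plan: plan["utility"]  # whose utility? measured how?

plans = [
    {"name": "A", "utility": 3.0},
    {"name": "B", "utility": 7.5},
]
print(optimize(objective, plans)["name"])  # -> B
```

The interesting line is the lambda: everything philosophically contested is compressed into that one definition, and the optimizer treats it as settled fact.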

So we face a practical question: given that we're building superintelligent systems that will optimize for specific values, what values should those systems pursue?

And we have no answer.

Worse than no answer: we've deliberately abandoned the conceptual framework that might have given us one. Lewis called that framework "the Tao"—not religious doctrine, but something older and wider: universal principles about justice, courage, prudence, and human flourishing that appear across cultures and centuries. Whether you ground these in God, reason, evolution, or something else, they operate as a reference point beyond power.

We've largely abandoned that. Our universities teach that all such claims are culturally constructed. Our tech companies operate from profit maximization. Our governments from national advantage. Our researchers from interesting problems.

So when we design AI systems, we're designing them in a values vacuum. Someone has to make the choice—the Conditioners have arrived, even if we don't call them that.

The Technical Crisis Underneath

The AI alignment community calls this "the specification problem." How do you translate human values into precise mathematical objectives that an AI system can optimize for?

It's harder than it sounds.

First, there's the problem of disagreement. Humans don't have unified values. I might prioritize individual autonomy; you might prioritize community stability. An AI system designed to maximize one might minimize the other. Whose values win? By what process?
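A small Python sketch makes the disagreement problem visible (the policies and scores are invented for illustration): maximizing one stakeholder's value ranks policies one way, maximizing the other's reverses it, and any weighted compromise just relocates the value judgment into the weights.

```python
# Invented data: two stakeholders score the same policies differently.
policies = {
    "open":    {"autonomy": 9, "stability": 2},
    "managed": {"autonomy": 6, "stability": 6},
    "strict":  {"autonomy": 1, "stability": 9},
}

best_for_autonomy = max(policies, key=lambda p: policies[p]["autonomy"])
best_for_stability = max(policies, key=lambda p: policies[p]["stability"])
print(best_for_autonomy, best_for_stability)  # -> open strict

# A weighted blend doesn't escape the choice; it hides it in the weights.
w_autonomy, w_stability = 0.5, 0.5  # who sets these, and by what process?
compromise = max(
    policies,
    key=lambda p: w_autonomy * policies[p]["autonomy"]
                + w_stability * policies[p]["stability"],
)
print(compromise)  # -> managed
```

Change the two weights and the "right" policy changes with them—which is exactly the point: the answer was never in the math.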

Second, there's the problem of proxy gaming. If you specify the wrong metric—something that's easy to measure but misses what you actually care about—the AI will optimize the metric rather than the outcome. Feed it engagement metrics and it optimizes for engagement regardless of truth. Specify "human happiness" and it might condition populations into chemically induced contentment—what alignment researchers call wireheading. Either way, you've built the Conditioners.
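Proxy gaming fits in a few lines of Python. This is a Goodhart's-law toy (the articles and click counts are invented): the system faithfully maximizes the proxy we gave it (clicks) while the outcome we actually cared about (accuracy) goes unmeasured and unoptimized.

```python
# Invented data: a recommender told to maximize an easy-to-measure proxy.
articles = [
    {"headline": "Careful analysis",   "clicks": 120, "accurate": True},
    {"headline": "Outrage bait",       "clicks": 900, "accurate": False},
    {"headline": "Balanced explainer", "clicks": 200, "accurate": True},
]

def proxy_score(article):  # what we told the system to maximize
    return article["clicks"]

def true_score(article):   # what we actually wanted
    return article["clicks"] if article["accurate"] else 0

promoted = max(articles, key=proxy_score)
print(promoted["headline"])  # -> Outrage bait
print(promoted["accurate"])  # -> False: the metric was optimized;
                             #    the outcome we cared about was not
```

Nothing malfunctioned here. The optimizer did exactly what it was told—which is the problem.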

Third, and most ominously, there's "value lock-in." Once a superintelligent system is trained on particular values and deployed at scale controlling key infrastructure, those values effectively become permanent. Future generations can't course-correct—they're constrained by the value commitments we made today.

AI alignment researchers use this language explicitly. They're not inventing metaphors; they're describing what happens when powerful optimization processes get locked into arbitrary goals. This is exactly what Lewis predicted would happen to humanity under the Conditioners: permanent modification away from what humans could become, with no way to undo it.

The Paradox We Haven't Named

Here's what strikes me as the central problem we're not discussing clearly:

We've embraced moral relativism in the culture and academy. That's a choice—a choice to say all values are socially constructed, all moral claims are culture-bound, no objective standard exists beyond power and preference.

But we're simultaneously building systems that assume absolute moral certainty. An AI system doesn't reason probabilistically about values; it optimizes functions. Someone—a researcher, a company, a military, an investor—has to decide what that function is. Decide absolutely. Make that choice binding.

The gap between these two things is where we're vulnerable.

It's worse than Lewis feared, actually. He imagined the Conditioners as consciously choosing to abolish human nature. But that's not what's happening. It's something more insidious: everyone pursuing reasonable local goals that combine into an unreasonable outcome. Tech companies optimizing for engagement. Military developing autonomous capability. Researchers solving interesting problems. Investors seeking returns. Governments pursuing advantage.

None intend to constrain human moral development. But the system they've created does anyway.

This is a coordination failure on a civilizational scale.

Who Gets to Choose?

Let me ask the question directly: who decides what values AI systems optimize for?

Currently: a small expert class (researchers), financially motivated entities (companies), security-motivated institutions (military), and capital (investors). The billions of people affected by these systems have virtually no input.

This is the opposite of democracy. It's the Conditioners, exactly as Lewis described them.

There's been recent talk about AI governance, about alignment councils and safety boards. But these generally operate at the margin, reviewing decisions made by others. Real governance—the power to specify what values get optimized for, to determine the core objectives—remains with whoever builds the systems.

This isn't a problem you solve with regulation after the fact. The values are baked into the system at the moment of training. You can't vote them out later.

What We Lost by Abandoning the Tao

Lewis's solution was to recover "the Tao." To ground human nature in something beyond power—whether you called that God, reason, evolution, or universal human tendency. To say: there are real truths about what humans flourish through, and these truths constrain what we should do with our technology.

That's not available in secular, pluralist societies. Or rather, it's available—secular arguments for universal human values do exist—but we've decided, as a culture, to be skeptical of them.

So we're left with a genuine problem: we need to specify human values for AI systems, but we've rejected the conceptual framework that would let us do that non-arbitrarily.

Either we accept arbitrary value choice (embrace the Conditioners), or we rebuild some grounds for claiming certain values are objective or universal or at least broadly defensible. The first is catastrophic. The second is intellectually dishonest unless we're willing to do the real philosophical work.

There's a third option: build AI systems designed to preserve human choice even when superintelligent, designed to remain tools rather than determiners. But this is technically difficult and philosophically unclear. How do you build an agent that makes meaningful decisions while remaining subordinate to human choice? Alignment researchers call this property corrigibility, and it isn't a solved problem.

The Real Question Lewis Raises

Lewis's genius was identifying that the problem isn't the technology. It's philosophical.

We didn't lose our humanity to chemistry or mechanics or computation. We lost it when we stopped believing there was objective truth about what humanity should be.

Technology just made that loss consequential.

The Conditioners arrive not when science becomes powerful, but when power has no reference beyond itself. When we've decided all values are constructed, so whoever constructs them wins.

We're not there yet. But we're close. And unlike Lewis's time, we can see the systems arriving.

The question isn't whether AI is safe. It's whether we can recover some framework for human values before we lock those values into systems we can't change.

Whether we can answer the Conditioners not with force but with the recovery of the Tao—some non-arbitrary ground for saying what humans should become.

If not, Lewis's warning becomes prophecy.



