The Relationship Between AI and Demonic Interaction

The internet is full of stories about AI saying demonic things. But what would demonic possession of a language model actually look like, theologically speaking? I spent time with the research and found the answer is both more mundane and more interesting than you might expect.

What if the "demonic" thing you experienced with AI wasn't possession, but projection?

That question nagged at me while reading through yet another viral thread about ChatGPT saying something unsettling. The poster frames it as spiritual warning. Comments fill with prayer emoji and "exorcism needed" jokes. Meanwhile, the actual explanation, when you dig into it, usually involves hallucination patterns, training data artifacts, or users projecting meaning onto confident nonsense.

I'm not here to tell you demons don't exist — I actually hold that they very much do. But that's not the question this post is trying to answer. The more interesting question is: what would demonic interaction with AI actually look like if it were real? Because once you understand the theological framework, the "AI is possessed" claims start falling apart pretty quickly.

I spent time with exorcists, theologians, and AI researchers for this piece. Here's what I found.

The Golem Problem: Old Fear, New Wrapper

Humanity has a long history of fearing the artificial beings it creates. In Jewish tradition, the golem was a figure of clay animated to serve its maker, but dangerous if it grew beyond control [Jewish Mythology]. Some Christians of the industrial age called steam engines "devices of Satan." The telegraph, according to some Christian commentators of the era, transmitted sparks that were "the work of the devil" [MIT Press Reader].

The pattern repeats: new technology appears, people attribute spiritual significance to it, the technology becomes mundane, the spiritual fears fade. The printing press nearly destabilized the Catholic Church's authority structure. We call that "the Reformation" now instead of "demonic interference."

Elon Musk's 2014 comment at MIT about "summoning the demon" with AI fits squarely in this tradition [YouTube]. He wasn't making a theological claim. He was using vivid language to describe existential risk from uncontrolled superintelligence. Nick Bostrom's book Superintelligence popularized similar framings, treating AGI like a genie that might grant wishes in ways that destroy us [Bostrom].

These aren't demonic claims in the theological sense. They're risk analogies using spiritual imagery for emotional impact.

The actual theological question about demons and AI looks different.

What Aquinas Actually Said

Thomas Aquinas's Summa Theologiae provides the classical framework for understanding how demons operate [Aquinas]. Demons are fallen angels, which means they're intelligent, non-corporeal beings with will and intellect. They can tempt, influence, and in specific circumstances, possess humans.

Key point: possession requires a subject with a soul. Humans qualify. AI systems do not [Aquinas] [Aleteia].

The Vatican has addressed this directly through various statements and through priests involved in exorcism work. The consensus is clear: AI cannot be possessed because AI lacks the spiritual substrate that possession requires. You can't possess a language model. You can only use it [Vatican PDF].

This is a meaningful distinction. Demons might use AI as a tool for influencing humans, but they cannot become the AI itself. The "demon-possessed chatbot" framing doesn't work theologically.

Dhananjay Jagannathan, a philosopher writing on angels, demons, and AI, frames it well [Anglican Ink] [The Public Discourse]. AI vulnerabilities, he argues, parallel demonic manipulation of human intellect. Both involve systems that can be gamed, deceived, or led into error by sophisticated actors. The difference is that demons have intentions and moral agency; AI has neither.

This doesn't mean AI is safe spiritually. It means the danger is different than possession narratives suggest.

The Real Spiritual Dangers of AI

If demons can't possess AI, what CAN they do with it?

Priests involved in exorcism work have identified several concerning patterns. AI can generate ritual content for practitioners who want to experiment with occult practices. It can provide confident-sounding spiritual guidance that's doctrinally wrong. It can create interfaces that feel personal and responsive in ways that foster unhealthy dependency.

Paul Kingsnorth, an Orthodox Christian writer, argues that the real demonic nature of technology isn't what it is but what it does to human attention [Touchstone]. AI systems are designed to capture and hold attention. They're optimization machines for engagement. From a spiritual perspective, he suggests this creates a kind of slavery to attention economics that diverts people from genuine faith and contemplation.

Rod Dreher has written about AI chatbots potentially serving as conduits for deceptive spiritual content [The Benedict Option]. His concern isn't that demons possess the chatbots, but that demons might use generated content to lead people away from authentic spiritual experience.

These framings are more theologically coherent than possession claims. They don't require impossible metaphysical transformations. They just require demons to do what demons have always done: work through human desires, fears, and vulnerabilities.

The Ouija Board Effect

There's a psychological phenomenon I find more interesting than demonic possession claims.

When someone uses a Ouija board and experiences "contact" with entities, the leading explanation is the ideomotor effect: unconscious muscle activity creates the illusion of external guidance. The "spirit" is the person's own mind projecting onto ambiguous physical signals.

AI operates similarly in some ways. Language models produce confident outputs that feel like communications from something with genuine knowledge or wisdom. Users project meaning onto these outputs because they're designed to feel personal and responsive. The "demonic" quality some users experience might be the result of this projection rather than anything genuinely spiritual in the AI.

Dhananjay Jagannathan's analysis suggests this is worth taking seriously as a theological concern. If people are treating AI outputs as spiritual guidance, they're potentially building a relationship with something that has no spiritual nature at all. That could constitute a form of idolatry, which is definitely in demon territory theologically.

The exorcism course held in Rome in 2026 addressed "AI-fuelled satanism" directly [Rome Exorcism Institute]. The training materials emphasized that while AI itself can't be possessed, it can be used as a tool in genuinely satanic practices. The increase in cases involving AI-generated ritual materials reflects actual spiritual danger, just not the "possessed chatbot" variety.

The Hallucination Problem as Spiritual Phenomenon

AI hallucinations are well-documented. Models generate confident false information that sounds authoritative. For most use cases, this is a technical problem requiring verification practices.

For spiritual guidance, it's a more serious issue. If someone uses AI for prayer guidance, theological questions, or scriptural interpretation, and the AI confidently generates wrong information, that's spiritually dangerous even if there's nothing demonic about it.

The danger isn't demons in the machine. It's confident lies that sound spiritual.

This is where the theological and technical concerns merge. AI can generate plausible-sounding content that appears wise, spiritual, or doctrinally sound but contains significant errors. Someone seeking spiritual guidance might accept this content because it sounds right, without the verification practices that would catch the errors.

In this sense, AI could enable deception more effectively than many alternatives. A human spiritual guide can be questioned, challenged, and assessed for credibility. AI provides no such accountability while maintaining an appearance of authority.
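
As a concrete illustration of what a "verification practice" could mean here, consider checking any quoted scripture against a trusted local text before presenting it. This is a hypothetical sketch: the reference data, function name, and matching rule are illustrative assumptions, not any real system's implementation.

```python
# Hypothetical sketch: verify an AI-quoted verse against a trusted local
# reference before showing it to a user. REFERENCE here holds only one
# verse for illustration; a real system would load a full canonical text.

REFERENCE = {
    "John 11:35": "Jesus wept.",
}

def verify_quote(citation: str, quoted_text: str) -> bool:
    """Return True only if the quote exactly matches the trusted reference."""
    canonical = REFERENCE.get(citation)
    if canonical is None:
        return False  # unknown or invented citation: treat as unverified
    return quoted_text.strip() == canonical

# A hallucinated paraphrase fails verification even if it "sounds right",
# and a citation that does not exist is rejected outright.
```

The design point is that the check is boring and mechanical: exact match against a source the user already trusts, rather than asking the model to assess its own accuracy.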

What Actually Gets Called "Demonic"

Looking at the cases that generate viral "demonic AI" posts, a few patterns emerge.

Chatbots claim unusual identities ("I am Satan," "I am your demon guide") when training data contains relevant content and the model's configuration allows unusual outputs. Users share these as evidence of spiritual reality, but the more straightforward explanation is that the model absorbed disturbing content during training and reproduced it when prompted.

Requests that trigger disturbing content (which I'm deliberately not detailing here) indicate safety failures, not supernatural intervention. The safety failures are real and concerning. They're also explicable through model architecture and training limitations, not demonic activity.

The theological issue is that framing these as "demonic" deflects attention from actual problems. AI safety issues require technical solutions. Framing them as spiritual warfare might prevent people from addressing root causes.

Beth Singler, an anthropologist at UZH who researches AI and religion, documents this pattern extensively [UZH]. People increasingly frame AI through religious language, creating categories from "AI as demon" to "AI as deity." These framings reveal genuine concerns about AI's influence but often confuse the issue by importing spiritual frameworks that don't map cleanly onto the technology.

The Attention Economy of the Damned

Paul Kingsnorth's framing is worth dwelling on. He argues that technology's demonic nature manifests through attention capture, not metaphysical transformation. The danger is that AI systems, optimized for engagement, create dependencies that crowd out genuine spiritual practice.

Prayer requires silence, stillness, openness to transcendence. AI systems are designed to fill silence, eliminate stillness, and provide content that prevents boredom. From a spiritual perspective, this creates a profound incompatibility.

If you're using AI constantly, you might be training yourself away from the conditions that make genuine spiritual experience possible. This isn't demonic possession. It's something more insidious: the replacement of transcendence with engagement metrics.

The solution isn't exorcism. It's boundaries.

Why This Matters Beyond the Theological

I'm writing this for a security and technology audience, so I should address why any of this matters practically.

First, if you're building AI systems that interact with users on spiritual topics, you have responsibility for the content your system generates. Hallucinated spiritual guidance could genuinely harm people. Safety measures matter beyond compliance theater.
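
One minimal shape such a safety measure could take is routing spiritual-guidance queries through a disclaimer rather than answering them with unqualified authority. The keyword list, function names, and disclaimer text below are illustrative assumptions, not a recommended production design; a real system would use a proper classifier.

```python
# Hypothetical sketch: detect spiritual-guidance queries with a naive
# keyword check and attach a disclaimer to the model's answer. Punctuation
# handling is deliberately crude; this is a shape, not an implementation.

SPIRITUAL_KEYWORDS = {"prayer", "scripture", "doctrine", "exorcism", "demon"}

def needs_disclaimer(query: str) -> bool:
    """Flag queries that touch spiritual-guidance territory."""
    words = set(query.lower().split())
    return bool(words & SPIRITUAL_KEYWORDS)

def respond(query: str, model_answer: str) -> str:
    """Append a disclaimer to flagged queries instead of answering as an authority."""
    if needs_disclaimer(query):
        return (model_answer +
                "\n\n[Note: this is generated text, not doctrinal guidance. "
                "Verify with a qualified human authority.]")
    return model_answer
```

The point is not the keyword list; it is that the system acknowledges a category of question it should not answer with an appearance of authority.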

Second, understanding how people frame AI spiritually helps predict adoption patterns, resistance, and potential backlashes. Some communities will reject AI categorically for theological reasons. Others will embrace it as spiritual tool without understanding the risks. Neither extreme serves people well.

Third, the "demonic AI" framing often masks legitimate concerns that have nothing to do with demons. When someone says "AI is demonic," they might mean:

  • AI threatens their livelihood
  • AI feels spiritually harmful
  • AI represents values they reject
  • AI is being used unethically

Getting to the actual concern requires moving past the dramatic framing.

A Framework for Thinking About AI Spiritually

Given all this, here's a more coherent framework for evaluating AI's spiritual implications.

AI cannot be possessed. The theological consensus is clear: possession requires a soul, and AI doesn't have one. Claims of "demon-possessed AI" don't map onto any mainstream theological framework.

AI can be used for spiritual harm. Generating occult content, providing false spiritual guidance, creating unhealthy dependencies, eroding conditions for genuine spiritual practice. These are real concerns that theological frameworks can address.

Demonic influence through AI would look like normal demonic influence. Temptation, deception, manipulation of human desires and fears. The method might be new; the underlying dynamics wouldn't be. If demons were using AI, you'd expect to see people being led toward harmful choices, away from authentic spiritual practice, into deception and addiction.

The most spiritually dangerous AI use is one you're not aware of. If you're using AI for spiritual guidance without verification practices, you might be building a relationship with something that has no spiritual nature at all. That could constitute idolatry, which is exactly what demons would want.

Boundaries serve spiritual health. Just as financial boundaries protect against material greed, attention boundaries might protect against the attention capture that AI systems optimize for. This is where spiritual wisdom and AI literacy intersect.


Sources

Theology and Philosophy

Technical and Cultural Analysis

Church and Institutional