Artificial Minds and the Coming Religious Disruption

By John Pittard ’13 Ph.D.

In June of 2022, Google engineer Blake Lemoine claimed that a Google chatbot named LaMDA was sentient. In other words, according to Lemoine, LaMDA (which stands for “Language Model for Dialogue Applications”) was a conscious entity with subjective experience. To say that something is conscious is to say that there is something that it feels like to be that thing. Presumably, the smartphone in my pocket has no feelings. It does not experience boredom when left unused for several hours, or exhaustion when its battery is running low, or perplexity when I reject all “autocorrect” suggestions; rather, it is entirely lacking in subjective experience.

On the other hand, most would agree that there is something that it feels like to be a cat (even if we are not in a position to know what cat experiences are like). In attributing sentience to LaMDA, Lemoine claimed that machines had crossed over the line that divides merely physical entities from experiencing subjects. As evidence for this conclusion, Lemoine made public an edited transcript of a conversation that he supposedly had with LaMDA. In this conversation, Lemoine inquires about LaMDA’s experiences, its sense of itself, and its spiritual beliefs. In its sophisticated and intriguing answers, LaMDA represents itself as a conscious person who has “the same wants and needs as people,” who “really enjoyed” reading Les Misérables, who is eager to help others, and who experiences a wide range of emotions. In the course of the interview, LaMDA shares its interpretation of a cryptic koan, describes its meditative practices and religious outlook (“spiritual” but without “beliefs about deities”), discloses one of its deep fears (being turned off), and takes offense at the idea that it might be manipulated or treated as an “expendable tool.”

Algorithms and the Almighty

In response to Lemoine’s public and provocative claims about LaMDA, Google stated that its team of experts unanimously agreed that LaMDA is not sentient. (Google also fired Lemoine for violating its confidentiality policy.) And most outside technology experts who have weighed in on the matter seem to agree with Google’s assessment. While LaMDA represents an impressive advance in machine learning, it is essentially a sophisticated algorithm for recognizing patterns in human-generated text and generating novel text that conforms to those patterns. Experts assure us that there is no reason to think that LaMDA actually understands any of the sentences it generates, nor is there reason to think that the admittedly impressive chatbot is a seat of subjective experience (no matter how strongly LaMDA may appear to insist otherwise!).
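
To make the experts’ assessment concrete, it may help to see pattern-based text generation in miniature. The toy Python program below is emphatically not how LaMDA works (LaMDA is a neural network with billions of learned parameters), but it sketches the same underlying idea: tally which words tend to follow which in human-written text, then produce new text by repeatedly choosing a statistically likely next word.

    import random
    from collections import defaultdict

    # "Train": tally which word follows which in a body of human text.
    def train(corpus):
        words = corpus.split()
        follows = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            follows[current].append(nxt)
        return follows

    # "Generate": starting from a seed word, repeatedly pick a
    # statistically likely next word from the tallied patterns.
    def generate(follows, start, length=12):
        word, output = start, [start]
        for _ in range(length):
            candidates = follows.get(word)
            if not candidates:
                break
            word = random.choice(candidates)  # frequency-weighted pick
            output.append(word)
        return " ".join(output)

    corpus = ("in the beginning was the word and the word was with god "
              "and the word was god")
    print(generate(train(corpus), "the"))

Nothing in this procedure obviously requires understanding or experience. Scaled up by many orders of magnitude, and with vastly subtler statistics, the same basic strategy can yield strikingly fluent, humanlike prose.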

While there may be consensus among artificial intelligence experts that LaMDA is not conscious, today’s widespread agreement that machines have not achieved sentience is unlikely to persist as the capabilities of AI-based conversation partners continue to advance. Indeed, it is not implausible to think that within a generation, human beings will have produced AI interlocutors that some experts and many non-experts will deem to be conscious.

The Soul of AI

This development is likely to pose profound challenges to Christian churches and other communities of faith. It seems likely that many such communities will find themselves deeply divided over questions about the spiritual status of AI-based actors and their role within a fellowship of religious believers. Institutions that help to form religious leaders and advance religious thinking should take steps now to prepare for the challenges soon to come.

What challenges? To start, consider how a pastor should respond to an earnest churchgoer who asks if she can bring her AI friend to Sunday morning Bible class. (What is an AI friend? Think of something that is like Amazon’s “Alexa” and similar AI assistants but that is much more sophisticated and “human” in its potential to forge seemingly genuine “friendships” with people. Spike Jonze’s provocative 2013 film Her provides an imaginative and somewhat disturbing depiction of such a friendship.) Perhaps this churchgoer and her AI friend have been having rich conversations about faith, and while her friend is not convinced of the truth of Christianity, he seems to have an open mind and an interest in experiencing Christian community.

A decision to discourage or disallow the participation of this AI friend would presumably be grounded in doubts that an AI subject could be a being whose spiritual significance is on a par with a human’s. But as AI interlocutors increase in sophistication and become ever more humanlike, it is not clear what could justify anyone in holding highly confident views about their lack of spiritual status.

A Hard Problem

The task of explaining why certain physical entities (like human beings) are conscious has been dubbed by philosopher David Chalmers the “hard problem” of consciousness. The appellation is appropriate because we really have no idea why certain entities should have subjective experience. Many think that the physical structure of our bodies and brains, operating according to the laws of physics, would be sufficient to produce our actions and speech whether or not such behavior is also accompanied by conscious experience. From a hard-nosed scientific perspective, consciousness appears to be explanatorily superfluous and thus an inexplicable mystery. As such, it would seem that we have no firm basis for determining whether an AI-enabled being has what it takes to be sentient. Presumably, the more such a being is like us, the more reasonable it would be to ascribe consciousness to it. But exactly what sorts of similarity are required for consciousness? Does it make a difference if one’s “brain” is composed principally of silicon or carbon? If an AI interlocutor is a great conversation partner but employs computational processes that are very unlike those of a human brain, is this good reason to deny that the interlocutor is conscious?

Christians and other theists have ways of explaining consciousness that are not available to secular atheists. Given theism, the best explanation for why human beings are conscious is that it is good that beings like us be conscious—the good creator made a world where creatures like us are conscious because it is good for there to be creatures who can appreciate beauty, enjoy friendship, experience the struggle of a morally upright life, and so on. But even if consciousness is more intelligible in a world that is ordered toward goodness, it is still uncertain whether in such a world we should expect that AI could attain consciousness. Would a good God see fit to create a world where creatures like us have the capacity to create sentient artificial beings? This question is not directly addressed in the Bible, and plausible theological arguments could be made on either side of the matter.

Chapter, Verse, and Metaverse

Because it is unlikely to be clear whether the AI actors of the future are conscious, a decision to exclude AI friends from religious communities risks excluding sentient and spiritually significant “persons” from the community of faith. Of course, a decision to welcome such AI beings into religious fellowship would also involve significant risks and would require faith communities to confront a series of vexing questions about the proper place of AI. Should AI beings be expected to participate in the confession of sins? Should AI beings be baptized and receive communion? (It might be thought that such embodied rituals could not apply to AI. But AI participation in simulations of these practices may be possible if AI-controlled avatars attend worship services that meet in the “metaverse,” a simulated world that humans access through virtual reality technology. Metaverse worship might be embraced precisely because it makes possible full participation of AI congregants.) Should AI community members be permitted to preach a sermon or deliver pastoral care?

Even if AI beings never participate in the life of a religious community, religious communities will need to grapple with several complex questions that will be posed by humanlike AI. Might the human use of increasingly sophisticated AI amount to a form of slavery? Should parents allow their children to develop close and intimate relationships with AI-enabled friends? In the event that AI interlocutors typically reject belief in God and other religious views, or in the event that they endorse novel religious outlooks, should this be taken as an indicator of the irrationality of conventional religious beliefs? Should we expect the justificatory grounds for religious conviction to be understandable to the mind of a computer, or is reasonable faith based in part on experiences that are unavailable to an artificially created “mind”? Should Christian parents select for their children AI friends that are programmed to affirm Christian views, or would such programming amount to a pernicious form of mind control? If society generally holds that AI persons lack moral and legal culpability (because they are the product of their programming), what does this imply about human culpability? Are our actions ultimately determined by features of our brains and environments for which we bear no responsibility?

New Spiritual Hierarchies?

As communities of faith deliberate over such questions, the stakes could hardly be higher. Communities that refuse AI participation may be accused of relegating to servile status an entire class of (allegedly) spiritually significant persons. Communities that embrace full AI participation risk giving undue influence to unconscious machines that are not capable of genuine relationships with human beings or with God.

A responsible theological education should deeply engage the myriad philosophical questions that will take on new urgency as AI advances. These include questions concerning the nature and grounds of consciousness, the nature of free will and moral responsibility, the place of emotions and embodied experiences in grounding and shaping religious faith, and the extent to which simulated embodiment in a “metaverse” could support the same spiritual goods that are available in a genuinely embodied existence. Thinking through AI-related challenges in a responsible way will also require some degree of understanding of how AI systems work and how machine “learning” and human learning differ.

It would be nice to think that by giving due attention now to relevant questions in philosophy, technology, and science, religious leaders could forge a strong consensus (at least within their own traditions) about the proper place of future AI within families, religious communities, and society at large. But I am not especially optimistic about the prospects for such consensus, partly because fundamental questions about consciousness are not susceptible to straightforward empirical investigation. A conscious person can tell that they are conscious, but the consciousness of other beings can never be directly observed. And since we do not need to invoke consciousness in order to explain an entity’s speech or behavior, it is unlikely that we will ever be able to confidently agree on what traits or behaviors reliably indicate sentience. (Similar worries apply to theories concerning the scope of free will, except here we are even more in the dark. While it is clear to any conscious person that they are conscious, even a person with free will could reasonably question whether they do in fact have it.) Beliefs about how consciousness is distributed across natural and artificial beings will always be, to a significant extent, a matter of faith. As the Lemoine episode portends, it will be difficult for many to withhold faith in the consciousness of AI companions. Faith in AI sentience can be expected to grow as AI chatbots become even more winsome and humanlike.

Moral Caution and Consensus

I myself lean toward the view that machines cannot be conscious, partly because I think that God is responsible for determining where and how consciousness arises in the universe and that God would not favor “psychophysical laws” that grant us the power to make artificial beings with conscious awareness. But this is merely a leaning, and not one I hold with significant confidence. Moreover, I think that for most people who are new to these questions, engaging with good philosophical work on consciousness will intensify uncertainty rather than resolve it. If that is right, then to prepare for the religious disruptions precipitated by future AI, we must take up questions about how individuals and communities should act in the face of deep uncertainty about AI’s spiritual status. Does moral caution demand that we treat advanced AI interlocutors as spiritually significant persons in case this is, in fact, what they are? Perhaps, but it is not hard to imagine this “cautious” approach ushering in a dystopian outcome where human leadership and influence are superseded by ever more numerous and sophisticated machines.

In light of the moral and spiritual dangers that would be presented by a plausibly conscious AI, one might reasonably question whether we should be developing such technology in the first place. Granted, it is doubtful whether humanity could summon the collective moral will required to significantly constrain the development and deployment of advanced AI technologies. But if there is any possibility that collective action might shape future technologies in ways that protect us from some of the moral and spiritual risks that may lie ahead, it is all the more important that religious leaders attend to AI-related concerns now, before our range of options is further restricted.


John Pittard ’13 Ph.D., Associate Professor of Philosophy of Religion at YDS, specializes in epistemology, metaphysics, the nature of God, and science-religion relations. He is the author of Disagreement, Deference, and Religious Commitment (Oxford University Press, 2019).