Matters of Silicon and Spirit: An Interview with John Pittard
YDS philosopher of religion John Pittard ’13 Ph.D. regards AI with fascination and urgency. Debates about AI give his abiding interest in metaphysics a fresh opening onto perennial questions: What is consciousness? What is mind? Are only humans capable of spiritual experience? Is the human brain merely a complicated input/output machine—a lot like AI—or does it engender consciousness in ways that cannot be accounted for in purely physical terms? Should it matter to us if AI technologies reach “singularity” and supersede human capacities? Does it matter to God?
Pittard, Associate Professor of Philosophy of Religion, is a Christian whose courses at YDS and in the Yale Department of Philosophy focus on epistemology, the nature of God, and science-religion relations. Of special interest to him is the rational significance of religious disagreement—intractable disagreement between people when both sides have informed, reasonable arguments. He is the author of Disagreement, Deference, and Religious Commitment (Oxford, 2019).
As the technology’s ambitions accelerate, Pittard has turned more attention to the consequential moral and philosophical questions posed by AI. He discussed the hazards of AI on a YDS Quadcast episode last year, hosted by Emily Judd ’19 M.A.R. In the Fall 2022 issue of Reflections, he wrote to alert church leaders to the strange new questions that future AI will likely pose about the life of the spirit. If bots do become more and more humanlike, would they also be capable of spiritual curiosity, even redemption? If so, should congregations accept them as members, baptize them, allow them to take communion? As a theist, Pittard leans toward the argument that machines might eventually exceed human achievements in creative endeavors and other tasks, but will never attain consciousness or free will—only God is responsible for how consciousness and freedom of the will occur in the universe. Pittard offered his latest thoughts in an interview with Reflections editor Ray Waddle earlier this year. Here’s an edited version of their conversation.
Reflections: I don’t get the impression that theological debate is raging in Silicon Valley or guiding discussions about the AI future. What perspective can a university-based divinity school like YDS inject into the conversation?
John Pittard: YDS and people who teach and study here are very concerned about basic questions of human flourishing and ethics, and there’s a host of immediately pressing ethical issues in light of the AI technologies we already have. How much of our creative activity should we off-load onto technology? What are the implications of that for human flourishing? What is gained or lost when we engage with technology rather than with people? We’ve already encountered similar concerns posed by our use of social media, but AI adds a whole other layer of interacting with technology rather than with human beings. There are questions of how it’s used by corporations, with human labor being displaced by AI technology. And there are related concerns about biases in AI decision-making and the explicability of decisions undertaken by AI. So one thing YDS can bring to the table is people who are ethically reflective and already thinking about these immediate issues.
But beyond these immediately pressing ethical problems, there are more fundamental questions about human purpose that are taken up at YDS, questions that will be increasingly relevant as we grapple with more powerful AI technologies. What do we hope for in the human story? Are our aspirations tied to humanity per se? What kind of autonomy or self-direction do we want to preserve for humanity? What’s lost or gained if we give significant power to future AI? Such questions aren’t so pressing in the near term because certain high-level capacities (for instance, those aimed at planning, leading, crafting a vision for an institution) are not yet in the province of AI, though I suspect they will be someday. Now is the time to begin thinking about what it is we value about human beings and why, and whether ceding our place to AI in certain ways would represent a loss of what we care about.
These questions are, of course, pursued in other parts of the university. But YDS does bring thoughtfulness about how religion informs these conversations—Christianity, for many of the faculty, but a wider set of religious views as well. I think that the relevance of religion to questions about the significance of AI gets lost in some of the conversations in philosophy departments or in Silicon Valley. Religious views are often not well represented. If we want to have a productive society-wide conversation about these issues, one that aims at meaningful consensus, then religion has to be included.
Reflections: Should we humans be restraining ourselves, curtailing our dreams of building these systems, or is it too late to ask that question?
JP: It’s not too late to ask the question. A lot of these technologies are already deployed, but it doesn’t mean they are permanent. Moreover, we have not yet developed the capacity to make extremely humanlike AI, bots that are just as capable as us across a wide range of cognitive tasks. But we may not be that far away. One major survey from 2023 suggests that most AI experts think that we are likely to achieve Artificial General Intelligence (where machines match or exceed human capacities across most cognitive tasks) by 2050. If so, that’s close, but there’s still time for reflection about whether to pull back, regulate, and shape how these technologies should be developed.
Reflections: Does belief in God mandate that we restrain ourselves?
JP: I think one’s theological commitments can inform how one thinks about this, but it’s not always a straightforward thing to say what bearing those theological commitments have. Speaking from a Christian perspective, it’s not like we have Bible passages talking about AI. So, I’d urge a kind of humility. Even among those who agree on a source of religious authority, there is room for significant disagreement about these matters.
But certainly from a theistic and Christian perspective, the world as we have it is a gift, the creation of a good God. And Christians regard our humanity as something God has seen fit to take on in the person of Jesus Christ. Even if human nature has been influenced by the contingencies of evolution, it has a dignity and importance as a key nexus where God meets creation. That suggests we should at least be cautious about devising technologies that would be extremely disruptive of humanity’s place in the world.
But if you don’t believe there’s a God, you might conclude that there’s no special reason to think that human nature should be privileged in some way. Maybe the project of civilization is better carried out by artificial beings that are extremely different from us, whether AIs or “cyborgs” that are partly artificial and partly natural. I’m not suggesting that this radical view is straightforwardly ruled out by a Christian perspective. But if we do see creation, and human nature in particular, as a gift and the product of divine, loving intentionality, then we should be careful to appreciate what is good about our humanity and cautious about replacing it with something of our own devising. Of course, the counter to an excessively cautious approach is to say that one of the gifts God has given us is the capacity to create this kind of technology. Perhaps God desires that we carry forward the story of creation by producing artificial beings that are cognitively and morally superior to us, that can do better in addressing scarcity, injustice, and other critical matters. I don’t think that we can rule this possibility out with complete confidence, though my own leaning is that it would be unwise for us to make machines that could supplant human beings.
Reflections: Given our economic system, do we have a built-in bias or inclination to trust technology, let the market have its way, and hope for timely industry solutions? Can/should AI be regulated?
JP: In the last few years we’ve seen increasing concern about the potentially corrosive role that certain technologies are playing in our society. There’s a well-articulated set of worries about how the social media landscape—how people are increasingly getting their entertainment and news—is leading to greater cultural fragmentation and polarization. Some of that is already based on AI-powered algorithms that aim to drive engagement with content and that land on pernicious ways of doing so. There’s a recognized mental health crisis, especially among young people, and concern about how technology might be contributing to this crisis. The fact that there’s a lot of attention to these issues means that, at least for those who are attuned to such worries, there’s no quick equation of technology use with improvement in well-being.
I think that effective regulation of AI is conceivable, and I hope that governments might play a role. Some have pointed to international cooperation in regulating cloning technologies as a hopeful example of what might be possible. Of course, enforcement of AI regulation would likely be highly imperfect, and there is understandable concern that attempts at regulation might give certain bad actors a dangerous advantage. But while narrow AI is here to stay, I think there should be serious discussion of a ban on the development of more general forms of artificial intelligence, of machines that are humanlike in the breadth of their capabilities. Developing such machines would take us into morally problematic territory that I think we should avoid. At a minimum, we need to have a serious society-wide conversation about these issues, a conversation that isn’t operating on a deadline imposed by corporations that are pushing the technology forward with very little public accountability.
Reflections: Can humans claim anything unique that can’t be replicated?
JP: However much AI can mimic us or exceed us in certain capacities we can observe, that still wouldn’t mean that AI is conscious, that it has subjective experience (what is sometimes called “sentience”). So it is unclear whether AI could, for example, appreciate beauty. It’s one thing to output speech and behavior that is humanlike or even, in certain respects, superhuman. But it’s another thing to understand what that behavior amounts to, and something quite different to experience and appreciate beauty rather than just produce it. In my view, this kind of subjective, qualitative aspect of human experience is not reducible in any straightforward way to computational abilities achievable by a physical system, whether a brain or a computer. It is something that transcends the physical world.
So if AI is incapable of conscious experience (and for a religious person it’s a live possibility that AI couldn’t or wouldn’t have it), then there’d be something immensely important lost if we were to cede our place to AI. If humans were replaced by AI in our various roles as friends, teachers, scientists, and artists, then it might be that we would be replaced by beings incapable of what we cherish most, things like loving people, appreciating beauty, and endeavoring to understand the world. But I don’t know with confidence that AI is incapable of such things, and this is where I think some people on the religious side are too quick to jump to conclusions, saying God has given human beings a soul, AI isn’t a human being, therefore AI can’t have a soul. But even if we knew that souls or consciousness are realities that transcend the physical and that are given by God, we don’t know the laws of souls or consciousness. We can’t rightly claim to know the divinely ordained principles that determine where and how minds or souls actually arise in the universe.
If an AI system were conscious, I suspect that it would be impossible for us to confirm this. We all take it for granted that other human beings have subjective experience, but it isn’t something we can observe directly, nor can we explain scientifically why there is consciousness in the world at all. Christians and theists have a theological explanation for why there’s consciousness: it’s because it’s extremely good and wonderful that there are conscious beings who experience the world. It seems to be one of the main points of creation. God has seen fit to bring conscious experience into the world and to integrate it into the lives of physical creatures in a wonderful way. But God might also see fit to bring consciousness to AI systems and integrate it into their computational activities. We don’t know that—we’re in the dark about that. There is no principled reason why God couldn’t, if God wanted to, bestow consciousness on silicon-based machines that are the product of human ingenuity. It’s up to God to decide how to integrate spirit and mind in the physical world. We don’t have insider insight into that.
Reflections: Should we accept the idea that AI will someday produce better musicians, novelists, scientists, and theologians than us? Wouldn’t that be a crisis for human identity?
JP: My suspicion is that there’s no barrier in principle that would stop machines from doing all that. Maybe it can’t be done currently by the deep learning models and other pattern recognition machines that are driving all the AI success stories. But we should ask this: when we ourselves have those creative insights, are they the result of the massive, beautiful, neuro-grounded computational architecture of our brains? Or does creativity involve the Holy Spirit or a muse seeding some idea that could not be produced by a computational process? If creativity is computational, then in principle it seems you could port those abilities into a machine that is silicon-based rather than carbon-based, and that has chips instead of neurons.
I haven’t seen reason to think that’s impossible. I suspect that creativity is not one of the traits that resist explanation in physical terms. My suspicion is that if we keep pushing AI technology forward without restraint, AI will one day give us better scientists, better philosophers, better novelists—better in the sense of producing output that by our standards we’d recognize as superior. But I’d question whether such AIs could understand and appreciate the philosophy or the science they’re producing, whether they could enjoy the experience of writing or reading a great novel, or whether they could actually be moral agents who act freely in a way that makes them responsible for their actions.
Speaking as a Christian, I would say that while we don’t have a clear mandate on the matter of whether to push this technology forward, there is a strong case to be made for restraint and for refraining from developing AI with humanlike capacities. We can observe the behavior of AIs, but questions may always remain about their inner life. Creating entities whose moral and spiritual status is unknown carries great moral risks. We don’t know if the entities would be capable of suffering and flourishing and therefore should be given moral respect, or if they are mere tools.
One might think that there would be no harm in extending moral respect to future advanced AIs “just in case” they are conscious. But if future AIs are conscious and worthy of moral respect, then arguably it would be wrong to manipulate their desires and aims so as to ensure that they are mainly concerned with human interests. Human interests and the apparent interests of possibly conscious AI might conflict in certain ways, and there is an imaginable future where human interests are sacrificed for the sake of the apparent interests of AIs that are not conscious. The moral risks here are so high that I believe we should hold back from creating AI entities whose moral and spiritual status would be opaque to us.