Do Bots Have a Spiritual Life? Some Questions about AI and Us

By Susan Liautaud

Earlier this year I posed a question to ChatGPT: Is AI spiritual? The bot’s response: “AI does not have a soul. Souls are typically associated with consciousness and spirituality, which AI lacks. AI systems are complex algorithms and data processing tools, but they do not possess self-awareness or emotions like humans do.” 

ChatGPT went further to declare that even artificial general intelligence—should we achieve it—would not be inherently spiritual. Rather, spirituality is “subjective,” personal, related to “consciousness, beliefs, emotions, and experiences that are typically associated with human beings and their connection to a higher power or a sense of purpose.” In sum, ChatGPT identifies, conflates, and confuses several concepts: spirituality, soul, emotions, self-awareness, connection to a higher power and sense of purpose—all human traits (says the bot), and all lacking in ChatGPT. 

We All Have a Stake in This

I am an ethics advisor, teacher, and inquirer (full disclosure: I am neither an expert in religion nor a technologist). I explore effective pro-innovation approaches to problem-solving, integrating ethics into our everyday decision-making. In that light, I raise questions here about AI and spirituality and invite readers to pursue their own. My interest is in democratizing ethics—every one of us has a stake in decisions about AI that society is making, particularly with such deeply personal considerations as spirituality.

On to these questions about AI: could these machines at some point have a spiritual life? What would that mean? How would we know? Since we regard AI as a human creation, could we make it spiritual—instill in it a larger sense of purpose or quest for meaning greater than itself? The conundrum: doing so would put us humans in the role of creator of spirit—the role of higher power. We would be giving ourselves the divine privilege of conferring spirit (or spirituality) onto a physical entity. If so, wouldn’t we be responsible for the consequences of unleashing unprecedented interactions between humans and machines?

How Do We Stay in Control?

Recently, participants in a YDS advisory council retreat, myself included, were given a timely assignment: reading The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma by AI pioneer Mustafa Suleyman, who focuses on a critical question: how do we stay in control of powerful new technologies?[1] Some experts argue that the technology will not be able to overtake humans and potentially harm us—at least not any time soon. But it is unclear whether such viewpoints consider AI’s spiritual potential, what the nature of that power could be, or what new conflict with humans it might provoke.

Some world religious leaders have lately underscored Suleyman’s plea that we must increase our control over the technology so that AI does not become a reckless power operating beyond our rational understanding and potentially harming humans. Pope Francis’s message for the 57th World Day of Peace (2024) argues for AI in the promotion of “human development” and inclusivity—technological innovation firmly anchored in human life and value. “Fundamental respect for human dignity demands that we refuse to allow the uniqueness of the person to be identified with a set of data,” the statement declares. I’ve heard a Buddhist perspective that we have an obligation as humans to develop AI to actively reduce human suffering—not just to avoid potential harm from AI.[2] Most experts I have consulted believe AI “spirituality” is not spiritual at all but rather an exercise in data analysis—an intellectual matter, created by humans. As ChatGPT indicated, we could “program” or “design” AI to “understand and discuss spiritual concepts … based on algorithms and data rather than genuine spiritual experiences.”

Ethics ≠ Spirituality

In my own work I try to position ethical issues of AI alongside questions about spirituality and religion. Ethics is not a substitute for spirituality. Nor should spirituality be invoked as an excuse to sidestep ethics. They do different dances with each other, and of course spirituality can be a source of ethics guidance for many people. Various important risks stemming from AI breakthroughs—issues of privacy, bias, errors, skewed data sets, inaccurate policing, identity falsification—all, in my view, need both regulation and ethical oversight above and beyond the law, in addition to spiritual engagement.

Here I confess to being a staunchly pro-technology ethics explorer. I am as concerned about the ethical responsibility of failure to adopt AI and make it widely accessible as I am about the risks of deploying it. It’s easy from a perch of privilege (like mine) to say that we should reject driverless cars unless they are perfectly safe, or forsake AI diagnostics unless they are overwhelmingly accurate, or renounce machines that substitute for therapists or friends on occasion. For those living in a country with weakly enforced road rules and licensing, disproportionate numbers of deaths from auto accidents, or unreliable access to care in the case of highway mishaps, driverless cars look compelling.[3] Similarly, not everyone has access to good mental health care or confidantes. So who am I to issue cautions that would slow the progress of AI options?

AI Is Still a Tool, for Now

Thus, aside from speculation about AI spirituality, there are countless day-to-day ethical impacts of AI to identify and assess—ranging from its effect on congregational life to the management of a fast-food restaurant chain. We know AI can search religious texts, translate, analyze data on consumer (or parishioner) trends, interpret financial information (sales, donations), and answer queries. Ethical transgressions committed by AI, whether plagiarizing a sermon or misrepresenting financial information to donors, are ultimately traceable to human decisions and actions. Those humans designing, building, and deploying the AI—not the AI—are responsible for these legal and ethical violations. None of these ethical considerations depend on speculation about AI spiritual power. Even in humanoid form like the robot Sophia, AI is still a tool—more like a vacuum cleaner than a human replicant or a divine substitute.[4]

If AI is neither human nor a spiritual being but can functionally substitute for humans in certain circumstances (flipping burgers, offering companionship), a further question is whether AI could functionally substitute for human leaders (including spiritual leaders). Could AI ministers lead congregations—even become ordained, an ecclesial version of today’s AI mental health therapists or AI executive coaches?[5]

So many critical questions are too complex for this short piece. Could AI achieve spiritual/religious neutrality—untethered from any specific institutional religion? Will AI so alter our view of what it means to be human, or so transform our sense of our place in the universe, that AI-generated knowledge will overtake our perennial quest for a higher power? How will AI influence national and global conflicts, particularly where religion plays a significant part? Could AI systems aid us in furthering compassion, community, and, indeed, spiritual values? Could AI integrate different, even conflicting, ethnic, cultural, and religious views of spirituality and create a new synthesis of values and accord?

Here I end where I started—with humility, questions, a commitment to ethical components in AI-related problem-solving, and a keen interest in the views of others. What do you think?


Susan Liautaud, author of The Power of Ethics (Simon & Schuster, 2021) and The Little Book of Big Ethical Questions (Simon & Schuster, 2022), is Founder and Managing Director of Susan Liautaud & Associates Limited (SLAL), which advises on complex ethics matters in the corporate, nonprofit, academic, and governmental sectors internationally. She holds a Ph.D. in Social Policy from the London School of Economics and Political Science, where she is chair of the Council, the School’s governing body. She also has a law degree from Columbia University, an M.A. in Chinese Studies from the University of London School of Oriental and African Studies, and an M.A. and two B.A.s from Stanford University. She teaches ethics at Stanford and serves on the Dean’s Advisory Board at YDS.


1. Mustafa Suleyman with Michael Bhaskar, The Coming Wave (Crown, 2023). 

2. The question itself—Could AI be “divine”?—might sound outlandish to many. AI plainly lacks several qualities customarily associated with a deity: omniscience (it can’t be all-knowing if it doesn’t even have all the data), omnipotence (its powers are limited—ChatGPT can’t directly influence anyone who doesn’t query it), and immortality (for example, OpenAI could go out of business or cancel its generative AI products). Unlike divine power, AI is a temporal art, and a science, existing and functioning in time (made very clear by the fact that ChatGPT doesn’t have answers about the future and can only access some data about the past). 

3. See a 2020 World Bank blog regarding the recent history of some automated transportation successes.

4. Like meditation apps that are incapable of meditating, or bot therapists that are incapable of experiencing emotions, these tools can be useful—they’re just not spiritual.

5. A religious argument could be made that God created humans, who in turn created AI, so in the end AI is God’s creation too.