From the Dean’s Desk

By Gregory E. Sterling, Dean of Yale Divinity School

Plato* reported that the pre-Socratic philosopher Heraclitus said that a person “could not step twice into the same river” (Cratylus, 401d–402c). Heraclitus was speaking of the changes that take place in the cosmos. If we apply his bon mot to technological developments today, we can say that AI has accelerated the speed of the river to such an extent that it feels as if the river changes even as a person’s foot enters it.

As a professional school largely based in the humanities and the arts, we used AI in research, teaching, and administration only unevenly until the 2023–2024 academic year. This is quite different from other units at Yale, where AI is a standard tool in research, especially in the STEM and medical fields.

This year, we have made formal efforts to explore AI in five ways. First, we have discussed the use of large language model (LLM) chatbots like ChatGPT in course instruction twice at faculty meetings. We decided to follow the principle of subsidiarity and allow each instructor to set guidelines for their courses, but we requested that they include these guidelines in their syllabi. It would be foolhardy to think that students do not make use of AI. Some faculty have done interesting things with it. For example, at least two faculty members gave their students a question to pose to ChatGPT that addressed the thrust of the course and then asked the students to write a critical evaluation of ChatGPT’s answer based on their own understanding of the course.

Second, we held a special retreat devoted to AI in January 2024 with the leadership of the School and select faculty. Every attendee was expected to have read Mustafa Suleyman’s The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma (2023). If you do not know much about AI and wonder what all the fuss is about, I recommend this book to you. We invited Jenny Frederick of the University’s Poorvu Center to lead most of the afternoon retreat. It went so well that we invited Dr. Frederick to make a presentation at our February staff meeting and provide an updated overview of the University’s initiatives on AI. It was well received. You can get an idea of her orientation by reading her article in this issue.

Third, we made the decision to devote this issue of Reflections to the topic of AI. The articles cover a wide range of topics and were written by individuals from different backgrounds. I strongly urge you to read through them. 

Fourth, we experimented with AI in the second chapel service of this semester. Associate Dean Awet Andemicael created a service using ChatGPT and read a homily that ChatGPT composed in her style. She solicited feedback from the audience afterward. The consensus was that ChatGPT did a good job of capturing her vocabulary but produced a flat sermon, filled with platitudes and lacking substance and surprise (an almost predictable result at this stage of ChatGPT’s development). This mirrors my own assessment of the current status of AI: the promise is greater than the present state of development. It is critical to realize that it will become more sophisticated, and at a rapid pace (remember Heraclitus)!

Finally, we held a 90-minute AI training session for YDS staff in concert with Frank Matthew (IT) and John Baldwin (Library), who built a “walled garden” that permitted staff to experiment with administrative possibilities without exposing any sensitive data beyond Yale. AI will have an impact on the way that we conduct business.

As Susan Liautaud points out in her article here, it would be irresponsible to ignore AI. But what can a Divinity School contribute to the discussion of AI? In all candor, at this point we have more questions than answers; however, there are two broad areas where we believe we can and should make a contribution. 

First, there are challenging ethical issues that cannot be ignored. When OpenAI began, it had an ethics board—a board that met only once. My fear—shared by many—is that the market or international competition will be the sole drivers of AI development. I realize that AI research is being done internationally and that we cannot control what happens in other countries; however, this does not excuse us from setting some ethical controls in place. We need to think through the ethical implications and unintended consequences of AI research.

Let me offer a couple of examples. How do we account for bias? Recent experiences with Google Gemini, Google’s AI program that depicted historical figures in historically inaccurate ways shaped by modern sensibilities (for example, an African American female pope), have alerted us to the political biases that can be programmed into AI. Even more disconcerting is the use of AI to deliberately misrepresent information in an effort to control public views. Or, from the reverse vantage point, how will we protect privacy, especially when consent to data use is a precondition for the use of an AI program? This list of questions could be expanded at great length.

Second, how do we understand what it means to be a human being? This became a cause célèbre when Blake Lemoine, a Google engineer, hired an attorney to defend the rights of LaMDA, the AI chatbot he worked with, which claimed to be a person. Perhaps the central issue, although not the only one, is how we understand the human mind. Is it fully replicable in a machine? At present, LLMs are capable of processing vast amounts of data—far more than a human can and much more quickly—but it is not clear that the machines comprehend the text that they produce. Rather, LLMs predict what word is most likely to follow based on an algorithm that processes the available data.
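To make that prediction mechanism concrete, here is a minimal sketch in Python. It is not how an LLM is actually built (real systems use neural networks with billions of parameters rather than a table of word counts), and the miniature corpus is my own invention for illustration; but it shows, in the simplest possible form, what “predicting the most likely next word” means.

    from collections import Counter, defaultdict

    # A miniature corpus standing in for the vast body of text on which
    # a real LLM is trained.
    corpus = ("in the beginning was the word and the word was with god "
              "and the word was god").split()

    # For each word, count which words follow it and how often.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequent continuation of a word. The function
        'knows' nothing about meaning; it only counts."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # prints 'word' (it follows 'the' three times)
    print(predict_next("was"))  # prints 'the' (ties broken by first occurrence)

A real LLM replaces this lookup table with a learned statistical model conditioned on long stretches of preceding text, but the underlying operation, choosing a likely continuation rather than grasping a meaning, is the same in kind.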

Christians, Jews, Muslims, Platonists, and others have for centuries held that human beings are capable of two different forms of thought: discursive rational thought (extending known propositions through the application of inferential rules) and intuitive thought (participating in a higher, non-discursive form of knowing by which ultimate realities are immediately and reliably grasped). What would it mean for AI to become capable of these forms of thought? I was recently asked to produce a paper on the role of mind in the thought of Philo of Alexandria (c. 20 BCE–50 CE). I entitled it “When Intelligence was not Artificial.” Philo believed that the mind was the key to understanding the imago Dei. What will we think in fifty years?

At YDS we are setting out to explore these ethical and anthropological concerns from a theological perspective. Three members of our faculty (Jennifer Herdt, John Pittard, and Kathy Tanner) are in the initial stages of planning a conference on AI and Theology to be held in 2024–2025. We hope that the proceedings will be published. 

We believe that the Divinity School has something of great importance to contribute to discussions about AI. In 2020, there were more than 350,000 communities of faith in the United States. While the percentage of individuals who have no affiliation with a religious tradition has grown notably in recent decades, religion remains a robust and potent force in the US and in the world, and it will shape how a large number of people react to and think about AI.

In 2004, Madeleine Albright was invited to give a lecture at the Divinity School after her years of service as Secretary of State. She was asked what she had thought about religion while in office. She responded: “I didn’t.” However, after her lecture, she wrote a book reflecting on the importance of religion in foreign policy entitled The Mighty and the Almighty: Reflections on God, America, and World Affairs (2006). In a subsequent interview with CNN, Secretary Albright explained why many thought religion was irrelevant to US policies and offered a rejoinder: “So you think, ‘Well, if we don’t believe in the convergence of church and state, then perhaps we shouldn’t worry about the role of religion.’ I think we do that now at our own peril.”

Religion can be practiced responsibly or irresponsibly, with significant consequences either way. We think that we can and should contribute to discussions about AI within the School, the University, and our society.


* My letter in this issue is an adaptation of a report that I wrote for Yale University Provost Scott Strobel. Professors Jennifer Herdt and John Pittard read that document and made helpful comments. I am grateful for their assistance.