Can We Chat? A Scientist’s Invitation

By Jennifer Frederick ’99 Ph.D.

I am a chemist working in academic administration. I find generative artificial intelligence (AI) fascinating, even hopeful, because of its potential to do good things. AI also portends great harm to humans and the world. No single person or discipline is equipped to grapple with all these hopes and harms. 

So I write this essay as a call to collaborative multidisciplinary inquiry. The advent of widespread access to powerful AI tools touches all of our lives and poses many questions for our academic disciplines. It is urgent that we find ways to think and work together to ensure that this rapidly expanding technology serves the greater good of humanity.

As a scientist, I look to deep traditions in the humanities to help us clarify the ethical, moral, and philosophical questions surrounding artificial intelligence. The way forward in an AI-enriched world, I believe, will best be defined by deeply human intelligence. It might also require a dose of divine intelligence; I’ll need your help to know that. My hope is that our inquiries into ChatGPT and related forms of non-human intelligence will plant seeds for collaborative action.[1]

A Force for Good?

Machine learning is not new, and narrowly trained forms of artificial intelligence are already present in our daily lives. Medical diagnostics benefits from powerful pattern-recognition tools that spot abnormalities, allowing earlier detection and more precise treatment. AI is behind smartphone features we take for granted, such as facial recognition, voice dictation, and camera optimization. AI has improved accessibility for many who rely on instant voice transcription tools, captioning, and text description of images. AI will soon enable personalized medicine, with the aim of tailoring lifestyle and treatment regimens to one’s individual genetic makeup and physical abilities. In education, AI has the potential to boost learning and reduce opportunity gaps through interactive assistants that tailor content according to a student’s interests, strengths, and cultural context.

And yet, each of these examples of innovation is shaped by values and priorities that often go unexamined. Which diseases and health conditions warrant a powerful AI-based arsenal, and which ones get less attention, and why? Who chooses? How are patients involved in decision-making? What is the social cost of increasing reliance on smartphones? Entrenched customs of interacting with the world stubbornly portray people with disabilities as lacking. Who is paying attention to which patients and which students get access to fancy new tools? How can we avoid exacerbating existing inequities?

Theologians and philosophers have methods and practices for identifying the good, the not-so-good, and the outright evil. They reflect on the shadowy origins of evil and on how it frequently emerges out of, and is conveyed by, what is also good. These traditions of inquiry insist that humane values and priorities guide the applications of AI. History offers lessons about our responses to technological innovations and disruptions, and we ignore the past at our peril.

Keeping It Real

If we aren’t intentional about the ways we integrate AI into our personal and professional lives, we invite the possibility of harm, even an existential threat to human health and society. The same tools that can fast-track biomedical remedies might also create deadly biological agents. With transformative power concentrated in just a few corporations, we may lose the ability to regulate and contain AI. Society is rife with inequities associated with race, gender, and class: tools built on false or biased information perpetuate prejudice and misinformation and further disenfranchise already marginalized communities and ideas. Working with an AI assistant to do all kinds of daily tasks may erode human connections and intensify isolation. Ceding more and more cognitive work to machines may, over time, reduce human capacity to create and innovate.

Given AI’s immense promise, we need to discern its best uses and resist its abuses. That work will require sustained thinking at the intersections of science, technology, theology, and philosophy. It will mean designing social and educational structures that draw on human curiosity and multiple perspectives. Local governments, for example, could convene groups of people with different roles and backgrounds to discuss policies and standards for AI integration in their community.

“I Can Tell the Difference”

In discussions with faculty about students using generative AI, I occasionally hear a confident assertion that “I can tell when text was written by ChatGPT.” That might be true when the text comes from a generic prompt and is left unedited. But if you’ve worked with these tools to generate text, you have probably discovered that more sophisticated prompts yield more sophisticated outputs. One can take entire courses to learn “prompt engineering,”[2] and researchers are now publishing evidence-based guidance for prompting chatbots with increasingly effective results.[3]

These pedagogical issues extend to a far deeper realm, to the essence of humanity itself, which readers of this journal might call theological anthropology. AI is forcing us to ask once again, Is intelligence an essential marker of humanity? Is it the essential marker? Or does our humanity lie in social emotions, awareness of others and God, and the ability to transcend our circumstances to some degree? Will a time come when AI has these attributes in addition to its vast knowledge? As scientists revise the boundaries between humans and non-humans, thinking across disciplinary boundaries is essential so that many voices oversee the revising and redefining.

Slow Down and Fix Things

Even as the technology industry is urged to “move fast and break things,”[4] educational institutions need to “slow down and fix things.”[5] Amid the modern economy’s high-pressure, bottom-line conditions, the ideals of a liberal arts education and interdisciplinary learning supply a crucial counterbalance. Would technology industries have different goals if every leader’s technical and entrepreneurial expertise were balanced with a respect for historical perspective and humility? Would a willingness to take up philosophical questions across disciplinary lines point them down a different path? Would those trained in theology and other humanistic fields similarly benefit from meaningful encounters with the cutting edge of scientific discovery? What would it mean to put ethics at the very heart of technological development?

I propose that one aim of cross-disciplinary AI conversations should be to complement and challenge the approaches of today’s industry leaders. Judging from news reports, podcast interviews with industry entrepreneurs, and my own visits to AI corporations, the prevailing “just go for it” attitude guiding technology development is at odds with thoughtful deliberation about what is truly good for humanity.[6] We need to place humanity at the forefront. Educational institutions, from divinity schools to schools of science and technology, have a critical role to play in bringing tested and pluralistic traditions to the moment. University faculty should contribute to AI policy and regulation. When partnering with industry, universities could set conditions such as a seat on an ethics board or exchange programs that build cross-sector understanding and influence. Collaborative discussions will yield many more productive approaches.

“The Quirks Are My Own”

As designers of educational experiences, we in university life have a responsibility to help shape the future. By equipping students to think and discuss in many disciplinary languages, we increase the chances of fruitful collaboration across boundaries. Much of that design work is an ongoing conversation. When vexing new issues arise, we have more questions than answers. Whether they come from the peripatetic Socrates, Aquinas and his Summa Theologica, a scientist toiling in a lab, or our own pioneering experimentation with AI, our questions matter. They sharpen our interest, confess our finitude, and seek the assistance of those who know.

In one way, AI is a tool that brings us closer to a vast store of knowledge. Early in writing this essay, I considered consulting ChatGPT as a brainstorming partner. The irony of opening a dialogue with a chatbot in order to propose human dialogue across disciplines made me abandon the idea and reminded me of the far more energizing richness of human interaction.

I propose we continue this conversation with our natural human intelligence and curiosity, and, most importantly, with one another. This essay was written without assistance from an AI chatbot. The imperfections and quirks are my own.


Jennifer Frederick ’99 Ph.D. is the founding Executive Director of the Yale Poorvu Center for Learning and Teaching and Associate Provost for Academic Initiatives. With a Yale Ph.D. in chemistry, she taught at public and private universities in Connecticut before returning to Yale in 2007 as Associate Director of the Graduate Teaching Center. Other positions at Yale have included Director of the Center for Scientific Teaching, with a focus on transforming undergraduate science teaching at colleges and universities across the U.S.


1. ChatGPT, OpenAI’s generative artificial intelligence chatbot, was released to the public in November 2022.

2. See, for example, Coursera’s online catalog of prompt engineering courses.

3. Sondos Mahmoud Bsharat, Aidar Myrzakhan, and Zhiqiang Shen, “Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4,” arXiv:2312.16171 (submitted Dec. 26, 2023; revised Jan. 18, 2024).

4. Attributed to Mark Zuckerberg, CEO of Facebook. See also Jonathan Taplin’s Move Fast and Break Things: How Facebook, Google, and Amazon Have Cornered Culture and What It Means for All of Us (Pan Macmillan, 2017).

5. See Yale University President Peter Salovey’s Opening Assembly Address to the Yale College Class of 2027, Aug. 21, 2023.