Does Anything Matter to AI?

By Jennifer Herdt

There is lots of talk about “ethical AI” these days. This usually means the development and use of AI in ways that ensure compliance with core ethical norms and principles, such as beneficence, non-maleficence, human autonomy, and justice.[1] This is a worthy goal. But while the European Commission’s Independent High-Level Expert Group on Artificial Intelligence speaks readily of developing “ethical” and “trustworthy” AI, this does not mean that we are creating artificial ethical agents.

In fact, AI chatbots are being trained to be maximally deceptive, to engage with us in ways that convey caring and concern that are wholly simulated. Indeed, it has been suggested that AI exhibits features characteristic of psychopathy: “a lack of empathy for others, oftentimes exhibited in the process of accomplishing an aim or desired outcome by almost any means and without concern for how the outcome affects others.”[2] But AI lacks not just empathy but any sort of care or concern. Literally nothing matters to AI.

Frictionless Fantasy

It should therefore be a matter of great concern that we are becoming more and more accustomed to the deceptions bound up with simulated care and attention, as well as to the frictionlessness of relationships with carebots that cater to our every whim. Intimate relationships like the one Theodore Twombly forms with the AI Samantha in the 2013 movie Her are no longer merely the stuff of science fiction. But will AI caring always be merely simulated? Or could some things, perhaps people, come genuinely to matter to AI? This is not an easy question to answer, since it is closely related to the question of AI consciousness, and consciousness, being by definition a matter of subjective experience, is difficult to study from the outside.[3] I think we will make progress here only if we grasp how our own capacity for caring is rooted in the ways that living beings grow and develop in our earthly organic environment.

Living things are sensitive to their environments. To be alive is to exist in a constant process of exchange, communication, and flow: living organisms are bounded but permeable, open to the world. In order to remain alive, a living thing must be able to differentiate between what is to be taken up into the organism as nutrition and what is not (or between what is to be retained and what is to be expelled). A host of things matter for a living thing if it is to sustain itself as an organism; deprived of them, it dies. Things matter for organisms because organisms are needy and vulnerable. But nothing matters to these simplest of organisms. It is affect, feeling, that enables things to matter to a creature.

Who Cares, and Why

Simple animals, like coral, exist as colonies, with no central organ to coordinate their responses. In more complex animals, stimuli are received by sense organs, movements are initiated by motor organs, and a central organ or central nervous system is needed to coordinate the two. This yields a “unified space of conscious experience.”[4] In these forms of life it becomes possible to register, at the level of the organism’s own experience, what is good or bad for the organism’s survival and flourishing. Things now come to matter to and not just for the creature; they are affectively valenced.

Much of today’s machine learning is made possible by neural networks, in which connections between nodes arranged in layers are strengthened or weakened as the algorithm is trained on a dataset. For instance, an algorithm fed thousands of images labeled “cat” can learn to identify images of cats. Each guess, right or wrong, strengthens certain connections and weakens others, in ways loosely analogous to what happens when neurons repeatedly fire together in the brain. Unsupervised machine learning is also possible, in which the algorithm is fed unlabeled datasets and simply extracts statistically salient patterns from the data. The results can be unpredictable, but sometimes useful. For instance, an algorithm fed all the photos on your phone might cluster them according to who appears in each photo. It might, though, also cluster them according to color palette or geometric features.
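
To make the supervised case concrete, here is a minimal sketch in Python. It is an illustration only: the “images” are invented random feature vectors standing in for real photos, and the network is a single layer of connection strengths nudged on each pass over the labeled examples.

    import numpy as np

    # Toy "cat or not?" classifier: one layer of connection strengths
    # (weights), adjusted by repeated exposure to labeled examples.
    # The feature vectors are invented stand-ins for real image data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))               # 100 "images", 4 features each
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # pretend label: 1.0 = "cat"

    w, b, lr = np.zeros(4), 0.0, 0.1            # weights, bias, learning rate

    for _ in range(200):                        # repeated passes over the data
        p = 1 / (1 + np.exp(-(X @ w + b)))      # current guess: probability of "cat"
        err = p - y                             # how wrong each guess was
        w -= lr * (X.T @ err) / len(y)          # strengthen/weaken each connection
        b -= lr * err.mean()

    p = 1 / (1 + np.exp(-(X @ w + b)))          # final predictions
    print("training accuracy:", ((p > 0.5) == y).mean())

An unsupervised method such as k-means clustering would instead take the same vectors without any labels and group them by whatever statistical structure happens to dominate, which is why the resulting clusters might track faces, or just as easily color palettes.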

Dueling Neural Networks

Despite the analogies between the brain and AI’s neural networks, and despite the impressive results of machine learning based on deep neural networks, there is nothing in AI like the biologically rooted primary reinforcers that shape living neural networks. These are not learned but are in place prior to the capacity to learn, rooted in the evolutionary character of life itself. Nothing matters for AI, so nothing matters to AI. It does not care about identifying cats, or about the statistical patterns it discerns. It does not matter to itself, so nothing and no one else can matter to it either.

Is it impossible, then, that things could come to matter for, and to, more sophisticated forms of AI? What is intriguing here is that animal caring is bound up with goal-directed learning processes, and AI can already be said to display goal-directed learning. But the goals of an animal belong to it in a deep sense. Living things have goals before they have consciousness. In evolving consciousness, they become aware of what already mattered for their continued existence; in evolving self-consciousness, they become aware of themselves as beings to whom things matter, and by the same token aware also of others to whom and for whom things matter.
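
The sense in which AI’s goals are given from outside can be seen in a small sketch. In reinforcement learning, for instance, the “goal” is simply a reward function written by the programmer; the agent adjusts numerical estimates so as to maximize it. The following toy Q-learning example (all names and values hypothetical) makes the point:

    import random

    # Minimal sketch: tabular Q-learning on a five-cell corridor.
    # The agent's "goal" is nothing it chose; it is the reward
    # function we, the designers, write below.
    random.seed(0)
    N_STATES, GOAL = 5, 4
    Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value estimates; action 0=left, 1=right
    alpha, gamma, eps = 0.5, 0.9, 0.2           # learning rate, discount, exploration

    def reward(state):
        # The goal lives here, stipulated entirely from outside the agent.
        return 1.0 if state == GOAL else 0.0

    for _ in range(200):                        # training episodes
        s = 0
        while s != GOAL:
            if random.random() < eps:           # occasionally explore at random
                a = random.randrange(2)
            else:                               # otherwise follow current estimates
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
            Q[s][a] += alpha * (reward(s2) + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    print([round(max(q), 2) for q in Q])        # estimates climb toward the stipulated goal

Nothing in this loop registers how the agent itself is faring; the reward is an external stipulation, not a felt need.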

The goals of AI, by contrast, are not its own but, initially at least, those of its human designers. The form of goal-directed learning that it exhibits is thus untethered from any need for subjective awareness of how it is faring. This means that any pathway AI could take to arrive at feeling and caring would be radically different from our own. Others have worried that autonomous AI would care only about its own power and self-preservation.[5] But that is still to conceive of AI in our own (fallen) image, in ways rooted in our experience of the vulnerable preciousness of life. The point is more radical: we have no idea what AI would care about, should it one day come to be capable of caring. Right now, AI is a useful, if dangerous, tool, and it is up to us to use it for good, for the things that truly matter. Nothing matters to AI. Let’s keep it that way.


Jennifer A. Herdt, Gilbert L. Stark Professor of Christian Ethics at YDS, has published widely on virtue ethics, early modern and modern moral thought, and political theology. Her books include Forming Humanity: Redeeming the German Bildung Tradition (Chicago, 2019), Putting On Virtue: The Legacy of the Splendid Vices (Chicago, 2008), and Assuming Responsibility: Ecstatic Eudaimonism and the Call to Live Well (Oxford, 2022).


1. Independent High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI” (European Commission, 2019); “ethical AI” is defined on p. 37. Luciano Floridi and colleagues compare and seek to synthesize six proposed sets of ethical principles for AI: Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, et al., “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations,” Minds and Machines 28.4 (2018), pp. 689–707.

2. Jarrett Zigon, “Can Machines Be Ethical? On the Necessity of Relational Ethics and Empathic Attunement for Data-Centric Technologies,” Social Research 86.4 (2019), pp. 1001–1022. It is worth noting that psychopathy is not itself a well-defined or well-understood phenomenon. Recent research suggests that it is linked to a deficit in fear; see A.A. Marsh, “What can we learn about emotion by studying psychopathy?”, Frontiers in Human Neuroscience 7 (2013).

3. David Chalmers, “Facing Up to the Problem of Consciousness,” Journal of Consciousness Studies 2 (1995), pp. 200–219.

4. Márton Dornbach, “Animal Selfhood and Affectivity in Helmuth Plessner’s Philosophical Biology,” The Philosophical Forum 54 (2023), p. 208.

5. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014).