What does it mean to be conscious, sentient, or agentic? Should animals or AI systems be legal persons or have some other kind of legal status? We sat down with author Jeff Sebo to discuss how he explores these questions in his new book, The Moral Circle.
The Moral Circle argues that we should extend moral consideration to insects, AI systems, and other nonhumans. What’s the core case for that, and what made you believe it was urgent enough to write this book now?
The core case is not that insects, AI systems, and other nonhumans are definitely sentient or morally significant, but rather that we cannot rule out those possibilities at present. These are important questions because it would be extremely harmful to make mistakes about which beings deserve moral consideration, and they are difficult questions because they force us to confront our limited understanding of minds very different from our own, as well as our own bias and ignorance. When we are genuinely uncertain whether a being can suffer or pursue goals, that calls for caution and humility—a precautionary approach that treats them with respect and compassion while we investigate further.
I feel urgency about this topic because our species is at a crossroads. We spent the past century scaling up industrial uses of animals through farming, research, and other practices, often because we assumed they were not sentient or morally significant. We now appreciate that they are, yet we are also deeply dependent on these practices. Meanwhile, we are developing and deploying increasingly sophisticated AI systems at vast scales. Even if the probability that current AI systems are sentient is low, it will likely increase over time, and we need to start preparing now—correcting our mistakes with animals as fast as possible while avoiding repeating these mistakes with AI systems.
Some readers might find this argument overwhelming. If your argument is correct, what follows in practice for teachers, students, and other individuals, and how can this work be sustainable for them?
Many people sense, correctly, that if insects, AI systems, and other nonhumans deserve moral consideration, then we have a responsibility to challenge our practices, our policies, our priorities, even our fundamental presumption of human exceptionalism—the idea that we always matter most and always take priority. Orienting ourselves appropriately requires nuance. On the one hand, our responsibilities toward the nonhuman world are overwhelming. On the other hand, our limitations are equally overwhelming: We lack knowledge about what they need, capacity to provide it, and political will to sustain that level of care. A good first step is simply acknowledging both our responsibilities and our limitations in equal measure.
That makes clear that the task is not to do everything overnight. It is to build momentum in the right direction—treating nonhumans as well as we realistically can while building knowledge, capacity, and political will to do more over time. This is why teachers and students matter so much: Teachers are equipping the next generation to go farther than the current one can. Concretely, we can do more research, education, and advocacy on animal and digital minds. We can also make small personal gestures—such as saving an insect from drowning or saying please and thank you to a chatbot—in part because they train us to see beings differently, so that we are better equipped to treat them as they may well deserve at scale.
The book draws on science, ethics, and law and policy, among other fields. How did you come to work across all of these fields, and why do you think that kind of interdisciplinary approach is necessary for these questions?
My background is in philosophy, but these are inherently interdisciplinary questions. We need the humanities to clarify basic concepts and methods: What does it mean to be conscious, sentient, or agentic? What does it mean to have moral, legal, or political status? How can we use evidence and reason to determine which entities qualify? We need the social sciences to understand expert and public attitudes toward these entities and how we relate to them and to each other in a world that includes them. And we need the natural sciences to understand what their minds and lives are actually like—using behavioral, internal, and developmental evidence to assess whether, for instance, insects are conscious or AI systems might become so.
Finally, we need law and policy to determine what follows for our institutions. Should animals or AI systems be legal persons or have some other kind of legal status? Should they be political citizens or have some other kind of political status? How can imperfect human-led institutions represent these nonhuman stakeholders while preserving what matters about liberalism and democracy? Moving forward, what role, if any, should advanced AI systems play as participants in these institutions? These are enormous questions, and no single discipline can answer them alone. The book tries to model that interdisciplinary approach—drawing on philosophy, the social sciences, the natural sciences, and law and policy together.
For instructors considering adopting the book in their courses: How do you see the book working in a classroom setting? What kinds of lectures, discussion, and assignments does it open up for students?
This is a short book about big issues, so it does not go into maximum detail on any single topic. Instead, I wrote it to function on multiple levels. At one level, it serves as an advanced introduction to the science of animal minds and digital minds and to ethical questions about moral status and moral theory—who matters and what we owe them. At another level, it works as an applied ethics text, asking pointed questions about what we owe insects, AI systems, and other nonhumans in policy areas such as food, infrastructure, and AI governance. And at yet another level, it asks deeper, more existential questions about our presumption of human exceptionalism—the idea that we always matter most and always take priority.
I also weave thought experiments throughout the book that spark great discussions in the classroom. In one example, you live with two roommates and discover through genetic testing that one is a Neanderthal and the other is a robot, and you need to decide how to relate to them. In another, you run an animal sanctuary and must choose between saving a small number of elephants or a large number of ants, or between a small number of biological animals and a large number of digital ones. My experience is that these and other thought experiments can ground what might otherwise feel like big, abstract questions about moral status, moral theory, and global policy in concrete, specific, personal decisions that students can latch onto.
In your experience teaching this material, how do students tend to respond to these ideas? What do you take away from your conversations with them, and how does that affect your own understanding of the material?
I have been extremely impressed by how students engage with this material. They come to my classes and talks with different motivations—some are genuinely curious and still forming their views, and others have strong opinions for or against animal or AI welfare. What they share is that nobody is dogmatic. They handle debates about deeply personal issues with remarkable generosity and open-mindedness. Given how polarized these discussions can be in broader society, it makes me really happy to see how many students remain capable of, and interested in, talking with each other and holding themselves and each other accountable, even when they disagree about issues that touch on their own practices.
Most of all, these students give me hope. As I mentioned, moral circle expansion is going to be an intergenerational project. The task for each generation is not to solve the problem but to chip away at it—treating nonhuman stakeholders at least a bit better while building knowledge, capacity, and political will for the next generation to go further. To have the experience, year after year, of students showing up wanting to challenge themselves and each other, striking that balance between conviction and openness, and reminding me that we can make more progress on these issues together than separately—that gives me confidence that the project can continue and that the next generation will carry it further than ours.
Interested in reviewing a copy of The Moral Circle for your course? Request your print copy and keep exploring the Norton Shorts series.
MEET THE AUTHOR
Jeff Sebo is associate professor of environmental studies; affiliated professor of bioethics, medical ethics, philosophy, and law; director of the Center for Environmental and Animal Protection; and director of the Center for Mind, Ethics, and Policy at New York University. He’s the author of The Moral Circle, part of the Norton Shorts series.
Image Credit: Kate Reeder