December 05, 2022
Guest David Gunkel, Northern Illinois University
Hosted by Michael Dello-Iacovo, Sentience Institute
David Gunkel on robot rights
“Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their grouping or sets of rights will be perhaps very different.”
Can and should robots and AI have rights? What’s the difference between robots and AI? Should we grant robots rights even if they aren’t sentient? What might robot rights look like in practice? What philosophies and other ways of thinking are we not exploring enough? What might human-robot interactions look like in the future? What can we learn from science fiction? Can and should we be trying to actively get others to think of robots in a more positive light?
David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published twelve internationally recognized books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA).
Topics discussed in the episode:
- Introduction (0:00)
- Why robot rights and not AI rights? (1:12)
- The other question: can and should robots have rights? (5:39)
- What is the case for robot rights? (10:21)
- What would robot rights look like? (19:50)
- What can we learn from other, particularly non-western, ways of thinking for robot rights? (26:33)
- What will human-robot interaction look like in the future? (33:20)
- How artificial sentience being less discrete than biological sentience might affect the case for rights (40:45)
- Things we can learn from science fiction for human-robot interaction and robot rights (42:55)
- Can and should we do anything to encourage people to see robots in a more positive light? (47:55)
- Why David pursued philosophy of technology over computer science more generally (52:01)
- Does having technical expertise give you more credibility? (54:01)
- Shifts in thinking about robots and AI David has noticed over his career (58:03)
Resources:
Resources for using this podcast for a discussion group:
Transcript (Automated, imperfect)
Michael Dello-Iacovo (00:00:11):
Welcome to the Sentience Institute podcast, and to our 20th episode. I'm Michael Dello-Iacovo, strategy lead and researcher at Sentience Institute. On the Sentience Institute podcast, we interview activists, entrepreneurs, and researchers about the most effective strategies to expand humanity's moral circle. Our guest for today is David Gunkel. David is an award-winning educator, scholar, and author specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters, and has published 12 internationally recognized books including The Machine Question: Critical Perspectives on AI, Robots, and Ethics; Of Remixology: Ethics and Aesthetics After Remix; and Robot Rights. He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University in the USA. All right. I'm joined by David Gunkel. David, thank you so much for joining us on the Sentience Institute podcast.
David Gunkel (00:01:10):
Yeah, thank you for having me. Good to be here.
Michael Dello-Iacovo (00:01:12):
Great. Great to have you. So I'd like to start with a question about terminology. And I should preface this by saying I actually know your answer, I think, because I heard it in your interview with Ben Byford on the Machine Ethics podcast in 2020. But I think it's a good place for our listeners to start. So why do you call it robot rights and not artificial intelligence rights, sentience, or something else like that?
David Gunkel (00:01:37):
Yeah, so it's a really good question, and I think I can answer it with three sort of ways of directing your way of thinking about this. First of all, it's just, it's an alliterative statement, right? Robot rights. So just the alliteration sort of makes it easy to say. Artificial intelligence rights seems a little clumsy, and as a result, robot rights is pretty easy for a person to sort of rattle off, and it has an alliterative, sort of poetic feel to it because of the way it sort of rolls out of your mouth and sounds when you hear it. The second reason is that this terminology is not mine. This is terminology that came to me from other people who preceded me in this work, either legal scholars or people in philosophy or in the area of artificial intelligence and ethics, but they were the ones that began using this terminology.
David Gunkel (00:02:31):
And so my engagement with it was to sort of pick up the thread and carry it further. And since that was the terminology that they had already employed, I sort of inherited that terminology. Lastly, and I think most importantly, the difference between the word robot and artificial intelligence is something that we oftentimes, you know, struggle with and try to figure out where to draw the line. Is a cloud-based application a robot? Is it just AI? You know, how do we sort these things? The important thing for me is that AI is the result of an academic workshop held in the mid-1950s at Dartmouth College. Robot, on the other hand, is the result of science fiction. It comes to us from the 1920 stage play R.U.R., and it was formulated by Karel Čapek, really as a reuse of the Czech term robota, meaning worker or slave laborer.
David Gunkel (00:03:29):
And already in that play, which not only gave us the idea of the robot, but the word robot, the robots are already subjugated to human masters, and there's this uprising. So not only does that set the sort of template for future science fiction, but it also gives us this notion of the robot as an enslaved or a servant type, you know, figure or individual. And so the robot rights idea sort of fits in that pattern, beginning with Čapek's play, and the way in which that has developed not only in subsequent science fiction, but also in subsequent writings on the legal and the moral aspects of these technologies and the way they connect with us.
Michael Dello-Iacovo (00:04:15):
Yeah, that makes sense. I think for the rest of the interview, I'll probably use robots and maybe have that apply to AI as well, just as shorthand, but in a sense, they're not really interchangeable. Does that make sense to you? Because AI sort of brings an image of an intelligence that's not necessarily tied to a physical body, whereas robot seems to imply it's tied to a physical body. Does that sound about right?
David Gunkel (00:04:45):
It sounds about right, but I will say that this idea that the intelligence is disembodied is almost a kind of transcendentalist way of thinking that's almost religious. Our AIs are embodied, right? Even the cloud has a body: it's a server connected to wires and to fiber network cables and things like this. So there is an embodiment even for the so-called disembodied AI. It's just that that body doesn't look like our bodies. And so I find this to be a useful distinction, the embodiment distinction, but I also find it a bit troubling because it leads us to think that the AI has this kind of transcendental feature to it, and it really doesn't, when we talk about the, you know, resources of the earth, about power, about environmental impact, and all these other things. AI certainly contributes to carbon emissions and to climate change, and that has to do with the way it is embodied and where it is embodied.
Michael Dello-Iacovo (00:05:39):
Yeah, that makes sense. Thanks for mentioning that. So in 2018, you wrote a paper called The Other Question: Can and Should Robots Have Rights? The other question here being in reference to the main question most people focus on when they think about AI and robots, which is about how they affect us. For example, AI safety, being a hot topic at the moment, is mostly about how an AI affects humans. And the other question, I guess, is more focused on the interests of the robots themselves. So you wrote that there's an important difference between the can and the should here. Can you start by talking about that?
David Gunkel (00:06:17):
Right. So I'll make two distinctions, which I think will help sort this out. In moral philosophy, we distinguish the moral agent from the moral patient. A moral agent is someone who can act in a way that is either good or bad, who is morally culpable or praiseworthy, whatever the case is. A moral patient is the recipient of that action. Now, in our world, we are both moral agents and moral patients. We can do good and bad, we can suffer good and bad, but there are some things that are moral patients and not moral agents. Animals, for example. We don't hold the dog responsible for barking at the postman, but we do hold the postman responsible for kicking and injuring the dog, right? And the dog therefore can be the recipient of an action that is either good or bad. And as you said, I saw a lot of people putting a lot of the research they were doing on the side of agency.
David Gunkel (00:07:07):
And the question was, how can we ensure that these devices, these tools, these instruments, these artifacts are employed in our world in a way that has the right outcomes, that doesn't disadvantage people, that doesn't create bad outcomes, whatever the case is? That's an agency question. My question, the other question, was, well, okay, that's great, but what is the status of these things? What is the social, moral, legal position of these devices that are increasingly intelligent and socially interactive? And how do we grapple with the moral patiency question of these machines? And then we go to the next stage, which is the can/should distinction. And this is really derived from David Hume. David Hume says, you know, you cannot derive ought from is. It's a very famous item in David Hume's thinking, but a lot of people have picked it up and developed it since that time. The can and should question is really this: can robots have rights? Yeah. All you have to do is make a law that says robots have rights. Now, should you do that is another question. So the ability to do so is maybe entirely feasible, and it's very easy to answer that question. The moral question, the should question, is a little more complicated. And I think that's where we get into the weeds on how we want to shape the world that we live in and how we integrate these things alongside us in our social reality.
Michael Dello-Iacovo (00:08:33):
Mm-hmm. So I'll jump the gun a little bit, just because you mentioned it, and ask: what are some reasons why robots shouldn't have rights? What are some arguments one might use?
David Gunkel (00:08:45):
So one of the strongest arguments is that they're machines, right? They're not people, they're not human beings, they're not animals. They're, you know, just artifacts and therefore they are things. We have this distinction that comes to us from the Romans, from Gaius in particular, that, you know, actions that are moral or legal in nature have two kinds of objects, two kinds of entities: there are either persons or things. And so in the category of persons we put you and I, and we put corporations, and we put maybe some animals, but we don't generally put technological objects. Those are things. That's a strong argument based on this ontology that we've inherited from the Romans and the way in which our legal structures especially have operationalized this way of dividing things into persons or property. Another argument is that if we give robots some kind of moral or legal standing, we have complicated our legal system in ways that go beyond what maybe we would like to handle, and that it maybe doesn't lend anything very useful to the way that we decide these relationships. Those are usually the two big ones.
Michael Dello-Iacovo (00:10:00):
Mm. When you put those forward, it makes me think that these are some of the arguments I've heard when people say non-human animals shouldn't have rights, that it complicates the legal system. So that sounds familiar, but what are the best arguments for why robots should have rights?
David Gunkel (00:10:21):
So there's a number of arguments, and I don't wanna try to be exhaustive here, but let me cover some of the important ones that have been circulating. The literature in this field has absolutely exploded in the last decade. When I started working on this back in, well, 2006 I started, and the first book comes out in 2012, so in those early years, it was really easy to keep track of who was arguing what, because the number of different arguments in circulation was pretty manageable. By the time I get to Robot Rights in 2018, this thing is spinning out of control because a lot of people find reason to engage the question and to deliver their own sort of response to it. So let me just hit a few reasons why we might want to do this. One is directly derived from what we've learned in the animal rights experience.
David Gunkel (00:11:20):
So in animal rights, we know Jeremy Bentham really is the pivot, right? And he said it's not, can they think, can they reason, but can they suffer? Are they sentient? And that opened up the moral circle to include things that had been previously excluded. It had been, up until that point, a very human-centric kind of moral universe. And when we start to engage the animal question, we widen that circle to include other creatures that are non-human. And the reason why we included animals in the moral circle, whether you're following Peter Singer or Tom Regan or one of the other innovators in animal rights, is because of sentience, because of the experience the animal has of pain or pleasure. And you can see just recently with Blake Lemoine, he was talking to LaMDA, and LaMDA, he said, is sentient, which led him to believe that LaMDA needed to have rights protected for it because it was another kind of sentient creature.
David Gunkel (00:12:15):
So we use sentience as a benchmark, and the question is, how can you tell whether an AI or a robot is sentient? Well, that takes you back to the Turing test, because you can't necessarily look inside and know exactly what it's doing. You can know some of what's going on, but really what Blake Lemoine did is learn from the behavior that was exhibited to him in the conversational interactions that he had with LaMDA. So that's one reason why people argue for giving robots rights, this notion that they'll at some point be either sentient or conscious or meet some of these other benchmarks that make something available to having standing, status, and need for protection. Another argument comes from Kate Darling, who I think was really innovative in this area by using Kant's indirect duties argument. Kant, unlike Bentham, was no animal rights advocate.
David Gunkel (00:13:08):
Kant thought animals were just mechanisms, like Descartes did. But he argued you should not hurt animals, because when you do so, you debase yourself, you are corrupting your own moral character. You're corrupting your own moral education and providing a bad example for other people. And so indirectly, you're harming somebody else if you harm an animal. And Kate Darling says, you know, this is one way of thinking about why we don't want to harm the robot, because of the example it sets, because of the way in which it could corrupt our own moral characters. And she uses Kant's indirect duties argument to make a case for the rights of robots as a way of protecting our social mechanisms, the way that we relate to each other, either morally or legally. A third argument for doing this, and this is more in line with where I take it in my own research, is that the properties of sentience and consciousness have traditionally been really good benchmarks for things like animal rights and items like that.
David Gunkel (00:14:10):
But I come out of environmental ethics, and in environmental ethics, a mountain does not feel pain, dirt does not feel pain, a waterway does not experience displeasure. Nevertheless, these are part of our integral experience on this planet, and we have responsibilities to the other entities that occupy this fragile planet with us. And I think climate change is a really good example of how we can screw that up if we assume, wrongly, that these things are just raw materials that we can utilize to our benefit. And artifacts may also play a role in our social world, in a way that we need to think more creatively about how we craft our moral responsibilities and our legal responsibilities to others. And that is what I'm calling a relational approach to moral status: it's not what the thing is, but how it stands in relationship to us and how we interact with it, on a scale that treats these other entities as fellow travelers, as kin. And how we can come up with ways of integrating not just the natural objects of the environment, but also the artifacts that we create, into our moral universe, into our legal universe, in a way that makes sense for us, but also for our future generations and for the kind of environment and the kind of world we want to occupy.
Michael Dello-Iacovo (00:15:40):
That was great. Thanks David. There's a lot I wanted to mention from that. First, you mentioned Blake Lemoine and LaMDA. We actually spoke to Thomas Metzinger in our last podcast episode that came out last week, and we talked about that topic as well. So the second argument you made was about how, if we treat robots in a bad way, in the same way as if we treat animals in a bad way, that might have repercussions for how it affects us and how it affects humans in general. Now it sounds like even if robots are not currently sentient, even if robots maybe can never be sentient, that would still remain an argument in favor of giving robots rights.
Michael Dello-Iacovo (00:16:28):
And the last point you mentioned, again, it seems to be coming back to how the way we interact with robots affects humans. I know you've spoken about the case of the Whanganui River in New Zealand being granted legal personhood rights as an analogy. And I think you said something like, it's not that people are arguing that the river is sentient, it's just that that's a tool in our current legal system to get protection. And then it's for instrumental purposes or for extrinsic purposes. It's how that affects humans. Giving robots rights now might therefore be an instrumental tool in the same way.
David Gunkel (00:17:04):
Yeah, I think a really good example is that recently 12 states in the US made some legislative decisions and enacted some laws that try to deal with these personal delivery robots that are being deployed on city streets. And they decided that for the purposes of figuring out right of way, and who can go in the crosswalk and who can't go in the crosswalk and things like this, it made sense to extend to the robot the rights of a pedestrian. Now, that's not making a decision about robotic personhood. That's not making, you know, a distinction that would grant personhood to the robot. It's just saying rights are the way that we figure out how to integrate things in situations where you have competing claims, powers, privileges, or immunities coming from different actors in the social environment. And one way in which we negotiate this is by granting rights, by giving something a right, a privilege, a power, a claim, or an immunity that has to be at least respected by someone else in that social environment. So this is just a tool we have to try to develop ways of responding to these challenges that allow us to occupy space and work with, and alongside, these various things.
Michael Dello-Iacovo (00:18:24):
I think a pretty clear example of that is how corporations have rights. They're clearly not persons, but in the eyes of the law they are often treated like they are persons in a lot of ways.
David Gunkel (00:18:36):
Can I say, the real trouble here, I think the real point of debate and contention, is the fact that we're trying to work with two legal categories that are mutually exclusive: person or thing. And as I said before, this comes from 2000 years ago, when Gaius developed this distinction in his own legal thinking. And our western legal systems have worked with this for a long time, and it's worked pretty well, but that's why the corporation got moved from a thing to a person: because we wanted to be able to sue it. We wanted it to be able to stand before the court as a subject and not just an object. And so this whole debate is about how we negotiate this distinction, a person on the one hand, that is, a subject before the law, and an object or thing on the other hand. And it may be the case that we just need a better moral ontology; we just might need something that gives our legal system a little more latitude with regards to the kinds of entities that can occupy these kinds of positions in our world.
Michael Dello-Iacovo (00:19:40):
Yeah. When it comes to, say, non-human animal rights, I do like Peter Singer's take on what that might look like. It's not that we're asking for non-humans to have the right to drive or to vote, for example, we're just asking that their interests are considered, their similar interests, for example, the right to not be harmed. With that in mind, do you have any thoughts about what robot rights might look like, or perhaps what one of your most ideal scenarios might be for how that might look in practice? You've sort of talked about that a little bit, I guess. But also just given that robots may have very different interests to us, let's say in the case where they do become sentient. And I'd just like to nudge you as well to maybe mention Kamil Mamak's paper, Humans, Neanderthals, robots and rights. I think that seems relevant here as well. That's talking about moral patiency vs. agency. So please talk about that a little bit.
David Gunkel (00:20:35):
Yeah. So, you know, this is where I think the analogy with animal rights starts to, if not break down, at least reach a limit. Animals, I think we can say, have interests, and we can guess pretty well what they are. Even though it's guesswork, we can pretty much figure out for ourselves, you know, what the dog has as an interest: food, a walk, whatever the case is, right? Our current technologies don't have interests, and if we're looking for interests, we might be barking up the wrong tree. At some point in the future, that's possible. But I don't want to hold out this expectation that we should wait for that to happen before we engage these questions. If that does happen, then we may find ourselves having to deal with this analogy to the animal much more directly.
David Gunkel (00:21:34):
But I think for now, the real issue is we have interests and these objects are in our world, and we need to figure out how to integrate them in our moral and legal systems in a way that makes sense for us. And just responding to these devices as if they were tools or instruments doesn't seem to work. In other words, the robot is sort of in between the thing and the person, and it resists both reification and personification. And that's the problem, right? That's the challenge and the opportunity before us. How do we scale existing moral and legal systems that work with these two categories for something that seems to resist one or the other? And I think what Kamil has done in his work on the subject of moral patiency is really instructive because he's saying, we're not looking for similitude. We're not looking for robots to be like human beings and therefore have those same rights as a human being would have.
David Gunkel (00:22:33):
Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their grouping or sets of rights will be perhaps very different. And I think Kamil's point is that difference actually matters here. And that if we're looking for similitude, we are going to actually paint ourselves into a corner on the subject where we'll be really missing what's important. And I think the focus on difference and how to integrate what is different into our current way of thinking about our relationships to other kinds of things will actually help. And again, I think environmental ethics is a really good guide here, because we don't want to think about the water like us. It's not like us, right? What is the water to us and why is it different? And how is that difference important to our way of engaging it and living alongside it?
Michael Dello-Iacovo (00:23:40):
Yeah. So to go back to that paper, there's one point I found kind of interesting. They used Neanderthals as an analogy for robots, in that there seems to be some evidence - I'm not really familiar, I'm just going off that paper - that Neanderthals might not have moral agency per se. They might have moral patiency, correct me if I'm mistaken here. But they're arguing that if Neanderthals were in our current society, we might want to treat them in the legal system more as, say, we would a human child, in that what we do to them matters, but we might not necessarily hold them accountable or to blame for the actions they do to us. Does that sound about right, and if so, is that a reasonable analogy for how we might treat robots in a legal system?
David Gunkel (00:24:38):
Right. So let me say two things. One, a disclaimer: I know very little about Neanderthals, so I can't really enlighten you in any appreciable way in that regard. But I will say, and again I'm gonna go back to environmental ethics, Thomas Birch, who was an environmental ethicist, said that, you know, what we're talking about is power. When you talk about widening the moral circle, or widening the inclusion of what is on the inside and what's on the outside, someone on the inside decides whether or not to expand the circle, right? And that is a power relationship where those in power extend inclusion to those without power. You can see this already in previous rights expansions. Mary Wollstonecraft, who wrote A Vindication of the Rights of Woman, had to pitch her argument to the men who were in power and had to write to them, because they were the ones who could expand the circle to include women in moral and legal consideration.
David Gunkel (00:25:44):
The same with animals, right? In order to include animals, someone had to pitch an argument on behalf of the animals, but they were inside the circle to begin with, otherwise they would not have been able to make the argument for expanding that circle. And I think this is the same thing we see playing out with, you know, rights expansion beyond even animals. That this is a dynamic that is very much related to power and politics, and how this plays out is really something that is in our hands because we're the insiders. We're the ones who make these decisions. So how robots get integrated into our moral and legal systems is entirely ours to decide, and therefore, we need to engage this responsibly in ways that really adhere to our moral standards and protect our futures.
Michael Dello-Iacovo (00:26:33):
To change the topic a little bit, you've said that AI ethics is often dominated by western thinking, and that expanding the dialogue to include other ways of thinking, like, for example, indigenous thinking or animism, could be useful. In some of our research at Sentience Institute, we found that people who reported more belief that artificial beings like AIs and robots can have spirits also tended to extend more moral consideration to them, which doesn't sound that surprising, I guess, but it's an example of how maybe some other ways of thinking might actually be beneficial to bring into this discussion. So do you have any other examples of how, say, bringing indigenous thinking, animism, or other ways of thinking into the AI ethics conversation might be useful?
David Gunkel (00:27:22):
Yeah, so let me just say that a lot of the AI ethics discourse has been distinctly western, right? We've used consequentialism, we've used deontology, we've used virtue ethics, we've used traditions that are very much grounded in a Western European Christian sort of tradition. And there's nothing wrong with that except that we've gotta recognize that that's not a universal position, right? That's very particular. And for people who live in the global north and have grown up with these philosophical and religious traditions, it may make sense, but the rest of the world looks at things from different perspectives and does things that do not necessarily track with what comes out of a Western experience. And so, I think you're exactly right. There's ways in which we can look beyond our own way of thinking about these matters and do so to help inform this in a more global perspective and draw on a wider range of human wisdom as a way of developing responses to this.
David Gunkel (00:28:21):
Now, I'll caution, we gotta be careful here because this could turn into Orientalism, right? This is one of the premier sort of colonialist kinds of gestures. You go out to the other and you take from them what you think is gonna help you in your own endeavors. And we've gotta protect against that kind of gesture. It's not about going and colonizing these other ways of thinking in order to mine from them some sort of insight that we lack in our way of doing things. So it's about learning, it's about engagement in order to be students of other ways of thinking and to learn from these other traditions how to see and engage the world in ways that will be different from what we may have grown up with and different from the standard practices that we have from our own traditions. So I'll mention just a couple things that I think are useful here.
David Gunkel (00:29:14):
One is, I think, African philosophies like Ubuntu. Obviously Ubuntu is not one philosophy, it's a collection or a constellation of different philosophies, but it is much more holistically oriented and less individualistic. Whereas Descartes said, I think therefore I am, the philosophers arguing and working in the Ubuntu tradition say, you know, I am because we are. And it comes much more out of a communal kind of relationship, a wider perspective on the world. And I think that can help us, because I think a lot of the work that is done in AI ethics and even in the robot rights literature tends to be very much focused on a Cartesian subject that is sentient, that is conscious, and that becomes the unit of analysis. If you look at things from a more holistic, communal perspective, we're looking at it then in the more relational approach that I described earlier.
David Gunkel (00:30:13):
Another tradition I think can be really useful comes from looking to indigenous epistemologies and cosmologies. And again, there is no one indigenous epistemology. There is a plurality, a multiplicity, because they're very different across the world. But there are ways in which our very idea of rights is already a western concept, right? This idea of God-given rights to the individual. And that's very Christian, it's very European, it's very modern. And the pre-modern sort of indigenous ways of thinking about these things look not at rights; they don't have that concept. They talk about kinship relationships: how do we build kin with our machines? How do we exist alongside these entities that are our tools, our servants, our instruments, in a way that doesn't turn them into a slave, that doesn't turn them into something that is beholden to us? And I think kinship relationships as developed in a lot of indigenous traditions can be a nice way to sort of complicate the rights literature that we often bring to bear on these questions.
David Gunkel (00:31:20):
And then the third thing I will say comes out of Confucianism and some research that some Confucian scholars have done recently. Instead of talking about robot rights, R-I-G-H-T-S, they talk about robot rites, R-I-T-E-S, the idea of a ritual, of a performance, and that the robot is engaged with us, alongside us, in performative activity. And as a result, they are engaging us in rites of social interaction, and we should put the focus not on rights as R-I-G-H-T-S, but rites, R-I-T-E-S, as a different way of sort of shifting the focus from this individual possession to a communal performance.
Michael Dello-Iacovo (00:32:05):
Yeah, that's interesting. Do you have any examples of how that might look in practice? What would that entail doing?
David Gunkel (00:32:12):
So this is what I've tried to develop, especially with this relational turn concept that I, along with Mark Coeckelbergh, have really been formulating and researching for the last decade or more. This idea is not ours alone. It comes out of environmental ethics. It comes out of STS and feminist ethics, like Karen Barad and Rosi Braidotti. But it's this idea that we need to begin to think about our moral relationships as maybe taking precedence over the individual moral entity, and that we are all born alongside others, and that we are already in that relationship prior to this extraction of our sort of identity of ourselves. So it works counter to the Cartesian way of thinking about being in the world, where Descartes is sort of isolated from others and then has to go out to others and figure out his responsibilities to others. This way of thinking is always already responsible to others, and the individual is a product of that sort of interaction. But, yeah, that's a life's work right there.
Michael Dello-Iacovo (00:33:20):
Do you have any thoughts about what human-robot interaction might look like in the future? We've talked a bit about the legal context, but there would be a lot of aspects of interaction. And I guess you could answer this in the long term where, as you say, at some point in the future robots might become sentient, but there's also the short-term answer. I mean, we have human-robot interaction now that's not necessarily related to robots being sentient. Is there anything that we can expect in the future? One thing I wanna talk about as well is how much can we learn from science fiction? How much of that is lessons about what we might see, and how much of that is just mere fantasy?
David Gunkel (00:34:00):
Yeah. No, this is a really important question because I think sometimes we think that the robot invasion is something from the future, right? We're waiting for it. The robots are gonna rise up, or they're gonna descend from the heavens with guns and, you know, bombs and they're gonna attack us. And that's a science fiction scenario. I think the robot invasion is way less exciting, way less dramatic, even mundane, it's like the fall of Rome. We invite these things into our world, and over a couple hundred years, we wonder where the robots came from, because they have infiltrated us in very slow movements of, you know, our decisions to use a device here to use a device there. So I think we need to look not necessarily at the big picture, long term kinds of questions, but I wanna look more immediately, where are we at right now?
David Gunkel (00:34:50):
Like, what is happening in our relationships to these devices that is maybe of interest to us in changing our social relationships with each other in the process? So one thing we've seen recently as being a reason for both concern but also interest is children saying thank you to Alexa. Now, that's weird. We don't say thank you to objects, right? We, you know, don't say thank you to our automobile for getting us around town. We say thank you to persons.
Michael Dello-Iacovo (00:35:20):
Some people do.
David Gunkel (00:35:22):
Yeah, mostly not. But, you know, we say thank you to persons, right? And yet the ability of these very simple digital assistants to use language brings us into a social relationship where we think that we need to be polite to the object. And there's nothing necessarily wrong with that. There's reason to think that that is part of what makes us social creatures, and that we need to be really concerned with not only what that artifact is, but how we engage it, how we respond to it.
David Gunkel (00:35:56):
I think sometimes people try to write this off as anthropomorphism. They say, you know, this is anthropomorphism, and anthropomorphism is a dirty word, we shouldn't be doing that. I think anthropomorphism is not a bug. It's a feature. It's a feature of human sociology. We do it to each other, we do it to our animals, and we do it to our objects. So it's not a matter of yes or no with anthropomorphism, it's not a binary. It's a matter of careful and informed management. How do we want to manage the anthropomorphism that we are developing and designing in the process of creating these things? And I don't know that we have answers to those questions, but I do know we have lots of ways of engaging this question, because we not only have the example of talking to Alexa and saying thank you.
David Gunkel (00:36:40):
We have robot abuse studies in which people find it very disconcerting and problematic to harm something that they're told is just like a toaster. Nevertheless, its social interactivity makes it very difficult to do these things. We can already see, in very rudimentary robotic and AI systems, ways in which we are accommodating ourselves to these objects and bringing them into our social relationships in ways that maybe don't exactly fit our human-to-human relationships, but are creating new relationships. I'm part of a new field in communication called human-machine communication. And that's because we recognize the machines are no longer the medium through which we send messages to each other. They are the thing we talk to, they are the thing we interact with. And this, I think, raises some interesting, immediate questions that we don't have to wait until, you know, two, three decades from now, when we get sentience or AGI or whatever the heck it is, to address.
Michael Dello-Iacovo (00:37:42):
Yeah, yeah. We talked about this a little bit with Thomas Metzinger as well. It's, I guess, kind of a social hallucination where, whether it's Alexa or something else, we might just all accept and act like it's sentient even if it's not. One thing I wanna maybe push back a little bit on is, I mean, there are some other examples of where people kind of act like something is sentient when it's not, like children with stuffed toys, for example. Or maybe in a very realistic video game where, maybe not intentionally, you're sort of forgetting that what you're interacting with is an NPC, an AI character, not a real person. So I have to ask, I guess, is that necessarily a bad thing? I mean, you mentioned before that the way we treat robots, even if they're not sentient, might actually be important because it influences how we interact with other humans as well.
Michael Dello-Iacovo (00:38:45):
So is that a good thing, a bad thing? Not quite a clear answer?
David Gunkel (00:38:49):
So I don't think it's a good or bad thing, but it's a thing. It's a thing we have to really take seriously. We talk about suspension of disbelief. When you go to the theater or you watch a movie, the characters on screen are not real. And yet we feel for them, we engage with their emotions, and we have an experience as a result of that. And, you know, in the early years of cinema, that was something that people were worried about. Would people, you know, lose themselves in the story and, you know, exit reality and spend more time in the movies than in the real world? Well, that didn't happen. We figured it out. But that's why I say I think it's a management problem. It's a way of managing these relationships and managing these responses that we make to these devices, because that is where I think the real challenge is. I think saying yes or no is way too simplistic. You know, we're not going to fix this by saying, don't do that. I don't think you fix a social problem by saying to people, stop doing something. Prohibition never really fixes the problem. You've gotta figure out how to engage them in a reasonable and emotionally informed response that we are able to effectively manage and that works for them.
Michael Dello-Iacovo (00:40:00):
Yeah. I actually find it a little bit amusing that you mentioned people thought cinema was going to make people lose themselves in all these fictional worlds. I guess the example I'm most familiar with recently is virtual reality, and I guess video games in general. People had that worry, and I didn't realize there was that worry about cinema. And then I also thought, well, I mean, you could go back further, and it's not like cinema was the first iteration of fiction. There were plays, there were books. So unless something is particularly different about this new medium, maybe it's, you know, that the newer mediums are more engaging. It is kind of interesting and funny for me to think about. So one example from science fiction that I wanted to get your thoughts on is that in science fiction, artificial entities are often seen as being quite discrete.
Michael Dello-Iacovo (00:40:45):
So for example, often you have a robot and that robot is sentient, and its mind is encased in that physical robot. But in reality, it might be a little bit more complex. You might have, say, a single sentient artificial entity that controls multiple different robots. And you mentioned already that it's a mistake to think about artificial intelligence as being disembodied, because it is embodied somewhere. It might just be more diffuse, more spread out in, say, different servers. So, for example, for an AI that controls multiple different robots, losing an individual robot might be more like, say, losing a limb than a human dying. So in cases like this, where it's much more diffuse and hard to tell really where an AI begins and ends, or a robot begins and ends, how might this affect the case for robot rights? Or how might this affect robot rights in practice?
David Gunkel (00:41:51):
So I think here corporate law provides a pretty good template, because corporations are also diffuse, right? Corporations exist in such a way that there is no one place you can go and say, that's the corporation, right? It's all over the place and it has different manifestations. And I think if we really want to learn from that experience, I think we'll have a good template, at least for beginning to address a lot of these questions. Because I think the relationship is much more direct between AI and the corporation, because both are artifacts, both are humanly created, and both have a kind of status. In the case of the corporation, they have personhood. In the case of AI, we're now arguing whether AI should also have personhood. And again, I think oftentimes we're looking to the animal question as the template for how we decide a lot of questions regarding the moral and legal status of AI and robots. But I think the corporation may be a better analogy to follow as we try to think our way through these things.
Michael Dello-Iacovo (00:42:55):
Are there things you think we can learn from science fiction, maybe some depictions that are useful thought experiments, or where you might think, oh, they've got it right, that looks like a plausible scenario?
David Gunkel (00:43:08):
Yeah, I think there's a lot we can learn from science fiction, and I appreciate the question because I think sometimes the response to science fiction by roboticists is this kind of, yes, but no, you know, I'm interested in it, but don't go there because science fact is way more complicated and it's not as simplistic as what you get in fiction. And we have to bracket off that fictional stuff, so we can talk about the real stuff. I think science fiction does a lot of heavy lifting for us in this field. It already gave us the word robot. We wouldn't have the word robot if it wasn't for science fiction to begin with. Secondly, I don't think science fiction is about the future. Many science fiction writers and filmmakers will tell you this. Cory Doctorow is one of them. He says, science fiction is not about predicting the future, it's about diagnosing the present.
David Gunkel (00:43:54):
And so what we see in science fiction are present anxieties, present worries, present concerns projected on the screen of a possible future. And so we can see science fiction as a way of self-diagnosing our own worries and concerns, almost like a psychoanalysis of us as a species right here, right now, and what really troubles us. And so if we look at science fiction not as predictive but as diagnostic, I think we can learn a lot from our science fiction. I also think science fiction gets a lot of things right way before we get into those matters in the scientific research. So for example, already in Blade Runner, you have this analogy between real animals and robotic animals. And this whole notion of the electric sheep, which is in the title of Philip K. Dick's original novel, is this idea that we are developing devices that are indistinguishable from real entities, and that we could have artificial animals and we could have natural animals.
David Gunkel (00:44:56):
And so this idea, I think, helps us grapple with the way in which we build these analogies to other kinds of entities, the way we analogize the robot by comparing it to the animal and learning from our relationship to animals, how we relate to the robot or the AI. I also think you see in science fiction, a lot of deep thinking about human robot interaction. I mean, we already today are talking about the effect and possible social consequences of sex robots. We've already grappled with a lot of those questions in science fiction. Now, maybe we haven't got the right answers, and maybe we haven't developed even the right inquiry, but we've already seen prototyped for us the kinds of things that we should be concerned with, the kinds of directions that we should take our inquiries so that when we do engage these questions in social reality, we are prepared to do so.
David Gunkel (00:45:50):
Finally, I think science fiction does a lot of good work making public a lot of things about robots and AI that average people would not have access to. A lot of the research is done in proprietary labs behind closed doors, and we only hear about it once it's released to the public. And then there's either excitement or outrage as the case may be, right? I think a lot of people, if you ask them what they know about robots, they inevitably are gonna talk about Mr. Data. They're gonna talk about Westworld, they're gonna talk about WALL-E, they're gonna talk about R2-D2. They know the robots of science fiction way before they get into the science fact. This is what is called science fiction prototyping. And I don't think that science fiction prototyping is necessarily something that is bad. I think there's a lot of education that takes place in popular media, and if we are careful about how we create our stories, if we're careful about how we cultivate critical viewers in our young people, I think we can use this resource as a really good way of popularizing our thinking about these new challenges.
Michael Dello-Iacovo (00:46:58):
Yeah, I really like what you said about science fiction being almost like a thought experiment, which is one of the reasons why I love reading and watching science fiction so much. And I just wanna shout out as well one of my favorite science fiction series, which depicts AI in a lot of different forms: the Culture series by Iain M. Banks. So I would recommend people check that out. One thing that's related that we found from some research at Sentience Institute: we found that people with a science fiction fan identity, who self-identified as being science fiction fans, that trait was correlated with perceiving more mind in currently existing robots and AI, perceiving more mind in robots that might exist in the future, stronger beliefs that the feelings of AIs and robots would have similar value to human feelings, less moral exclusion of robots and AIs, and I could go on.
Michael Dello-Iacovo (00:47:55):
But it does seem that science fiction fan identity, or being interested in science fiction, has some positive effects, though I guess it's hard to say whether that's causal, or maybe if someone has one they're likely to have the other. But that gets me thinking about what kinds of things we could do, I guess almost like an intervention, if we were interested in moral circle expansion in the AI and robot context. What can we do? I don't mean like making people watch science fiction or something, but is there anything that you think we could do to encourage people to think about robots and AI in a more positive light? And should we be doing anything?
David Gunkel (00:48:40):
Yeah, no, again, it's a really good question and it's important, because moral expansion is something that is part of our evolution in both moral philosophy and in law, right? I mean, we've opened the circle to include previously excluded individuals and groups, and that's been a good thing. And so engaging people in these kinds of exercises, if you wanna call 'em that, I think is not only instructive for them, but it also contributes to our evolution in our thinking on these matters. As we just have discussed, I think science fiction is one place that you can engage people in these questions. I know when I work with my students, one of the things that I find them to be most engaged with and most excited about is when you can take something in their popular media experience and open it up in a way that allows them to really see a lot of these things at play, and gives them some access to it.
David Gunkel (00:49:45):
Because I think a lot of times these technological subjects and these matters seem rather inaccessible. And if you can make 'em accessible by fiction, by whatever means, I think that's a really good opener to the conversation. It doesn't mean you end the conversation there, but that's where you begin to cultivate this way of thinking. I think another way to do this, and I again, have found this to be a direct instance in my own classroom, is by giving people access to devices, by letting them just engage with robots. You know, we have this idea of the robot petting zoo that some people put together at conferences and stuff, but I think this is important. I think kids are curious, especially younger kids, you know, high school and below, they want to engage these things. They want to take their curiosity and see, you know, what happens.
David Gunkel (00:50:38):
And giving them access, I think is crucial, because otherwise, it's something that happens at Google. It's something that happens at Microsoft, and therefore it's not really a part of what they are. It's not really in their world in a way that they can make sense of it. And I think access is absolutely necessary in that area. I also think education is very key to a lot of this stuff. Again, I think we've limited access to a lot of these materials to specialists in computer science, artificial intelligence, engineering, and elsewhere. I think we've gotta open the curriculum and make this stuff broadly available. You can see already with the release of DALL-E and the way people are using it to generate images, we need artists to be engaged with this technology, and we need artists to help us make sense of what this creativity with AI is all about. And if we don't let artists into the conversation, we're not going to learn what we can possibly gather from the history of human artistic expression and experience. The same with music, the same with journalism, the same with any field. I think this technology is no longer able to be limited to one specific field, and we've gotta teach it across the curriculum in a way that begins early and that gets the curiosity of our young learners engaged from the very early stages of their career.
Michael Dello-Iacovo (00:52:01):
Great. Thanks for that. So just a couple of questions to sort of wrap this all up. I've noticed that you had an interest in programming from a young age, and you've actually developed internet applications. You're an established developer, but instead of pursuing computer science more generally, you followed a career in the philosophy of technology. Why do you think that is? What interests you about the philosophy of technology more so than coding itself?
David Gunkel (00:52:29):
Yeah, this is interesting because I used web development as the means to pay for the bad habit of going to grad school. But it's funny because those two things tracked really nicely, because one was very hands-on, very practical, and the other was very heady, very theoretical, and so they sort of balanced each other out. But one thing I noticed as I was doing these things simultaneously is that the one could really speak to the other, if somebody would build the bridge, that what we were doing in programming and in developing applications could actually gain from the traditions of human learning, from epistemology, from metaphysics, from ethics, you name it. If we would only build the bridge to those philosophical traditions, we'd be able to open that conversation. And I think we've been rather successful with that. You can see how AI ethics has really exploded in the last five years. But it also goes the other direction.
David Gunkel (00:53:23):
I think the computer and digital media and AI and robots can actually provide philosophy with some really interesting thought experiments on the one hand, but also some interesting challenges to human exceptionalism and the way we think about ourselves as being the center of the universe, and therefore the only sentient creature to, you know, exist on planet earth, which obviously isn't true. So what I saw was this ability to use the one to inform the other, and the reason I went in one direction as opposed to the other direction just had to do with the fact that it turned out I'm a better teacher than I am a programmer. And so I pursued the one that was gonna take me in that direction.
Michael Dello-Iacovo (00:54:01):
Yeah. Do you think your work in development has given you some credibility? Because I imagine there might be some people in the philosophy of technology who maybe aren't taken so seriously by people who actually work on artificial intelligence, machine learning, what have you. And I can think of some people who work in the actual development of AI who, for example, don't take AI safety very seriously. They might think these people, you know, they have these ideas, but they don't really know anything about technology, they're kind of naive, is what they might say. So, because you've kind of done both, do you think that gives you some credibility in the tech space?
David Gunkel (00:54:58):
I hope it does. What I will say is that it feels very dishonest to me to talk about machine learning or any other technology and not know how it works. I'll just give you some examples from my own sort of trajectory. So I wrote a book on remix in, you know, 2016, I think it came out. And it took me a while to write the book, not because I couldn't write it, but because I wanted to learn to be a DJ before I wrote the book. And I spent all this time developing the practice because I didn't think I had the credibility to speak about this artistic practice and the technology behind it without knowing how it works, knowing the tools, and having hands-on experience with it. The same when I started to engage with AI and robotics: I knew that there was no way I could speak with any credibility about machine learning, about big data, about neural networks and all these things.
David Gunkel (00:55:47):
If I hadn't built one, if I hadn't done the work of actually constructing a neural network, training it on data, and seeing what it does. And I think for my own writing, this is what allows me to speak with some conviction, with some really good grounding in the technology. And hopefully that is communicated in the resulting text: that I'm not just making this up. I come to this from the perspective of really knowing what goes on behind the scenes, and I have brought my philosophical knowledge to bear on something I have direct hands-on experience with.
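To give a concrete flavor of the kind of hands-on exercise David describes here (building a small neural network from scratch, training it on data, and seeing what it does), below is a minimal sketch in Python. It is purely illustrative, not David's actual code; the XOR dataset, layer sizes, learning rate, and number of training steps are hypothetical choices for the example.

    import numpy as np

    # Illustrative sketch only: a tiny two-layer network trained from scratch on XOR.
    # All choices here (dataset, layer sizes, learning rate) are hypothetical.
    rng = np.random.default_rng(0)

    # Toy dataset: XOR, a classic problem a single linear layer cannot solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Weights for a 2-input, 8-hidden-unit, 1-output network.
    W1 = rng.normal(scale=1.0, size=(2, 8))
    b1 = np.zeros((1, 8))
    W2 = rng.normal(scale=1.0, size=(8, 1))
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(10000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: gradients of the squared error via the chain rule.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent updates.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    # "See what it does": predictions should move toward [0, 1, 1, 0].
    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

With these settings the predictions should move toward [0, 1, 1, 0], though a from-scratch network like this can occasionally get stuck depending on the random initialization.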
Michael Dello-Iacovo (00:56:26):
So I've had a similar experience, in that I actually did a PhD in space science, and I have a geoscience background, and I wanted to work a little bit on some longtermist ethical questions as they apply to space science. And I thought that doing that might give me more credibility when I talk about these ethical problems. But in my experience, and perhaps it's too soon to say, it doesn't feel like it. I've been met with a lot of skepticism from the space science community on some of those ethical ideas. But it sounds like that's worked out better for you, so that's good to hear.
David Gunkel (00:57:09):
It doesn't mean that you don't get pushback. It doesn't mean that you don't get criticism. I think it's always a push and pull. You're always kind of putting yourself out there and then trying to justify what it is you're doing to others who may be skeptical of it, especially when your ideas might be less than popular, which is often the case in academia. But I think the dialogue is crucial, and I think meeting people where they're at is part of building that exchange and making it work. I did have a guy at one point on Twitter say to me, you should shut up because you don't work in this field and you don't know what you're talking about. And so I sent him the neural network that I built. I just sent him the code, and I said, here.
Michael Dello-Iacovo (00:57:57):
That must be satisfying in a kind of vindictive way.
David Gunkel (00:58:02):
It was very satisfying.
Michael Dello-Iacovo (00:58:03):
I can imagine. Well, just to kind of bring this together: over your career so far, have you noticed any shifts in how we think about robot rights? One, just to prime you, might be a shift from thinking about robots as moral agents to thinking about them more as moral patients. What have you seen over your career so far?
David Gunkel (00:58:30):
So, yeah, as I said earlier, when this started for me it was a really small field; I could count on my fingers how many people were working on this subject. And it's really exploded. I think the work that you've done at Sentience Institute documenting this in your literature reviews really shows this exponential increase in interest, but also in scholarship, in this field. On the one hand that's very gratifying. On the other hand, it's hard to keep up, because there's so much happening and there's a lot to read and a lot to stay on top of. But I will say that a couple of trends have emerged in this process. I think there has been an increasing move from this being a speculative moral subject to this being a much more pragmatic and practical legal subject.
David Gunkel (00:59:21):
My own thinking has evolved in that way. My first book, The Machine Question, was very philosophical, very much situated in those moral traditions. My most recent book, Person, Thing, Robot, which will come out from MIT Press next year, is much more engaged with legal philosophy and with legal practice. And that, I think, is a reflection of the fact that that's how the trend has gone in the research over the last decade. Another thing I've noticed is the bringing of non-western ways of thinking about these questions into the conversation. I think when these questions began over a decade ago, the way in which I and others were engaging these things was by leveraging the western moral and legal traditions to try to speak to the people who are building and developing these technologies.
David Gunkel (01:00:18):
Over this past decade, we've seen, I think, a greater engagement with, and a greater desire to engage with, other ways of thinking and other ways of seeing. Not as a way of doing something better or worse, but as a way of tapping into the difference we can see in human thought processes, which allows us to really cultivate a relationship to the wide range of human wisdom as it has developed over time, but also over space. And I would say the last thing I've seen, and this is very gratifying: when I started this and began to talk about robot rights as a subject matter for investigation, there were a lot of very abrasive and very triggered reactions. How can you say that? This is just horrendous. Who would talk this way?
David Gunkel (01:01:17):
And I had this very famous picture I put on Twitter of me holding a sign that said, robot rights now. It sparked a huge controversy about five years ago, and I learned a lot in that exchange; it was an explosion of interest, but also of pushback. I think we've seen things evolve to the point now where people are saying, yeah, we need to talk. This has got to be talked about, this has got to be grappled with. We can't just put our fingers in our ears and pretend it doesn't exist. It does exist. Laws are being made, hearings are happening. AI personhood is not something for the future; it's something that legislatures are looking at right now. And as a result of all this, taking these questions seriously and engaging with them in a way that is informed by good moral and legal thinking is, I think, absolutely crucial. I've seen that mature over the last five years in a way that really speaks to the fact that a lot of people have found this to be not only of interest, but also crucial for us to engage with as researchers.
Michael Dello-Iacovo (01:02:29):
Great, well, thanks for that, David. Just to finish up, where can listeners best follow you or your work? And is there anything in particular you'd want to suggest they look at, whether it's a book or any other piece of your work, especially if they're interested in this topic and want to learn more?
David Gunkel (01:02:44):
So you can follow me on Twitter; my handle is David_Gunkel, and you can find it very easily. My website is gunkelweb.com, and you can go there for access to texts and books and things like that. I would say right now, if this is of interest to you and you really want to jump in feet first and see what it's all about, the two books that began all this were The Machine Question: Critical Perspectives on AI, Robots, and Ethics from 2012 and Robot Rights from 2018, both published by MIT Press. You should be able to get both of them used very cheaply these days, or go to the library; they have them too. That's a pretty good way to get into this material. And because of the kind of research I do, I try to be as exhaustive as possible in documenting what people have said, where it's going, and where it's come from, and hopefully make sense of it. So it will hopefully provide people with a good guide to finding their way through this stuff and figuring out where they stand.
Michael Dello-Iacovo (01:03:52):
That's great, thanks. We'll have links to all of that, and to everything else we've referred to in the show, in the show notes. So thank you again, David. I really appreciate your time, and thanks for joining us.
David Gunkel (01:04:02):
Yeah, it's been really great to talk to you, and I appreciate the questions. As you said early on, there's a reason the Sentience Institute is interested in these questions and there's a reason I'm interested in these questions, and I think they dovetail very nicely. It was great to talk with you about these matters.
Michael Dello-Iacovo (01:04:18):
Thanks for listening. I hope you enjoyed the episode. You can subscribe to The Sentience Institute podcast on iTunes, Stitcher, or any podcast app.