January 19, 2023
Guest Matti Wilks, University of Edinburgh
Hosted by Michael Dello-Iacovo, Sentience Institute
Matti Wilks on human-animal interaction and moral circle expansion
“Speciesism being socially learned is probably our most dominant theory of why we think we're getting the results that we're getting. But to be very clear, this is super early research. We have a lot more work to do. And it's actually not just in the context of speciesism that we're finding this stuff. So basically we've run some studies showing that while adults will prioritize humans over even very large numbers of animals in sort of tragic trade-offs, children are much more likely to prioritize humans' and animals' lives similarly. So an adult will save one person over a hundred dogs or pigs, whereas children will save, I think it was two dogs or six pigs over one person. And this was children that were about five to 10 years old. So often when you look at biases in development, so something like minimal group bias, that peaks quite young.”
What does our understanding of human-animal interaction imply for human-robot interaction? Is speciesism socially learned? Does expanding the moral circle dilute it? Why is there a correlation between naturalness and acceptableness? What are some potential interventions for moral circle expansion and spillover from and to animal advocacy?
Matti Wilks is a lecturer (assistant professor) in psychology at the University of Edinburgh. She uses approaches from social and developmental psychology to explore barriers to prosocial and ethical behavior—right now she is interested in factors that shape how we morally value others, the motivations of unusually altruistic groups, why we prefer natural things, and our attitudes towards cultured meat. Matti completed her PhD in developmental psychology at the University of Queensland, Australia, and was a postdoc at Princeton and Yale Universities.
Topics discussed in the episode:
- Introduction (0:00)
- What matters ethically? (1:00)
- The link between animals and digital minds (3:10)
- Higher vs lower orders of pleasure/suffering (4:15)
- Psychology of human-animal interaction and what that means for human-robot interaction (5:40)
- Is speciesism socially learned? (10:15)
- Implications for animal advocacy strategy (19:40)
- Moral expansiveness scale and the moral circle (23:50)
- Does expanding the moral circle dilute it? (27:40)
- Predictors for attitudes towards species and artificial sentience (30:05)
- Correlation between naturalness and acceptableness (38:30)
- What does our understanding of naturalness and acceptableness imply for attitudes towards cultured meat? (49:00)
- How can we counter concerns about naturalness in cultured meat? (52:00)
- What does our understanding of attitudes towards naturalness imply for artificial sentience? (54:00)
- Interventions for moral circle expansion and spillover from and to animal advocacy (56:30)
- Academic field building as a strategy for developing a cause area (1:00:50)
Resources:
Resources for using this podcast for a discussion group:
Transcript (Automated, imperfect)
Michael Dello-Iacovo (00:00:05):
Welcome to the Sentience Institute Podcast, and to our 21st episode. I'm Michael Dello-Iacovo, strategy lead and researcher at Sentience Institute. On the Sentience Institute podcast, we interview activists, entrepreneurs, and researchers about the most effective strategies to expand humanity's moral circle. Our guest for today is Matti Wilks. Matti is a lecturer (assistant professor) in psychology at the University of Edinburgh. She uses approaches from social and developmental psychology to explore barriers to prosocial and ethical behavior. Right now, she is interested in factors that shape how we morally value others, the motivations of unusually altruistic groups, why we prefer natural things, and our attitudes towards cultured meat. Matti completed her PhD in developmental psychology at the University of Queensland, Australia, and was a postdoc at Princeton and Yale Universities.
Michael Dello-Iacovo (00:01:00):
All right. I'm joined by Matti Wilks. Matti, thanks so much for joining us today.
Matti Wilks (00:01:03):
Thank you for having me.
Michael Dello-Iacovo (00:01:04):
Our pleasure. So I've been excited to have you on the podcast for a while now, ever since I heard you speak on the Sentientism podcast, which I have been a guest on myself as well. In that interview, you spoke about what matters ethically, namely sentience. So I was hoping we could start this conversation off by talking about what actually matters ethically. So when we're trying to do good, in quotation marks, what do you think that means exactly?
Matti Wilks (00:01:30):
Yeah. Okay. So that's obviously a really hard question, a great question to start with. Whenever I'm asked these questions, I like to preface that, you know, I'm not a philosopher. My job isn't to figure out what is true; my job is to understand what other people think. But of course, I have some opinions of my own. I guess for me, I've always been really interested in trying to understand how to reduce suffering. So it feels like there's a whole bunch of things that could matter, but for me, when you really boil it down, the idea is that there are beings with experiences, and those experiences could be negative. Even if there's sort of nothing that truly matters, those experiences likely matter to those beings. And so being able to reduce the suffering for them is the point that I've come down to with what I wanna do. And increasing wellbeing and reducing suffering both come in there, but for me, I find the suffering a bit more salient.
Michael Dello-Iacovo (00:02:19):
Okay, sure. So would it be fair to say, does that make you a negative-leaning utilitarian, or?
Matti Wilks (00:02:26):
Well, I always worry about the negative utilitarian label because of, you know, its extreme implications. So I actually think there's probably some kind of logarithmic scale where the more extreme the suffering is, the more it outweighs wellbeing. But, yeah, I guess I lean a little negative utilitarian, if you can take that lightly.
Michael Dello-Iacovo (00:02:47):
Yeah, sure, sure. Yeah. Cool. I think that tracks mostly with my own views. I think I'm negative leaning in practice in the sense that I think suffering tends to dominate, just in the universe in general, and so that's what I choose to focus on: reducing suffering rather than increasing wellbeing. But yeah. Cool. So thanks for starting off with that. The reason I wanted to start with that is that some of our listeners may be surprised, after recently hearing episodes about animal protection, to suddenly be hearing episodes about digital minds and robot rights. So I wanted to start with this to give a little bit of context for why Sentience Institute has pivoted a little from focusing mostly on animal protection to focusing mostly on digital minds. And I think the link there is, as we've just discussed, it's about sentience. And if digital minds are going to be sentient, either at some point in the future or maybe even now to an extent, we want to make sure that the future goes well for these sentient minds, regardless of their species and regardless of what their substrate is, whether they're biological or synthetic.
Matti Wilks (00:04:03):
Yeah, I think that makes a lot of sense. Like, it doesn't really matter who's suffering; it matters that there is suffering.
Michael Dello-Iacovo (00:04:09):
Exactly. Cool. So with that, well, just one other point I wanted to touch on is this idea of higher versus lower order suffering, where there might be higher order pleasures, like some people say, pleasures of the mind, versus lower order pleasures, pleasures of the body. So do you have any thoughts about, I guess, how much this matters when we're talking about the experience of sentient minds, whether you think animals might experience that or digital minds might experience that? And feel free to speculate wildly.
Matti Wilks (00:04:45):
Yeah, so I mean, of course this will be very wild speculation at this point. I suppose my sense is that it's much clearer to get evidence that animals have lower order suffering. And my hunch would be that it's unlikely that many of them have higher order suffering, but I'm not an animal cognition researcher, so I don't know what I don't know. Digital minds, I could actually imagine it would be much easier for them to be able to take on this kind of higher suffering, just in terms of, you know, if you're being developed and built in this entirely different way and you have these strong agentic capacities and capacities to make advanced decisions. Whether that's true or it's just a perception thing, it might be easier to imagine them having these kinds of higher order capacities for suffering and higher order capacities for pleasure. But I don't think I feel confident enough to make any claims about what I think would be true at this point.
Michael Dello-Iacovo (00:05:36):
Fair enough. Thanks for that Matti. Alright, we'll move on then. So, the first topic I would like to discuss with you is the psychology of human-animal interaction, and then what that might mean, based on our knowledge of that, for human-robot interaction or human-artificial sentience interaction. So could you start by talking a little bit about human-animal interaction and what we know about that field?
Matti Wilks (00:06:02):
So there's been increasing research, especially in the last few years, looking at sort of the kinds of factors that shape the way we value others, and particularly the way we value animals. So for example, I recently completed a review with Karri Neldner, and we looked at all the factors that are related to children giving moral worth to animals or children valuing animals. So things like how beautiful that animal is, how good that animal is, how much mind we perceive it to have, or how intelligent we perceive it to be. And I think a lot of these kinds of questions are also gonna apply to digital minds and to artificial sentience. So understanding what the factors are that are related to having moral concern for these beings. I think there'll also be differences.
Matti Wilks (00:06:43):
So, you know, you had Kurt Gray on recently and he probably spoke about the uncanny valley effect, so I'm not exactly sure how that's gonna tie in here. If you have this incredibly beautiful, incredibly human-like robot, with a mind, I'm not really sure how we would feel about that, because it might be sort of too real or too accurate. But I think in general, the kinds of things that we're seeing that are related to moral concern for animals are probably gonna be somewhat related to moral concern for artificial sentience as well. I imagine though that we would also have a little bit more trouble getting on board with that. And I think this kind of connects to this idea of substrate, which I know people at Sentience Institute have spoken about and looked at, like comparisons to speciesism, where I imagine that the barriers to accepting these kinds of factors are gonna be a little bit higher.
Matti Wilks (00:07:28):
But in general, I think looking at the literature on what is predictive is probably gonna be really helpful for artificial minds. But then there's also the human-animal interaction literature. So looking at things like positive contact, which I know is a really big thing in the prejudice literature in general and I think has in recent years been picked up in the animal literature. So looking at things like how exposure to different species can improve our attitudes towards them. So I imagine that once we get to the point that we can have these kinds of interactions with artificial sentience, then that would be something that would go a long way in improving attitudes as well, as long as of course that contact is positive.
Michael Dello-Iacovo (00:08:04):
Hmm. Yeah, sure. And just on that as well, one of the things we talked about with Kurt Gray was how the presence of robots might actually make people less prejudiced towards other humans, because it might lead them to realize that, compared to this other very different entity, humans are actually quite similar. You know, they might be of a different race, gender, etcetera, but in the face of an artificial sentience or robot, it makes everyone feel closer together. So do you get that effect in the presence of animals or non-humans? Does that have the same effect?
Matti Wilks (00:08:48):
So I don't think I've come across research showing that, and it could just be that I've missed it. But my sense is that with robots or with AI systems, we're sort of wondering if they're gonna be possible social partners or social categories in a way that we don't tend to categorize animals as being. So there's actually a lot of work, and this is a bit of a tangent, looking at the way we categorize animals, and the fact that we categorize certain animals as food actually affects the way that we think about them. So animals that we think of as food, we're more likely to deny mind to. But back to the question you asked, to me it doesn't feel like we're thinking about animals as social partners in that same way, potentially with sort of companion animals, but I don't think they pose a threat, whereas with robots and artificial sentience, there is some potential for them to socially integrate in a more human-like way.
Matti Wilks (00:09:32):
So there's that great study showing that, you know, if you think of robots as the out-group, that can bring people together. But there's also another study by, I think Adam Waytz was the senior author on the paper, and he and his collaborators found that exposure to AI in the workplace actually increased prejudice. And so I think the answer isn't necessarily that exposure to AI increases or decreases prejudice. It's more, where do you draw the social line? Does that social line divide you from other people and increase threat, or does that social line capture you and other people together in opposition to artificial sentience? And so I don't think it's quite as clear as, you know, robots reduce prejudice; it depends how you wanna think about them socially.
Michael Dello-Iacovo (00:10:14):
Mm-hmm. Yep, that makes sense. You also mentioned speciesism, and I've heard you say elsewhere that we seem to learn speciesism socially; a lot of your research is about the attitudes of children to animals and robots. So the question is, how much of speciesism do you think is learned socially, versus, I guess, being more on the nature side of nature versus nurture? So what's the evidence for speciesism being primarily socially learned, if you think that's the case?
Matti Wilks (00:10:49):
So speciesism being socially learned is sort of our, probably our most dominant theory or idea of why we think we're getting the results that we're getting. But to be very clear, this is super early research. We have a lot more work to do. And it's actually not just in the context of speciesism that we're finding this stuff. So basically we've run some studies showing that while adults will prioritize humans over even very large numbers of animals in sort of tragic trade-offs, children are much more likely to prioritize humans' and animals' lives similarly. So an adult will save one person over a hundred dogs or pigs, whereas children will save, I think it was two dogs or six pigs over one person. And this was children that were about five to 10 years old. So often when you look at biases in development, so something like minimal group bias, that peaks quite young.
Matti Wilks (00:11:37):
So children around four years of age show really strong minimal group bias, and then it plateaus off across sort of middle childhood, four to eight years of age. For this, as well as some other biases or preferences that we've looked at, children are showing sort of no preference or a very mild preference for animals over people until as old as 10 years. And I think Luke McGuire and Nadira Faber also found this in children a little bit older. So it seems like there's this very late emerging tendency to start prioritizing humans much more than animals. Which is why we think it could be socially learned, because it's unlikely that this kind of thing would emerge that late if it was something that was innate or early emerging. And so it's likely that with social exposure, and potentially a whole range of other factors, that's where we're starting to see this perception shift.
Matti Wilks (00:12:25):
So we've also found this with other kinds of distant beings. So in addition to animals, we've found this with perceptions of robots, where children are more likely to grant moral worth to robots than adults are, and this is also quite late, and also in a recent study with culturally and physically distant others. So people who are very different or people who live very far away: younger children think that we're more obliged to help them, and they're also less likely to differentiate between those two categories, like between someone who lives close and someone who lives far away, than are older children and adults. And older children and adults really don't think that we're obliged to help anybody at all. So to the question of, do you have to help these people, older children and adults were not likely to say yes at all, really. And so this suggests that it's not necessarily specific to animals, but instead maybe what's happening is that young children, I think of them as being more morally expansive. They're more likely to think that we have obligations or that we should feel concerned for a whole range of different beings that are quite socially distant from us, in a way that adults don't. And so social learning, as we said, is one possible explanation, but there's a few different ideas of what could be going on there.
Michael Dello-Iacovo (00:13:30):
Yeah. So this is very far from my area of expertise, but you've said that one of the pieces of evidence for speciesism being socially learned is that you see this change in attitudes up to about the age of 10, and it's unlikely that it would be something else. So if it were developmental, say there's a developmental shift at some point in a child's development, is the idea that 10 is too late for that developmental shift?
Matti Wilks (00:14:03):
So, when we think about other capacities that children have, the other kinds of big changes that we see tend to come online a little bit earlier. So, a good example is theory of mind. So there's this task called the Sally-Anne task where you have a child come into a room, so there's the child and the confederate. A confederate is a person who's acting in the study. And you have them both see a box, and the box has the lid on, but it's got jelly beans all over it. So you make the assumption that there's jelly beans in the box, and then you have the person leave the room, you show the child, hey, there's actually pencils in the box, and then put the lid back on. Then the confederate comes back into the room and you say to the child, what does this person think is in the box?
Matti Wilks (00:14:41):
And younger children, particularly before about four or five years of age, will assume that this person has the same knowledge as them. So they'll say they think there's pencils in the box, whereas as you get a little bit older, children realize that this person didn't see the switch; they have this theory that this person has a different mind to them. And so they'll say, oh, they think there's jelly beans in the box. And so these kinds of capacities that maybe are a little bit more innate, and not necessarily something that you pick up through social learning, tend to happen a little bit earlier. I'm not an expert in all areas of child development, so it's possibly something that I am missing here, but the fact that it's so late suggests that this is changing once children have a lot of capacity to learn from others socially, engage with others, and have experiences with others. And just the fact that this bias seems to emerge so much later than other kinds of biases that we're seeing, like this minimal group bias or a tendency to prefer your group.
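As a rough illustration of the structure of that task, here is a minimal sketch in Python. This is not from the episode, and the names in it are invented for the example; the point it encodes is that the confederate's belief only updates on events they actually witness, so it can come apart from the true contents of the box.

```python
# Toy sketch of the false-belief structure in the Sally-Anne-style task
# described above. Illustrative only: the task itself is run with people,
# not code, and all names here are made up.

def run_false_belief_trial():
    true_contents = "jelly beans"        # what the box appears to contain at first
    confederate_belief = true_contents   # belief formed while the confederate is present

    # The confederate leaves the room; the contents are swapped out of their sight.
    confederate_present = False
    true_contents = "pencils"
    if confederate_present:
        confederate_belief = true_contents  # belief would only update if the swap were seen

    # Test question: "What does this person think is in the box?"
    answer_without_theory_of_mind = true_contents    # younger children report their own knowledge
    answer_with_theory_of_mind = confederate_belief  # older children track the other's stale belief
    return answer_without_theory_of_mind, answer_with_theory_of_mind


if __name__ == "__main__":
    print(run_false_belief_trial())  # ('pencils', 'jelly beans')
```

Passing the task amounts to reporting the confederate's stale belief rather than one's own knowledge, which is the capacity described here as coming online around four or five.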
Michael Dello-Iacovo (00:15:31):
Yeah. What age did you say the children in this experiment are, when you see that shift in how they perform in that experiment?
Matti Wilks (00:15:40):
In terms of the human animal studies?
Michael Dello-Iacovo (00:15:43):
Sorry. In terms of the theory of mind.
Matti Wilks (00:15:47):
So that's about four or five. Some kids, in some versions of the task, can be about three, or as late as six, depending on the specifics, but around that age. Whereas we've tested kids up to 10, and I believe Luke McGuire's tested children even older than that, and we are still seeing this very weak tendency to prioritize humans over animals. So that's developmentally very different.
Michael Dello-Iacovo (00:16:06):
Yeah. Okay. That's interesting. So the age of about four to six kind of tracks with some studies a colleague of mine sent me in my prep for this interview, saying that there's a developmental shift between the ages of about three and five in perceiving or understanding objects as animate, and alive or sentient. Now, I guess my first thought is, okay, if they're socially learning this and you see the shift more towards the age of 10, that also seems a little bit late for when someone might socially learn that behavior, because they're interacting with their parents from birth. So why haven't they socially learned that earlier, I guess is my question?
Matti Wilks (00:16:53):
Right. So I think this probably gets to the idea that I don't think it's purely social learning. I don't think this is something where it's like, okay, you would've completely stayed caring about humans and animals equally your whole life until somebody told you otherwise. So I think probably, and we're speculating about this at the moment, some colleagues and I are writing up a paper where we're trying to understand what is actually going on here. Because right now we don't have a lot of research on these mechanisms to speculate about what's definitely going on, but possibly there's some sort of combination of, you know, when you're younger you get taught these kinds of rules like, hey, you should share, you should help others, we should care about other people, right? And so you get given these, I think about it almost as a blanket rule where you're taught, yeah, you should help others.
Matti Wilks (00:17:33):
And it's probably really easy to operate with that rule and go around functioning saying, yes, it's good to help. And you see this with six-year-old children really into sharing things equally. So sometimes six-year-old children will even throw away extra resources to make sure that they're distributing equally between two groups. And it's not until around, I wanna say six or seven, but I'm not confident on the numbers there, that children start to share based on needs rather than just sharing everything in half. And so it's probably really easy to learn share equally, and then as you get a little bit older, you learn, actually, share based on needs, and it's a more difficult rule to pick up. So I can imagine something similar happening here where you think, okay, yeah, let's just be kind to everybody, care about everybody.
Matti Wilks (00:18:17):
But as you get a little bit older, you learn that, hey, I maybe don't have enough resources to do that. That's actually quite costly. And at the same time, when you start getting a little bit older, that's probably when you're gonna be exposed to more of this kind of behavior from other people, where it's not just gonna be, let's teach you to do the right thing, but let's actually start dealing with these more difficult social situations. And so I don't think it's simply, oh, well we get told that we shouldn't care about animals, so we stop. It's a combination of what we're seeing other people do, when we start to learn about, you know, where meat comes from and animals, as one example, but also when we start to think more concretely about what it actually means to care about these things. I don't have unlimited resources, I can only help one person, or I can only help one entity in this situation. And so it's probably a really complicated combination of all of these things.
Michael Dello-Iacovo (00:19:02):
Sure, yeah.
Matti Wilks (00:19:03):
Hopefully if you interviewed me in maybe six months, I could have had a whole paper written up about this to talk to you about.
Michael Dello-Iacovo (00:19:07):
Oh great. We'll have to do a follow up then. Yeah, cool. That sounds pretty compelling. So I guess practically, how should listeners update based on this information? Or, let's say in six months you've proven it one way or the other, pretty conclusively, how should, say, an animal advocate, or someone interested in reducing speciesism in society, or even reducing substratism in society, react to this information and change their strategy? I guess what implications does that have for what interventions someone might do if they're trying to reduce speciesism?
Matti Wilks (00:19:42):
So I think that this is a really hard question, because what seems like a really obvious answer would be, well, we should just be targeting kids. We should be trying to get kids to not be speciesist, to not be substratist, figure out what changes and try to stop that happening. And the other thing is we don't exactly know when, between sort of 10 or 11 and 18 years of age, this changes. To me it seems unlikely that it's gonna be, oh, when they turn 15, suddenly their views change. I imagine it's a gradual thing, but we haven't tested that yet. It's very hard to work with adolescents. So still an open question, but I think the easy answer would be, yes, we should target kids. And I think in some ways it makes a lot of sense. Children are gonna be more open to caring about these entities.
Matti Wilks (00:20:16):
You see a lot of young kids, for example, when they learn about what happens with meat, and they feel really uncomfortable. So some of my other research, which isn't published yet, and it's just taking a very long time cuz it's hard to recruit vegetarian children, is looking at, you know, what is unique about children who become vegetarian in meat-eating families, as opposed to children who are vegetarian in vegetarian families, or non-vegetarian kids. But I think the problem there is obviously you have the family barrier. You know, it's really hard to go in and tell children, we shouldn't be eating meat, we shouldn't be doing all these things that animal activists might feel strongly about, because you've got the family values there and the family preferences. So I think possibly starting to work with younger children is a good thing, but I just wanna caution that it also comes with a lot of really big ethics questions.
Matti Wilks (00:21:01):
I mean, we've even had issues with, you know, the kinds of studies that we run where sometimes families don't like that their children are choosing to save two dogs over one person and they feel a bit uncomfortable. And for the most part it hasn't really been a problem once you explain, oh, well, you know, this is actually what all kids say at this age. But there's certainly been a few shocked parents going, oh, why, why is my child saving two dogs over one person? And so keeping in mind that the values that children have in these cases are probably very different than the values that adults have and that we need to kind of come at that sensitively.
Michael Dello-Iacovo (00:21:32):
Yeah, that makes complete sense. So putting aside interventions targeting young children or young adults, is there anything we can take away from this that we might apply to adults? Another part of something that you touched on just a few minutes ago was understanding what changes, what people are learning socially. Can we undo that later in life? Are there any interventions that you can think of that might target that and maybe undo it?
Matti Wilks (00:22:01):
So one of the things that came up when this speciesism study first came out: somebody went and put the study on Reddit, and there were a lot of people with a lot of opinions there about who they would save and what this kind of meant. And I think for some people it was one of the first times that they'd maybe considered that this idea of prioritizing, you know, our in-group or humans or our species, all of this stuff that we probably think of as an evolved trait, which makes a lot of sense, isn't necessarily present in young children. And I think that that's potentially a good way just to get people to maybe challenge their norms a little bit, or challenge their beliefs about this stuff. But in terms of a specific intervention, or understanding, you know, how we can change the way we value others at an older age, I actually think that that's a really difficult thing to do.
Matti Wilks (00:22:43):
There hasn't been any good research on this yet in terms of how much we care about different beings across the lifespan. It's definitely on my bucket list of things that I wanna run, which is basically understanding, as we get older, how does our moral circle change? How many entities do we care about? So the moral circle is how many entities we care about, who we think of as worthy of moral concern, which I'm sure your listeners know, but I haven't seen any age-based analysis of the moral circle. And I think it's something that absolutely needs to be done. But my sense is once you're at a certain age, and I dunno what that age is yet, it's quite difficult to change your values. However, a colleague of mine, my old PhD supervisor James Kirby, has just released a paper showing that, I think it was mindful compassion training, an intervention and then sort of a two-week follow-up, was able to expand the moral circle, so people were more likely to grant moral concern to a whole range of different beings in that moral circle.
Matti Wilks (00:23:34):
And I think it was specifically to things like villains and particular groups as well. And so I thought that was really cool, but that's the first study I've ever come across that seems to have shown some kind of real impact on the moral circle that isn't just something like a framing effect. So very exciting, but I don't think that there's a lot of promising avenues just yet, unfortunately.
Michael Dello-Iacovo (00:23:51):
Okay, sure. And the moral expansiveness scale, is there any way that we can use that, whether it's for, I guess, understanding people's attitudes to more distant entities, or just in terms of interventions that we might pursue?
Matti Wilks (00:24:08):
Yeah, so I think the moral expansiveness scale is fantastic. So it was actually developed by a friend and colleague of mine, Charlie Crimston, during her PhD. But I think the scale is fantastic. It's a really great way of trying to capture this individual variability in moral expansiveness, which is something I'm very interested in. Why is it that some people are more morally expansive than others? And I think it's a really great measure. So this is the scale that my PhD supervisor James Kirby used in that study. And it does a great job of understanding moral concern across the board. But I think there's also a lot of questions that have come up for me in recent years about this scale. So one of them, for example, is how do we think about moral concern towards different entities?
Matti Wilks (00:24:45):
So I think there's always been this kind of perception that the more you care about some entities, the more you care about others; like if I'm somebody who cares a lot about human outgroups, I'll also be somebody who cares a lot about animals. But there's been a great study recently by Josh Rottman and Charlie Crimston showing that this isn't necessarily the case. So for some people, they actually care about animals more than they care about human outgroups. And this is the first study to show, I guess, a difference in the shape of the moral circle rather than the size of the moral circle. And it's raised really big questions for me, which is, you know, how does our moral concern for different beings and different entities relate to moral concern for other entities? And so I think using that scale to measure moral expansiveness in general, but also break down who we care about and how this partials out towards different groups, is a really important question.
Matti Wilks (00:25:30):
There's also, of course, I think some limitations not just of this scale, but of research into moral concern in general. So I guess the best way to illustrate this for me is if you think about something like a caterpillar or a chair, for example, a chair is probably a better example here, and then also a white supremacist. So if you look at research into the moral circle, both of these beings will be on the outskirts. They will receive almost no moral concern, but they're getting there for very different reasons. The white supremacist has done something wrong, this person is immoral, the chair is amoral, it's kind of morally irrelevant. And so thinking about what we're actually talking about when we think about moral concern is a really important question that I think hasn't received nearly enough attention in the literature. And so this is true in terms of what we're actually measuring as a concept when we think about moral concern.
Matti Wilks (00:26:17):
But I think this also raises questions for me about, you know, how does individual variability predict our moral circle, like predict our moral expansiveness? So most research on this topic to date has been looking at how factors about the entity, so how beautiful it is, how smart it is, how much mind we perceive it to have, things like that, affect moral concern. But Bastian Jaeger and I just did a study where we found that this actually only accounts for about 30% of variability in the moral circle. So there's also about 30% that comes from individual variability in who I am, you know, the factors about me, and then another 30% is the interaction between the two of those things. And so we need to be thinking not only about the entity, but also who's making the judgment, and also what kind of judgment are they making and how is their concept of moral concern going to vary between different entities. And so it's a really complicated set of findings, but I think this moral expansiveness scale is an extremely important tool; we also need to be thinking about all these other factors and how they play into the responses that we get on it.
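To make that three-way split concrete, here is a minimal sketch, using simulated data rather than anything from the Jaeger and Wilks study, of decomposing moral-concern ratings into an entity component, a rater component, and a judge-by-entity interaction.

```python
# Illustrative sketch only: simulated moral-concern ratings, decomposed into
# variance explained by the entity being judged, by the person judging, and
# by their interaction. Numbers and set-up are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_raters, n_entities = 200, 30

# Simulate ratings = rater effect + entity effect + interaction/noise,
# with each component given roughly equal weight.
rater_effect = rng.normal(0.0, 1.0, n_raters)
entity_effect = rng.normal(0.0, 1.0, n_entities)
interaction = rng.normal(0.0, 1.0, (n_raters, n_entities))
ratings = rater_effect[:, None] + entity_effect[None, :] + interaction

grand_mean = ratings.mean()
entity_means = ratings.mean(axis=0)   # average concern each entity receives
rater_means = ratings.mean(axis=1)    # average concern each rater gives
residual = ratings - rater_means[:, None] - entity_means[None, :] + grand_mean

total_var = ratings.var()
print(f"entity share:      {entity_means.var() / total_var:.0%}")  # features of the entity
print(f"rater share:       {rater_means.var() / total_var:.0%}")   # features of the judge
print(f"interaction share: {residual.var() / total_var:.0%}")      # judge-by-entity interaction
```

With the three simulated components given equal size, each share comes out near a third, which is roughly the shape of the split described above; the actual study's measures and modelling may well differ.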
Michael Dello-Iacovo (00:27:15):
Yeah. You touched on there being some people who actually care more about animals than human outgroups. So, not quite a criticism, but a limitation that some people raise about expanding humanity's moral circle is that rather than expanding it and maintaining what's in the center, expanding the circle might dilute it. So it sounds like you might be able to use the moral expansiveness scale to measure that, because I'm not familiar with any studies that have been done measuring that, but maybe you could use it to measure whether, if you expanded humanity's moral circle, that dilutes it on average?
Matti Wilks (00:27:59):
Yeah. So basically, are you caring more about distant entities at the cost of close entities?
Michael Dello-Iacovo (00:28:04):
Yeah.
Matti Wilks (00:28:04):
So I think this actually comes up a lot in discussions where, when I talk about speciesism, for example, people's interpretation of what I'm saying, or what advocates might take from my work, is that, well, we should care equally about humans and animals. And actually, at least what I take from my work is maybe we should just be caring about animals a little bit more. Like we don't necessarily need to push them to being equivalent, but what we wanna do is take those animals, particularly the ones that are at the tail end, the cases where we're having this, you know, awful mistreatment of animals, and bring them up a little bit. And so I think if you think about it more as a shift rather than a switch, it takes fewer resources and it's less costly, which means you're not necessarily gonna have that same dilution at the center of the moral circle.
Matti Wilks (00:28:49):
And again, even if you have some level of dilution, it's not necessarily a bad thing if we're not talking about, okay, I'm gonna completely neglect my child to save children living on the other side of the world. And so I think it's more helpful and more practical to think about it as a little bit of a shift rather than a switch. But I do think your point that this scale could capture this kind of effect is very true. My sense in saying that, though, is that there's not very many people out there in the world who are gonna rate, for example, their friends and family as low on the circle of moral concern. It would be more like rating them a little bit lower, or having a little bit less distance between them and the distant beings. So I don't find that criticism particularly compelling, I guess.
Michael Dello-Iacovo (00:29:33):
Sure. Makes sense. Thanks.
Matti Wilks (00:29:35):
Not to say that there aren't individual differences. So there's that fantastic book, Strangers Drowning, which talks about people who take these really extreme approaches to helping others. And I think you can see a theme in there where some people are kind of neglecting or being less compassionate towards those close to them for the sake of distant others. But I think that that's very rare and very uncommon, not something we need to worry about for the average person.
Michael Dello-Iacovo (00:30:01):
Yep. Sure. So to change topic a little bit, I wanna come back to moral circle expansion more generally in a little bit, but for now I wanna talk a little bit about some predictors of people's attitudes towards other species, or people's attitudes towards other substrates, so digital minds, robots. So one thing that we found in some of our research at Sentience Institute is that belief in animism, that objects can have spirits, is a strong predictor of positive attitudes towards digital minds. I'm wondering if we could talk a little bit about that and get your thoughts on it. And I guess there's also anthropomorphizing; animism seems somewhat different, but maybe there's some overlap there. So do you have any thoughts about animism and anthropomorphism, as they apply to speciesism and also, more generally, to digital minds as well?
Matti Wilks (00:31:01):
So the animism measure, the first time I actually came across it was in the Sentience Institute paper, so I'm not extremely familiar with it. I've used anthropomorphism in some of my work on attitudes towards robots, and we find that children who are more likely to anthropomorphize robots are also more likely to grant them moral concern, which is in line with what we predicted. In general, I think there's probably some effect, and I'm guessing this captures the animism as well, where the more likely you are to see humanness or see a mind or something along those lines in another being, the more willing you are gonna be to grant moral concern to it. And so I imagine that you could have some kind of moderating, causal effect here, where if you encourage people to see mind or humanness in other beings, then they would be more likely to grant moral concern to them.
Matti Wilks (00:31:50):
But I think that probably the stronger effect is gonna be individual differences. So the more likely I am to already perceive humanness or something in another being, the more likely I will be to grant moral concern to them. This also goes quite far; I believe I read a study once, and this is possibly something that hasn't replicated, I don't know, but people who work regularly with cars, I think, are more likely to see faces in cars, which I found really interesting. So you know how sometimes you can see a face on the front of a car? Apparently, and I think I've got that right, people who work with cars more regularly see more faces in them. And so it's this idea that there's probably a lot of individual variability there, that you are probably drawn to having those feelings because you've already got this level of moral concern.
Matti Wilks (00:32:30):
And I imagine the causal arrow goes both ways, but it also seems like something that could work. And one of the things I'm really interested in understanding is this individual variability, like what are the factors that make somebody more likely to grant moral concern to distant others? And so if there was a way that we could learn that... You know, Abigail Marsh has this fantastic work showing that people who have donated their kidneys to strangers are actually better able to identify fearful faces. So they have a stronger amygdala response when they see a fearful human face, but they also are better able to identify a fearful facial expression than people who haven't donated. And this suggests that if we could identify the factors that are driving people to care about others, and maybe this tendency is one of them, then if there's something that we could encourage in society, we could sort of work backwards and say, okay, well if it's empathy, for example, let's train people to be more empathic, and then we can see if that actually expands their moral circle.
Matti Wilks (00:33:21):
But this is the kind of speculative long-term focus of my research.
Michael Dello-Iacovo (00:33:25):
Sure, yeah. That kidney donor example was fascinating. Would I be right in assuming that that's correlation, and they haven't looked at causality? I imagine that would be somewhat harder to measure.
Matti Wilks (00:33:41):
Yeah. So then you'd have to track everybody and figure out who's gonna donate their kidneys in the future. So basically she recruited a sample of kidney donors and then a sample of non-kidney donors and looked at their differences. I'm not a neuroscientist, so I probably really butchered the neuroscience explanation there, but I know that there was this sort of behavioral difference as well, where they were able to identify these fearful faces. So it's a really cool finding which actually has motivated a lot of my work.
Michael Dello-Iacovo (00:34:03):
Yeah, interesting. So earlier you talked about how some children, when they're presented with the reality of where their meat comes from, are very uncomfortable, whereas some other children don't have that reaction. And I was just wondering if you have anything to say about what we know about the reasons why that's the case, and whether that has any implications for strategy?
Matti Wilks (00:34:27):
Really good question. I guess we don't have a lot of information about why there are these differences, but I think that question really illustrates the individual difference angle that we've been talking about today, where I'm asking, you know, what are the factors that shape who I care about and how expansive my moral circle is. So as you say, all children at some point are gonna learn about what happens with meat consumption. I think most children feel a little bit confused or a little bit uncertain about it at some point. I mean, I'm speculating here, but it tends to be a conversation that parents are a little bit uncomfortable to have with their children. And it's certainly something that I've been really mindful of doing this kind of work, not necessarily wanting to plunge really hard into the meat stuff for children who may not have had that conversation yet.
Matti Wilks (00:35:07):
But what you see, and we've found this a little bit with our research, but also you have these great YouTube videos of young kids who are like, oh, I don't wanna eat the octopus, why is he dead, kind of thing. And they're really quite heartwarming. And I think for me it's a really fascinating question of, you know, why is it that some children, who don't necessarily have a model of somebody in their family who's a vegetarian, who's spouting these values, have this really strong emotional reaction and sort of from there resolve not to eat meat anymore, often over time. And I've had friends, and quite a high number of people, come and tell me this about themselves as well. So I think I must attract this kind of person to chat about this, where even though their parents didn't support it, through their childhood they kept trying not to eat meat and they kept saying that they didn't wanna do it.
Matti Wilks (00:35:51):
And so even without a social model, or even in some cases without support, there are some children who feel so strongly about this when they learn about it that they make this really big moral decision. And if you think about children's capacities to do really moral things, this is one of the only things that they can do, right? Like they can't donate their kidneys to strangers, and they can't, you know, donate their income to effective charities, but they can choose not to eat meat. And this question of why some children have this is really fundamental to me. And I think it's one of the really important questions, not necessarily just in the case of vegetarianism, but in terms of why some people are so concerned about some distant groups that other people don't necessarily care about. For example, the people who advocated to end slavery: what is it about these people that makes them so morally motivated to care for groups that aren't necessarily in the circles of moral concern of the people surrounding them? And that's something I'd love to get a little bit more insight on as part of my research program.
Michael Dello-Iacovo (00:36:44):
Yeah. So do you have any preliminary thoughts, or even any hypotheses that you'd like to explore?
Matti Wilks (00:36:51):
So at the moment the approach I've been taking to this is testing a whole range of different personality predictors and sort of seeing what sticks. So it's not very theoretically driven, but I imagine there are probably a couple of different paths to it, where one path could be that you just have a very strong capacity to feel, so you might really be able to empathize or feel compassion towards those kinds of groups, and so maybe you just vary on that. But it could also be something like a more reasoned approach to things. So in a recent study, which is not published yet, so I don't know how much I can talk about this, we've looked at group-level differences between people who've taken the Giving What We Can pledge and sort of country-matched controls.
Matti Wilks (00:37:31):
And in that study we find that Giving What We Can pledgers are higher in a whole range of sort of reasoning traits as well. So actively open-minded thinking, need for cognition, things like that, as well as being more morally expansive and a couple of other things. And so I think it's gonna vary a lot based on the kind of moral concern that you have. So in terms of Giving What We Can, this is an action that's a bit more reasoned and thoughtful. So that's probably gonna come from people who maybe have had the feeling initially, but are also gonna be quite rational in how they decide on their actions. Whereas if it's something like choosing not to eat meat, or doing something that's more emotionally driven, it could be somebody that just has this stronger sense of feeling. But I think you probably have some strange combination of environmental experiences and genes and these individual differences. So I don't have a very good answer just yet.
Michael Dello-Iacovo (00:38:19):
Cool. Well, we'll look forward to reading about that when you've done some more research on that.
Matti Wilks (00:38:25):
I think that might be a few years away.
Michael Dello-Iacovo (00:38:27):
So to change topics again, and this is a topic that's quite close to my heart: the idea that naturalness and acceptableness are often correlated. This has been a frustration of mine in some other contexts. For example, public distrust towards genetically modified organisms in food, public trust in the organic label, and just in general, I guess, the idea that things that are natural are better. And so there's the context that is closest to my heart, I guess, of science and mistrust of certain products that otherwise have reasonable scientific grounding for being safe and healthy. But also there's this idea that animal products are fine because they're natural, and it's fine to eat meat because of that. So first I'd like to ask, and I know you've done a lot of research on this, how do you define natural in the context of your work? Let's start with that.
Matti Wilks (00:39:29):
So I actually don't define natural in the context of my work, which is kind of central to the point of my research, which is this idea that, as you've touched on, and I have very similar frustrations, people seem to think that natural things in general are good, and they use that kind of concern about naturalness to dismiss a lot of potentially really beneficial technologies. So I came to this through my research on attitudes towards cultured or cultivated meat. But I think it's also true for things like genetically modified food; the golden rice thing, where they destroyed the fields of golden rice that could have helped to reduce vitamin A deficiency, that is something that has really frustrated me. And I find that when you actually say to people, so what do you mean by natural, a lot of people really struggle to answer this question, and if you read the literature, no definition is provided. So I'm yet to come up with one, because I don't think it's necessarily gonna be a helpful thing to do.
Michael Dello-Iacovo (00:40:19):
Yeah, that makes a lot of sense. So what do people do when they're asked that question? Do they come up with some other kind of justification for why they don't like certain things that they don't think are natural?
Matti Wilks (00:40:35):
This is actually something that there hasn't been a lot of research on. So there's a little bit of work looking at what the characteristics are of foods that we think of as natural and unnatural. So for example, if you change something chemically, adding chemicals to it or adding something to it, that has a bigger impact on naturalness than freezing. But most of this work has showed that when you specify that a natural and an unnatural product are identical, people will, in most cases, choose the natural product, even sometimes when they're told that the natural product is less safe. And people in the literature have often interpreted this as, well, people have this irrational preference for natural. But there was one study that actually showed that when you ask people, you say, okay, here's a natural vitamin C and a synthetic vitamin C.
Matti Wilks (00:41:18):
Which one do you prefer? Even though they say they prefer the natural thing, when you push them a little bit further, they don't actually believe that the two are identical. And so people seem to have this sort of fundamental belief that there's something qualitatively different about unnatural and natural things, and it's really hard to put your finger on. And I don't think there's been any work that's really systematically investigated what people mean in this context. And so that's one of the things that I'd really like to look at right now. I'm putting together a grant to try, I think of it as, mapping the naturalness space. So understanding developmentally when we start preferring natural things, which I've recently published one paper on, finding that children as young as five already prefer natural foods.
Matti Wilks (00:41:55):
So if you show them an apple made in a lab or an apple grown on a tree, they have an overwhelming preference for the apple that's grown on a tree. I'm also interested in how this concept varies cross-culturally, and historically, when we started to prefer natural things. Cuz you know, a long time ago there was a strong preference for canned foods; when canned foods were new, people were really excited about them, and now canned foods would probably go more towards the unnatural side. So understanding how these historical trends have shifted, and then of course looking at what these individual difference predictors are. So I'd really like to be able to map that space, and that might be the first step towards understanding what we actually mean when we say that something is unnatural. I have my own speculations about what is driving it, but when you look at the literature, a lot of it just gets chalked up to this idea of, well, there's a natural-is-better bias and maybe this is something that's evolved.
Matti Wilks (00:42:40):
We have this biophilia tendency where we really, you know, enjoy being out and engaging with nature. But to me that doesn't really explain everything, because we apply these naturalness preferences really inconsistently, and we seem to, I think in many cases, use it as a post-hoc justification. So if you feel negatively about something, then maybe you're gonna be more likely to call it unnatural, even though there's a lot of things out there that are bad and dangerous that are natural as well. And so we're not particularly reasoned about it, and I don't think that the research has a really good grasp on what we mean by natural and why we think these things are good just yet.
Michael Dello-Iacovo (00:43:12):
Yeah. One thing you mentioned there resonated with me, in that when you push people, even if you specify, what would you prefer, a natural product or an unnatural product, and they're both identical, they don't really believe that they're both identical. I've sort of noticed this anecdotally when I talk about either cellular agriculture, like lab meat, or other products like synthetic taurine versus natural taurine, for example. Even when I say, so you know, these two products or chemicals are identical in every single way, they're chemically identical, when they answer the question of which they prefer, it still sounds like they don't actually believe that they could be identical chemically or physically. So yeah, that rang true to me. Also, there's this other idea, I guess, that all chemicals are bad, and of course people don't necessarily understand that almost everything is a chemical; even the natural apple from a natural tree is made up of chemicals. But there's this idea that chemicals are bad and adding chemicals to things is bad. Does that tie into this idea of naturalness as well?
Matti Wilks (00:44:28):
Yeah, so this has come up, I think, in the context of what kinds of changes matter. So if you freeze something, it has some impact on how natural it is, but if you add chemicals to it, it has a much bigger impact on how natural it is. And I think there are a couple of things going on here. One is this misconception about what most things are made up of and how the world works. But then also, you're exactly right: people really do seem to think that chemicals are a bad thing, or that 'chemical' is a scary word. And I think some of this comes through in the research I mentioned before, where freezing something has less of an impact on naturalness than adding chemicals to it, for example.
Matti Wilks (00:45:11):
And my work found this with kids as well, but I think there's a little bit of a misconception here about the world and what products are made up of and things like that. So I think it's partly an issue where people have learned things like 'chemicals are bad' or 'genetically modified food is bad' without having a really good sense of what that actually means or how widespread these things are when they're applied. I think Hal Herzog has some work showing that, if you think about animals, if you make a single gene change to an animal, we see that animal as much less natural than a domesticated animal, even though a domesticated animal has had a much greater change to their genes through the domestication process. So I think the intuitions that we have aren't always aligned with the science or what's really going on in these processes.
Michael Dello-Iacovo (00:45:58):
Mm-hmm. You mentioned briefly that there are differences between different cultures and their perception of naturalness. One example that we came up with preparing for this podcast was medicine. Some people trust natural medicine more, while others trust modern or western medicine more. So what drives that? I guess, what makes naturalness more or less acceptable in different contexts? Because in the context of medicine, some people do trust the non-natural medicine more.
Matti Wilks (00:46:37):
So to date I haven't actually come across any research that's quantitatively studied this. When I mentioned the cultural differences before, this is an area that I'd really like to dig into. There have been a couple of studies looking at different cultural views, but in general, most of the work on attitudes towards naturalness has been in Europe and America. You get a little bit of a difference between Europe and America, but nothing like what you could imagine you would get worldwide. And one of the barriers, one of the reasons I've been a bit slow to start looking at this, is that trying to explain what naturalness means and trying to capture that across cultures is a really difficult thing to do. How could I translate that to another language when we don't yet have a good or reliable definition between individuals here of what it means for something to be natural or unnatural?
Matti Wilks (00:47:22):
And also, that definition seems to me to change based on the context in which you're talking about it. But in terms of speculating about what cultural differences could be driving it, I think that really gets at the fundamental question I have about why people are so interested in naturalness and see it as such a good thing, where I think there's quite a big cultural or social environment influence on it. So thinking about this in a moral context, you have something like moral relativism: some cultures are gonna see certain actions as really moral or immoral, and other cultures see different acts as moral or immoral. And having that social context influencing what we see as moral or immoral is something that's been really well established in the literature.
Matti Wilks (00:48:04):
And I imagine that you would get a similar effect with naturalness here. So something that I'm gonna see as natural might be very different from what somebody else is gonna see as natural, based on the social practices that are around them. But I'd love to see that actually come out in some data, because we haven't done that yet. It also makes me think of the individual level, the way we think about naturalness and how this changes in different contexts for the individual. I know that people tend to think of, for example, genetically modified food as unnatural, and when they think of it as unnatural, they're gonna see it more negatively. But there's been a study recently showing that more education and understanding of genetically modified food actually reduces the perception of it as being bad. I don't think it necessarily reduces the perception of it being unnatural, but if it's reducing the negativity, then it might still be seen as unnatural while the negative affect is taken out of it.
Michael Dello-Iacovo (00:48:56):
So what does all this tell us about people's attitudes towards plant-based food technology and cellular agriculture? What do we know already, and what might we expect as these products scale up? I'm particularly interested in cellular agriculture: as that hopefully becomes more publicly available, what can we expect about people's purchasing habits? Do we expect that we might see the general public accepting these kinds of products, or will there be a lot of resistance to them?
Matti Wilks (00:49:25):
So just quickly, I assume you saw that the FDA and USDA have recently, just a couple of days ago, approved cultured meat in the US, so that's hugely exciting. I guess there are a few thoughts here. As we've talked about, and as much as not everybody in this space agrees with me, I do think that concerns about naturalness, particularly in the context of cellular agriculture, probably more than plant-based meat, are concerns that people really have. But I think there's also a pretty powerful effect of social norms and what other people are doing. So similar to the kinds of people who are particularly morally motivated to help others, I think there are gonna be some people who are particularly excited about these new products and who will be willing to try them, whereas you're gonna have a lot of other people who have these concerns.
Matti Wilks (00:50:04):
And my work has shown a lot of individual variability in terms of things like attitudes towards cultured meat and individual predictors. So for example, people who are higher in things like disgust and distrust are more likely to reject cultured meat, and are more likely to be absolutely or morally opposed to it. And so I think you will still have some pushback. And I think over time, as it becomes more normalized and becomes something that everybody else is doing, that's when you'll start to see the large-scale attitude shift. But that would suggest, okay, well, we don't need any research, we can just go out and the market will take care of it. And maybe that's true. I hope that's true. But I do worry when you look at, for example, people's attitudes to genetically modified food: even though a lot of people eat genetically modified foods without realizing it, there have been consistently negative attitudes for, I think, coming on 30 years now, and I know Sentience Institute has a report on this, looking at the cultural and social changes that happened around then.
Matti Wilks (00:50:56):
So with cellular agriculture, as it becomes more available, if there's a negative norm that gets picked up because of these concerns about naturalness, or potentially because of concerns about the welfare of farmers, which is definitely something that comes up a lot when you interview people about their attitudes to cultured meat, then there is the potential that a lot of people end up having negative attitudes to it as well. So I think this is a really pivotal time where we wanna be ensuring that there's a positive narrative.
Michael Dello-Iacovo (00:51:21):
Yeah, and I think it's critical as well, not just from the consumer side. I can imagine there would be animal farming lobbies that, knowing that people prefer natural products, would push that narrative quite strongly, saying that naturally farmed animal products from real-life animals are better and safer and healthier than what might be a physically identical product coming from cellular agriculture. So yeah, I think making sure that the narrative goes well and we get on top of that is important. Do you have any thoughts about how we might do that, or anything you'd like to see advocates do?
Matti Wilks (00:52:01):
That's, yeah, a really hard thing to do. So naturalness is one of the big concerns that comes up when you ask people about cultured meat; they say, oh, it's disgusting, oh, it's unnatural. And there's been one study where they tried to reduce concerns about naturalness, and I think they used a couple of different methods. The only one that had any impact was pointing out how unnatural factory farming is, and that reduced concern a little bit. But I know, for example, that when you talk about the technical side of cultured meat, that often makes attitudes towards factory farmed meat more positive. I know Sentience Institute found this, as did Michael Siegrist in another study. So there does seem to be this tendency where focusing on the technological side of things reduces positive attitudes towards cultured meat and increases acceptance of farmed meat.
Matti Wilks (00:52:43):
So I think you have to be quite careful with the narrative there. In general, my view of the field has been that when you look at interventions that target people rationally or target their reasoning, we haven't seen a lot of luck in improving attitudes towards cultured meat. And this is speculating, because I don't have any causal data to show this yet, but my work, as I mentioned, shows that people with these strong emotional traits, so disgust, distrust, fear, tend to have more negative attitudes. And so maybe instead of trying to reason people into thinking cultured meat is good, we need to help them to feel better about cultured meat. So get them to feel comfortable. First of all, I guess information about safety seems quite critical here; a lot of the work on naturalness seems to be linked to concerns about safety.
Matti Wilks (00:53:26):
One study by Yoel Inbar and colleagues found that we have sort of a recency bias with new food technology: the longer a technology has been around, the more positively we feel about it. That's probably a cue to safety; I don't know if they speculate about that, but that was my read of the paper, and I imagine that the safety thing comes in there. So making sure that people feel informed about the safety and all of that kind of stuff, but also making sure that people feel good about it and addressing the potentially negative feelings that are gonna come out of this, seems like the way to go. Although of course doing that is a lot easier said than done.
Michael Dello-Iacovo (00:53:59):
Cool. Thanks for that, Matti. So I'm gonna ask you a little bit of a curve ball now. I'm hoping to get you to speculate, but if you don't want to, that's totally fine. Going off the topic of naturalness, I had the thought that this might apply to artificial sentience, given that if in the future we have digital minds that are persons, or are sentient, people might not see them as being natural. So I'm wondering if we can speculate a little bit about what that might imply for how people would see these entities. I mean, the simple answer that I might go to is just that, even if their mind was the same, just digital instead of human, people might still see them as lesser, as not as good. So yeah, can I get you to speculate a little bit about that?
Matti Wilks (00:54:53):
So I think your intuitions there are probably right. I imagine that the perception of their minds as being unnatural kind of creates an extra barrier. So if we could show, for example, that an animal, like a dog, was particularly intelligent, then people might be more willing to grant that dog moral status because it's sort of made from the same stuff as us. Whereas an AI system is gonna be not like us and gonna be developed differently. And so, and I don't think this will apply equally for everybody, but I imagine that for many people there would be this extra barrier to really believing that this being could be sentient or could be similar to us from an intellectual perspective. I guess the other thought that comes up for me here is that often when you think about naturalness, it seems to be, and I haven't got good evidence for this yet, mostly in the context of things that go into the body.
Matti Wilks (00:55:43):
So it can be about sex, about food, about medicine, about reproductive technology, whereas AI systems are in most cases gonna be separate from the body. And so I wonder if there'd be a difference between, you know, the kind of thought experiment where you take away parts of somebody's brain and replace them with computers and technology, versus an AI system that's wholly separate from us and poses no kind of personal threat to our body, and whether you'd get differences in attitudes there. I dunno what the answer is to that, but given that when we look at naturalness in all these other domains it's about things that we put inside ourselves, then maybe you'd get a different sense of the kinds of barriers coming up. But I'd like to see some data on that before I made any strong claims.
Michael Dello-Iacovo (00:56:32):
Thanks. Cool. So let's go back, just for a little bit to finish off, to moral circle expansion. We talked about moral circle expansion and the moral expansiveness scale earlier, but I'm interested in whether you have any thoughts about, I guess, specific interventions for moral circle expansion, and also whether we might expect any spillover from, say, direct animal advocacy to other areas like artificial sentience, and vice versa. So there are two parts to that. First, do you have any thoughts about interventions, and then any potential upsides or downsides?
Matti Wilks (00:57:11):
So as I mentioned before, my old PhD supervisor, James Kirby, has recently shown that compassion mind meditation can expand the moral circle. And I think this persisted over quite a few weeks, so I thought that was really promising. Most of the work in the past has looked at things like framing effects, so whether you ask people to exclude entities from the moral circle or to include them. For me, I'm starting to come around to the idea that a lot of this is about motivation. You need to take away potential barriers for people to be able to expand their moral circles. A good analogy for this is the meat paradox, this tendency that we have to both love animals but continue to eat meat, and a lot of work shows that we're quite motivated to deny mind.
Matti Wilks (00:57:51):
So if you tell people that they're gonna eat meat, or if people have recently eaten meat, then they'll be more likely to deny mind to animals. You also see that vegetarians tend to be less likely to deny mind to animals, because they're not experiencing this cognitive dissonance that comes with loving animals and eating meat. And so I wonder if this is the kind of thing that we could be working on in the moral circle expansion space. I think if you were able to have unlimited moral concern points to give away to lots of different beings, then we would probably see less restrictive moral circles. In general, with some exceptions for people who we see as having done bad things, people don't seem to want people and animals to suffer. It's more that there are reasons we want to deny the moral status, because we wanna eat them, or because we wanna use them for testing, or just because we don't have enough resources.
Matti Wilks (00:58:37):
And so I think if we could get people to reframe the way they're thinking about the cost of these kinds of moral trade-offs, or actually reduce the costs for them by giving them enough resources to be able to help, that seems to be the way to go. But from a psychological perspective, this motivation is very, very difficult to overcome. And so I think that's where we need to be targeting, but I don't have any really good concrete ideas for what that could look like yet.
Michael Dello-Iacovo (00:59:00):
Yeah, sure. And what do you think about spillover from, let's say, animal advocacy to artificial sentience or vice versa, or I guess just interventions targeting one specific part of the moral circle? Do you think there's spillover from that intervention to other areas of the moral circle?
Matti Wilks (00:59:20):
Yeah, so as we've talked about, I think there's probably individual variability in how morally expansive people are. For example, Adam Waytz shows that people on the political left tend to have more expansive moral circles than those on the political right. So I imagine, and I've got a little bit of work looking at this at the moment, which I know other people are doing as well, the idea is that if I care more about certain distant groups, does that mean I'm also gonna care about other distant groups? I think that in general the answer would be yes, but you do also see work, for example by Charlie Crimston and Josh Rottman, showing that some people who care a lot about animals actually care less about human outgroups.
Matti Wilks (00:59:58):
So it's not always a given that caring a lot about, for example, animals will mean caring a lot about artificial sentience. But I do think that's probably an overall pattern that we can be confident in. And so in terms of understanding the spillover, to me it feels like the kinds of interventions that could help us care more about animals would also be the same kinds of interventions that would help us care more about artificial sentience. But as I think I touched on before, there are probably gonna be bigger barriers for artificial sentience; the idea that they aren't made of the same stuff that we are made of feels like a really big barrier there. Also, if we are motivated not to morally care about them and there's some capacity for doubt, then we're gonna want to latch onto those kinds of things. And so while I think there's a lot of capacity for spillover, there are also some unique challenges that we're probably gonna see coming up as these topics become more widely discussed and thought about in society.
Michael Dello-Iacovo (01:00:48):
So, I'll finish with this question. Do you have any thoughts on academic field building as a strategy, both in general and specifically as a way of developing the field of digital minds research? I ask this because this is a strategy that Sentience Institute has been taking recently: building the academic field of digital minds research rather than necessarily doing advocacy in that space ourselves. Just as an example, Sentience Institute recently published a literature review which synthesized the literature on digital minds, with the idea that that would help academics in that field to publish research about digital minds and make it easier to do work in that field. So, not necessarily specifically for digital minds, but do you have any thoughts about that strategy in general, as you could apply it, I guess, to animal protection or human-animal interaction?
Matti Wilks (01:01:49):
Yeah, I think it's a great strategy. I guess the sum of my answer is that it's probably necessary but not sufficient on its own. I think you need to have that as well as advocacy discussions with the general public. But my read of the situation is that as a topic becomes more mainstream and accepted by academics, especially in the last maybe 10 years or so, where there's been quite a big uptake of academic topics in journalism and media, that kind of helps to change the social norms from the ivory tower out into society. If you look on Vox, for example, there's now a lot of discussion of things like the moral circle and also things like existential risk.
Matti Wilks (01:02:29):
And as these topics permeate into certain facets of the news and the media, then people are gonna start thinking about them. So I don't think it's necessarily having it as a norm in the academic community that makes the change, but the implications of having that change in the academic community. And so I think encouraging people in the academic community to be thinking about these things is absolutely a necessary part of expanding our moral circle. I've noticed that as a researcher who studies attitudes towards distant others, I'm often questioned about my motivations and why I'm doing this research. But you don't ever have people asking that of people who study prejudice towards humans, because prejudice towards humans is a topic that's already accepted. So there are certainly challenges to doing it, but I think the more that we can get academics and the media to be thinking about moral concern for these different fringe groups, the more that will permeate into society. So I think it's a great strategy; you should definitely keep doing it.
Michael Dello-Iacovo (01:03:24):
Great, great. Good to get that feedback. And as an academic in this field, what kinds of things can you think of that would actually help you, that wouldn't necessarily be done by a university or by an academic per se, but might be work done by someone external or another organization, say in the nonprofit sector, that would be useful for your work? It doesn't even necessarily have to be research related, I guess, but what kinds of things can you think of that might be useful to you?
Matti Wilks (01:03:56):
Yeah, so I think having the reviews of literature and things like that. Sometimes academics do these, but they're extremely helpful regardless of whether they come from academics or from organizations. So that's one thing, definitely something that is less supported in academia. And this is less true now than it was even when I started my PhD, but often academic research wants to have a theoretical contribution. Whereas sometimes, if you are interested in making behavioral change and in the advocacy space, what you wanna know is, even if we know the theory behind this, how do different kinds of interventions work? How do people respond to these kinds of things? And so that kind of 'let's test these interventions' work, which might not be very theoretically interesting but might have a really tangible and applied impact, can be really helpful. And I guess something that I have very little experience with that would be really helpful for me is a better sense of how to translate my more abstract theoretical findings into something that would be helpful for activists or people who are looking to advocate for these distant groups.
Matti Wilks (01:04:55):
If I could have a more concrete explanation of like what kind of information they would want and how I could present that information in a way that would be helpful for them, that kind of information would be really good for me as well. So those three things.
Michael Dello-Iacovo (01:05:06):
Great. Cool. Well thanks Matti, it's been really great to chat with you. So if people want to follow you or your work, where can they do that? Do you have any social media links or websites you'd like to point people to?
Matti Wilks (01:05:16):
Yeah, thank you for having me. It's been lovely to chat. I guess my normal answer is Twitter, but it feels like Twitter might be a sinking ship at the moment, so I dunno if I should direct people to my Twitter, but I'm at Matti Wilks there, and I also have my website at mattiwilks.com, and of course a page on the University of Edinburgh website as well. All of those have my email, so if people have questions, they're welcome to tweet at me, if Twitter still exists, or contact me via email.
Michael Dello-Iacovo (01:05:41):
All right, sounds good. Thanks so much Matti.
Matti Wilks (01:05:42):
Thank you very much.
Michael Dello-Iacovo (01:05:45):
Thanks for listening. I hope you enjoyed the episode. You can subscribe to the Sentience Institute podcast on iTunes, Stitcher, or any podcast app.