June 23, 2021
Guest Tobias Baumann, Center for Reducing Suffering
Hosted by Jamie Harris, Sentience Institute
Tobias Baumann of the Center for Reducing Suffering on moral circle expansion, cause prioritization, and reducing risks of astronomical suffering in the long-term future
“If some beings are excluded from moral consideration then the results are usually quite bad, as evidenced by many forms of both current and historical suffering… I would definitely say that those that don’t have any sort of political representation or power are at risk. That’s true for animals right now; it might be true for artificially sentient beings in the future… And yeah, I think that is a plausible priority. Another candidate would be to work on other broad factors to improve the future such as by trying to fix politics, which is obviously a very, very ambitious goal… [Another candidate would be] trying to shape transformative AI more directly. We’ve talked about the uncertainty there is regarding the development of artificial intelligence, but at least there’s a certain chance that people are right about this being a very crucial technology; and if so, shaping it in the right way is very important obviously.”
Expanding humanity’s moral circle to include farmed animals and other sentient beings is a promising strategy for reducing the risk of astronomical suffering in the long-term future. But are there other causes that we could focus on that might be better? And should reducing future suffering actually be our goal?
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.
Topics discussed in the episode:
- Why moral circle expansion is a plausible priority for those of us focused on doing good (2:17)
- Tobias’ view on why we should accept longtermism — the idea that the value of our actions is determined primarily by their impacts on the long-term future (5:50)
- Are we living at the most important time in history? (14:15)
- When, if ever, will transformative AI arrive? (20:35)
- Assuming longtermism, should we prioritize focusing on risks of astronomical suffering in the long-term future (s-risks) or on maximizing the likelihood of positive outcomes? (27:00)
- What sorts of future beings might be excluded from humanity’s moral circle in the future, and why might this happen? (37:45)
- What are the main reasons to believe that moral circle expansion might not be a very promising way to have positive impacts on the long-term future? (41:40)
- Should we focus on other forms of values spreading that might be broadly positive, rather than expanding humanity’s moral circle? (48:55)
- Beyond values spreading, which other causes should people focused on reducing s-risks consider prioritizing? (50:25)
- Should we expend resources on moral circle expansion and other efforts to reduce s-risk now or just invest our money and resources in order to benefit from compound interest? (1:00:02)
- If we decide to focus on moral circle expansion, should we focus on the current frontiers of the moral circle, such as farmed animals, or focus more directly on groups of future beings we are concerned about? (1:03:06)
Resources discussed in the episode:
Resources by Tobias Baumann and Center for Reducing Suffering:
SI’s resources:
Other resources:
Resources for using this podcast for a discussion group:
Transcript (Automated, imperfect)
Jamie (00:00:00):
Welcome to the Sentience Institute podcast, where we interview activists, entrepreneurs, and researchers about the most effective strategies to expand humanity's moral circle. I'm Jamie Harris, researcher at Sentience Institute and Animal Advocacy Careers. Welcome to our 16th episode of the podcast. I was excited to have Tobias Baumann on the podcast because I've found that he consistently does an excellent job of highlighting important considerations relating to pressing problems, and doing so in a clear, concise manner despite the complexity of the issues addressed. Tobias has written about many topics relevant to Sentience Institute's own work and interests, including animal advocacy, the moral consideration of artificially sentient beings, values spreading more broadly, and a number of other longtermist causes. Like Sentience Institute, he's interested both in questions of prioritization, i.e. which causes are most promising for us to work on, and questions of implementation, i.e. how we can most cost-effectively make progress on them.
Jamie (00:01:05):
And I was excited to speak to him about both these aspects. In this first episode, I speak to Tobias mostly about prioritization questions. We explore whether those of us focused on doing the most good we can should work on Sentience Institute's focus area of moral circle expansion or something else. We also explore the broader arguments for and against the idea of longtermism, and for and against working explicitly to address risks of astronomical suffering in the future. Finally, we discuss the more specific question of whether, assuming longtermism and a focus on moral circle expansion, we should focus on artificial sentience, farmed animals, or something else. On our website, we have a transcript of this episode, as well as timestamps for particular topics. We also have suggested questions and resources that can be used to run an event around this podcast in your local effective altruism or animal advocacy group. Please feel free to get in touch with us if you have questions about this, and we'd be happy to help.
Jamie (00:01:50):
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organization focused on figuring out how we can best reduce suffering, taking into account all sentient beings. He's also pursuing a PhD in machine learning at University College London, and previously worked as a quantitative trader at Jane Street Capital. Welcome to the podcast, Tobias.
Tobias (00:02:13):
Thanks for having me.
Jamie (00:02:15):
You're very welcome. At Sentience Institute, we talk about humanity's moral circle, which is the set of beings given moral consideration, such as protection in our society's laws. Our work is intended to encourage moral circle expansion: increasing the number of beings included in the circle, or the extent to which they are included. Most of our work focuses on empirical exploration, working out how to most effectively encourage moral circle expansion. But my colleague Jacy has also written some content explaining why he sees this as a high priority for individuals hoping to do good and to impact the far future, including recently publishing an article about this in the journal Futures. It's not the sole focus of your work, but you've written about this topic too, such as in your post about arguments for and against moral advocacy. So I just want to start with the 'for'. What's the overview? Why do you see moral circle expansion as a plausible priority?
Tobias (00:03:06):
Yeah. So one way I'd put it is just that values are very fundamental. There are of course also other factors, such as technology, that matter too, but ultimately what people want to do is the driving force. And if some beings are excluded from moral consideration, then the results are usually quite bad, as evidenced by many forms of both current and historical suffering. Conversely, the greater our concern is for all sentient beings, the more will be done to reduce their suffering and improve their wellbeing, and usually if there's a will, there's a way to achieve that. It's also worth noting that nonhuman animals currently make up the vast majority of sentient beings in the world. We are not just the 1%; we are the 0.01%, or something like that. So it stands to reason that the current lack of concern for non-human sentient beings is perhaps a uniquely fundamental and uniquely neglected issue. There are of course counter-considerations, but I guess we'll get into that later.
Jamie (00:04:16):
Yeah, certainly. Interesting, because I was kind of expecting you to answer that question the way I tend to start answering it: by talking about the types of beings that currently exist, as you did, but also the types of beings that might exist in the future. And my understanding is that you, like me, mostly think in terms of impacts on the long-term future. Does that interest in the long-term future play a key part in why you think this is an important topic as well?
Tobias (00:04:51):
Yeah, definitely. I would say I'm maybe 80% sold on longtermism, and of course one needs to distinguish different claims there. There's the normative aspect: the belief that future individuals, whether humans or animals, matter just as much as those living right now. And this to me is kind of obvious. I mean, maybe many people would disagree, but to me it's obvious that this is true, and rejecting it would be some sort of weird discrimination based on time. Then with longtermism there's the empirical part: accepting that premise, what is the best way to help others, taking into account all sentient beings? That, to me, is subject to a lot more uncertainty.
Jamie (00:05:39):
Interesting that you see more uncertainty on the empirical part. Just as some context for people who aren't familiar with the term: longtermism has been defined by the Global Priorities Institute as the view that the primary determinant of the differences in value of the actions we take today is the effect of those actions on the very long-term future. Is that roughly how you're thinking of the term as well, Tobias?
Tobias (00:06:02):
Yeah, that makes sense. I mean, this way of defining it entails both the normative and the empirical part.
Jamie (00:06:08):
So let's start with some of the philosophical considerations then. You mentioned that the idea that future beings matter as much as present beings is super obvious to you. Are there other key philosophical considerations that play into this as well?
Tobias (00:06:23):
Yeah, well, no, I think that's the main one.
Jamie (00:06:27):
So it's interesting that you have this strong view on that. What would you say to somebody who doesn't share it? Do you think there's any way we can come to agreement and make progress on these sorts of key philosophical questions underpinning uncertainty about longtermism, or are we just doomed to sit with our starting intuitions and act on them, depending on wherever we start from?
Tobias (00:06:54):
Well, I mean, you could also ask what you would say if someone just consistently asserted that they don't care about animals, or don't care about others in the first place and are completely selfish. At some point, I think it does come down to certain intuitions. But I think just on a philosophical level, the argument is really quite strong. I mean, what would be the reason to consider the suffering of a being in the year 2100 to be less important than the suffering of a being in the year 2021, or any other year? It's quite clear to me that this shouldn't make a difference.
Jamie (00:07:33):
Yeah. So I suspect that some people might think things like: future beings don't matter as much because we're not even certain they will exist, or something like that. But then does that touch on what you were getting at before, of it being an empirical question rather than a philosophical question?
Tobias (00:07:50):
Yeah, that's the empirical part. As I was saying, I think there are some good arguments on that part.
Jamie (00:07:55):
Cool. Do you want to talk through some of the main ones that jump to mind, in terms of the most important considerations on that empirical side as to whether we should accept longtermism or not?
Tobias (00:08:07):
Yeah, sure. So the main argument in favor of this idea is simply that the future is large. It's large in terms of time — it's just long, it could be billions of years — and it's also large in terms of space, at least if you are assuming some non-negligible chance that humanity will expand into space, in which case there will be potentially much larger numbers of sentient beings in the future. And if you accept that, and the normative premise that these future individuals matter just as much, then it stands to reason that, as the definition puts it, the primary determinant of the value of our actions is the effect on those future individuals, rather than on those that exist in the near term. Now, in terms of counterarguments, one argument that is often considered is just the difficulty of predicting what happens in the future and of influencing the future in a robustly good way.
Tobias (00:09:04):
You could also think about it in terms of our comparative advantage. If you look at all the effective altruists living at all times in history, maybe our comparative advantage is in helping now and solving problems that arise now. One can also make a more theoretical argument here, namely that if the future is big, either in terms of space or time, then it's true that the stakes are higher, but on the face of it, it also means that our impact will be diluted, simply because there are far more agents shaping the final outcome, namely all those that will come into existence in the future. And you could think that these effects cancel out. Compare, for instance, animal advocacy in a smaller country like Switzerland versus animal advocacy in a larger country like the US: the US has higher stakes in that it has a larger population, but it's also harder to influence, because more people are competing for influence.
Tobias (00:10:07):
And on the face of it, you could think that it just cancels out, and that, plus the comparative advantage consideration, could be an argument against longtermism. So I think for a stronger longtermist conclusion, one usually needs additional arguments or assumptions regarding some notion of us being in a special position to influence the course of history on a very, very long-term scale. In particular, I think the idea of some sort of lock-in is critical. A lock-in event would just be something that determines everything that happens afterwards, so that only the history up to that point really matters. Then we have the lock-in, and that shapes the rest of the universe, or our light cone. Now, if something like that were to happen, then of course it's quite obvious that people living before that event have outsized influence over what happens in the long term.
Tobias (00:11:02):
And it is not diluted in the way that I described, because things are being locked in. Examples of that that are often discussed are transformative AI — the idea being that we develop transformative AI at some point, and by shaping it we can have an exceptional impact. You can also imagine other lock-ins, like takeover by a global totalitarian government, or human extinction, which locks in an empty universe. Now, I'm quite agnostic about a lot of this, and there's a lot of uncertainty over how a lock-in could happen and how likely it is. But I would say that it is plausible enough to lend quite some support to the longtermist position. So for instance, even the simple fact that we are on a single planet should update you towards us being in a good position to influence long-term history. And even if you just look at human history, it does seem that we are living in a rather interesting time. So overall I'm arriving at 80% longtermism, if that makes sense.
Jamie (00:12:04):
Yeah. Lots of things jumping to mind that I'd like to follow up on. So one thing you mentioned there is the idea that we probably need some unusual reason to think that we'll be able to have this real long-term influence, and that could be lock-in. But I think there's another one, which is that it doesn't necessarily have to be lock-in; it could also just be that we have some opportunities to shift the entire trajectory of the way that society develops, and that would then facilitate some kind of lasting influence. That's substantially what I was looking at in a blog post, which was actually the first thing I did for Sentience Institute, called 'How tractable is changing the course of history?' I examined that mostly through the lens of historical case studies of pivotal moments in history, like the French Revolution, to try and get a bit of a handle on whether thoughtful actors actually had much influence over the course of those events and the trajectory that humanity ended up on afterwards. In that post, it was a pretty mixed conclusion: basically, there seemed to be some opportunities for substantial influence, especially among certain well-placed political leaders, but contingency and indirect, hard-to-influence factors also constrained the tractability of shaping the trajectory of humanity's history in that sense.
Jamie (00:13:25):
What do you think about that? Firstly, I guess, do you think that is a valid alternative? And if so, do you have any thoughts on how tractable it is, basically?
Tobias (00:13:35):
Yeah, I mean, that is a valid alternative. I do think the overall upshot is that it's unclear, but it's certainly not hopeless to try and influence the long-term future. And to reject the longtermist case entirely, you would have to argue that you really can't influence the long-term future at all, and that there isn't even a certain chance that we can. That seems like a relatively hard case to make, in my opinion.
Jamie (00:14:03):
Yeah, just in the sense that the scale is so large that even small, indirect effects, or kind of residual effects, would have substantial impact. Is that what you're getting at?
Tobias (00:14:11):
Exactly, something like that.
Jamie (00:14:13):
Cool. Okay. Another interesting debate that you touched on there is this wider question of whether we are at the most, or one of the most, important times in history. This is what Derek Parfit and William MacAskill have called the hinge of history, which refers to the idea that we might be at a time when we can have unusual influence on future trajectories, or have things locked in in some way. And as you mentioned, that could be because of the onset of transformative AI or something like that. What do you think about that broader question of whether we are plausibly at the hinge of history, the most important time in history?
Tobias (00:14:54):
Yeah. So I mean, to claim that we are at the most important time in history is a pretty strong claim. I wouldn't necessarily endorse that, but I also don't think that our time is less important than other times. And there's definitely a lot of interesting things going on in terms of technological change and also social change in the twenty-first century. There's been some discussion of priors for the hinge of history, and I think your prior should probably just be some mixture: it could be that hingeyness goes up over time constantly, it could be that hingeyness goes down over time constantly, it could be that it is flat most of the time and then has one huge spike and then remains flat, it could be oscillating up and down. All kinds of things are conceivable.
Tobias (00:15:48):
And then of course, if you look at human history, unfortunately it's not really clear to me whether right now, the year 2021, is hingier than the year 2000, or 1900, or 1800, or even 1000. I think that's pretty unclear. And if you extrapolate that, well, I'm not saying that our time is exactly as hingey as all the others, but in expectation maybe something like that, because of our uncertainty about it — that seems right to me. So if I had the choice of whether or not to beam myself to another time, I would be entirely unsure about what to do. But I'm curious if you have any thoughts on that, or a different opinion.
Jamie (00:16:36):
Yeah. It's not something I've thought a great deal about, and I've kind of struggled to follow some of the technical discussion on what sorts of priors to use. It was interesting to read Toby Ord's response to MacAskill's post about the hinge of history hypothesis. They went into some technicalities about different types of priors and came to quite different conclusions just from what sorts of priors they chose. So I have nothing to contribute to that. But I guess my gut response is the sort of thing you were just saying, about how we would model hingeyness over time. My gut response is: well, can we look at hingeyness in the past, and can we make some kind of assessment of how hingey different parts of history have been? Also, I should caveat that I'm pretty sure William MacAskill regrets using the term 'hinge of history,' and especially 'hingeyness,' but I can't remember what his conclusion was, so I'm going to keep using the term.
Tobias (00:17:33):
He's recommending something like influentialness or pivotality or something like this.
Jamie (00:17:39):
Yeah. This is probably going to sound ignorant, but wouldn't 'importance' work? In any case, you could look back and attempt to rate different periods of time, or plot some kind of graph, and see if there are any notable trends there. This is not something I've tried to do, but for instance, as I mentioned in the case studies that I looked at, you could say, oh, maybe history was especially hingey around the time of the Enlightenment or the French Revolution. And you could point to other things — I guess it depends on your understanding of what kicked off the Industrial Revolution, for example; you could say, oh, maybe certain people had undue influence on that, and it was hingey at that time, or maybe it was just a gradual accumulation or whatever. But if you could go back and try and work out some of those things, even on different axes, like economic development or political development, then maybe you could start to draw out that model and predict the trend in that sense.
Tobias (00:18:38):
Yeah, so I think history is maybe some evidence for a moderate form of hingeyness oscillating, such that some times seem more important than others — although not by a factor of a thousand, I think. I mean, even if you live in a time where nothing particularly important happens — if you live in the year 1000, say — you could still go and write up arguments for antispeciesism for the first time, and maybe that is quite impactful in expectation, just because no one else is doing anything of the sort. So it's quite unclear. And I think in addition to some times being more hingey than others, it's interesting to look at the trend over time, which I alluded to earlier: over time, does hingeyness tend to increase or does it tend to decrease? And interestingly, there are theoretical considerations in both directions.
Tobias (00:19:31):
One key argument for hingeyness to decrease over time is quite simply that population goes up. There are now more people around than in earlier times, so the influence per person on what humanity does overall is smaller. If we expect continued population growth in the future, then that might continue. A consideration in the opposite direction, for hingeyness to go up, is that we have more knowledge now, which allows us to have more impact. We are also wealthier, which also allows us to have more impact. So unfortunately, it's not clear to me what the overall balance is.
Jamie (00:20:13):
Yeah, makes sense. I guess you could also think of factors that affect individuals' influence at any one point, and some of those would presumably apply at a population-wide level as well, like the accumulation of various types of things. Although obviously some of those things are only valuable insofar as they are relative to other people, so maybe that wouldn't help that much. Cool. All right. Well, a related question that feeds into this issue of hingeyness, and I think explains a certain amount of the intuitive support for the idea, at least among the effective altruism community, that we are at an especially important time in history, is this question you were talking about of when, if ever, transformative AI will be developed and arrive. This is something you've written about a little bit; you've got a post on thoughts on short timelines for various forms of AI development. Do you want to talk through that topic and your thoughts on it?
Tobias (00:21:09):
Yeah. I mean, of course, if you believe this sort of narrative, then we are probably at the hingiest time ever. I tend to be fairly skeptical of at least the stronger versions of these claims: that AI will happen very soon, that it will be a single point in history that then shapes everything going forward. I'm more sympathetic to a broader statement, of the form that some notion of artificial intelligence will be an important development in the future, and even right now. That seems like a statement that is not that strong and would probably be quite widely accepted. But the usual narrative in EA is that AI is built at a certain point and then it takes over and shapes the rest of the future, and that strikes me as maybe too simple a model.
Tobias (00:22:03):
The way I would think about it is more that machines will become better and better at different skills at different points in time; they will overtake humans at different points in time for different skills, different cognitive domains. I mean, machines are already much better than humans at things like memory or arithmetic calculations. They're right now becoming better than humans at things like language, with GPT-3 and developments in that space. They will likely also become better than humans in many other domains, but it's probably going to be a gradual process and a co-evolution of humans and technology, similar to how that has historically always happened.
Jamie (00:22:49):
Okay. So this is another thing where I guess I don't have a good sense of why people disagree sometimes, but I know that there's a substantial cohort of people who think something along the lines of transformative AI being very near. And this could be, for example, because they see this rapid increase in the capabilities of various kinds of narrow AIs, and they think that the ability to combine those into a more general system is not too far away. I guess I don't have a good sense of the arguments for and against, but do you have a sense of why you disagree with some people on that, in the sense that some people seem quite confident that transformative AI is near?
Tobias (00:23:36):
Yeah, I mean, I guess different people have different reasons for believing that, but maybe a common theme is being quite impressed with recent advances such as GPT-3, and before that it was AlphaGo and AlphaStar. That's unfortunately hard to pin down exactly, because to me it seems that while these achievements are definitely legitimate scientific work and legitimate breakthroughs in machine learning, they are not indicative of anything like general intelligence, or at least not that much. And if you're counting the existence of these things as evidence, you also have to say, okay, after GPT-3 happened, not that much appears to have happened over the last year or something — I'm not sure about the exact timescales. So if you're looking at progress in this field overall over the last, say, five years, it doesn't seem to me exceptionally fast, or vastly more than what one would have expected. But maybe some people disagree about that.
Jamie (00:24:52):
Yeah. And I think it's interesting that you touch on that idea that maybe some people who are super confident about short timelines get this from looking at recent developments — that kind of inside view, right, of 'here's some evidence that stuff is changing quickly.' I found it really interesting, going back to our topic of priors and base rates, that Open Philanthropy have looked at this question from both angles. They've got a detailed project on the inside view approach, and they've also recently done a post about what they call semi-informative priors, with some different reference classes. They used four different reference classes to estimate the probability of artificial general intelligence, and they did it for two different timescales: by 2036, and by a later date as well. And I was actually a bit surprised that the central estimate for the shorter timescale of 2036 was around 8%, based on this outside view, reference class approach, as opposed to just looking at AI R&D directly. So it's interesting that you can apply both of those types of evidence there.
Tobias (00:26:03):
Yeah, I mean, that's part of what makes this problem so difficult: you have a wide range of possible fields to look at, or reference classes to choose, or priors to choose from. For instance, you can also look at GDP growth and the discussion among economists of future GDP growth. If you're looking at that framework, I think it would strike most people as really quite outlandish to claim that we're going to see fantastic growth rates anytime soon; the consensus in this field is more that economic growth will keep going down. But this is just one framework one could choose. And if you're choosing the framework of looking at the amount of computing power that is required to achieve general intelligence, which is what the OpenAI reports tend to do, then you're getting maybe medium-ish timelines.
Jamie (00:26:59):
All right, great. So, I'm sure we haven't resolved the discussion for listeners, but that was thinking about this topic of whether we should accept longtermism as a general principle — whether we should strive to maximize the impact of our actions on the long-term future. Another important consideration going into both your work and our work at Sentience Institute is the question of whether, if we assume longtermism, we should focus on basically maximizing the likelihood of positive outcomes or reducing the likelihood of negative outcomes. And this has often been discussed in the context of what are being called suffering risks. So do you want to introduce that concept? What are suffering risks?
Tobias (00:27:44):
Yeah. Suffering risks, or s-risks, are essentially just worst-case futures that entail a lot of suffering — vastly more than we've ever seen so far on Earth with wild animal suffering or factory farms. Originally, I think, it was called risks of astronomical suffering, which alludes to the possibility of suffering on an astronomical scale: if humanity expands into space, you can have much larger quantities of suffering than we have right now. And the risk of something like this happening is what's called an s-risk.
Jamie (00:28:17):
Cool. Okay. So if we accept longtermism, then there are many cause areas we could focus on. 80,000 Hours, the careers organization, which advocates explicitly for longtermism, has a page which lists four of what they call highest priority areas, three second-highest priority areas, and then 17 other potential highest priorities, plus a bunch of other issues, including eight other longtermist issues. And they list s-risks as just one of these. In your post about arguments for and against a focus on s-risks, you summarize some additional relevant crucial considerations that are essentially specific to whether we should focus on s-risks, as opposed to the broader longtermism debate we were just talking about. What are the main considerations, in your view, about whether, assuming longtermism, we should prioritize focusing on s-risks or not?
Tobias (00:29:06):
Yeah, so one key consideration is definitely how much weight one gives to reducing suffering, as opposed to other possible goals like creating additional happy beings, or other, more complex values. Now, of course, there is very broad agreement that having a lot of suffering would be bad, and so s-risks are worth avoiding, but the question is how highly you prioritize that relative to other things. Some value systems that put a lot of weight on goals other than reducing suffering would perhaps consider other risks more important — that's just the balance of overall badness versus probability — whereas views that put a lot of emphasis on reducing suffering, or consider it our main priority, would view s-risks as a top priority, assuming longtermism, as you said. And of course, there's also an empirical side of things, about how likely such worst-case scenarios are and how bad they would be, and things like that.
Jamie (00:30:11):
On that former point, about how much weight we should give to reducing severe, large-scale suffering versus other types of harms: your colleague at the Center for Reducing Suffering, Magnus Vinding — I get the sense that this is more the main thrust of his work and research. Where's that going? Is it primarily about working out questions like that for your own sake and your own prioritization — how important certain things like this are — or is it more of an advocacy aspect? I guess the context here is that this feels like another one of those questions where I know people with a very broad spread of views. So where do we go from there? Is it just a thing we have to think about, or is it something we can actually narrow down, increase our confidence on, and become more precise about what our view — what our answer — is?
Tobias (00:31:10):
I mean, I think your answers to these questions will largely depend on intuitions at the end of the day. There's obviously a lot of philosophical work on questions touching on this — it's of course a very wide field. And ultimately, as you were saying, Magnus is more of an expert on suffering-focused ethics than I am, and he's written a book on suffering-focused ethics. But he's arguing the case for preventing suffering, particularly extreme suffering, being our top priority, which is a position that I obviously agree with. Now, the question of how to spell this out in the form of an ethical theory is complicated, and suffering-focused ethics is deliberately broad, as an umbrella term that can encompass many different ideas. This ranges from the procreative asymmetry, where people have the intuition that it is important to make beings happy, but not to make happy beings.
Tobias (00:32:10):
So the idea there is that it is bad to create a being that will suffer, but it is at least not a moral imperative to create an additional being that would be happy. But you can also just be a fairly conventional utilitarian and endorse some form of prioritarianism, which is the idea that we have particularly strong moral obligations towards those that are worst off, that are suffering intensely. There are other ideas in the direction of tranquilism and desire satisfactionism, which are basically saying that happiness matters insofar as there is a being that desires to experience it, or has a preference to experience it, but it is not really morally imperative to create additional happiness from scratch by creating additional beings. So, yeah, this is a wide field, as I was saying, and I would definitely encourage people to look for arguments on both sides, rather than just listening to me.
Jamie (00:33:06):
Do you know of many examples of people completely flipping their view on this sort of thing? Not necessarily overnight, and not necessarily completely transformed, but you say you encourage people to look for arguments on both sides — do you think that people do shift sides, so to speak, or does it tend to be more small fluctuations towards one end of the spectrum or the other?
Tobias (00:33:31):
I suppose it does sometimes happen. I think Magnus himself started out being more of a classical utilitarian, so putting equal weight on happiness and suffering, and I think Brian Tomasik did too. On the flip side, I think some people at the Center on Long-Term Risk, formerly the Effective Altruism Foundation, started off with more of a suffering focus and have gravitated a bit more towards reducing extinction risk.
Jamie (00:34:01):
I guess a related question is where do you think your stance on it came from? Or how long have you had that stance? Is it something you've had for as long as you're aware of, or did it develop?
Tobias (00:34:13):
Yeah, well, I first got in contact with EA through the Swiss EA group, which tends to be very suffering-focused, so maybe I've just absorbed this view from them. But I would maybe also say that even before that, there is actually a correlation between concern for animals and being more suffering-focused. For instance, if you look at how vegans tend to approach population ethics, you don't have people saying that we need to create new farms with lots of happy animals — that's not something that is important in people's minds. And the actual effect of being vegan is that there are fewer animals in existence, and people tend to consider that a good thing, which is in line with the procreative asymmetry, I think.
Jamie (00:35:05):
Yeah. Okay. So you touched on there being a whole other side to this question, apart from the priorities of reducing suffering versus increasing positive experience, and that is the probability side of things: how likely are different outcomes in the future? So I guess the key question is basically how likely is it that sentient beings will suffer in the future, and to what extent will they suffer? And this is a topic where there are just so many factors to consider that, again, it feels very intractable to get much of a handle on. There's a really interesting post by Lukas Gloor of the Center on Long-Term Risk called 'Cause prioritization for downside-focused value systems,' which at one point has this diagram of things that can go wrong that might lead to various s-risks.
Jamie (00:35:54):
And it feels hard to look at those different outcomes and say, oh, there's X probability of this outcome and Y probability of that outcome, because there are so many different factors. So do you have thoughts on this overall question? Either the general question of how likely it is that sentient beings will suffer in the future — do we have heuristics here that are useful? — or do you think we are able to come to much of a view on the probability of certain more specific outcomes?
Tobias (00:36:32):
So, one reason for pessimism would be to just look at all the moral catastrophes that have happened in history, or are happening right now. Chances are that there will also be moral catastrophes of some sort in the future — it would be surprising if our generation was the last one to do something wrong, basically. And if you combine that with potentially much higher technological capabilities, that would make you quite worried about the future. A consideration for more optimism would perhaps be to look at the moral progress that has happened in terms of people's values. This has been written up, for instance, by Steven Pinker in The Better Angels of Our Nature, and there has been some moral circle expansion. But you can say that despite these improvements in values, there are more animals in factory farms now than there were 200 years ago, so it might be that it's still gotten worse. Yeah, it's quite complicated. In my opinion, a future moral catastrophe is clearly possible enough to be worth working on, given that the stakes could be so large.
Jamie (00:37:46):
Okay. Well, that's a nice segue into the more specific question of whether, if we assume longtermism, we should focus on moral circle expansion specifically. At the start of this conversation, I asked you why focusing specifically on expanding humanity's moral circle is a plausible priority for longtermists, and I'd like to probe into some of that a little bit more. What sorts of future beings are you worried might be excluded from humanity's moral circle?
Tobias (00:38:15):
I would definitely say that those that don't have any sort of political representation or power are at risk. That's true for animals right now; it might be true for artificial beings in the future. So I definitely think that artificial sentience is at risk — there's a risk of it not being recognized as sentient in the first place, and even if it is recognized by some, it might just not be politically strong enough to enact protections for such artificial beings, similar to how that is the case with animals at the moment. And even with animals, I mean, we're seeing some encouraging trends, but it's still quite possible that humanity will never get around to caring about animals to a sufficient degree. So I would definitely say that animals and artificial entities are most likely to be excluded.
Tobias (00:39:10):
Although maybe these categories are still too broad, and maybe it's just subgroups of those categories. Maybe people will care about humanoid robots, but not so much about something that's less recognizable to us as a being — something that's just running on a chip somewhere as an algorithm, or a simulation, something like this. Actually, I think we tend to think about this in very philosophical terms, discussing values, but maybe the harsh reality is just that some beings will, for whatever reason, not have any political representation. They will be pretty powerless, and then they'll automatically sort of be excluded unless there is a sufficiently strong level of concern for them, which might be quite hard.
Jamie (00:39:55):
Something you said at the start of the conversation as well was this idea of moral circle expansion, and moral advocacy, being quite a broad intervention, right? It could potentially affect lots of different types of outcomes and be positive in a number of different ways. That's an argument I've heard made before — my colleague Jacy has made a similar point in his post advocating for this as an important cause area. It goes back to what I was talking about before, about there being lots of different possible future scenarios, and we could say, oh, this one's more or less likely than that one, and this one is more or less likely to entail certain forms of suffering. So I tend to think of moral circle expansion as a way that you don't necessarily have to bet on certain paths happening — you don't have to be particularly confident that particular changes will happen. It seems like moral circle expansion will be plausibly beneficial in lots of these different outcome types. Do you think you can make the same claim about other plausible longtermist cause areas, like even working on technical AI value alignment?
Tobias (00:41:00):
No, that's definitely a large part of the charm of moral circle expansion. Although, as you were saying, I think that's not true only of moral circle expansion; there are definitely other interventions that also have this property. For instance, if you're trying to improve the political system, to improve political discourse or reduce polarization or something like that, that's also working on a broad risk factor and trying to improve the future regardless of what exactly will happen. I do generally think quite strongly that we are currently so uncertain about the future that we should try to avoid betting on various specific things happening.
Jamie (00:41:41):
Yeah, cool. All right. So maybe you're starting to hint at this with the answer you just gave, but in contrast to some of those pros, what do you think are some of the main arguments against moral circle expansion as a focus, assuming that you agree with longtermism?
Tobias (00:41:56):
You could simply question the tractability of it. You could point at all the people that have tried to achieve it, and maybe it hasn't gone that far. You could also question whether any change you do achieve will be lasting, right? I mean, even if you can get people to abolish factory farming or endorse concern for animals, who knows whether that will last through the centuries. To be honest, I think things like that can be said about many longtermist interventions, so it's not really a dealbreaker, in my opinion. Another important argument is just that moral advocacy is maybe a bit of a zero-sum game: everyone's trying to push their values, and on balance nothing changes very much — so the argument goes — and it's also quite crowded for that reason. I also think that isn't a knock-down argument, but there's something to it. And what we're trying to do at CRS is not just to go out there and spread our values — that's not at all how I would put it. It's also about moral reflection and some sort of moral progress; it's about developing certain views and advancing them, not about merely spreading them.
Jamie (00:43:21):
Yeah. A related thing there is that it assumes the values we're spreading are good, right? And presumably not everyone would agree with them, otherwise we wouldn't need to spread them. So it kind of assumes that the reason people don't currently agree with the values we're trying to spread is something other than rational, good reasoning — maybe it's various structural factors and reinforced behavior and things like that, or it could be various cognitive biases that get in the way. It's very easy for me to think: well, I'm very confident that we shouldn't discriminate based on species, I'm very confident that animals can suffer in some capacity and that other forms of beings might, and that they might be excluded, and all those sorts of things. But I can think of other things where I know of people in the effective altruism community who have views that I don't agree with, and I guess if they turned around to me and said, 'I think that spreading this value is a top cause priority,' then I would be quite skeptical, even if I thought they had good judgment in general. That sort of consideration — maybe we could be wrong, even though we feel very confident — how concerned are you by that?
Tobias (00:44:37):
Yeah, I mean, that is sort of what I was alluding to when it comes to doing advocacy, if you can even call it that, in the right way, and thinking about it more in terms of moral reflection. At least that's how I feel about work on suffering-focused ethics. When it comes to something like whether or not animals matter at all, maybe, as you say, we are just confident enough about that for it not to be a major concern. I do feel that way when it comes down to antispeciesism.
Jamie (00:45:14):
Yeah. I mean, likewise, I guess it feels like a theoretical concern, because I do feel so confident about it, and I feel like I've rarely heard people explicitly justify not including animals' interests or something like that. I guess occasionally people do when forced to make trade-offs against human welfare, and when there's that kind of prioritization in play, then people do rate animals quite low, and rational people will explicitly justify that from that perspective. But it just feels more like a theoretical concern I have, I suppose.
Tobias (00:45:47):
Yeah. I mean, I think the reasons why people exclude animals really don't have that much to do with careful philosophical reflection or something like that.
Jamie (00:45:57):
Agreed, in the majority of cases. I suppose there are some people who have just very different theories of consciousness or something like that, where they've thought a lot about it and have different views. I'm certainly no expert on the philosophy of mind type stuff, but I think Eliezer Yudkowsky is one of these people who has thought a lot about consciousness but basically has concluded that animals are not sentient.
Tobias (00:46:21):
That's true. I mean, it doesn't hold in all cases.
Jamie (00:46:24):
Yeah. So another concern I often hear when talking about moral circle expansion is that encouraging this may actually increase some forms of risk, such as various kinds of threats from agents in the future with different value systems. How concerned should we be about that?
Tobias (00:46:46):
Yeah, so that's a very interesting question, and I do think that these concerns are worth taking seriously. There's an entire class of perhaps somewhat exotic risks that arise from escalating conflicts, where people start to deliberately cause harm out of spite, or as part of a threat, or something like this, and then having more compassion might backfire. It may seem quite far-fetched, especially the possibility that something like this would happen on a large scale, but it could also be potentially extremely bad. And this raises an interesting question: should we focus on worst-case scenarios, even if they seem a little bit far-fetched or unlikely, or should we focus on scenarios that are merely somewhat bad but more likely? Is most expected suffering in the worst 1% of scenarios, or is it distributed more broadly?
Tobias (00:47:48):
And I think that's actually a question that you can ask in many contexts. If you're worried about pandemics, should you worry about the most extreme ones that might kill 99% of the population, or is a large fraction of the expected risk from more normal pandemics, like COVID, that are not as bad but far more realistic? I think the answer is usually somewhere in the middle: it makes sense to take the more extreme scenarios seriously and to focus on them to some degree, but you also shouldn't jump to hasty conclusions or focus too narrowly on a single scenario. So to come back to your question, I would definitely take the concern that moral circle expansion could backfire very seriously, and it is worth thinking about how we could mitigate this risk — say, by making sure that the animal movement doesn't become toxic or extremely controversial. But on the flip side, I think it would be too much to conclude from this that we should not do moral circle expansion at all.
Jamie (00:48:53):
Okay. So broadening out slightly, you can think of moral circle expansion as one example of a broader class of possible values spreading, or moral advocacy, or various types of advocacy that we could pursue, and we were talking about it in that context earlier. But I guess the question that raises is: should we focus instead on other forms of values spreading that might also have some of the advantages we talked about, like broad applicability to different kinds of contexts? This is something you've discussed in some posts that you've written; you suggest, for example, the promotion of consequentialism or effective altruism, or even specific ethical views, or a focus specifically on s-risks, as possible alternative advocacy focuses. And each of those possible focuses does have some people working on it now. Would you say that those different things seem similar in expected value to you at the moment, or do you think there are big differences within this category of values spreading?
Tobias (00:49:58):
I mean, yeah, it's probably going to differ, and it's hard to say which precise aspects of it are most important, if only because of all kinds of trickle-down effects. I mean, if you start to promote consequentialism, then you might get additional consequentialists, but ones that disagree with you on lots of the other aspects that you also think are important, and then you need to ask yourself whether that's also a good thing — and that's usually quite complicated.
Jamie (00:50:26):
Yeah. So what are some of the most plausible alternatives to even this more general category of values spreading, in your view — combining, I guess, the various things we've been talking about: your view on longtermism, your view on suffering-focused ethics, and so on? What seem like some of the best other candidates for a top priority cause for reducing risks of suffering in the long-term future?
Tobias (00:50:50):
Yeah, so one candidate is definitely trying to shape transformative AI more directly. I mean, we've talked about the uncertainty there is regarding the development of artificial intelligence, but at least there's a certain chance that people are right about this being a very crucial technology, and if so, shaping it in the right way is very important, obviously. This is actually what the Center on Long-Term Risk is doing with their research agenda on cooperative artificial intelligence: they're trying to avoid escalating conflicts involving AIs that might result in s-risks. And I think that is a plausible priority. Another candidate would be to work on other broad factors to improve the future, such as by trying to, you know, fix politics, which is obviously a very, very ambitious goal. But generally, having a more functional political system would certainly be a positive factor with regard to any risk you can conceive of in the future.
Tobias (00:51:56):
And in particular, for instance, when it comes to concern for animals or other powerless beings, it's surely more likely that you will be able to get certain moral concerns heard if there is a functional political discourse, rather than a situation where there's a lot of polarization and 90% of people just don't take you seriously as a matter of principle or something like that. Now, what exactly you can do to improve our political system is of course a quite complicated issue in its own right, but I think it's a plausible priority area, and my colleague Magnus Vinding is actually currently writing a book on reasoned politics, which is exploring that in more detail. Yet another possible alternative that I've written about is reducing risks from malevolent actors, which actually won an EA Forum prize.
Tobias (00:52:55):
So the idea there is that if you look at history, a lot of the worst outcomes happened due to evil, quote unquote, dictators, like Hitler or Stalin, and this notion of evil has been operationalized in psychology research as the dark tetrad or dark triad traits. Preventing malevolent individuals from coming to power would then of course be a quite good way to improve the future. And it's also particularly interesting from an s-risk perspective, because one of the most plausible mechanisms for how really bad futures could arise is that a person like that might become a global dictator or something like that.
Jamie (00:53:36):
Yeah. Cool. Okay. Well, I'd love to dive into some of those. Whilst we're on the topic of this malevolent actors issue: it seems like it makes sense in terms of how it could influence various outcomes, but I guess the main question in my head is the tractability aspect. What are the actual things that you think could plausibly be done?
Tobias (00:54:01):
Yeah, I mean, that's definitely a fair question. It's not clear how tractable this is at the moment. So in the EA Forum post, we explore some possible interventions, including, for instance, the development of manipulation-proof measures of malevolence. You could imagine some sort of test that people who want to become political leaders have to pass, which screens them for malevolence and presumably filters out those who test positive, so to speak. Currently something like this doesn't really exist. The ways to detect malevolent traits are usually just questionnaires, but obviously people can manipulate those by answering them dishonestly. Of course, if you want to become president and are presented with a questionnaire asking whether you've ever killed an animal, you should probably answer no, you know? So we would need a test that actually overcomes this problem, can't be manipulated, and could be used to reliably detect malevolence.
Tobias (00:55:05):
Then of course, another question would be whether something like that would actually be used or be considered acceptable, but it's at least a possibility. Another idea is just to raise more public awareness of the existence of malevolence, and in particular of the existence of highly strategic malevolent actors and the fact that, due to their ruthlessness, they are often better able to get into positions of power. When people think about psychopaths they might imagine serial killers or something like this, but the type of malevolence that we are concerned about is much more strategic and more likely to achieve positions of power, because one of the aspects of malevolence is a very strong desire for power. And then there's yet another class of interventions that's much more speculative: you could look at potential risks of malevolent personality traits arising in transformative artificial intelligence, depending on how the training environment is set up, and then it's a technical question of how you can avoid that. And last but not least, there is a possibility that in the future there will be some forms of genetic enhancement and that they could be applied to screen for malevolence, although of course this comes with serious risks of backfiring in various ways. So yeah, it's something that should only be considered with caution.
Jamie (00:56:35):
Yeah, it's interesting. I guess a lot of those feel like they come down to the question of whether we can screen for these things in a reliable way. I can't even think what you'd be able to cross-validate a test with, unless you can do it with, like, proven killers or something like that, where you know that people would score highly on these traits in any reasonable person's mind, and then see how they perform on it.
Tobias (00:57:03):
Yeah. I mean, possibly the answer is just that more research is needed.
Jamie (00:57:08):
That's the cheat, that's the cheat answer!
Tobias (00:57:11):
That's the cheat, yeah. It's often a bit unsatisfactory to say that, but it might also just be right.
Jamie (00:57:17):
Okay. So something you didn't list when you were listing your plausible alternatives, although I think it relates to the AI safety aspect, that kind of technical research, is an idea you've written about in a couple of places: differential progress on surrogate goals. What is that?
Tobias (00:57:36):
Yeah. So surrogate goals are a measure to prevent those s-risks that arise from escalating conflict. The idea is that in addition to what I normally care about, I also start to care about something silly. For example, I would find it absolutely horrible if someone were to create a platinum sphere with a diameter of 42 centimeters in space. Normally that doesn't matter, because nobody is doing that. But the idea is that if I'm in a severe conflict and someone wants to punish me, then what they will do is create those platinum spheres rather than something that's actually terrible. So it's meant as a mechanism to deflect threats. Now, it sounds a little bit crazy perhaps, and I'm also not sure it actually works. There are definitely a number of moving parts there, in particular about how you could make something like this credible, and I think more research on this would be very valuable.
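To make the deflection mechanism concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the payoff numbers, the outcome names, and the assumption of a simplified threatener who just targets whatever the other party appears to disvalue most. The hard parts Tobias flags, especially making the surrogate commitment credible, are not modeled at all.

```python
# Toy model of a surrogate goal, with invented numbers.
# The target truly disvalues a terrible outcome. If it adopts a surrogate goal,
# it also (credibly, by assumption) disvalues a harmless outcome: a 42 cm
# platinum sphere being created in space.

REAL_HARM = {
    "terrible outcome": 100,  # real suffering if a threat against it is executed
    "platinum sphere": 0,     # the surrogate: harmless if actually created
}

def stated_disvalue(outcome: str, surrogate_adopted: bool) -> int:
    """How much the target appears to disvalue each outcome."""
    if outcome == "terrible outcome":
        return 100
    if outcome == "platinum sphere":
        return 100 if surrogate_adopted else 0
    return 0

def chosen_threat(surrogate_adopted: bool) -> str:
    """A simplified threatener targets whatever the target seems to disvalue
    most, breaking ties in favour of the cheaper threat (the sphere)."""
    outcomes = ["platinum sphere", "terrible outcome"]  # sphere first: tie-break
    return max(outcomes, key=lambda o: stated_disvalue(o, surrogate_adopted))

if __name__ == "__main__":
    for adopted in (False, True):
        threat = chosen_threat(adopted)
        print(f"surrogate adopted: {adopted!s:<5} -> threat targets "
              f"'{threat}', real harm if executed: {REAL_HARM[threat]}")
```

In this toy setup, executed threats land on the surrogate and cause no real harm once the surrogate is adopted. The open questions, whether the commitment can be made credible and whether agents then waste resources optimizing around the surrogate, are exactly what this sketch leaves out, and the latter is what Jamie raises next.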
Jamie (00:58:40):
Cool. Yeah, makes sense. I see the intuitive argument for why it could help deflect certain kinds of problems by creating this kind of arbitrary, fake thing that you care about. But an obvious downside that jumps to mind is: isn't that just a grossly inefficient system, in the sense that the system will presumably be optimizing for this fake goal alongside its real goals? To some extent that might be mitigated by making it something very unlikely, like the specific platinum sphere that you mentioned and talked about in your blog post on it. But then, in that specific context, maybe the agent or system or whatever it is would try to extract as much platinum as it could, to prevent other agents from being able to create said specifically sized platinum sphere, or something like that. You can see ways that, even if you come up with some obscure-seeming goal, it might just lead to an essentially suboptimal future for that reason.
Tobias (00:59:43):
Yeah. Yeah. I mean, I guess the platinum spheres aren't obscure enough, it needs to be made more obscure.
Jamie (00:59:52):
Yeah. A caveat on my own criticism is that I suppose you might not worry about that too much if you're really strongly suffering-focused anyway, because all that's really doing is spreading the pie a bit more thinly.
Tobias (01:00:05):
Yeah. I mean, the idea of surrogate goals is only to prevent disvalue resulting from these conflicts. You might still lose your resources if someone blackmails you or something.
Jamie (01:00:17):
Yeah. Okay. So we were talking about some alternatives to moral circle expansion there, but just getting back to one thing that might potentially undermine the case for moral circle expansion, and might affect some of these other considerations as well: there's an ongoing debate in the effective altruism community about this idea of patient philanthropy, or more broadly patient longtermism. This is something that Phil Trammell and others at the Global Priorities Institute, and a number of other people, have been arguing for. It's essentially the idea that the vast majority of longtermist resources should be invested to benefit from compound interest, whether that's in financial terms or otherwise, and then used at a later date. You've written a post with some of your thoughts on this debate. Do you think that debate has implications for the question of whether we should prioritize moral circle expansion at all, or just for how we can most cost-effectively expand humanity's moral circle if we do care about it?
Tobias (01:01:14):
Yeah, so I'm quite sympathetic to the idea of patient philanthropy. In fact, what I've written in this post is that it might be particularly good from an s-risk-focused perspective, because we are presumably, right now, relatively far away from the materialization of an s-risk, and so you could think it's probably a quite good strategy to accumulate resources now and deploy them later. Now, with respect to moral circle expansion, I suppose it's just an empirical question whether it's more efficient to spend resources on moral circle expansion now, or to invest them and spend them on moral circle expansion later. I could see that going either way. It depends on how high your returns in the financial markets are. And, as many of these discussions on patient philanthropy reflect, the idea is not meant to be only about financial investing. You can actually view moral circle expansion itself as a sort of investment, an investment in building up the movement of people that care about all sentient beings. And then you can of course ask yourself whether the rate of return on that is higher than what you're getting from some other form of investment, like financial investment.
Jamie (01:02:31):
What would be the compound interest equivalent there? Is it just that people who you successfully encourage to expand their moral circle then advocate to other people, and so on?
Tobias (01:02:41):
Exactly. You could think that it snowballs like that. Although I think that's also perhaps a fairly simplistic model, and it's likely to be much more complicated in reality.
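As a very rough illustration of the comparison Tobias describes, here is a minimal sketch in Python. The annual rates are invented purely for illustration, and the "advocacy snowball" is exactly the simplistic compounding model he cautions against; the point is only that the decision turns on which rate of return is higher.

```python
# Toy comparison of "invest now, spend later" vs. "spend on advocacy now",
# using made-up annual rates purely for illustration.

def invest_then_spend(budget: float, financial_return: float, years: int) -> float:
    """Grow resources in financial markets, then deploy them at the end."""
    return budget * (1 + financial_return) ** years

def advocate_now(budget: float, movement_growth: float, years: int) -> float:
    """Treat advocacy itself as an investment: advocates recruit further
    advocates, so the 'moral circle' base compounds at its own rate."""
    return budget * (1 + movement_growth) ** years

YEARS, BUDGET = 30, 1.0  # normalized units of resources
for financial_return, movement_growth in [(0.05, 0.02), (0.05, 0.08)]:
    later = invest_then_spend(BUDGET, financial_return, YEARS)
    now = advocate_now(BUDGET, movement_growth, YEARS)
    winner = "invest and spend later" if later > now else "spend on advocacy now"
    print(f"financial {financial_return:.0%}/yr vs movement {movement_growth:.0%}/yr: "
          f"{later:.2f} vs {now:.2f} -> {winner}")
```

With these hypothetical numbers, a 5% financial return beats a 2% movement growth rate over 30 years, but loses to an 8% growth rate, which is the simple sense in which the choice is "just an empirical question" about relative rates of return.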
Jamie (01:02:52):
Yeah. Well, all of these conversations we could summarize as "further discussion and/or more research would be helpful." All right. So we've talked a lot about moral circle expansion, whether it's a plausible priority, and some of the alternatives. Let's take it one step further and assume longtermism and a focus on moral circle expansion; whether or not you think it's the top priority, let's assume we're devoting a substantial proportion of resources to it. We then face a number of additional strategic questions. At the highest level, a question that's arguably still a sort of cause prioritization question, although it's a sub-issue of moral circle expansion, is whether we should focus fairly directly on the sorts of beings that we're most concerned about. You mentioned earlier concern for various types of artificial entities that could be sentient in the future. So should we focus specifically on those, or should we focus on the current frontiers of the moral circle, which you could take as farmed animals, maybe wild animals, or potentially even various kinds of neglected human populations? What are your thoughts on that? How directly should we go at our main concern about future suffering risks, versus pushing on the current frontiers?
Tobias (01:04:17):
That's a good question. Basically, I just think that there's a place for both. Different people and different organizations tend to have different strengths. Some are perhaps more mainstream, quote unquote, and then it would be difficult for them to have a lot of content on cause areas that would be perceived as more exotic, whereas others are free to explore these more exotic topics. If you're asking the question in the sense of where, on balance, we should shift resources, then I would say that more resources should probably go towards the things you've mentioned: wild animal suffering, possibly invertebrates, artificial sentience, things like that, simply because those are so incredibly neglected at the moment. And I think the current overall allocation of resources is not really due to strategic considerations; it's just that many people are not really going further than farmed animals in the first place, and aren't really thinking much about those other areas, when I think it is important to think about it carefully before coming up with priorities.
Jamie (01:05:20):
Yeah, sounds good. I agree with all of that. I'd also just note that, even within the farmed animal movement, there has been a move increasingly towards those types of beings that have tended to be more neglected in advocacy, especially with the Open Philanthropy funding and the rise of the effective animal advocacy sub-community, the overlap of effective altruism and the farmed animal movement. There was an increasing focus first on chickens of various kinds, then an increasing focus on fish more recently, and now also a substantial focus on invertebrates, as you mentioned. That's demonstrated as well in some of the recent funding from EA Funds, where quite a lot has gone to wild animal topics and invertebrates, and also fish, which is still a fairly neglected area. So things are shifting even within the current distribution, but I agree that further shifts in that direction would be great.
Tobias (01:06:22):
Yeah. I think that's a very positive development and going in the right direction.
Jamie (01:06:27):
Cool, okay. So, as you mentioned, you think at least a substantial portion, or more than currently, should focus on these neglected areas, including some of the artificial sentience work, like thinking and working directly on that. When I mention this topic in general, like why we're concerned about moral circle expansion as potentially affecting these kinds of future entities that don't even exist yet, the question I probably get most often is about whether future artificial entities will ever be sentient. As I mentioned, I'm no expert on theories of consciousness, theories of mind, all that stuff, so my answer tends to be that I don't know, but it seems like a plausible risk. Is that your answer too, or do you have more thoughts on it?
Tobias (01:07:14):
Yeah, it actually kind of is. So I think that we need to distinguish between two different questions. One is whether it is possible in principle to have artificially sentient minds, and the second is, if so, will it actually happen? With regard to the first, it seems very likely to me that it is possible in principle, and I think this is also the predominant view among experts on consciousness. It's not a consensus, but I think it is the majority view. But even assuming that it's possible in principle, I think we shouldn't conclude too much from that. It's also possible in principle to build a skyscraper on top of Mount Everest; that doesn't mean it's going to happen, you know? So in many ways, the second question is more interesting, and I'm much more uncertain about that. I'm quite agnostic about it; the question is whether AI technology will evolve in a direction that creates sentient entities, and that may or may not happen. I think we just don't really know. And I would say that, as you were alluding to, artificial sentience seems to me plausible enough that it is worth worrying about and working on, although there is perhaps also a case to be made to just wait and see, get a bit more information, and do more research before jumping to larger-scale efforts.
Jamie (01:08:40):
So talk me through that last bit then, because this is another area where I have a general intuition that more research in this case won't actually tell us that much. I guess you might think that if we were on the cusp of creating artificial sentience, accidentally or intentionally or whatever, then we might have a much better sense of whether this is likely to happen or not. But what I'm getting at is that it feels hard to make progress on any of these really fundamental questions. We were talking about that earlier in the context of philosophy, and I guess this is philosophy as well, but there's the cognitive science aspect too: the whole "what is consciousness" type of question, which plays into whether it's possible in artificial minds or substrates or whatever you want to call it. Do you have thoughts on how research could advance that question?
Tobias (01:09:39):
Yeah, I mean, that's an interesting question, and I think I might share some skepticism about the tractability of progress on this particular question of consciousness. I might actually be more excited about research that looks more at the strategic questions of how we go about helping artificial sentience; whether we should do so; at what point we should do so; and in what ways we should do so. For instance, there's the question of whether or not it should be integrated in some sense into the animal advocacy movement, because you could think that, ultimately, it's just one movement to help all non-human sentient beings, one movement for all sentient beings. But there are pros and cons to whether or not the animal movement should take on this additional responsibility. But you're right to point out that more research is not always the answer; sometimes we just have to wait and see what exactly happens.
Jamie (01:10:37):
What are some of those pros and cons, then, of whether animal advocates should take this on? Because I suspect a number of listeners to this episode will have come from an animal advocacy perspective and motivation, and so they might be thinking: is this something I should be doing, or is it something I should be leaving to other people?
Tobias (01:10:55):
I mean, I think the pro is that it is, in a sense, quite natural. If you care about animals, and people in the animal advocacy movement say that they care about all sentient beings, shouldn't that automatically include artificial beings? A con would be that it could be risky, because the animal advocacy movement maybe also comes with some baggage. Some people might react more negatively if they hear this from the animal advocacy movement rather than from some other player. I think one would need to think much more carefully about it before deciding to act.
Jamie (01:11:34):
A concern that I have when thinking about artificial sentience, compared to focusing on farmed animals and these more immediate concerns, is how tractable advocacy for future beings is in general. Can we actually do anything to preemptively address this problem before it's even created? Does that basically rule out artificial sentience advocacy anyway? It potentially plays into what we were talking about before with the patient longtermism aspect: I guess you could conclude that artificial sentience advocacy is what we want to optimize for, but that it's not tractable now, and so we should just punt to the future and work on it later. Do you have any other thoughts on that aspect, on whether it's even possible to do something?
Tobias (01:12:21):
Yeah. I mean, there are pros and cons there. Starting with the pros: if you are intervening early, like at our current point, there are much fewer vested interests and, in a sense, less competition, and you can perhaps have quite a lot of leverage by shaping early discussions of the topic. Perhaps you can have some early declarations or conventions that then have a long-term lock-in effect. The cons are that it's entirely unclear if that's the case; maybe anything we're doing now would just be completely irrelevant in a hundred years, and everything will have moved on. Maybe there will just not be that much interest in these very abstract concerns, although I think that's not quite true; there definitely is some interest in the topic. On balance, I think it's at least not hopeless to do something about it now, and probably worth trying, though again, thinking it through very carefully beforehand to avoid things backfiring. There's definitely a significant risk of turning people off and causing harm by being too quick with advocacy for artificial sentience.
Jamie (01:13:34):
Yeah. I guess tying together two of those cons you mentioned, and slightly rebutting one of them: I'm not too worried about there being a lack of interest. There are so many sci-fi films about this topic and about the potential exploitation of sentient artificial beings. But I guess that then plays into the other thing you mentioned, about potential backfire and framing it in the wrong way and all those sorts of things.
Tobias (01:14:02):
Yeah. I mean, I don't think it's really true that there's not enough interest, although maybe it's not serious interest if people just view it as this funny sci-fi thing to occasionally speculate about.
Jamie (01:14:14):
That said, and this is a super one-off example and I don't know the context of it, but somewhere on YouTube there is a video about this kind of topic that was designed for schoolchildren, talking about the potential exclusion of artificial... I think it calls them robots, but it talks quite seriously about the problems if the interests of sentient beings were ignored in the future, and it's got millions of views. I don't know anything about the history of it, but it was super interesting that it was basically an advocacy piece that already exists and has got loads of views.
Tobias (01:14:49):
Another thing that I'm very interested in is how exactly you would do that, which starts with the label. Is it good to call it artificial sentience? It might be bad, because it emphasizes the otherness of artificial beings. Maybe you can come up with a better label for what we're talking about. So it begins with even this question, which is quite fundamental. And then you can wonder about next steps: I'm not sure it's so good to move to public outreach right now, but we can maybe talk with academics and move on from there.
Jamie (01:15:22):
Yeah, kind of touching on that: do you have thoughts, and obviously it doesn't have to be the final word because it's very tentative at this stage, on the actual sorts of asks, the demands that could plausibly be made, that would plausibly be beneficial for artificial sentience now? Does it necessarily have to be those kinds of quite general movement-building, community-building type things, or do you think there are more specific things that we could start focusing on, to start shifting trajectories in the right ways, whether that's through laws or companies or anything like that? Do you have thoughts on the specific things we could ask for at this stage?
Tobias (01:16:05):
Yeah. Maybe one could have some sort of declaration of the rights of artificial minds, if they ever come into existence. Actually, that's also a question to look at: whether you should even call it rights, versus talking about the welfare of artificial entities. It's very much up in the air, I think.
Jamie (01:16:26):
One alternative that I haven't mentioned so far, but we've alluded to at various points, is whether we shouldn't focus on farmed animals or artificial sentience or wild animals at all, but should quite explicitly and consistently use a broader anti-speciesist messaging frame, even if we necessarily use interventions that focus on one group or another at various times, so that the messaging is consistently about broad anti-speciesism. This is something that Oscar Horta, who was a guest on a previous episode, is a keen advocate of. Is your best guess that that is what we should be doing right now, using that frame consistently? Or does it just come back to that debate we mentioned before, where there are pros and cons, similar to the pros and cons about whether farmed animal advocates should incorporate consideration of future beings?
Tobias (01:17:24):
No, I quite strongly agree with that. That's a case where there are more pros than cons. I think it's quite good to have an anti-speciesist framing, and also to focus more on institutional change rather than diet. On this longtermist view, there are a couple of differences that it implies for animal advocacy. One implication is that it's important to make sure that, in the long term, this movement does encapsulate all sentient beings, and that you have a sufficient level of moral reflection, which is entailed by these more general philosophical ideas about anti-speciesism and anti-substratism, caring about all sentient beings. Framing it in this way, I think, is more likely to have positive longer-term effects on the movement. So I'm very much in favor of that.
Jamie (01:18:16):
Cool. Sounds good. Well, we can talk a bit more about that next time, but let's call it a day for now and yeah, it's been great to chat and look forward to having you back for the second episode.
Tobias (01:18:26):
Awesome. Yeah. Thank you.
New Speaker (01:18:30):
Thanks for listening. I hope you enjoyed the episode. You can subscribe to the Sentience Institute podcast on iTunes, Stitcher, or other podcast apps.