July 28, 2021
Guest Tobias Baumann, Center for Reducing Suffering
Hosted by Jamie Harris, Sentience Institute
Tobias Baumann of the Center for Reducing Suffering on global priorities research and effective strategies to reduce suffering
“We think that the most important thing right now is capacity building. We’re not so much focused on having impact now or in the next year, we’re thinking about the long term and the very big picture… Now, what exactly does capacity building mean? It can simply mean getting more people involved… I would frame it more in terms of building a healthy community that’s stable in the long term… And one aspect that’s just as important as the movement building is that we need to improve our knowledge of how to best reduce suffering. You could call it ‘wisdom building’… And CRS aims to contribute to [both] through our research… Some people just naturally tend to be more inclined to explore a lot of different topics… Others have maybe more of a tendency to dive into something more specific and dig up a lot of sources and go into detail and write a comprehensive report and I think both these can be very valuable… What matters is just that overall your work is contributing to progress on… the most important questions of our time.”
There are many different ways that we can reduce suffering or have other forms of positive impact. But how can we increase our confidence about which actions are most cost-effective? And what can people do now that seems promising?
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.
Topics discussed in the episode:
- Who is currently working to reduce risks of astronomical suffering in the long-term future (“s-risks”) and what are they doing? (2:50)
- What are “information hazards,” how concerned should we be about them, and how can we reduce them? (12:21)
- What is the Center for Reducing Suffering’s theory of change and what are its research plans? (17:52)
- What are the main bottlenecks to further progress in the field of work focused on reducing s-risks? (29:46)
- Does it make more sense to work directly on reducing specific s-risks or on broad risk factors that affect many different risks? (34:27)
- Which particular types of global priorities research seem most useful? (38:15)
- What are some of the implications of taking a longtermist approach for animal advocacy? (45:31)
- If we decide that focusing directly on the interests of artificial sentient beings is a high priority, what are the most important next steps in research and advocacy? (1:00:04)
- What are the most promising career paths for reducing s-risks? (1:09:25)
Transcript (Automated, imperfect)
Jamie (00:00:10):
Welcome to the Sentience Institute podcast. We interview activists, entrepreneurs and researchers about the most effective strategies to expand humanity's moral circle. I'm Jamie Harris, researcher at Sentience Institute and at Animal Advocacy Careers. Welcome to our 17th episode of the podcast. This is the second episode with Tobias Baumann of the Center for Reducing Suffering. In the first episode, I spoke to Tobias mostly about why he thinks we should accept longtermism and focus on reducing risks of astronomical suffering in the future, also known as s-risks. We refer to those ideas in this episode too, but we focus more on implementation: what we can do to reduce s-risks most cost-effectively. We discuss how to advance the field of global priorities research, what implications longtermism has for animal advocates, whether we should kickstart a movement focused on encouraging consideration of artificial sentient beings, and more.
Jamie (00:01:01):
On our website we have a transcript for this episode, as well as timestamps for particular topics. We also have suggested questions and resources that can be used to run an event around this podcast in your local effective altruism group or animal advocacy group. Please feel free to get in touch with us if you have questions about this, and we would be happy to help.
Jamie (00:01:17):
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organization focused on figuring out how we can best reduce suffering, taking into account all sentient beings. He's also pursuing a PhD in machine learning at University College London, and previously worked as a quantitative trader at Jane Street Capital. Welcome back to the podcast, Tobias.
Tobias (00:01:37):
Thanks for having me!
Jamie (00:01:39):
Yep. You're welcome. Okay. So as a quick recap of the last episode, what are suffering risks, or s-risks, and why are they plausible priorities?
Tobias (00:01:47):
Yeah, so s-risks are simply risks of worst-case outcomes in the future that contain a very large amount of suffering. The definition is that it vastly exceeds all the suffering that has existed so far. And well, why is it a priority? Because that would be very bad, basically. And so even a relatively small risk of that happening would be important in expected value. And one can also argue, I mean, we discussed a little bit how optimistic or pessimistic we should be about this, but I think the probability of something like this happening is not very small, given all of human history and what we've seen so far.
Jamie (00:02:31):
Great. And listeners can obviously go back to the previous episode and listen to that if they want a bit more detail on what the concept is and why we should, or shouldn't, think about working on this topic. So I guess we're going to dive a little more into some of the practicalities and the next steps that arise from thinking about these topics in the rest of the episode. So first question, who is currently working to reduce s-risks?
Tobias (00:02:55):
So this kind of depends a little bit on how broadly you construe working on s-risks, because a lot of things are potentially relevant. Somebody working on improving the way the system works, or on quote unquote normal animal advocacy, that's also relevant to reducing s-risks. But I guess your question is about the more narrow conception of people who explicitly focus on s-risks, who've made this the main focus of their work. And in terms of that, there are two main groups that I would name, which are the Center on Long-Term Risk, CLR, and the Center for Reducing Suffering, which I co-founded. And these two groups have slightly different focuses. I can go into more detail on that.
Jamie (00:03:43):
Yeah, that was going to be my next question, really. And I guess just before we dive into that, I do agree that this definition thing is quite important, in the sense that to some extent literally everybody doing animal advocacy, or even more broadly anything related to moral circle expansion in some form, could be conceptualized as reducing s-risks. But as you say, I think it makes sense to focus more specifically on the organizations who are explicitly focusing on those kinds of long-term risks. So, yeah. Talk us through, I guess, you mentioned CLR and your own organization, the Center for Reducing Suffering. What is the main focus of each of these groups? We might as well start with CLR, seeing as they've existed for longer.
Tobias (00:04:22):
Yes. The focus of the Center on Long-Term Risk is on cooperative AI, cooperative artificial intelligence. The idea there is to prevent worst-case outcomes that might arise from escalating conflicts involving transformative artificial intelligence. So what they're doing spans fields such as bargaining theory, game theory, decision theory, and they're looking to apply these insights to artificial intelligence in order to reduce the risk of worst-case outcomes, of s-risks, arising from interactions between one AI system and other AI systems, or also between AI systems and humans, and to increase the chances of a cooperative outcome resulting from these interactions. With regards to the Center for Reducing Suffering, I would say that our focus is more on broader prioritization research and macrostrategy and exploring many different interventions for reducing suffering. We are also doing some work on advancing suffering-focused points of view and developing those further, which is something that CLR is not doing a lot of anymore. But I can go into a lot more detail on these things.
Jamie (00:05:38):
I'd love to, and I'm interested as well in how the focus of CLR, let's start with CLR again, has changed over time, because obviously they've been going for a few years. And I mean, listeners might hear that and think that sounds quite specific. If you think back to all the different types of s-risks we talked about in the last episode, for example, there are lots of different potential pathways towards reducing suffering risks. So the question is how and why has CLR ended up on that particular path, do you think?
Tobias (00:06:08):
Yeah, I mean, I guess they simply think that this is the most important or the most neglected or the most tractable way to reduce s-risks. Whether or not that's true is of course a long story; they're making the case for that in their research agenda, and I think there's a lot to be said either way. And generally, if you're asking how it has come about, it's just a gradual evolution of priorities. When new people come in, they have new interests, new skills. For instance, I think CLR's current focus is largely due to Jesse Clifton and his work. And over time new topics are discovered, such as cooperative AI, and we discussed malevolence last time. That's just how things develop over time. I think there's now a spectrum of approaches and different views across both these groups, the Center for Reducing Suffering, CRS, and CLR. It can be a little bit annoying that it's CRS and CLR, but it is what it is. So what you're seeing over time is more of a diversity of approaches, whereas earlier it was just one group, which tends to result in maybe a little bit of an intellectual monoculture.
Jamie (00:07:31):
And I get the sense that, so I wonder how much CLR's focus is about just the need to specialize somewhat versus a confident view on the cause prioritization issue. For instance, I've noticed that in some of the funding they give out, I think they still fund groups like Wild Animal Initiative and are willing to fund a broader variety of intervention types that might reduce s-risks, or at least research streams that might reduce s-risks, even if their own research is slightly more narrowly focused.
Tobias (00:08:01):
Yeah, that seems true. It might be some combination of people being particularly interested in this and believing that it is a particularly attractive way to reduce s-risks.
Jamie (00:08:12):
Okay. So we've talked about CLR a bit, but there are a number of other groups whose work is at least in some way relevant to this topic, right. An obvious example being Sentience Institute: we focus on moral circle expansion, which is one particular way of potentially reducing suffering risks. So there are lots of other explicitly longtermist organizations whose work presumably touches at least occasionally on s-risks. For instance, how much of the work by groups like the Future of Humanity Institute is relevant to s-risk research efforts and s-risk reduction efforts?
Tobias (00:08:52):
Um, yeah, I mean, I think a lot of it is relevant. For instance, work on AI governance, surely that's also relevant to s-risk reduction. I mean, the work that's being done there just pursues a different goal, and therefore the questions they look at are usually not the ones that are most important from an s-risk perspective, but they're still somewhere between very important and somewhat important, I guess. So, yeah, it's a spectrum of relevance of different things, that's what I would say. And I also definitely agree that the work of organizations like Sentience Institute and Animal Ethics is very relevant. And of course these organizations, while they're not explicitly talking so much about s-risks, the people there are very much aware of these risks and do things that are very much worth consideration.
Jamie (00:09:50):
Yeah, that's certainly true. My conversation with Oscar Horta on a previous episode of this podcast touched on this topic briefly, but we didn't dive into it as it was not Oscar's explicit focus. Are there particular individual thinkers whose work is especially relevant? The obvious example, and he is very much affiliated with CLR and co-founded it I believe, but Brian Tomasik is kind of a semi-independent agent. So he's an obvious example of somebody whose work is focused explicitly on this. But there are presumably others. I know, for example, that David Pearce has quite a suffering focus in his work. Are there people like that, is David Pearce's work relevant? Are there other individual academics or thinkers who often crop up as having lots of relevant things to say on the topic?
Tobias (00:10:37):
So, I mean, the topic of s-risks in particular is quite novel, and I don't think there are so many independent thinkers that are really doing what one could call cause prioritization from a suffering-focused perspective. But if your question is more about people who have written about suffering-focused ethics, then names like Jamie Mayerfeld, maybe Clark Wolf, come to mind as having defended philosophical views that could be described as suffering-focused ethics.
Jamie (00:11:07):
Yes. So in the work that's been done to date, are there any major gaps or blind spots that you think have been missed and that seem high priority to address through the next steps of research?
Tobias (00:11:20):
Yeah, that's a good question. I guess there probably are gaps, but I just don't know what those gaps are, because if I knew about them, then, you know, we would have picked them up. I mean, I guess that's why they're called blind spots. Maybe one thing I would say is that there's surely a need for more empirical grounding, because a lot of the work that has been done could maybe, if you're uncharitable, be described as armchair speculation, and putting things on a more solid footing would be quite valuable in my opinion.
Jamie (00:11:57):
Yeah. But necessarily more time consuming, I guess; that tends to be the sort of thing that Sentience Institute focuses on. It takes a lot longer to write posts like that "How tractable is changing the course of history?" post that I talked about, and actually do all the digging into that research, than it does to outline some initial thinking on the topic. But yeah, I agree that that stuff is important and would be helpful as next steps on some of these questions. So I wanted to ask about a couple of topics, or almost buzzwords, that come up quite often, at least when I've seen others in the effective altruism community discuss this topic of s-risks. One is the topic of information hazards. This is something that Nick Bostrom defines as risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. And my impression is this is something that many people concerned with s-risks tend to be very concerned about, essentially that sharing some work publicly might increase the risks of some negative outcomes. I do think about this topic periodically, but I just kind of intuitively feel slightly less worried. What are your thoughts on this? How careful do we need to be in general with talking about these topics?
Tobias (00:13:09):
Yeah, I mean, I definitely agree that we should be careful. It always depends on the specific material in question and how likely that is to actually lead to some information hazard, and what exactly the pathway is to some person causing harm. For instance, I think info hazards have also been discussed in the context of biosecurity. If you're publishing a paper on, okay, this is how you could make a superbug and this is how you would produce it and disseminate it, then that's obviously very info-hazardous. If you're just talking in the abstract about ways to prevent that sort of thing from happening, without giving away any problematic information, then that's not so info-hazardous. And yeah, in other fields that have some affiliation to security, usually one is more or less openly discussing what the risks are.
Tobias (00:14:16):
And the hazardous parts are only things that would really give potential attackers non-trivial information that they would not figure out on their own. So for instance, just talking about the idea of threats doesn't strike me as so worrisome, because that's something that you can see in every second James Bond movie. So it doesn't really seem that this is something that nobody would come up with if I keep silent about it, but there are maybe other ideas that do qualify for that type of thing.
Jamie (00:14:55):
What are the rules of thumb, do you think, for what is more or less info-hazardous? One thing you said there is, essentially, that more generalized information, as opposed to more specific information, probably poses less of a risk. And another one, it sounded like you were saying, was that focusing on how to reduce risks, as opposed to going into detail about exactly how the risks might arise, might be another way to reduce the hazard.
Tobias (00:15:22):
Yeah. I mean, there are certain types of information that are more useful for preventing attacks and certain types of information that are more useful for carrying out attacks. Like in the biosecurity example, it surely isn't so info-hazardous to talk about, I dunno, vaccines against a potential superbug, but it is info-hazardous to talk about how one might create a superbug, if that makes sense.
Jamie (00:15:48):
Is the concern mostly about intentionally malicious agents looking out for this sort of information and then misusing it, or is it about some kind of indirect thing, where the information being out there somehow gradually increases salience and there are indirect effects that aren't necessarily through intentional action?
Tobias (00:16:06):
Yeah. I mean, it actually is often rather unclear to me what the main concern of people is in more detail. This sort of effect of increased salience is definitely a candidate. Another candidate is that you are somehow transmitting valuable information to a malicious actor. Yeah. It can be a combination of all of these things.
Jamie (00:16:32):
All right. So yeah, another idea that I often see discussed in the context of CRS, or at least people who are interested in working on s-risks, is the importance of cooperation, and Magnus, your colleague at the Center for Reducing Suffering, has a post on CRS's website called "Why altruists should be co-operative." Why is there so much emphasis on cooperation within the community of people focused on reducing suffering risks?
Tobias (00:16:57):
Yeah, because cooperation is very important in general, I think, and also from the suffering-focused perspective in particular. The idea is that conflict is in and of itself a risk factor for very bad outcomes, and there's also the effect that it might make it less likely that our concerns are taken into consideration: if the group that is voicing these concerns is somehow despised by many, then that makes it far more likely that people are going to ignore whatever compassionate people say. And so it's much more promising to try and be on good terms with other people and use this goodwill to reduce suffering and to make our concerns heard, while also taking into account the values of others.
Jamie (00:17:52):
Cool. Sounds good. So we've been speaking mostly about the work of CLR and work on s-risks in general to date; let's talk more specifically about the Center for Reducing Suffering, the organization you co-founded with Magnus. What's the theory of change behind the whole concept of what CRS does and what its work is about?
Tobias (00:18:13):
Yeah, so we think that the most important thing right now is capacity building, that is, ensuring that in the future the relevant people will be both motivated and able to avert s-risks. It just makes a lot of sense in light of longtermism. We're not so much focused on having impact now or in the next year, but we're thinking about the long term and the very big picture. And then there's also this idea of cluelessness. We just don't really know in much detail what exactly the future holds, and we're very uncertain about what exactly we can do now to best influence the long-term future, to best reduce suffering in the long-term future. Given all that, it's quite natural to think that we should focus on building capacity now, as that is perhaps the most robust thing we can do, even if it can perhaps sometimes feel a bit unsatisfactory as an answer.
Tobias (00:19:17):
Now, what exactly does capacity building mean? It can simply mean getting more people involved, building a community of people interested in reducing suffering, but I think it's also important to realize that this is different from just going out and spreading the word and growing as fast as possible. I would frame it more in terms of building a healthy community that's stable in the long term, rather than maybe disintegrating at some point or becoming toxic. For that you need good community norms and an open-minded, epistemically modest, thoughtful culture. That also ties into what we talked about in terms of why we should be cooperative. And one aspect that's just as important as the movement building is that we need to improve our knowledge of how to best reduce suffering, given that it's so unclear how to do it. You could call it 'wisdom building', in analogy to movement building, and we need to do both. And CRS aims to contribute to that through our research.
Jamie (00:20:29):
Yeah, really interesting that you answered first of all with capacity building, because it's very clear how CRS's work plays into wisdom building, right? The research you do is quite clearly focused on understanding the problems and what can be done about them and all that sort of thing, and it's obvious how that is a form of capacity building. I guess if I think of capacity building, I think of the explicit efforts within the effective altruism community, things like local groups where people meet people and, basically, more along the lines of what you were describing as an alternative, spreading the good word of effective altruism and welcoming people and supporting them to get engaged in various ways. There's also the kind of formal capacity building, more along the lines of what a lot of animal advocacy nonprofits do, of basically doing concrete work on the topic and providing some kind of infrastructure for more and more people to get engaged as awareness and discussion and all those sorts of things grow. So what's the model through which CRS contributes to capacity building? And, like, I guess, how do you optimize for it? Is there an explicit way that you think, this is what we're aiming for, or is it something quite diffuse and hard to operationalize in that way?
Tobias (00:21:52):
I mean, a few things are worth noting here. There is of course the EA movement, and we sort of view ourselves as part of that, at least to a large degree, and in that movement there is already the infrastructure that has been built around outreach and local groups and things like that. So we don't think that it would make a lot of sense for us to just try and do the same thing with a suffering-focused flavor, which is why we're focusing more on the other aspects of capacity building that I outlined. And also, we think that given the nature of what we're doing, it's probably not necessarily a thing that is mass compatible, unlike, for instance, animal advocacy. So we're not necessarily trying to just get the word out to a lot of people; we're trying to contribute more to it being perhaps a small community, but one that is open-minded and, yeah, reflects on all these important philosophical and strategic questions.
Jamie (00:23:04):
Yeah. So that brings me on nicely to: who's the main audience? Because from what you're saying there, the goal is to create a, would you say it's a research-focused community, or is it a kind of core of people who are flexible to pivot towards different kinds of interventions that might or might not crop up as promising? I guess there are two things there. One is, what's the end goal with the capacity building? And relatedly, who's the kind of interim audience?
Tobias (00:23:31):
Yeah. So, I mean, the thing is, the people that are working at CRS are primarily researchers, but I would definitely say that it's not a research project as a matter of principle; if we find something more concrete to do, then we would pivot to that. We are trying to build a community that is cause-neutral in the sense of being able to switch to something else. So, I mean, one large part is definitely the effective altruism community, and especially the longtermist parts of it and people who are at least sympathetic to suffering-focused moral views. It's not limited to people that completely share our views, of course. Then I would also say that people who are interested in effective animal advocacy are a more specific target audience, in part because they tend to share our very wide moral circle. There's also some evidence that concern for animals correlates with more suffering-focused moral views, but obviously these are different target groups that overlap anyway.
Jamie (00:24:39):
And so on the research side of things, how do you decide what to focus on there? What's the prioritization process for the prioritization research?
Tobias (00:24:54):
Oh, that's getting quite meta. Yeah. I don't think there's anything formal. It's just a function of the individual interests and skills of the researcher, as well as our collective thinking on how important a question is in the scheme of things: how likely is it that you're going to come up with an important insight, and if you come up with an important insight, how much of a difference is it going to make to the overall prioritization? It's a combination of all these factors.
Jamie (00:25:26):
Yeah. So when you started, it was just you and Magnus, but I know you've done some work trials with a few potential researchers. So how's that been going? Do you plan to hire more researchers and grow quickly or is it more of a steady kind of conservative growth plan you have?
Tobias (00:25:42):
Yeah, so I would say perhaps more of the latter, but of course it depends on finding people that are talented and interested in contributing to our mission. I would actually say that so far we've gotten a surprisingly good number of very high quality applications, which almost surprised me myself. I just put up a form and we didn't even promote it so much, and got a few very promising applications. It's well known that in the EA movement there is maybe a shortage of available positions at EA organizations, and a lot of people apply for these positions. But yeah, so this is going very well. We've just recently hired two additional interns, Catherine [inaudible] and Winston Oswald-Drummond, and they're doing very fascinating research.
Jamie (00:26:42):
You mentioned before that the work is often determined partly by the skills and interests of the researchers. How different does it look from what you and Magnus are doing? Is it very similar projects, or is everyone going off in slightly different directions?
Tobias (00:26:56):
Yeah, I mean, of course it's not exactly the same, but I would say it is very aligned with our overarching framework and priorities. So Winston, for instance, is looking at the different resolutions of the Fermi paradox and their implications for s-risks and suffering-focused ethics in particular. That's inspired in part by the recent work by Robin Hanson on grabby aliens, which has been a significant update for me. So it's not that the only alternatives are us being alone or us being in a populated universe; there are also other scenarios, such as a lot of civilizations coming up at roughly a similar time. And if that sort of thing happens, then it obviously has implications for suffering-focused ethics in terms of, for instance, how likely it is that space will be colonized by another civilization if we don't do it. It could be relevant to think about, well, what exactly would happen if different civilizations meet in space? Could that be a potential source of s-risks? And Winston is looking at all these questions. Catherine, on the other hand, is looking at global totalitarianism and the question of how that relates to s-risks. So in what more specific circumstances would global totalitarianism result in s-risks, rather than just being bad in other ways? Which I think is also a very fascinating research topic. And it ties into what I was talking about on malevolence; these topics are obviously related.
Jamie (00:28:36):
How much do those topics, and I guess the wider work of CRS, overlap with work within academia? Because, I mean, I have not read anything about the Fermi paradox, but it's a well-known thing, and I'm assuming it's a topic that's been looked into quite a lot within academia. I guess my gut reaction is, is this CRS's comparative advantage, to focus on something like that if it has substantial overlap? I'm assuming that there's some aspect of explicitly linking it to s-risks as well, which is important there?
Tobias (00:29:10):
Yeah, exactly. I mean, I would be somewhat less excited about simply doing work on the Fermi paradox, but looking at its implications for s-risk reduction in particular, that seems like a high priority. There is usually a lot of academic work on all kinds of topics, but the question is always how relevant it is to what our main concerns are. And working that out as well is a high priority.
Jamie (00:29:38):
So it's a bit of aggregating various different things from different directions and interpreting them within a certain framework. Yep. Okay. So you kind of hinted at this before, saying that you had a lot of great applications even though you didn't publicize the role very far, and that it's well known that within the EA community there's a lot of demand for roles. What do you think are the main bottlenecks to further progress in the general field of work on s-risks? And obviously this is mostly you guys and CLR.
Tobias (00:30:13):
Yeah. That's definitely a very interesting question. So the usual candidates are money and talent, finding skilled people who want to contribute. And, um, yeah, I think both matter; there's not really a single bottleneck here in my opinion. There are also, in addition to these, more intangible factors like organizational capacity and this problem of productively involving people. I think this has also been discussed for EA at large. Not everyone can do cutting-edge research or should do cutting-edge research, and then there's a question of how exactly people can contribute in other ways. It's not always easy to do that, which is of course very related to this problem in EA of even highly qualified applicants struggling to find a job. And yeah, so progress on this would be really valuable, both for EA at large, but it's also a very significant problem for s-risk work in particular, given again that the nature of it is so difficult that it's unclear how exactly one can contribute.
Tobias (00:31:19):
Now in terms of funding, I would say it really differs quite drastically between different kinds of work and different organizations. It's also well known that in effective altruism some funders have really a lot of money, like Open Phil, the Open Philanthropy Project, and other large grantmakers. So if the work that you're doing can tap into these funding streams, then money is maybe not so much of a constraint. By contrast, if you're doing something that these large grantmakers are not so keen on, then it can be quite funding constrained. Now with respect to work on s-risks, I would say that some forms of it can get support from these large funders, such as work on cooperative AI, while other types of work that I think are important would maybe be things that these grantmakers are not so keen on, like work that is more about macrostrategy from a suffering-focused perspective, or work that is at least visibly endorsing a suffering-focused point of view. So that sort of work, I think, tends to be much more funding constrained.
Jamie (00:32:25):
Cool. Going back to the idea that there are lots of great applicants, or lots of potential great applicants, for the research roles, and you also mentioned that it's not necessarily just funding. What does that look like in practice? Like if somebody said to you, here's another million dollars for CRS, what would be stopping you just hiring as many more researchers as you could afford?
Tobias (00:32:48):
Yeah. I mean, I guess we would hire more people if we got a million dollars, but it's a combination of whether or not these people are really contributing on a research level, which is really not an easy thing to do, and maybe there aren't that many people that can really do it. And then the other factor is what I was talking about in terms of organizational capacity: if you're starting off with two people, then you just shouldn't hire 10 people at the same time. In fact, common startup advice is: hiring, don't do it. So there's a strong case for growing more conservatively, especially since I think the current state of CRS is small, but it's working well.
Jamie (00:33:37):
Cool. So what about the streams that can tap into those funding sources, then? Could those aspects of the work not just be grown more rapidly than some of the other aspects that don't enjoy that wider support?
Tobias (00:33:49):
I mean, this is in a sense what CLR is doing, and they're doing things like giving out grants to people working in these areas. But yeah, it's not a silver bullet, and you can always wonder, I mean, this is a thing that I'm maybe hesitant to talk about, but I was alluding to it when saying that maybe not that many people can really contribute. I'm not entirely sure, I haven't really settled on a view on this, but it might really be that the work of most people does not actually contribute that much to progress on macrostrategy. And you might be quite elitist about that.
Jamie (00:34:26):
Yeah. So I wanted to dive into a couple of questions about some of the research that you've put out through CRS. One post you've written is called "A typology of s-risks", which lists out different categories of s-risks; you call the categories incidental s-risks, agential s-risks and natural s-risks. And you've also got a post about risk factors for s-risks, i.e. essentially things that increase or decrease the likelihood of those more specific s-risks, and that post includes the categories of advanced technological capabilities, lack of effort to prevent s-risks, inadequate security and law enforcement, polarization and divergence of values, and interactions between these factors. And obviously there's partly just a definitional thing about what counts as a risk and what counts as a risk factor. But do you have an overall sense of whether it seems most cost-effective at this stage to work directly on the most plausible s-risks, or whether it makes more sense to address risk factors that might affect a number of different s-risks?
Tobias (00:35:36):
Yeah, that's a great question. I would say that I lean maybe towards the latter, if only because, as I said, we don't really know in very precise terms what the most important aspects are. We can gesture broadly at the sort of dynamics that seem most worrisome, but we're fairly clueless about the exact details. And given that, it does make sense to try and broadly improve the future and work on these risk factors for s-risks, without committing to a specific scenario. That's what inspired me to write this post about risk factors for s-risks, and what also inspires CRS's strategic focus on capacity building. Although of course it is a spectrum, and we're also not entirely clueless. We can narrow it down a little and say that some things are much more likely to be relevant than others. And if you go out and try to reduce unemployment because that's a broad improvement of the future, then maybe I'm not going to be too convinced.
Tobias (00:36:38):
So we can say, for instance, that animals and digital minds are particularly relevant because they're likely to be excluded from all consideration, which is what we've talked about before. We can also say that conflict and escalating conflicts are an important risk factor for s-risks. And so, as an example, if you want to work on improving politics, then from an s-risk perspective avoiding excessive political polarization is perhaps more promising than improving institutional decision making, despite both being broad improvements, because polarization is more directly related to worst-case outcomes and s-risks. Whereas bad institutional decision-making is also a problem that's very much worth working on, but maybe not so directly tied to s-risks. There are a lot of things that can be bad, but not s-risks.
Jamie (00:37:27):
Cool, sounds good. It sounds like your spread between those two categories is comparable at least to 80,000 Hours' spread, although they have a slightly different focus, obviously, focusing primarily on reducing extinction risks rather than reducing suffering risks. But they list their highest priority cause areas as: two that are fairly narrow work on specific problems, positively shaping the development of artificial intelligence and reducing global catastrophic biological risks; but then they also list two types of work to build capacity, which is obviously where you said you focus yourselves at CRS, which are global priorities research and building effective altruism. So there's a lot of overlap with your focuses at CRS there.
Tobias (00:38:06):
Yeah. I mean, I think people at 80K are reasonable when it comes to these questions.
Jamie (00:38:12):
Good to know. All right. So within this general area of cause prioritization research, there are lots of different approaches that an organization could take. For example, one option is to try to rigorously assess some of the important underlying assumptions relating to cause prioritization, and that's the sort of thing that I see the Global Priorities Institute doing: going into detail on some of the specific philosophical ideas underpinning longtermism, for example. Then another option is to try to identify many possible cause areas and do some brief initial exploration of the promise of work on each of those areas; Charity Entrepreneurship are hoping to incubate a new organization dedicated full-time to exploring and making a strong case for new cause areas.
Jamie (00:39:06):
So they obviously have the sense that that kind of short, brief exploration is not being covered that thoroughly at the moment. And then another option is to essentially pick a plausible priority cause area and explore it in relative depth, making progress on the problem while simultaneously gaining information about how tractable further work is. And that's more comparable to what Sentience Institute is doing with our work relating to artificial sentience, in the sense that we're not necessarily always explicitly doing a research project that's intended to evaluate the promise of the cause area, but the idea is that by looking at it in some depth, we'll gain a better understanding of the promise of certain types of actions. Do you have thoughts on which of those overall options is most needed at the moment?
Tobias (00:39:51):
Yeah. I think there's a place for all of this, and there's not really one right way to do it. It really depends a lot on one's interests and skills. Some people naturally tend to be more inclined to explore a lot of different topics; for instance, I think I myself fall into that category. Others have maybe more of a tendency to dive into something more specific and dig up all the sources and go into detail and write a comprehensive report, and I think both of these can be very valuable. So, I mean, I think CRS is not committed to one side or the other of the spectrum, and what matters is just that overall your work is contributing to progress on what we regard as the most important questions of our time. I would still say that, in comparison to academia, the sort of work that we do is, as a general tendency, perhaps less specialized and still much more about big-picture thinking, although maybe the overall tendency is to become more specialized as the field matures.
Jamie (00:40:55):
I guess that seems inevitable to some extent, in that presumably there are only so many different things you can uncover. I find it hard to imagine what the aspect of fleshing out the case for alternative cause areas looks like, unless you're just going down the list of already known causes that people haven't tended to prioritize within effective altruism for some reason or another, and kind of steel-manning the case for them being promising. Or I suppose potentially you're just extremely inventive somehow, but I don't know what the process would be there.
Tobias (00:41:27):
I mean, I think I agree. There was maybe a time, a couple of years ago, when I would have said that effective altruism is maybe a bit narrowly focused on a relatively small number of causes, but I think it has improved a lot. For instance, 80,000 Hours did a post on all kinds of different causes that might potentially be promising. So yeah, I think it has gotten a lot broader, and at the point we're at right now, if someone just wants to make the case for a new cause, like, okay, good luck, but I don't necessarily think it's the case that this hasn't been explored at all or too little.
Jamie (00:42:08):
I guess it's interesting in terms of your own work: I get the sense that most of your posts fit into this category of slightly shallower investigations of a number of different topics. You've got a medium-sized post on space governance, for example, and some of your investigations of political topics. But your post on malevolent actors is, at least as far as I can think of off the top of my head, notably longer than most of the other ones you've done. And as we mentioned before, it was actually also one of the ones that's been most well received; I think it's the second most upvoted post on the EA Forum of all time, or something like that. So I'm interested if you have thoughts on why that post had such a good reception. Do you think it was something to do with the depth of it, or do you think it was more about the novelty of the idea, or something else?
Tobias (00:42:59):
Yeah. I mean, it's a combination of all of these things. I should note that a lot of the more in-depth research was done by David Althaus, who was the first author of this piece, because, as you say, I myself have more of a tendency towards big-picture thinking, which is what I said earlier about how this is about different people's interests and skills. And it's of course impossible to know how many upvotes this post would have gotten if it had been less comprehensive. I myself tend to be someone who's most interested in the basic ideas, but it definitely helps if a post is as complete and comprehensive as that one was.
Jamie (00:43:44):
On the subject of the EA Forum, there's a post on there by Sam Hilton called "The case of the missing cause prioritization research." At one point, Sam writes that "if you look at best practice in risk assessment methodologies, it looks very different from the naive expected value calculations used in EA." He goes on to say, "I think there needs to be much better research into how to make complex decisions despite high uncertainty. There is a whole field of decision-making under deep uncertainty or Knightian uncertainty used in policy design, military decision-making and climate science, but rarely discussed in EA." So this brings up the idea that there are certain methodologies that have been tried and tested in other fields that would be really useful if applied to cause prioritization research. It also touches on the critique, which is sometimes shared, that people within the effective altruism community too often try to reinvent the wheel and do things their own way when it's been done well in other contexts. Do you have a sense that there are methodologies or types of research of some description that we should be using but that have just been ignored so far?
Tobias (00:44:47):
It might be worth looking into. I'm also somewhat hesitant when I hear things like 'EAs have not looked into this enough'. I mean, is this even true? And if it is, maybe there's a reason. Yeah, I'm not entirely sure how much you would learn from this sort of very generic investigation of decision procedures or something like that.
Jamie (00:45:12):
I tend to agree in the sense that I think the best way to work out if a methodology is useful is to try and apply it rather than to do some abstract discussion.
Tobias (00:45:21):
Yeah, I mean, I would just challenge people who believe these things to actually come up with a useful insight on cause prioritization.
Jamie (00:45:31):
In our previous discussion we were focusing on whether or not work on moral circle expansion is a plausible and potentially cost-effective way of reducing suffering risks, and we spoke about animal advocacy specifically within that. And we've been talking a lot about the effective altruism community and the longtermism community, but actually a lot of the people who work on animal advocacy don't necessarily explicitly identify with either of those communities, though that doesn't mean that they wouldn't agree with some of the underlying motivations, I think. And so I suspect that there are various lessons that each of those groups, effective altruists and longtermists and animal advocates, you know, they're partly overlapping, but I suspect that there are various ways in which these groups can share ideas and learn from each other. You've written a post specifically about longtermism and animal advocacy. What do you think are some of the implications of taking a longtermist approach for animal advocacy?
Tobias (00:46:37):
Yeah, I think that's a great question, because there are a lot of, I think, very important implications that it can have. And I also completely agree with what you said about how it doesn't mean that you have to agree with the longtermist community on everything, or that you have to apply this label to your own identity. The point is simply that looking at the long term is important, and that, I think, enjoys much broader support. Now, in terms of the implications of longtermism for animal advocacy, one implication is a stronger focus on achieving long-term social change and comparatively less emphasis on the immediate alleviation of animal suffering, because it's a marathon, not a sprint. And so it's about achieving lasting change, in particular about locking in persistent moral consideration of nonhuman sentient beings. That's at least part of it.
Tobias (00:47:31):
And from this longtermist perspective, it's also critical to ensure the long-term health, the long-term stability, of this movement. So it's important to avoid accidents that could impair our ability to achieve long-term goals, either as individuals or as organizations or as a movement. And in a sense, maximizing the likelihood of eventual success, eventually achieving sufficient concern for all sentient beings, is arguably, from this big-picture perspective, more important than accelerating the process by a few years. In particular, one way to jeopardize this long-term influence is by triggering a serious and permanent backlash, by the animal movement becoming toxic. So I think it's really important that we take reasonable steps to prevent that from happening, to prevent the movement from being too controversial. And that could happen, for instance, because advocacy itself is divisive, or because the movement associates itself with other highly contentious political views, which is perhaps happening to some degree with social justice topics.
Tobias (00:48:45):
And yeah, this ties into what I've said earlier about polarization and conflict being a risk factor. In addition to that, it's crucial that the animal movement is thoughtful and open-minded, and this is because of the uncertainty over what will eventually turn out to be the most important issue in the long term, which I've talked about before. In particular, we must ensure that this movement encompasses all of the relevant issues and all the relevant sentient beings, including wild animals, possibly invertebrates, if they're sentient, possibly artificial minds, if they're sentient. And for instance, I definitely think that neglecting wild animals is currently a major blind spot of the animal movement. This can be a reason to focus more on antispeciesism and on careful philosophical reflection, rather than, for instance, just advocating veganism. And we should generally be mindful of how biases could distort our thinking, and should consider different strategies in an evidence-based way, including perhaps unorthodox strategies like earning-to-give or patient philanthropy.
Jamie (00:49:53):
Yeah. Sounds good. Okay. Well, I've long had on my to-do list to write up a post with some of my own thoughts on this topic, so I thought I might briefly get your reaction to some of those thoughts and see if you agree with them. So, for some context, Brian Tomasik has written a post called "Why charities usually don't differ astronomically in expected cost-effectiveness," which among other things argues that different charities or interventions working on similar broad cause areas may have similar sorts of indirect effects and cross-fertilization, and so once we account for these indirect effects, the differences between interventions seem like they'll be smaller, basically. And I think one practical implication of this is that we should be willing to invest in a broader range of tactics rather than doubling down on the interventions
Jamie (00:50:39):
that current evidence suggests are most cost-effective. For example, I think we should invest in a broader range of institutional tactics rather than focusing predominantly on corporate campaigns, as we currently do, at least within the effective animal advocacy contingent of the farmed animal movement. And I think that this point is further supported by the idea that if you're a longtermist, you should be more patient as well, and therefore more willing to experiment with a wide variety of different tactics to work out essentially the ideal distribution of tactics in, say, 10 or 20 years' time. How does that sound?
Tobias (00:51:11):
Yeah, that sounds good. I definitely strongly agree with this more institutional and political focus of the animal movement, rather than individual dietary change or even corporate campaigns; I definitely think it would be good for people to move more towards the former. With regards to trying out different strategies, that definitely seems right to me, with maybe two caveats. One would be that it shouldn't be something that endangers the long-term health of the movement, as I mentioned before. And the other problem, of course, is that, as with many things, it might not be so easy to actually measure whether or not your intervention has been good. Especially when you're talking about long-term impact and social change and things like that, it might not be so easy to measure how much your intervention has done to achieve that. And there's a risk of a bias towards things that are measurable.
Jamie (00:52:12):
Yeah. Great. Another implication that I think of sometimes is that we should potentially be open to focusing on particular decision makers who might shape the far future. And so, especially if you're worried about some kind of lock-in effect, potentially this includes AI designers, but more broadly it might also just be policy makers and things like that. And so these people's impact might be greater than we would assume by just looking at immediate effects for animals. What do you think about that, and especially this aspect of intentionally focusing somehow on people involved with artificial intelligence?
Tobias (00:52:51):
Yeah, that's a very interesting question, and one that comes up repeatedly. I think, relative to what you would do without focusing on AI at all, it does make sense to consider whether we should directly target this group. So I'm very much open to that, although I'm also hesitant to embrace this completely, because there's a lot of uncertainty about the future of artificial intelligence and what the relevant scenarios are, and I think there are a lot of pathways that mediate the influence of AI developers in particular. Like, if a company is producing an AI that doesn't do what I want, then as a customer I would just go to another company that does what I want and ask them whether they can do it. There's also going to be political regulation, some political and societal backdrop to this development of artificial intelligence. So I think it would be quite wrong to expect that AI programmers are going to rule the world, basically, but they might have a much larger influence than one would expect. So it's a very interesting question, and I haven't settled on a conclusion on this yet.
Jamie (00:54:05):
I agree with the things you just said. I think one of the things that I hear discussed as well, going back to the idea of cooperation, is that, especially given the overlap between the AI research community and the effective altruism community, of which we are a part in various ways, explicitly focusing on a particular group seems like the opposite of cooperativeness.
Tobias (00:54:29):
Yeah. And I mean, what does it even mean to focus on a particular group? If you look at how people come up with their attitudes to animals, it's usually just shaped by the society and the context that they live in. It's not like AI programmers are this completely separate island; their views and their moral priorities are going to be shaped by what society in general thinks. And I would hope that most people developing AI also think that it should be in the hands of all of society, rather than it being sort of a power grab by AI developers.
Jamie (00:55:06):
I suppose it does depend on a number of specific things about how exactly AI is developed, and, if something like AGI comes into being, whether, as we were kind of talking about last time with the timelines of AI, it's some particular company or entity that stumbles across the relevant discovery, or whether it's just a gradual accumulation of different factors. Because if it's more like the former, it seems like there's potentially some scope for the decisions and the processes used in various programming aspects to be disproportionately influential, even if, as I say, it could be accidental. It could be that the people who design or train the algorithms, just somewhere in that process, have their values overly represented somehow.
Tobias (00:56:01):
Yeah, that's definitely a possibility. And it's more of a possibility the more you think that it's going to be a single invention rather than something distributed or gradual. I'm relatively skeptical about the more extreme forms of "I think this company is going to develop it in the next year or so," but if you do believe that, then I think it makes a lot of sense to focus on somehow shaping the values of the people involved there.
Jamie (00:56:30):
I have a longer list of ideas, but we won't go through them all. I guess an easy way to summarize a lot of them is just that being willing to be patient could change a lot of your prioritization in terms of different tactics. And there are just a lot of implications that could stem from the idea that essentially what we're aiming for is something like the end of animal farming at some point, or at least before some kind of lock-in, as opposed to necessarily immediate impact for animals and reducing suffering over the next few years or something like that. I think there are lots of implications that stem out of that slight shift in focus. Yeah. Cool. All right. Well, let's move on then to the other side of the coin. Do you think there's anything that the longtermist community can learn from animal advocates?
Tobias (00:57:18):
Yeah. So maybe one thing is that animal advocacy is perhaps more about action and changing things in the real world, and finding a balance between that and research. I sometimes worry about a certain bias in some circles to just default to research as the thing to do: if you have this question, then, okay, more research is needed. So maybe longtermists can learn from animal advocates to have a sufficient balance between action and research.
Jamie (00:57:54):
Yeah, I think there are a lot of things relating to that as well. Something that I sometimes think about, assuming you are taking action, is that a lot of the strategic lessons from the farmed animal movement could apply to some kinds of longtermist work. The main candidate, I think, is work that's explicitly about encouraging consideration of future generations. There's a lot of potential strategic overlap there: literally the tactic types, what makes tactics work, and various generic social movement lessons, which is a lot of the focus of our work at Sentience Institute, could be applied pretty much directly to that other question.
Jamie (00:58:39):
And touching on what we were saying before, I think there's another aspect, more within the research side of things. I sometimes wonder whether it comes back to what we were talking about before, where there's somewhat of a distinction between people who identify as longtermists and people who don't. And I think that overlaps with a preference for different kinds of research, and almost different epistemics as well. We've touched on it before, but in the longtermist community there's quite often a lot of theoretical focus rather than an empirical focus, whereas I see pretty much the exact opposite in research on animal advocacy, where it's very empirical: let's look at this past data, or let's run this experiment, and work out what that tells us, rather than starting with the theory. And I frequently find myself thinking that each group should do a little bit more of what the other one does.
Tobias (00:59:41):
Yeah, that sounds exactly right to me. You probably need some kind of synthesis of both more empirical and more abstract, big-picture work, and maybe you're right that one group needs to move more in the direction of the other.
Jamie (00:59:59):
Yeah. Cool. Sounds good. All right. We also talked before about artificial sentience advocacy and whether this is something that's high priority to do fairly directly. So if we did decide that directly working to ensure that society includes artificial sentience within the moral circle is one of the best opportunities for reducing s-risks, what do you think are the next steps, basically? What are the top priorities within that?
Tobias (01:00:26):
Yeah, that's still quite uncertain. I definitely think one should perhaps be reluctant to immediately do broader outreach, especially to the broader public; it might be more reasonable to talk to academics or philosophers or people in effective animal advocacy. I think what's really most important right now is to figure out the best way to do it, even just to figure out the best way to talk about it: what framing we should use, whether it's about rights or about welfare. We should of course think about how it might potentially backfire, which ties back into the implication I mentioned earlier about avoiding things that could permanently impact our ability to do something in this space. In terms of categories of research, I tend to be most excited about work that's looking at these macro-strategic questions and these framing questions. There's perhaps also a lot of room for psychology research on what sort of attitudes people currently have to, well, not yet existing artificial beings, though that's still very much in flux.
Jamie (01:01:40):
Yeah, music to my ears, really, because that is a substantial amount of the focus of what Sentience Institute is doing in research in this area: looking at those kinds of psychological aspects and attitudes, both towards current entities that kind of map onto this topic and, where possible, asking more explicitly about attitudes towards future entities, and trying out some interventions that might affect that. I guess there are whole other streams of possible research that could be taken as well, and I wonder what you think about some of those. So I hinted before about how our work is kind of a type of deep exploration of the topic, and I think that touches on global priorities research in the sense that by understanding some of these more concrete things, you get a sense of tractability and things like that.
Jamie (01:02:32):
And so you get a sense of how plausible this is as a general area. But I wonder as well what you think about the more explicit research that's intended to very concretely go through and test the promise of the area. For instance, you could go through some of the various questions we were talking about last time and just test those and try to make headway on them. Do you have thoughts about how important it is to do that targeted global priorities research, versus just having a go at making progress on the problem?
Tobias (01:03:10):
Yeah. I mean, it depends on how you would go about it and how you would evaluate whether it's gone well. That's a general problem with all longtermist work: we don't have these tight feedback loops. Maybe we can have tight feedback loops with respect to certain sub-questions. I can just measure how many academics replied to my email about artificial sentience, but how much does that actually tell you about the impact of this cause area? Maybe it does tell you something, so I'm not saying that this is a bad approach, but there are also limits to how far you can go in testing these things.
Jamie (01:03:48):
Yeah. I know you've done some work thinking about whether we need to kickstart a movement focusing specifically on artificial sentience. What's your current thinking on that? I know you were saying before that you think we should probably avoid some of the broader outreach types. Do you have any other thoughts on this broader question?
Tobias (01:04:06):
Yeah. I still think that's potentially quite leveraged and quite high impact if we can lay the groundwork for it and be among the first people to kickstart this movement, but I do think it needs more groundwork and perhaps can't really be started right away. There are some unresolved questions, like whether we should, for instance, integrate advocacy for artificial beings into the animal movement versus starting a more specific advocacy movement, and which one of these would be better. Likewise, there's the practical side of things: does it need to be a new project, or can it be integrated into existing organizations like Sentience Institute or Center for Reducing Suffering? And I guess it's also just the practical matter of there being sufficiently many people who are interested in doing that, who have enough drive to make it happen, and who don't have other things to do that they consider to be even more important. Such people don't really grow on trees, but if you're listening and you're one of them, please do get in touch.
Jamie (01:05:10):
Sounds good. Obviously it depends a little bit on how we're thinking of the term "movement." In the post introducing our interest in this area, called "The importance of artificial sentience," the suggestion I end with is that we should focus on field building, which is almost like a more conservative form of outreach: focusing, similar to what you're saying, on people with overlapping interests already, maybe researchers doing relevant work or people who are already doing comparable advocacy. Maybe that's the lower-risk form of outreach, and it's almost like the seed for a potential movement. Even if you wouldn't call it a movement already, the relevant experts have done some preliminary thinking on it, and there are potential people who could become involved if something happened next.
Tobias (01:06:07):
Yeah. I mean, of course it's debatable what you can call a movement, and maybe it's a little bit pretentious to think that we would be kick-starting that movement. But maybe a lot of the existing work on the topic is more philosophical or academic in nature and not so much focused on actual social change and action, other than some philosophical discussion.
Jamie (01:06:34):
Yeah, that's certainly my impression from the literature review I've done, which is currently just a pre-print on arXiv called "The moral consideration of artificial entities." The title somewhat stemmed from what we found, in the sense that the vast majority of things I identified were philosophical. There's a brief section at the end about relevant empirical research, and there are some adjacent fields of empirical research, like, to some extent, the whole field of human-robot interaction, which has got something to say on this topic. But in terms of thinking very concretely about what people think about artificial sentience, or what the predictors of concern are, or what works to encourage concern, even the research side is quite removed from the actual relevant questions. That said, there's already policy interest.
Jamie (01:07:28):
You know, there are already people talking about this, and it's obviously covered in various forms in science fiction, so it doesn't feel as distant as what I just said might imply. It feels like, as you say, there are opportunities for leverage, because this stuff is already happening whether we get involved or not. You mentioned just now that if people are interested in this, they should get in touch. Do you have thoughts about what people should do right now if they do think this seems like an important area they would like to get involved in, to work on this and increase moral consideration of artificial sentience?
Tobias (01:08:06):
Yeah. I mean, definitely one thing that I would almost always recommend is to read more of what has been written on the topic by people in EA. Then, I guess, the next step would be to actually reach out to the people who are working on this topic, such as Jamie or me.
Jamie (01:08:29):
Yeah, I agree. I do think there's scope for actually doing things now. It would be great if those people reached out to us so that we could discuss next steps. But I think, for example, if somebody worked for a research organization and was in a position to conduct relevant research, there's loads of stuff that people can get started on. We were talking about outreach, and very extensive or broad outreach is probably not a good idea for various reasons, but in certain contexts, like within the effective altruism community and, if appropriate, within animal advocacy, people can basically just start talking about this idea and mentioning it, and that could potentially help to find other people who are interested in this. So, yeah, I agree, but I think there are concrete things that people can do; it's not so amorphous. And of course there's always the option of donating as well, and things like that.
Jamie (01:09:12):
Cool. All right. Well, I wanted to finish with some discussion of career opportunities. We were just talking about concrete opportunities for artificial sentience, but for s-risks more broadly: obviously there's been a lot of discussion within the effective altruism community generally about career strategy and what sorts of things people can do if they want to maximize their positive impact. Do you think that prioritizing s-risks substantially changes these kinds of career strategy considerations? Are there things that become more or less important if you prioritize s-risk reduction?
Tobias (01:10:00):
Yeah. I mean, I guess there is a lot of overlap, and I would generally recommend 80,000 Hours and their materials on this. One difference perhaps is simply that the s-risks space is smaller. And one implication of that is that there's less specialization, perhaps more generalists, and more of a need for disentanglement research. So it's worth exploring whether or not that is something that one is interested in. Also, since there's a small number of organizations working on this, it's in a way quite easy to find out where to apply. If you want a chance of working on s-risks, I think the thing to do is to apply to these organizations and see what happens.
Jamie (01:10:43):
Yeah. So 80,000 Hours tend to divide things into various general categories. They've got research in relevant areas, which is obviously related to a lot of the things we've been talking about, and work at effective non-profits, which in the case of s-risks substantially overlaps. There's also a category, though, for government and policy in an area relevant to a top problem. What about that? Is that an opportunity? Is it too premature for people to go into policy careers if they're interested in reducing s-risks? Is there anything that can be done there?
Tobias (01:11:13):
No, I don't think that's premature, and I didn't mean to imply that the only thing you can do is research. That's the sort of thing that we're doing at the Center for Reducing Suffering, but I do think that there are lots of possible careers, possibly very impactful careers, outside of this EA research space, such as a policy career, and that could be quite important. It is, of course, not always entirely clear what you would be doing in that position, like what sort of policies you would advocate for that would reduce s-risks. But there are some things one could gesture at, such as maybe trying to reduce political polarization or increasing concern for all sentient beings; that might be feasible at this point. The more abstract argument is simply that if something important happens in the future, then it's important to have sufficiently many people in the right positions. One aspect of that is political influence and having the opportunity to make moral concerns heard in that way too. So I think having people go into policy careers would contribute a lot to that.
Jamie (01:12:25):
Yeah, definitely. Okay. I mentioned that research could potentially be conducted within nonprofits, but there's also obviously the other option of pursuing research within academia. I guess we touched on this before, about whether there's much overlap, and you said that there's lots of overlap and potentially it comes down to some of these other things as well, like what sorts of research you think are most needed. Do you have any further comment on this topic generally? Do you think there are additional pros and cons of academia versus nonprofit work if you want to focus on s-risks specifically, or is it just a similar trade-off to what other people interested in effective altruism might be going through?
Tobias (01:13:10):
It's definitely possible, I think, to have an academic career on topics that are very relevant to s-risks. Depending on what exactly you want to do, it might be somewhat difficult, though: you might not be able to work on the questions that you find most important, or doing so might not be ideal for your academic progression. So that might be difficult for some people, but then it depends on your own psychology, whether or not you would be willing to compromise on these things. It also always depends on your supervisor and all these things. But more often than not, it is possible to work on topics that are really important.
Jamie (01:13:49):
So working to reduce suffering risks seems generally more demotivating than striving to achieve a flourishing future. Have you found this, and how do you manage to stay motivated?
Tobias (01:13:59):
Yes. I don't really find it demotivating myself, but I think one needs to compartmentalize a bit and not constantly think about terrible suffering, because that's probably not healthy. Watching factory farm footage every day is likely to just lead to depression. But it is a balancing act, because you also don't want to abstract away so much that you stop caring and lose the motivation to do something about it. One way I think about it is that it is quite an incredible opportunity that we have to make so much of a difference. You might start with, "oh wow, I can actually save a child that would otherwise die," and then realize, "oh wow, I can actually save a thousand animal lives by going vegan." In a sense, it's the next level up after that to realize that our actions have the potential to help an incredibly large number of future beings, which is arguably more abstract, but also just as real, at least in expectation.
Jamie (01:15:02):
I agree. I think I might have gone too far down the path of abstracting it away, and I sometimes think of things in terms of that kind of opportunity and get excited about opportunities to do good. I don't want to generalize without having asked these questions of many people, but I do get a sense that a lot of people who work on topics like this share a sense of excitement about being able to make progress on things.
Tobias (01:15:28):
Yeah. And I mean, I share that sense, though in a way it's maybe not really the right way to think about it? But as long as it's enough to keep you going, maybe that's what matters.
Jamie (01:15:39):
Yeah, I agree. All right. Well, it's been great to have you back for the second episode, and thanks so much for joining me both times, Tobias.
Tobias (01:15:48):
Thank you.
Jamie (01:15:49):
Cool. Yeah. Any last thoughts, or anything on how people can get involved with CRS or work to reduce s-risks more generally?
Tobias (01:15:55):
I would recommend just reading up on our materials, and if you like what we are writing about, if you agree with our priority areas and would like to do this sort of work yourself, you can get in touch via our website.
Jamie (01:16:10):
Great. Thanks again.
Tobias (01:16:12):
Thank you.
Jamie (01:16:14):
Thanks for listening. I hope you enjoyed the episode. You can subscribe to the Sentience Institute podcast in iTunes, Stitcher, or other podcast apps.