The Tim Ferriss Show Transcripts: Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change (#612)

Please enjoy this transcript of my interview with Will MacAskill (@willmacaskill), an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. You can find my 2015 conversation with Will at tim.blog/will.

His new book is What We Owe the Future. It is blurbed by several guests of the podcast, including Sam Harris, who wrote, “No living philosopher has had a greater impact upon my ethics than Will MacAskill. . . . This is an altogether thrilling and necessary book.” 

Transcripts may contain a few typos. With many episodes lasting 2+ hours, it can be difficult to catch minor errors. Enjoy!

Listen to the episode on Apple Podcasts, Spotify, Overcast, Podcast Addict, Pocket Casts, Castbox, Google Podcasts, Stitcher, Amazon Music, or on your favorite podcast platform. You can watch the interview on YouTube here.


DUE TO SOME HEADACHES IN THE PAST, PLEASE NOTE LEGAL CONDITIONS:

Tim Ferriss owns the copyright in and to all content in and transcripts of The Tim Ferriss Show podcast, with all rights reserved, as well as his right of publicity.

WHAT YOU’RE WELCOME TO DO: You are welcome to share the below transcript (up to 500 words but not more) in media articles (e.g., The New York Times, LA Times, The Guardian), on your personal website, in a non-commercial article or blog post (e.g., Medium), and/or on a personal social media account for non-commercial purposes, provided that you include attribution to “The Tim Ferriss Show” and link back to the tim.blog/podcast URL. For the sake of clarity, media outlets with advertising models are permitted to use excerpts from the transcript per the above.

WHAT IS NOT ALLOWED: No one is authorized to copy any portion of the podcast content or use Tim Ferriss’ name, image or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books, book summaries or synopses, or on a commercial website or social media site (e.g., Facebook, Twitter, Instagram, etc.) that offers or promotes your or another’s products or services. For the sake of clarity, media outlets are permitted to use photos of Tim Ferriss from the media room on tim.blog or (obviously) license photos of Tim Ferriss from Getty Images, etc.

Tim Ferriss: Hello, boys and girls, ladies and germs, this is Tim Ferriss. Welcome to another episode of the Tim Ferriss Show. My guest today is William MacAskill. That’s M-A-C-A-S-K-I-L-L. You can find him on Twitter at @willmacaskill. Will is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 under 30 social entrepreneur, he also co-founded the nonprofits Giving What We Can, the Centre for Effective Altruism, and the Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. You can find my 2015 conversation with Will at tim.blog/will. Just a quick side note, we probably won’t spend too much time on this, but in that 2015 conversation we talked about existential risk and the number one highlight was pathogens. Although we didn’t use the word pandemic, certainly that was perhaps a prescient discussion based on the type of research, the many types of research that Will does.

His new book is What We Owe the Future. It is blurbed by several guests of this podcast, including neuroscientist and author Sam Harris, who wrote, “No living philosopher has had a greater impact upon my ethics than Will MacAskill. … This is an altogether thrilling and necessary book.” You can find him online at williammacaskill.com. Will, nice to see you again. Thanks for making the time.

Will MacAskill: Thanks for having me back on. It’s a delight.

Tim Ferriss: And I thought we would start with some, say, warm-up questions to get people right into some details of how you think, the information you consume, and so on and so forth. So we’re going to begin with a few questions I often reserve for the end of conversations, and we covered some of the other rapid fire questions in the last conversation. For people who want a lot on your bio, how you ended up being the youngest associate professor of philosophy in the world at the time of your appointment, and so on, they can listen to our first conversation. But we spoke about a few books last time, and I’d be curious, what is the book or what are the books that you have given most as a gift and why? Or what are some books that have had a great influence on you? I know we talked already about Practical Ethics by Peter Singer and then Superintelligence by Nick Bostrom last time, but do any other books come to mind when I ask that question?

Will MacAskill: Yeah. So here are a couple. One is The Precipice by my colleague, Toby Ord, who I co-founded Giving What We Can with back in 2009, and it’s on the topic of existential risks. So I see it as a complement to my book What We Owe the Future, and it details in kind of quite beautiful prose and also painstaking detail some of the risks that we face as a civilization from the familiar asteroids to the less familiar super volcanoes, and to the totally terrifying, which I also discuss in the book and discuss how we might handle, like artificial intelligence and engineered pathogens and engineered pandemics. And it also just talks about what we can do about them as well, and so I think it’s just absolutely necessary as a read. We’ll talk, I guess, a bunch about some of those topics as we get into my work too.

So I have another kind of set of books, which are quite different, but they’ve had some of the biggest impact on just the background of my thinking over the last few years in very subtle ways. And that’s Joe Henrich’s books, The Secret Of Our Success and The WEIRDest People in the World. And Joe Henrich is a kind of quantitative anthropologist at Harvard and his first book is just, why are humans the most powerful and ecologically dominant species on the planet? And people often say, “Oh, it’s our big brains.” And he’s like, “No.” Our brains are several times the size of a chimpanzee’s brain, but that’s not the distinctive thing. The distinctive thing is that we work together, essentially. We’re capable of cumulative cultural evolution, where I can learn something, and then my children will pick it up from me, even if they don’t really understand why I’m doing it. And that means that the way humans function, it’s not like a single brain that’s three times the size of a chimpanzee, it’s tens of thousands of brains, all working in concert, and now millions of brains over many generations. And that’s why there’s such a big gap between chimpanzee ability or intelligence and human intelligence, where it’s not a scale up of three X, it’s a scale up of 300,000.

Tim Ferriss: The hive mind of hominids.

Will MacAskill: Basically, that’s exactly right. So, on this perspective, humans are just another species that are weird, and not hairy, and particularly sweaty, and good at long-distance running. Aristotle commented that humans are a rational animal and that’s what made them distinct from other animals, whereas actually we’re just very sweaty and that’s one of our most distinctive characteristics. And I like to think that I am therefore the most human of humans because I’m the sweatiest person I’ve met. So here’s this book, and that alone really blew my mind; it really made a big difference to how I understand humans.

And he has this other book, The WEIRDest People in the World, which is about the psychology, in particular, of weird people, Western-educated, industrialized, rich, and democratic, which are the subject of almost all psychology experiments. But they’re not representative at all of most cultures, in fact, they’re very unusual among most cultures, much more individualistic, much more willing to challenge authority, even perceive the world in slightly different ways. And the overall picture you get from these two books is an understanding of human behavior that’s very different from the kind of economic understanding of human behavior, that we’re all these self-interested agents going around, kind of maximizing profit for ourselves. Whereas on this vision it’s like, no, we are these cultural beings. We have a vision for the world and we go and try and put that vision into the world. And that’s what the kind of big fights are about. And I think it has a much better explanation of history. Anyway, I’ll stop there.

Tim Ferriss: When you said quantitative, I think you said quantitative anthropologist. Am I hearing that correctly? 

Will MacAskill: Yeah, that’s correct.

Tim Ferriss: What is a quantitative anthropologist? I know those two words separately and I can pretend like I understand what those mean together, but what does a quantitative anthropologist do?

Will MacAskill: So you might know kind of evolutionary biology has these formal models of how kind of genes evolve over time. It’s hard to make predictions within this field, but at least you have these precise, formal methods that you can start to kind of understand what’s going on in terms of how organisms evolve. Now it turns out you can do the same thing, but applied to cultures. Dawkins made this word “meme” very famous and that kind of gets across the idea, although it’s not quite right because it’s not like there’s a single divisible unit of culture. But, nonetheless, you can think of different cultures kind of like different species and some are more fit than others, so some are going to win out over time. And you can apply the same sort of formal methods that evolutionary biologists use to study evolution of genetics to the evolution of cultures as well. And Joe Henrich does that at least a little bit.
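[Editorial aside: here is a minimal sketch of the kind of formal model Will is gesturing at: discrete-time replicator dynamics, in which cultural variants spread in proportion to their relative transmission advantage. The variants, starting shares, and fitness values below are made-up numbers for illustration, not anything from Henrich’s work.]

```python
# A minimal sketch of a formal cultural-evolution model: discrete-time
# replicator dynamics. All names and numbers are illustrative assumptions.

def replicator_step(shares, fitness):
    """One 'generation' of transmission: variants with above-average payoff
    gain population share; variants below average lose it."""
    mean_fitness = sum(s * f for s, f in zip(shares, fitness))
    return [s * f / mean_fitness for s, f in zip(shares, fitness)]

shares = [0.70, 0.25, 0.05]    # current share of each cultural variant
fitness = [1.00, 1.05, 1.20]   # relative transmission advantage of each variant

for _ in range(100):           # simulate 100 generations
    shares = replicator_step(shares, fitness)

print([round(s, 3) for s in shares])  # the highest-fitness variant comes to dominate
```

Even a toy model like this makes the point: a small, persistent edge in how readily a practice gets copied compounds into near-total dominance over enough generations.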

Tim Ferriss: All right, well, I’ll have to take a look. Now, we were talking — Well, we, using the royal we — You were talking about, I suppose I mentioned, in passing, existential risks and threats, and I have a number of questions related to this, not surprisingly, but I want to touch upon first, perhaps an unexpected insertion. I have in my notes here, Crime and Punishment by Dostoevsky as a book that was important to you. And I would like to know why that is the case.

Will MacAskill: That book is actually what got me into philosophy, originally. Back when I was about 15 I read it, and I was, at the time, very interested in literature; I wanted to be a poet and an author. And it was via that that I learned about this word philosophy and I realized that like, oh, actually you can just tackle the big ideas directly. You don’t need to go via fiction.

But I was also particularly interested, at the time, in existentialist philosophy. And this is something that honestly, like I kind of still bear with me. I’m kind of a bit unusual in that I often think to myself, “Could I justify my life now to my 15-year-old self?” And if the answer is no, then I’m a bit like, “Oh, what are you doing? You’re not living up to what earlier Will would’ve wanted for present Will.” And the key thing I think for 15-year-old Will, who was inspired by existentialism, was living an authentic life. And I still find that very liberating and empowering and inspiring.

So some of the things I do, so for example, I give away most of my income, which is like a very unusual thing to do. And you might think, oh, that’s a sacrifice that’s making my life worse, but actually I find it kind of empowering because it’s like, I am making an autonomous decision. I am not merely kind of following the dictates of what social convention is telling me to do, but I’m reasoning about things from first principles and then making a decision that’s genuinely, authentically mine. And that was something I hadn’t particularly pegged to kind of acting morally when I was 15, although to some extent, but that was something that really moved me then and, honestly, continues to move me today.

Tim Ferriss: How would you just for, I often say, “for the listeners out there who may not be familiar,” but if I’m being honest with myself, I have not studied existentialism and I hear certain names associated with it so I can kind of fake it until I make it and give it, create the illusion. I’ll be like, “Ah, Kierkegaard, I think, maybe this person, that person.” But what is existentialism as it is portrayed in Crime and Punishment, or conveyed?

Will MacAskill: One of the things I liked about Crime and Punishment and Dostoevsky’s work, in particular, is that it kind of, it at least is wrestling with existentialism, and that word can get used in various ways. But one way of thinking about it is just the world as it is, has no intrinsic meaning and yet we are placed into it and have to make decisions, and that’s this absurd position to be in, and you can create your own meaning out of that through radically free acts, authentic, genuine acts. And Dostoevsky, in his work, kind of wrestles between three positions, I think. One is this existentialist position. A second is just pure nihilism, which is just actually literally, if you take it seriously and there’s no God, then everything is permitted. There’s no reason to do anything, not even reason created from yourself. And then third is this religious position, which I think he actually ultimately endorses and it’s almost like nihilism as a proof, like the rejection of nihilism therefore guarantees that you should be religious.

Tim Ferriss: Q.E.D., God.

Will MacAskill: Yeah, Exactly. Yeah. Well, it’s like life is meaningless unless God exists. And you know, I’m now describing it slightly in Pascalian terms, but you may as well act as if there is a God that is giving meaning to life.

Tim Ferriss: And for, we’re not going to spend a whole bunch of time on it now, but in our last conversation we talked about Pascal’s wager but also Pascal’s mugging, if I remember correctly, something along those lines. So we won’t take a side alley down into Pascal’s mugging just now, but I said I had two things I wanted to ask you about. The first was Crime and Punishment, which I think we’ve covered. The second, before we jump into our longer conversation, which will go all over the place, and I may ask still some of the shorter questions — When people hear existential threats, when they hear super volcanoes, AI, manmade pathogens, et cetera, I think that there will likely be an apprehension, perhaps a little seizure of the breath for some people listening, who might think to themselves, “My God, this is just going to be audio doomscrolling. This is just — I’m going to come away from this conversation with higher blood pressure, more cortisol…”

And my impression of you in the time that we’ve spent together is that you are not nihilistic. You are not apathetic. You are not pessimistic. You’re quite the opposite of all of those things in some respects. How do you do that? And is that just Will out of the box and that’s just how you came programmed? Is there more to it? And this is I think a crux question because I don’t see in the people, say, in my audience, including those who are very competent, effective action if they don’t have some degree of optimism or belief that they can exert change. So could you just speak to that? Because I know I also succumb to just getting waterboarded with bad news all day from around the world. I’m like, “I can’t do, I can’t, I cannot put a salve onto all of this for all of these people,” and it can be overwhelming. So how would you respond to that?

Will MacAskill: Great. I think there are two things that motivate this. One is just the desire to actually make the world better. And then second, I’ll call low standards. So on the first side — I mean, when I first, so, age 21 and I’m like, man, I’m about to really start my life. I’m trying to look for, like I want to act morally, I’m trying to look for different causes. I bounced into a lot of the sorts of classic causes that you’d find, for someone socially motivated, on a college campus, like vegetarian society, left-wing politics, climate change stuff. And I found there was very little in the way of action. There was an awful lot of guilt and an awful lot of talking about the problems, but not that much in terms of like, “Hey, here are the solutions. This is how you can actually make the world better. And this is what we should do.”

But if you actually care about wanting to make the world better and that’s the key motivation, the size of a problem and really thinking about suffering, I mean, it can be important, especially if it’s motivating you. But the ultimate thing is just, what do you do? Something could be the worst problem in the world, but if there’s nothing you can do, then it’s just not relevant for the purpose of action. And that therefore really makes me think, in the first instance, always about, “Okay, well, what’s the difference we can make? Not like how scary are things or how bad are things, but instead how much of a difference can we make?” And there it’s very positive.

So in the last podcast, we talked a lot about global health and development and what’s the difference you can make there. Well, if you are a middle-class member of a rich country, it’s on the order of saving dozens, hundreds, maybe even thousands of lives over the course of your life, if you put your mind to it. That’s huge. Now we’re talking about existential risks and the long-term future of humanity, what’s the difference you can make? You can play a part in being pivotal in putting humanity onto a better trajectory for, not just centuries, but for thousands, millions, or even billions of years. The amount of good that you can do is truly enormous. You can have cosmic significance and that’s pretty inspiring.

And so when you think about the difference you can make, rather than just focusing on the magnitude of the problems, I think there’s every reason for optimism. And then the second aspect I’ve said was low standards, which is just, what’s a world that you should be sad about? What’s a world you should be happy with? Well, in my own case, I think like, look, if I came into the world and when I leave it, the world is neither better nor worse, that’s like zero. I should be indifferent about that. If I can make it a bit better, as in virtue of my existence, hey, that’s pretty good. And the more good I can do on top of that, the better. And I think I have made it much better. I’m not zero, I’m positive. And so all of the additional good that I potentially do feels like a bonus.

And so similarly with humanity, when I look to the future, what’s the level at which I’m like, “Ah, it’s indifferent.” That’s where just the amount of happiness and suffering in the future kind of cancel out. And relative to that, I think the future’s going to be amazing. Already I think the world today is much better than if it didn’t exist. And I think it’s going to be a lot better in the future, like even just the progress we’ve made over the last few hundred years, people today have far, far better lives. If you extrapolate that out just a few hundred years more let alone thousands of years, then there’s at least a good chance that we could have a future where everyone lives, not just as well as the best people alive today, but maybe tens, hundreds, thousands of times better.

Tim Ferriss: Yeah. I mean, kings a few hundred years ago didn’t have running water. Or air conditioning.

Will MacAskill: They didn’t have anesthetic.

Tim Ferriss: Yeah. No antibiotics.

Will MacAskill: If they were gay, they had to keep it secret. They could barely travel.

Tim Ferriss: So lots of things we easily take for granted, which we can come back to, except it may be related. But why don’t we take 60 to 120 seconds just for you to explain effective altruism, your name is often associated, just so we have a definition of terms and people have some idea of the scope and meaning of effective altruism, since you’re considered one of the creators or co-creators of this entire movement. If you wouldn’t mind just explaining that briefly and that way people will have at least that as a landmark as we go forward.

Will MacAskill: Effective altruism is a philosophy and a community that’s about trying to figure out how can we do as much good as possible with the time and money we have and then taking action on that basis. So putting those ideas into practice to actually try to make the world better as effectively as possible, whether that’s through our donations, with our careers, with how we vote, with our consumption decisions, just with our entire lives.

Tim Ferriss: What have been some of the outcomes of that?

Will MacAskill: I’ve been promoting these ideas along with others for over 12 years now. We’ve moved well over a billion dollars to the most effective causes. So that means if we take just one charity that we’ve raised money for, Against Malaria Foundation, we’ve protected over 400 million people, mainly children, from malaria and statistically, that means we’ve saved about a hundred thousand lives or maybe a little more, which is the size of a small town about the size of Oxford. And that’s just one charity. There are several more within global health and development.

I think in terms of other cause areas that we’ve focused on, within animal health and welfare, hundreds of millions of hens are no longer in cages because of corporate cage-free campaigns that we’ve helped to fund. And then within the field of existential risks, it’s not as easy to say, “Oh, we’ve done this concrete thing. This thing would’ve killed us all, but we avoided it.” But we have helped make AI safety a much more mainstream field of research. People are taking the potential benefits but also the risks from AI much more seriously than they were. We have also invested a lot in certain pandemic preparedness measures. Again, it’s kind of still early stages, but some of the technology there, I think, has really promising potential to at least help make sure that COVID-19 is the last pandemic we ever have.

Tim Ferriss: One of many things I appreciate about you and also, broadly speaking, many people in the effective altruism community/movement is the taking of a systematic approach to not just defining, but questioning assumptions and quantitatively looking at how you can do good, not just feel good, if that makes sense. And it seems obvious to anyone who’s in the community, but the vast majority of philanthropy or charity, broadly speaking, is done without that type of approach, from what I can tell, and it’s really worth taking a closer look for those people listening. Are there just a few URLs you’d like to mention for people who’d like to dig into that and then we can move into some of the more current questions?

Will MacAskill: If you’re interested in how to use your career to make the world better, then 80000hours.org is a terrific place to go. I’m a co-founder of that organization. It gives in-depth career advice and one-on-one career coaching as well. If you’re interested in donating some of your money, then givingwhatwecan.org encourages people to make a giving pledge, typically 10 percent of one’s income or more. It’s a great way to live. If you’re interested in donating to effective charities, then givewell.org is the single best place for donating to global health and development charities, that’s GiveWell.org. There’s also the Effective Altruism Funds, or EA Funds, which allow you to donate within animal welfare and existential risks, and promotion of these ideas as well.

Tim Ferriss: All right, few more calisthenics, then we’re going to go into the heavy lifting, the max squats of longtermism. All right, here we go. In the last, say five years, you can pick the timeframe, but recent history, what new belief, behavior, or habit has most improved your life?

Will MacAskill: I think the biggest one of all, and this was really big while writing the book, which was this enormous challenge and my main focus for two years over the course of the pandemic, was evening check-ins with an employee of mine who also functioned a bit like a productivity coach. So every evening I would set deadlines for the next day, both input and output. So input would be how many hours of draft writing I would do, where going to the bathroom did not count, and a really big day would be six hours. Sometimes, very occasionally, I’d get more than that. Also, output goals as well. So I’d say, “I will have drafted this section or these sections,” or, “I will have done such and such.” I would also normally make some other commitments as well, such as how much time do I spend looking at Reddit on my phone, how much caffeine am I allowed to drink, do I exercise, things like this.

Laura Pomarius, who is doing it, is wonderful and the nicest person ever, and she just never beat me up about this, but I would beat myself up, and it was incredibly effective at making sure I was just actually doing things, because I, like many others, find writing hard. It’s hard to get motivated. It’s hard to keep going, and sometimes, I don’t know, I’d have gotten drunk the night before, let’s say, and it was a Sunday, and normally, it would be a write-off, the whole day, but I think, “Oh, no. It’d just be so embarrassing at 7:00 p.m. to have to tell Laura, ‘Yeah, I didn’t do any work because I got smashed.'” So, instead, I would feel hungover, and I would just keep typing away. That was just huge. I mean, I think it increased my productivity — I don’t know. It feels like 20 percent or 25 percent or something, just from these 10-minute check-ins every day.

Tim Ferriss: So these were 10-minute check-ins seven days a week? What was the cadence?

Will MacAskill: I was working six days a week, but if she was doing something else on the weekend, we wouldn’t check in.

Tim Ferriss: So the format would be — walk me through. 10 minutes would be — the first five minutes, “Here’s how I measured up to what I committed, and here’s what I’m doing next?”

Will MacAskill: Exactly. So you have a view of the day. Did I hit my input goal, my output goal? How much caffeine did I drink? Did I exercise? Then also, was I getting any migraines or back pain, which are two ongoing issues for my productivity, and then next would be a discussion of what I would try to do the following day. Interestingly, you might think of a productivity coach as someone who’s really putting your nose to the grindstone, whereas with Laura, it was kind of the opposite because my problem is that I beat myself up too much. So we would have a conversation — 

Tim Ferriss: So she’s luring E.T. out of the closet with the Reese’s Pieces candy?

Will MacAskill: Exactly. Yeah. So I would be like, “Oh, I got so little done today, so I’m going to have to just have a 12-hour day tomorrow,” or something, or, “I’ll work through the night,” or something like that, and she’s like, “That doesn’t make any sense. We’ve tracked this before, and when you try and do this, maybe you get an hour of extra work, but you feel horrible for days afterwards.” So she would be very good at countering bullshit that my brain would be saying, basically.

Tim Ferriss: So a couple things. Caffeine, what were your parameters on caffeine? What were the limitations or minimums, I don’t know how you said it, on caffeine, and then how did you choose this employee specifically for this, and why?

Will MacAskill: Caffeine, I think a big thing is just if I drink too much, I’m likely to get a migraine, so I set my limit at three espressos worth, so about 180 milligrams of caffeine, and I’m very sensitive, so it’s like — 

Tim Ferriss: 180 is legitimate for a sensitive person.

Will MacAskill: Yeah. Yeah, exactly. So that’s the max that I do, whereas a double espresso is fine, but then the shading in between I’ll be very cautious about. Then how did I choose this person? I think it was a very subtle thing, the kind of rapport or personal fit you have with someone who can be a good coach, where she knew me well enough that she knew the ways to push me around and was — yeah. The combination of, maybe I’d call it friendly pushiness or something, was perfect, a nice fit. It can be very easy to go wrong on either side of that line.

Tim Ferriss: Sounds like I need an evening check-in. All right. Who is my victim going to be? All right. Evening check-ins.

Will MacAskill: Maybe we can try it.

Tim Ferriss: Yeah. So I’ll give you — “Will, I know, I know. I know it’s 4:00 in the morning, but I had to call you for my evening check-in.” We’re in different timezones, for people who may not have picked up on the — that is not a New Jersey accent that Will has. Okay. Comment sidebar on low back pain, I know this came up in our last conversation. Have you not found anything to help? I may have some suggestions if you would like suggestions, but have you found anything to help?

Will MacAskill: Actually, I’ve almost completely fixed it. So it was just — I mean, I was working. I was just sitting in a chair, especially pandemic, and writing a book for eight hours a day, but it was actually only one period that I started getting lower back pain. So I remember in our conversation you had recommended me these boots so that I could hang upside-down, and I did buy them, and I confess I never used them, so I’m sorry, Tim.

Tim Ferriss: Failure, adherence failure. No, that’s a failure of my recommendation if it’s not going to be used.

Will MacAskill: It is failure.

Tim Ferriss: It doesn’t make any sense for me to recommend it.

Will MacAskill: But what I did do, in the end I just developed my own workout routine where — I got advice from physios and so on. I talked to loads of doctors. In general, people just aren’t really engaging with what your problems are, and self-experimentation I think was just better. So now I have this — it’s also like the other thing is just all of this takes loads of time, and if you’re a time-pressed individual — I mean, firstly, the advice is often geared towards old people, so it’s very easy stretches or basic work, movement that most people aren’t doing, and then secondly, it’s like, man, you want to do all this, it’s two hours or something. How can you do this more efficiently? So I developed my own routine, which involves standing on a BOSU ball, so all on a BOSU ball.

Tim Ferriss: Okay.

Will MacAskill: I have two free weights. I do a squat. I’m sitting in squat position as the resting position. That’s very good because it stretches your hip flexors.

Tim Ferriss: And for those people who can’t see Will, he’s got his hands in front of his chest — 

Will MacAskill: Yeah. Imagine — 

Tim Ferriss: — looks kind of like a prairie dog, but really, I think what that symbolizes is he has the dumbbells in front of his chest like a goblet squat, if people know what that is.

Will MacAskill: Yeah, exactly, with my legs wide, elbows in between your knees so that your legs are splayed out like that, and you’ll feel a stretch on your hip flexors. So cultures that squat to sit actually experience lower rates of back pain, so that was the inspiration there. Then from there, standing up, squat, do a bicep curl up into a shoulder press, go down, then deadlift going into an upright row. That’s on the BOSU ball. The thought here is that you’re strengthening your entire anterior pelvic chain. So my hypothesis for why I was getting this was that I was an idiot young male who was like, “Why would you work anything out apart from your beach muscles? What would be the point of that?” and that majorly distorted my posture.

Then I would do that, so one of them every 20 seconds in two sets of 10 minutes, and then that combined also with core work, so plank in particular, really just, I think, sorted things out because it’s all about just — I just had bad posture for 25 years, made worse by very poor focus at the gym, and so it was this long process of reconfiguring your body so that it makes more sense, and in particular, as we talked about, I had anterior pelvic tilt, so my gut stuck out. My pelvis was too far forward. So then it’s like your glutes taming that back, and then stretching out your hip flexors. Oh, and I invented my own stretch as well.

So, for the listeners who don’t know, I was previously married, and I took a different name. I took my wife’s grandmother’s maiden name. So my name wasn’t always MacAskill. It used to be Crouch, and so I named this stretch the Will Crouch in honor of my former self, but it involves standing and hooking your — you stand up, you hook your foot into your two hands, and then press out, extend your leg, but pushing against your two hands, and that stretches out this muscle that goes all the way from your pelvis up your back, and I’ve not found any other stretch that stretches that particular muscle, and that was the one that was really causing all the pain.

Tim Ferriss: So you do this standing?

Will MacAskill: The muscle is the longissimus thoracis. Yes.

Tim Ferriss: You do this standing?

Will MacAskill: Yeah, standing. That’s right.

Tim Ferriss: It’s like a Kid ‘n Play dance move. Okay. So people may just — I’ll put my liability hat on. I’ll just say maybe start on the ground to try this one so you don’t get your foot stuck and topple over like an army figurine onto your head. But I guess I can see how that would work.

Will MacAskill: Yeah. Anyone would say that I’m not a professional workout coach.

Tim Ferriss: I can’t wait for the Will Crouch YouTube instructional fitness series.

Will MacAskill: So I did take on this role in the early stages of the pandemic. At the house I was in, I would go outside every lunch at 1:00 p.m. and put on my best Scottish accent, and I’d be like, “Right, you wee pricks. Get on the floor and give me 20.” Very effective. Never made it to YouTube, though.

Tim Ferriss: Well, it’s never too late. All right. A couple of things real quick. The first is these exercises, did you do them every day in the morning? Did you do them midday? How many days a week, at what time of day?

Will MacAskill: Yeah, great. So I almost always work out just after lunch. People always complain to me that it’s like, “Oh, you’ll get a sore stomach,” or something. I’m like, “But I don’t. It never happens.” But I deliberately time it because I have a real energy dip just after lunch, and so doing something that’s just not work makes a ton of sense.

Tim Ferriss: Yeah.

Will MacAskill: I sometimes will do it just before lunch.

Tim Ferriss: Plus after sitting for a few hours, you can break up the two — 

Will MacAskill: Exactly.

Tim Ferriss: — marathons of sitting.

Will MacAskill: Yeah, exactly.

Tim Ferriss: I’ll make one other recommendation for folks who may also suffer from occasional or chronic low back tightness, which has been an issue for me also if I sit a lot. It ends up affecting my sleep, most significantly, and can cause that type of anterior pelvic tilt and lordosis. So, if your gut is sticking out and you look like you’re fat or pregnant, even though you are not, perhaps that means your pelvis is pouring forward. So if you think about your pelvis as a goblet or a cup full of water, if you’re pouring that water out the front, you have anterior pelvic tilt, and one of the causes of that or contributing factors can be a really tight iliopsoas or iliacus that then in some fashion connects to the lower back, the lumbar. So you get this incredible tightness/pain. For me, it can cause tossing and turning at night and really affect my sleep.

The device that was recommended to me a few times before I finally bit the bullet and got it, it was something called the Pso-Rite, P-S-O, hyphen, R-I-T-E. It is the most expensive piece of plastic you will ever buy, but worth it at something like $50 to $70 for self-release of the psoas, which is incredibly difficult to achieve, I find, incredibly difficult to achieve by yourself otherwise, and a lot of soft tissue therapists are not particularly good at helping with it, nor is it practical, really, to necessarily have that type of work done every day, even if you could. So the Pso-Rite is helpful. All right. So let’s move from personal longtermism, making sure that you’re able to function and not be decrepit when you’re 45, into the broader sense and discussion of longtermism. What is longtermism, and why did you write this book?

Will MacAskill: Well, longtermism is about three things. It’s about taking seriously the sheer scale of the future that might be ahead of us, and just how high the stakes are in anything that could shape that future. It’s then about trying to assess what are the events that might occur in our lifetimes that really would have impacts, not just for the present generation, but that could potentially shape the entire course of humanity’s future, and then third, just trying to think about, okay, how do we ensure that we can take actions to put humanity onto the right path? I think you’re exactly right to talk about personal longtermism and the analogy there because in the book, in What We Owe the Future, I talk about the analogy between the present world and humanity and an imprudent teenager, a reckless teenager where there are things — What are the really high-stakes decisions that a teenager makes? It’s not what you do at the weekend. Instead, it’s the decisions that would impact the entire course of your life.

So in the book I tell a story where, as quite a reckless teenager, I nearly killed myself climbing up a building. That was one of the biggest decisions, dumbest decisions I ever made, because if I had died, then it would’ve been 60, 70 years of life that I would’ve lost. In the same way, if humanity dies now, if we cause our own extinction or the unrecoverable end of civilization, such as by a worst-case pandemic, then we’re losing, well, not just 70 years of life. It’s thousands, millions, even billions of years of future civilization. So, similarly, if I made decisions as a teenager that affected the whole course of my life, like whether to become a poet or a philosopher or something, I could’ve become a doctor, and similarly, I think in the coming century, in a lifetime, humanity potentially makes decisions about how is future society structured, what are the values we live by, is society a liberal democracy around the world, or is it a totalitarian state, and how do we handle technologies like AI that I think could impact the very, very long run?

Tim Ferriss: So I want to read just a paragraph that you sent me, which I found thought provoking because it’s a framing that I had not heard before, and here it goes. 

“Imagine the entire human story from the first Homo sapiens of East Africa to our eventual end represented as a single life. Where in that life do we stand? We can’t know for sure, but suppose humanity lasted only a 10th as long as the typical mammalian species. Even then, more than 99 percent of this life would lie ahead. On the scale of a typical life, humanity today would just be six months old, but we might well survive for even longer, for hundreds of millions of years, until the Earth is no longer habitable or far beyond. In that case, humanity is experiencing its first blinking moments out of the womb.”

I appreciated this framing because my feeling, at least with my audience of listeners, is that there’s a small percentage who are rushing headlong into battle with some vision of longtermism and feel committed to fighting the good fight, and a nontrivial percentage have decided it’s too late. They have decided that the end is nigh. We are the frog in the water slowly heating that will be boiling before we know it, and I find this at least, whether we put aside for a second how people might find fault with it or pick at it, a useful counter-frame just to even sit with for a few minutes. Why do you think it’s important to at least consider that something like this is plausible? Maybe it’s not 90 percent likely, but let’s just say it’s even 10 percent, 20 percent likely.

Will MacAskill: Well, it’s so important just because future generations matter. Future people matter, and whatever you value, whether that’s well-being or happiness, or maybe it’s accomplishment, maybe it’s great works of art, maybe it’s scientific discovery, almost all of whatever you value would be in the future rather than now because the future just could be vast, indeed, where again, if you look at what has been accomplished since the dawn of humanity, well, the dawn of humanity was hundreds of thousands of years ago, agriculture was 12,000 years ago, the Industrial Revolution was 250 years ago, and yet even on the scale of a typical mammal species we have 700,000 years to go. Now, we’re not a typical mammal species. We could last only a few centuries. We could last 10 years if we really do ourselves in in the short term, but we could last much longer, and that just means that all of what we might achieve, all of the good things that humanity could produce, they’re basically in the future, and that’s really worth thinking about, taking seriously, and trying to protect and promote.

Tim Ferriss: One thing that you and I were chatting a bit about, I brought it up before we started talking, is the question of if it is possible to make, let’s just call it altruism, or in this case longtermism, investing in a future we will not necessarily, most likely, see ourselves. Can you make that self-interest, or how do you position it such that it appeals to the greatest number of people possible, since our collective destiny depends on some critical mass of people taking it seriously? It probably isn’t one person. We’re not going to get, say, nine billion people, so how many do we hope to embrace this philosophy, and is it possible to position it as self-interest?

This is going to be a bit of a ramble, so I apologize in advance, but when you were talking about Dostoevsky and nihilism almost as a proof, and him ultimately landing on God, you kind of need something resembling God to make sense of this sea of uncertainty so you can maybe stabilize yourself and feel a sense of meaning, it brought to mind something I read very recently, and I apologize. This is, again, going to be a bit of a meander, but this is something that Russ Roberts, famous for the EconTalk podcast, included in an article he wrote called “My Twelve Rules for Life.” Now, he is, I’m not sure this is the best descriptor, but culturally, and I would think religiously, Jewish. So he has that as a latticework of sorts, but number two in his “Twelve Rules for Life” was “Find something healthy to worship,” and I’m just going to take a second to read this.

He quoted David Foster Wallace, and I’m going to tie this into what I just said in a second. 

“Because here’s something else that’s weird but true: in the day-to-day trenches of adult life, there is actually no such thing as atheism. There is no such thing as not worshipping. Everybody worships. The only choice we get is what to worship. And the compelling reason for maybe choosing some sort of god or spiritual-type thing to worship — be it JC or Allah, be it YHWH or the Wiccan Mother Goddess, or the Four Noble Truths, or some inviolable set of ethical principles — is that pretty much anything else you worship will eat you alive. If you worship money and things, if they are where you tap real meaning in life, then you will never have enough, never feel you have enough. 

“It’s the truth. Worship your body and beauty and sexual allure and you will always feel ugly. And when time and age start showing, you will die a million deaths before they finally plant you. On one level, we all know this stuff already. It’s been codified as myths, proverbs, clichés, epigrams, parables; the skeleton of every great story. The whole trick is keeping the truth up front in daily consciousness.”

Okay. Then dot, dot, dot, but the most important thing to remember is not to worship yourself, and not as easy as it sounds. So I’m wondering if longtermism, in a sense, doesn’t need to be spun to envelop self-interest if it is basically something to worship that gives you purpose when there is so much uncertainty and chaos and entropy around us. Anyway, long TED Talk. Thank you for coming, but what are your thoughts on any of that? The overarching question is, how do we make longtermism catch, to have some critical mass of people who really embrace it?

Will MacAskill: I think there’s a really important insight there. Actually, one made by John Stuart Mill in a speech to Parliament at the end of the 19th century, and he asked this question, “What should we do for posterity? After all, what has posterity ever done for us?” Then, actually, he makes the argument, “Posterity’s done a lot of things for us because the projects we have only have meaning insofar as we think that they might contribute to this kind of relay race among the generations.”

So here’s a thought experiment. There’s this film, Children of Men, and in it people just aren’t able to reproduce, and so it’s not that anyone dies. There’s no catastrophe that kills everybody, but there’s no future of human civilization. How would that change your life? I think for many, many people and with many, many projects, it would just rob those projects of meaning. I certainly wouldn’t be nearly as interested in intellectual pursuits or trying to do good things, and so on. I mean, maybe I would to some extent, but for a lot of things it seems like, oh, they have meaning because — Take scientific inquiry. There is this semi-built cathedral of knowledge that I’ve inherited from all of my ancestors that has then been passed to us, and it is incomplete.

So we’ve got general relativity and we’ve got quantum theory, and they’re amazing, but we also know they’re incomplete, and maybe we can work harder and see farther and build the cathedral a little higher. But if it’s like, no, actually, it’ll just get torn up, it’s kind of like, oh, you’re painting an artwork, and you can add to the painting a bit, and it’s going to just go in the shredder the day afterwards, you’re not going to be very motivated to do it. So one thing I think that a lot of people find motivating is this thought that you’re part of this grand project, much, much grander than yourself, of trying to build a good and flourishing society over the course of not just centuries, but thousands of years, and that’s one way in which our lives have meaning.

Tim Ferriss: So what do you hope the effect will be of people or on people who read What We Owe the Future? What are you hoping some of the things will be that they take from that?

Will MacAskill: The number one thing is just a worldview that’s what my colleague Nick Bostrom calls “Getting the big picture roughly right.” So there are just so many problems that the world faces today, so many things we could be focusing on and paying attention to, but there’s this question, just, well, what’s most important? What should be taking most of our attention? The ideas in the book I hope give a partial answer, which is, well, the things that are most important are those that really shape the long-term future of the human project, and that really narrows things down, I think. So that’s a kind of broad kind of worldview. More specifically, though, I would like it to be something that guides the decisions people make over the course of their lives. So I think the biggest decisions people make are what career they pursue. So do you go and become a management consultant or a financier and make money and live in the suburbs, or do you instead pursue a life that’s really trying to make the world better? And if so, then what problems are you focusing on? Where it seems to me some of the bigger problems are the development of very advanced — or, the biggest issues or events that will occur in a lifetime — are the development of advanced artificial intelligence, in particular artificial intelligence that’s as smart as humans, or maybe considerably smarter.

I think that has a good claim to being one of the most important technological discoveries of all time, once we get to that point, and there’s a very good chance that point is in the coming decades. A second is the risk of very catastrophic pandemics, things that are far worse than COVID-19, which, again, I think are just on the horizon because of developments in our ability to create new viruses.

And a third is a third world war, where, again, if you look at history and look at leading scholars’ underlying models of war, I think there’s a really pretty good chance we see a third world war in our lifetime, something like one in three, and I think that could quite plausibly cause just unparalleled disruption and misery in the world, with the limit just being the end of civilization, whether that’s because of nuclear warheads scaling up a hundredfold and being used in an all-out nuclear war, or because of the use of bioweapons. So these are all things that smart people who read this book could go and work on.

I’m aware that, again, kind of sounds bleak, but perhaps the final thing is there is this positive vision in the book too, which is that, if we avoid these threats, or manage these technological transitions well, we really can just create a future that’s truly amazing. And this is present kind of throughout the book. I did feel like I hadn’t fully given it its due, so there’s a little Easter egg in the book as well, on the final page, a QR code that sketches a little vision of a positive future in short story form. But maybe the final thing of all, in terms of this worldview, is appreciating there’s so much at stake. There are enormous risks that we face, or threats that we face that we need to manage. But, if we do, then we can create a world that is flourishing and vibrant and wonderful for our grandkids, for their grandkids, for their grandkids.

Tim Ferriss: What is value lock-in, and could you give some historical examples?

Will MacAskill: That’s an excellent question. So value lock-in is when a single ideology, or value system, or kind of set of ideologies kind of takes control of an area, or in the limit, the whole world, and then persists for an extremely long time. And this is one thing that I think can have very, very long lasting effects, and we’ve already seen it throughout history. And so in What We Owe the Future, I give a story of ancient China. So during this period that’s known as the Hundred Schools of Thought, the Zhou Dynasty had fallen, and there was a lot of kind of fragmentation, ideological fragmentation in China, and wandering philosophers would go from state to state with a package of kind of philosophical ideas and moral views and political policy recommendations, and try and convince political elites of their ideas.

And there were four main schools. There were the Confucians that we’re kind of most familiar with, the Legalists, which are kind of Machiavellian political realists, just, “How do you get power,” was the main focus of them, Taoists, who are these kind of somewhat more spiritual, acting in accordance with the way, with nature, advocating spontaneity, honesty, and then finally the Mohists, which I read and I’m like, “Wow, they were kind of similar to the effective altruists except in ancient China,” where they were about promoting good outcomes, and good outcomes, impartially considered. They forewent much fancy kind of spending on luxury or victuals, so their funeral rites were very modest. They wore very modest clothes, and they were just really concerned about trying to make the world better. And so they created a paramilitary group in order to defend cities that were under siege, the reasoning being that if defensive technology and defensive strategy was so good, then no one could ever wage a war, because no one could ever win.

And so there was this great diversity of thought, but what happened? One state within China, the Qin, influenced by Legalism, took over and tried to essentially make Legalism state orthodoxy, and the Emperor of Qin declared himself a 10,000-year Emperor, wanted this ideology to persist indefinitely. It actually only lasted 14 years, because there was a kind of counter rebellion, and that was the start of the Han dynasty, which then successfully did, basically, over the course of a while, quell kind of other ideological competition, and instead implemented Confucianism as, “This is the official state ideology,” and that persisted for 2,000 years. And that’s kind of just one example among many. Over and over again, you see what’s the kind of ideology or belief set of a ruling power, whether that’s Catholics or Protestants, or is it the Communism of the Khmer Rouge, or of Stalin, or National Socialism of Hitler?

Once that ideology gets into power, once people with that ideology get into power, they quickly try and stomp out the competition, and the worry is that could happen with the entire world. So, again, I spoke of a risk of a third world war. Well, what might happen as a result? One ideology could take power, globally, after winning such a war, implement a world government, a world state, or at least a dominant world ideology. Then we’re in this situation where there’s much less ideological competition, and at least one reason why we’ve gotten moral change and moral progress over time, which is in virtue of having a diversity of moral views that were able to kind of fight it out, and in ideal circumstances, kind of the best argument wins. We would no longer have that. If there was a single kind of dominant ideology in the world, that could persist for an extremely long time, I think, and if it was wrong, which it’s quite likely to be, because I think most of our moral views are probably wrong, that would be very bad indeed.

Tim Ferriss: I mean, just to give you an idea — I mean, this is not exactly ideological — but you mentioned the Han Dynasty, and Mandarin Chinese. One way to say Mandarin Chinese is Hànyǔ, which is the language of the Han people. Like Hànyǔ Pīnyīn is the Romanization system most people have seen, with the diacritical marks for tones for Mandarin Chinese. So these things can last a very long time, indeed. Do you have any other examples of value lock-in that could be past tense, historical examples, or attempts that are being made currently that you think are worth making mention of? It could be either.

Will MacAskill: I mean, historically, one particularly salient or striking example was when the Khmer Rouge took power in Cambodia. Pol Pot just very systematically executed anyone who disagreed with the party ideology, generally, so 25 percent of the population were killed in Cambodia. And again, it’s very transparent what’s happening. He has this quote, “Purify the party. Purify the army. Purify the cadres.” So it’s just very clear that what’s going on is almost like a species, or virus, kind of taking over, and other competitors get wiped out. This one ideology takes over, and competitors are wiped out. Similarly, if we look at British history with, at different times, Catholics and Protestants taking power, there was one act passed called the Act of Uniformity, which was the Protestants saying, “Catholicism is now banned in this country,” and again, it’s very baldly named, and in general, just if you have a particular moral view, then you are going to want everyone else in the world to have that particular moral view as well.

Tim Ferriss: So, of AI, pathogens — let’s just say bioweapons. We can include, in that, World War III. How would you rank those, for you, personally, in terms of concern over the next, let’s call it 10 years?

Will MacAskill: Over the next 10 years, I’d be most concerned about AI. Over the next 50 years — let’s say my lifetime — I’d be both most concerned about AI and war. The reason I say that is wars are most likely when two countries are at very similar levels of military power, and the kind of historical rate of one major power in the world, one of the big economies of the world, going to war with another when it gets overtaken, economically or militarily, is pretty high. Some estimate — some ways of modeling put it as high as 50 percent. For war, though, I think that’s more likely to happen not kind of within the next 10 years, though it’s definitely possible that there would be some kind of outbreak of a war, such as between the US and China. Even though the risk of war between the US and Russia is definitely higher than it has been in the last, I guess, 30 years, potentially, I still think the odds are quite low, thankfully.

With AI, on the other hand, I think the chances of very rapid, surprisingly rapid developments in AI within the next 10 years are higher than at any 10-year point after that. So it’s more likely there’ll be some completely transformative development in the 2020s than in the 2030s or 2040s, on my view, and that’s for a couple of reasons. One is that if you look at how much computing power different brains use and compare that with how much computing power the current language models use — the biggest AI systems — the biggest AI systems use the computing power of approximately the brain of a honeybee. It’s hard to estimate exactly, but that’s roughly where we are, which is a lot smaller than you might think. It’s much smaller than I thought.

And you might think the point in time where you’ve got AI systems that are about as powerful as human brains is a really crucial moment, because that’s potentially the point at which AI systems just become more powerful than us — or at least approximately when we start to get overtaken. Again, it’s very uncertain, but there’s a decent, pretty good chance that happens in something like 10 years’ time. Now, it’s very hard to do technological prediction. I am not making any confident predictions about how things go down, but it’s at least something we should be paying attention to, just from an outside perspective, if you think we’ll be at the point where we’re training AI systems that are doing as much computing as the brain is.

That’s like — well, it means maybe they’re going to be of a similar level of power and ability as human brains, and that’s really big, and it’s big for a few reasons. I think one is that it could speed up rates of technological discovery. Historically, we’ve had fairly steady, technologically driven economic growth — actually over a couple of hundred years — but that’s because of two things happening. One is that ideas get progressively harder to find, but we throw more and more researchers at them: we have a bigger population, and we throw a larger percentage of that population at research. If, instead, we can just create engineers and research scientists that are AI systems, then we could rapidly increase the amount of R&D that’s happening. And what’s more, perhaps they’d be much, much better at doing research than we are.

Human brains are definitely not designed for doing science, but we could create machines that really are. In the same way that in Go the best AI systems are now far, far better than even the very best human players, the same could happen within science. And if you plug that into pretty standard economic models, you get the conclusion that suddenly things start moving really very fast, and you might get many centuries’ worth of technological progress happening over the course of a few years or a decade, and that could be terrific. In a sense, I think both the optimists and the doomsayers are correct: that could be amazing. If it gets handled very well, then it could mean radical abundance for everyone. We could solve all of the other problems in the world.

If it gets handled badly, well, the course of that tech development could produce dangerous pathogens, or it could cause us to lose control to AI systems, or it could involve misuse by humans themselves. So, I think — and I can go into that more — but I think there are just a lot of things going on that could be extremely important from the long-term perspective.
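[Editor’s note: To make the “ideas get harder to find, but we throw more researchers at them” point concrete, here is a minimal sketch in the spirit of standard semi-endogenous growth models (Jones-style). It is not a model from the conversation or the book; every parameter value, and the assumed growth rates of research effort, are made-up assumptions purely for illustration.]

```python
# Illustrative sketch of semi-endogenous idea growth: new ideas get harder to
# find (phi < 1), but the stock of ideas still grows if research effort S keeps
# rising. All parameter values below are assumptions for illustration only.

def simulate(years, research_effort, delta=0.02, lam=0.75, phi=0.5, A0=1.0):
    """Evolve the stock of ideas A one year at a time: dA = delta * S^lam * A^phi."""
    A = A0
    for S in research_effort[:years]:
        A += delta * (S ** lam) * (A ** phi)
    return A

years = 50

# Baseline: human research effort grows slowly (assume 2% per year).
human_effort = [100 * 1.02 ** t for t in range(years)]

# Hypothetical: after year 25, AI "researchers" can be replicated cheaply,
# so effective research effort grows much faster (assume 30% per year).
ai_takeoff_effort = [
    100 * 1.02 ** t if t < 25 else 100 * (1.02 ** 25) * (1.30 ** (t - 25))
    for t in range(years)
]

print(f"Ideas after {years} years, human researchers only: {simulate(years, human_effort):10.1f}")
print(f"Ideas after {years} years, with AI researchers:    {simulate(years, ai_takeoff_effort):10.1f}")
```

[The point of the toy run is only the qualitative gap between the two lines: once research effort itself can be manufactured, cumulative technological progress pulls away sharply, which is the dynamic Will describes.]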

Tim Ferriss: Well, let’s go into it more. I mean, I, as a simpleton, assume that pretty much any new technology is going to be applied to porn and warfare first and that those two would also sort of reciprocally drive forward a lot of new technology. I’m actually only 10 percent joking, but go ahead. What were you saying?

Will MacAskill: Well, do you know Dall-E 2?

Tim Ferriss: I do, actually, yes. I’m going to be using it a bunch this week.

Will MacAskill: Fantastic — well, for listeners who don’t know, it’s a fairly recent AI system, and you can tell it to produce a certain image using text. So maybe that image is an astronaut riding a unicorn in space in the style of Andy Warhol, and it will create a near perfect rendition of that, and you can really say a lot of things. You can say, “I want a hybrid of a dolphin and a horse riding on a unicycle,” and it will just create a picture of that. It’s really in a way that really makes it seem like it understands the words you’re telling it. And at the moment, it does faces. Well, it can create faces of imaginary people, almost picture perfect. Again, if you pay close attention, you can see weird details.
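[Editor’s note: For readers curious what the prompt-to-image workflow Will describes looks like in practice, here is a minimal sketch assuming the pre-1.0 `openai` Python client and an API key in the OPENAI_API_KEY environment variable; parameter names may differ in later client versions.]

```python
# Minimal sketch of the text-to-image workflow described above, assuming the
# pre-1.0 "openai" Python client. The prompt is the only creative input.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="An astronaut riding a unicorn in space, in the style of Andy Warhol",
    n=1,                # number of images to generate
    size="1024x1024",   # requested output resolution
)

print(response["data"][0]["url"])  # URL of the generated image
```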

Tim Ferriss: When you say “imaginary people,” what do you mean?

Will MacAskill: So if you type in “A picture of Boris Johnson,” then it will not give you a picture of Boris Johnson, and I don’t know this for sure, but my strong guess is that’s because it’s been deliberately restrained so that it does not do that, because while — 

Tim Ferriss: So it doesn’t deepfake everything?

Will MacAskill: Exactly, because you were mentioning porn. With that technology, you could — well, fill in the blanks. I’ll let you think of your own text prompts that you could put in involving Joe Biden or Tim Ferriss or whoever you want.

Tim Ferriss: Joe Biden and Tim Ferriss. Uh-oh!

Will MacAskill: Exactly, Boris Johnson, too. You’re all in the frame. Who knew you were such good friends?

Tim Ferriss: Oh, God, the horror, the horror.

Will MacAskill: So, to my knowledge, that’s not been used for porn yet, but I think the technology would make it completely possible. And then, is it going to be used for warfare? Absolutely. I mean, there’ll be a point in time when we can automate weaponry. At the moment, part of the cost of going to war is that your people — part of your population — will die. That’s also a check on dictatorial leaders: you need to at least keep the army on your side, otherwise there’ll be a military coup. Now, imagine a world where the army is entirely automated. Well, dictators can be much more reassured, because their army can be entirely loyal to them — it’s just coded in. Also, the costs of going to war are much lower, because you’re no longer sustaining casualties on your own side. And so that’s just one way in which technological advances via AI could be hugely disruptive, and it’s far from the biggest way.

Tim Ferriss: Let’s take just a short intermission from Skynet and World War III, just for a second, and we’re going to come back to exploring some of those, but what are some actual, long-term projects today that you are excited about?

Will MacAskill: One that I’m actually really excited about is investment in, and development of, a technology called far UVC lighting. Far UVC is just a very specific and quite narrow spectrum of light, and with sufficient intensity, just put into light bulbs, it seems like it sterilizes a room. Now, we’re not confident in this yet. We need more research on its efficacy and safety. But imagine if this was installed in all lighting in every house around the world, basically the same sort of way we do with fire regulation: every house, at least in relatively well-off countries, has to meet certain standards for fire safety, and it could also have to meet certain standards for disease safety, like having light bulbs with far UVC light as part of them. Then we would make very substantial progress toward never having a pandemic again, as well as, as a bonus, eradicating all respiratory disease.

And so this is an extremely exciting technology. There’s a foundation that I’ve been spending a lot of time helping to set up over the last six months called the Future Fund, and this is something that we’re donating to and investing in, because it could make an absolutely transformative difference. So that’s one. Other things that are very concrete within the biotech space include early detection of new pathogens: constantly sampling wastewater, or constantly testing healthcare workers, and doing full-spectrum diagnostics of all the DNA in the sample, excluding human DNA, asking, “Is there anything here that looks like a pathogen we don’t understand?” so that we can react to new pandemics very quickly. Also, more boringly, just better PPE, where you put on your super PPE hood and you’re now completely protected from any sort of pathogen. That could enable society to continue even if there was an outbreak of a really bad pandemic.
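[Editor’s note: The “sequence everything, subtract what we recognize” idea can be pictured with a toy filter like the one below — a self-contained sketch using made-up reads and a tiny k-mer lookup. Real metagenomic surveillance pipelines align billions of reads against full genome databases; everything here is an illustrative assumption.]

```python
# Toy illustration of the surveillance idea described above: sequence everything
# in a sample, discard reads that match known reference genomes (human, common
# microbes), and flag what's left as "unknown -- take a closer look."

K = 8  # k-mer length for the toy index

def kmers(seq, k=K):
    """Return the set of all length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Pretend reference: k-mers from genomes we already "know" (made-up sequence).
known_reference = kmers("ACGTACGTTTGACCAGTACGGATCCAGTACGTTAGCAGTACCGT" * 3)

def looks_known(read, threshold=0.5):
    """A read counts as 'known' if at least `threshold` of its k-mers hit the reference."""
    ks = kmers(read)
    if not ks:          # read shorter than k: nothing to judge, skip it
        return True
    hits = sum(1 for km in ks if km in known_reference)
    return hits / len(ks) >= threshold

sample_reads = [
    "ACGTACGTTTGACCAGTACG",   # matches the reference -> ignore
    "GGGCCCTTTAAAGGGCCCTT",   # matches nothing we know -> flag for review
]

unknown = [r for r in sample_reads if not looks_known(r)]
print(f"{len(unknown)} of {len(sample_reads)} reads look unfamiliar: {unknown}")
```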

So that’s very exciting within biotech. Within AI, there’s a lot of work on technical AI safety, where the idea is using methods to ensure that AI systems do what we want them to do — where that means, even if they’re very powerful, not trying to seek power and disempower the people who created them, not being deceptive, not causing harm. And there are various things you can do there, including with the less sophisticated models that we’re currently using, like tests to see if they are acting deceptively, and structures you can use to make them not act deceptively.

Can we have better interpretability, so that we actually understand what the hell is going on with these AI models? Because at the moment, they’re very nontransparent. We really don’t know how they get to a particular answer. It’s just this huge, huge computational process where we’ve trained it via learning over what, in computer time, is an extremely long time. So maybe it’s tens of thousands, or even millions, of games of Go that it’s played, and now it’s very good at Go — but what’s the reasoning that’s going on? We don’t really know.

And we could keep going — there are many, many things within technical AI safety. And then there’s the governance side of things, both for AI, for other technologies, and for reducing the risk of World War III. Here, I admit, it gets tough. It’s very hard to measure and be confident that we’re doing stuff that’s actively good, and we have to hope a little bit more that just having smart, thoughtful, competent people in positions of political influence — where they’re able to understand the arguments on both sides and put policies and regulation in place — means we more carefully navigate these big technological advances, or don’t go to war, or don’t face some sort of race dynamic between different countries. That is also just extremely important, in my view.

Tim Ferriss: Let me take a pause to jump back to a number of the questions that I have next to me. When you feel overwhelmed or unfocused, or if you feel you’ve lost your focus temporarily, what do you do? Are there questions you ask yourself, activities, because I think it is easy — and maybe I’ll just speak for myself — to feel like there’s so much potentially coming down the pike that is an existential threat, that it’s easiest just to curl up into the fetal position and just scroll through TikTok or Instagram and pretend it isn’t coming. So I’m not saying that is where you end up, but when you feel overwhelmed or unfocused, or under focused, what do you do?

Will MacAskill: For me, it’s most often driven by a dip in my mood. I’ve had issues with depression since forever, basically — although now it’s far, far better. I normally say I’m something like five to 10 times happier than I was a decade ago. That’s pretty good.

Tim Ferriss: That is good.

Will MacAskill: I’m quite happy about that. And so I have a bit of — have you heard of the term trigger action plan?

Tim Ferriss: Say that one more time? I’m not sure if it’s the word or the Scottish accent.

Will MacAskill: I know.

Tim Ferriss: Trigger action — trigger — 

Will MacAskill: Trigger action plan!

Tim Ferriss: Yeah. There. Oh, my God, I have Shrek on the podcast. This is amazing. All right. Go ahead. Yeah. Trigger action plan. Go ahead.

Will MacAskill: Exactly.

Tim Ferriss: No, I don’t know what that is.

Will MacAskill: The idea is there’s a trigger — some event that happens — and when that event happens, you immediately put some action into place. So a fire alarm goes off. Then everyone knows what to do: there’s the fire drill, you follow the fire drill. You stand up, you walk outside, you leave your belongings. And it’s so that you don’t have to think in complex situations.

I do that, but for when I have low mood, where the thing that has been very bad in the past is, “Oh, I’ve got low mood, so I’m not being as effective and productive, so I’m going to have to work even harder,” and therefore I beat myself up and it makes it even worse. So instead, if I notice my mood is really slumping and therefore it’s harder to work, I just put fixing my mood at the top of my to-do list. It becomes the most important priority, where the crucial thing is not to let it spiral.

The number one thing I do is just I go to the gym or go for a run because then I’m like, “Look, I want to do this a certain amount of time per week anyway. It’s something I enjoy. I find it like recreation. So worst case I’m just moving time around.”

Similarly, I’ll probably meditate as well. Then at the same time, in terms of how I think, I have certain cached thoughts that I’ve found very helpful. So one is just thinking, “Yep, this has happened before and it’s not the end of the world. It’s been okay. If I’ve gotten through this before, I’ll probably be able to get through it again.”

Second is no longer assessing the individual day that I’m having but instead some larger chunk of time. It’s easy to beat yourself up if you’re like, “Look, I’ve had just the shittiest day and I’ve done nothing. What a loser.” Whereas if you ask, “Okay, well, how have I done over the last three years, or 10 years, or my whole life?” — and at least assuming you feel kind of okay about that, which I do — then that’s very reassuring. It’s like, “Okay, I’ve had a shit day, but if someone were to write a history of my last few years, they probably wouldn’t talk about this day. They’d talk about the other things.” And so I’ve got a little bit in the bank there. So even if I just take the whole day off, in the grand scheme of things it doesn’t really matter.

And so some of these thoughts, combined with taking a bit of time away from whatever’s making me plunge — and then exercise, which I find gives a mood boost of its own but also gives time for these thoughts to really percolate and sink in — generally mean that I can come back a couple of hours later and be clear and refreshed. But the key thing is that once this happens, you just do the thing and you stop thinking. It’s like, “Look, this is what my plan is.”

Tim Ferriss: Can you say trigger action plan one more time in a heavy Scottish accent?

Will MacAskill: Trigger action plan! That’s what you need, pal.

Tim Ferriss: Going to put that right at the — 

Will MacAskill: Tip?

Tim Ferriss: — top of the podcast. Oh, so good. So good. Thank you. Thank you.

Will MacAskill: Nae bother, pal! Nae bother.

Tim Ferriss: If this longtermism effective altruism philosophy thing doesn’t work out for you, I think you have a future in voice acting. So you always have that.

Will MacAskill: Effective altruism is about doing the most good you can. Not being a wee girl’s blouse. I don’t know why, but if you speak — yeah, speaking with a proper Scottish accent, suddenly you’ve got to be somewhat aggressive and insult someone. Otherwise it doesn’t quite work.

Tim Ferriss: You can’t whisper an aggressive Scottish accent. It’s very hard.

Will MacAskill: Exactly.

Tim Ferriss: Very very challenging. I’m not going to — I’m not even going to try that. It would be embarrassing. But let’s hop back into AI for a moment. So you hang out with a lot of the smart, cool kids and very technical people who really understand this stuff. When they talk about robots gone bad or just the plausible scenarios that would be very bad, what are they? What are the two or three things that they would see as an event or a development that would sort of be the equivalent of the trigger action plan, right, where it’s like, “Oh, this is life before and life after?” What are the, say, two or three or one to three scenarios that they’ve honed in on?

Will MacAskill: So I think there are two, from my perspective, two extremely worrying scenarios. One is that AI systems get just much more powerful than human systems, and they have goals that are misaligned with human goals, and they realize that human beings are standing in the way of them achieving their goals, and so they take control. And perhaps that means they just wipe everyone out. Perhaps they don’t even need to.

So an analogy is often given with the rise of Homo sapiens, from the perspective of the chimpanzees. Homo sapiens were just smarter. They were able to work together. They just had these advantages, and that means the chimpanzees have very little say in how things go over the long term — basically no say. It’s not that we made them extinct, although in a sense they’re kind of lucky: we made many — in fact, I think most — large animals extinct due to the rise of Homo sapiens. But that could happen with AI as well. We could be to the AI systems what chimpanzees are to humans. Or perhaps it’s actually more extreme, because once you’ve got AI systems that are smarter than you, and they’re building AI systems that are smarter again, maybe it’s more like we’re ants looking at humans when we’re looking at advanced AI systems.

Tim Ferriss: So give me the second one, and then I’m going to come back to the first one with just a sci-fi thought experiment.

Will MacAskill: And then the second one is: okay, even assume that we do manage to align AI systems with human goals, so we can really get them to do whatever we want. Nonetheless, this could be a very scary thing, because if you think AI systems could lead to much faster rates of technological progress — in particular by automating technological discovery, including the creation of better AI systems, so we’ve got AI writing the code that builds the next generation of AI, which then writes even better code that builds the next generation of AI — things could happen very quickly. Well, even if you manage to have AI systems do exactly what you want them to do, that could concentrate power in a very small number of hands. It could be a single country, it could be a company, it could be an individual within a single country who wants to install a dictatorship.

And then once you’ve got that power — it’s kind of similar to what happened during the Industrial Revolution and earlier. Europe got more and more powerful technology over that period, and what did it do? It used it to colonize and subjugate a very large fraction of the world. In the same way, it could happen, but even faster, that a small group gets such power and uses it to essentially take over the world. And then once it’s in power — well, once you’ve got AI systems, I think you are able to have indefinite social control in a way that’s very worrying. And this is value lock-in again, where at the limit, imagine you’re the dictator of a totalitarian state, like 1984 or The Handmaid’s Tale or something, and it’s a global totalitarian state, and you really want your ideology to persist forever. Well, you can pass that ideology on to an AI successor and just say, “Yep, you’ve got all the world now.” And the AI has no need to die. It’s like software. It can replicate itself indefinitely. So unlike dictators, who will die off eventually, causing a certain amount of change to occur, this is not true for the AI. It could replicate itself indefinitely, and it could be in every area of society.

And so then, when you’ve got that, why would we expect moral change after that point? It’s kind of hard to see. So, in general, I think there can be these states where you get into a particular state of the world and you just can’t get out of it again. And this kind of Orwellian perpetual totalitarianism is actually one of the things I really worry about.

Tim Ferriss: Okay. So — 

Will MacAskill: Again, this is a happy book!

Tim Ferriss: Yeah, yeah, yeah, yeah. So yes. So within the context of our discussion of the happy book, you talked about — I think it was the Mohists? I can’t remember the term you used. But you mentioned they were similar to effective altruism, and they formed a paramilitary group. When are you forming the effective altruism paramilitary group counter-AI insurgency squad? Is that in the works?

Will MacAskill: Well, there is an analogy between — we haven’t yet got our own army. Probably that won’t happen. I think things have gone pretty weird if it has, and I might need to intervene at that point. But there is an analogy, in that the Mohists built very powerful, very good defensive technology. So you’ve got trebuchets, very powerful for attacking.

Tim Ferriss: You say trebuchet?

Will MacAskill: Oh, trebuchet. Yeah. It’s like a catapult.

Tim Ferriss: What is that? It’s like a — It’s got a sling on it. Is that what it is?

Will MacAskill: Yeah, exactly.

Tim Ferriss: It’s like a catapult with a sling. So you’ve got the — it’s like an atlatl but for throwing much bigger things. Anyway, the physics involved are, I think, the same. Yeah.

Will MacAskill: Yep. For sure. But also walls — defensive technology. If you had just really good walls, really good defenses, then — 

Tim Ferriss: I thought you said wolves for a second. I was like, “Wow, I did not see that coming. We’re going to have to resurrect the wolf if we want to have any hope of defensive wolf technology.” All right. Walls. Yeah. Continue.

Will MacAskill: Well, they are training eagles to attack drones, so it’s not so insane.

Tim Ferriss: Yeah. All right. Walls.

Will MacAskill: Yeah. Wolves to attack the robot overlords? I don’t back the wolves, I’ve got to say.

But we can think in the same terms: look, there’s certain technology that has a kind of offensive advantage, like the ability to design new pathogens, and there’s certain technology that has a defensive advantage, like this far UVC lighting. And so one of the things we’re doing is really trying to develop and speed up defensive technology. Similarly, when you look at AI, there are some things that are just pure capabilities — it’s just AI getting more and more powerful — and then there are some things that are helpful in making sure that AI is safe, like understanding what’s under the hood of these models, which just means that, okay, we know what’s going on a bit better. We can use it better. We can predict what sort of behavior it’ll have.

Tim Ferriss: Okay. So let’s talk about defensive capabilities. I’ll just give another example of an asymmetric offense-defense situation, which would be drone warfare, right? The cost to create weaponized, potentially lethal drones and swarms and so on is much lower than the cost to defend against them, generally speaking, right? I mean, certainly that becomes true if you start to combine, say, targeted bioweapons with the drones — things get really expensive, at best, to defend against.

But let’s talk about my sci-fi scenario. So when you use the analogy of humans and chimps, or the analogy of the Industrial Revolution and the technological gains in, predominantly, Western Europe — which then allowed the physical, and that’s the word I’ll underscore, subjugation and colonization and dominance of a significant percentage of the world’s population — I suppose there’s part of me that, on a simplistic level, feels like, even though this would not be easy to do, because it would be sort of like a homicide-suicide for a lot of folks the more interdependent we become: all right, if AI is constrained to a physical infrastructure that is dependent upon power, would not part of the defensive planning, or preemptive planning, go into trying to restrict AI to something that can be unplugged, to put it really simply? How are people playing out this hypothetical scenario? Because with AI, presumably, if it’s as smart or smarter than we are, it would foresee this and then develop sort of solar-powered extensions of itself so it can do A, B, and C. I mean, how are people — I’m sure this is part of the conversation. I’ve just never had it. So what are smarter people exploring with respect to this type of stuff?

Will MacAskill: I think actually your nose is pointed in a good direction on this one. And it’s this sort of thing that makes me, among my peers, on the more optimistic end of thinking that advanced AI would not kill everybody. You could have air-gapped computers, so they can’t access the Internet, don’t have other ways of controlling the world apart from text output, and have been trained to act as kind of oracles. So you just ask them a question, and they give you, ideally, very well-justified, clear answers. And perhaps you have many of these as well — you’ve got a dozen of them, and they don’t know that the others exist. And then you just start asking them for help.

So you’re like, “Okay, we’re going to start building these incredibly powerful AI systems that are much, much smarter than we are. What should we do? What should our plan be?” And so that’s a pathway where you’re using AI to help solve the problems that we will face with even more powerful AI. And what’s the response? I mean, some people would say, “Oh, well, if the AI systems are really that powerful — they’re far better than humans — they will then trick you.” And they’ll be able to do that just by outputting text, telling you, “Oh, do this thing or do this thing,” and that will all be this long, deceptive play. And I just think that’s unlikely. That seems speculative to me. I don’t have strong reasons to think that we would get that.

The current AI systems we have are more like — they just output text. It’s not like they’re an agent that’s trying to do things in the world. You put in a text input, it gives you a text output, at least for language models. And potentially we can scale that up to the point where they’re these kind of sages in boxes. And so I think that’s a significant pathway by which we make even more powerful AI systems — ones that are agentic, that have a model of the world and are trying to do things in the world — safe.

But that’s exactly a good example of, again, differential technological progress, where an AI system that’s just this oracle in a box, separated from the rest of the world, seems very good from a defensive perspective, whereas an AI system that’s being trained on, like, war games and then just released into the open seems potentially very bad.
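[Editor’s note: One way to picture the “many isolated oracles” setup Will sketches is the toy pattern below: several text-only models are asked the same question independently, and we only act on an answer they agree on. The agreement check, the quorum, and the stub “oracles” are all illustrative assumptions, not something Will specifies.]

```python
# Toy sketch of asking several isolated, text-only "oracles" the same question
# and only trusting an answer that a quorum of them independently agree on.
# The oracles here are stand-in stub functions, not real AI systems.
from collections import Counter
from typing import Callable, List, Optional

def oracle_a(question: str) -> str:
    return "invest heavily in interpretability research"  # stand-in answer

def oracle_b(question: str) -> str:
    return "invest heavily in interpretability research"  # stand-in answer

def oracle_c(question: str) -> str:
    return "scale up capabilities as fast as possible"     # dissenting stand-in

ORACLES: List[Callable[[str], str]] = [oracle_a, oracle_b, oracle_c]

def ask_with_agreement(question: str, quorum: int = 2) -> Optional[str]:
    """Query each isolated oracle and return an answer only if `quorum` agree."""
    votes = Counter(oracle(question) for oracle in ORACLES)
    answer, count = votes.most_common(1)[0]
    return answer if count >= quorum else None

print(ask_with_agreement("What should our plan be for more powerful AI systems?"))
```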

Tim Ferriss: Oh yes, indeed, robots gone wild.

Will MacAskill: Are you going to create a new subreddit?

Tim Ferriss: I don’t know if I’ll create a subreddit. I mean, I’m tempted to start digging spider holes in my backyard and learning how to hunt with bow and arrow. But all things in due time. What are the most important actions people can take now? What are some actions — for people who are like, “I want to be an agent of change. I want to feel some locus of, if not control, at least free will. I don’t want to just lay on my back and wait for the cyborg raptors to descend upon me or to become a house pet for some dictator in a foreign land who overtakes the US as a superpower, whatever it is, right? I want to actually do something.” What are some of the most important or maybe impactful actions that people can take?

Will MacAskill: Okay. For sure. So I think there are kind of two pathways for such a person. One: you might be motivated to help, but you don’t really want to rejig your whole life. Ideally, perhaps, you just don’t really want to have to think about this again, but you do want to be doing good. Then I think the option of donations is particularly good. So you could take the Giving What We Can pledge — you make a 10 percent donation of your income every year. And then you can give to somewhere like the Long-Term Future Fund, part of EA Funds, and it’ll get redistributed to what some domain experts think are the highest-impact things to do in this space. That’s the kind of baseline response. And I think it’s important to emphasize that you can do an enormous amount of good there. There are a lot of ways we could spend money to do good things in the world, including from this long-term perspective.

The second is if you’re like, “No, actually, I want to be more actively a kind of agent of change.” Then I think the first thing to do is to learn more. I’ve tried to pack as much as I can into a book, but I think there’s a lot to engage with. I mean, the book is What We Owe the Future. It’s talking about some big philosophical ideas, but it’s also covering a lot of broad ground across different disciplines and different issues. We talked about AI and technology and biorisk and World War III; there are plenty of other issues we didn’t talk about. We haven’t talked about nukes. We haven’t talked about technological stagnation, which I think is particularly important as well. We also haven’t talked about promoting better values, which is a broader way of making the long term better. So all of these are things we can learn about, and therefore I’d encourage reading The Precipice by Toby Ord, which I mentioned in my recommended books. 80000hours.org also has enormous amounts of content. openphilanthropy.org has a lot of really interesting content as well — they’re a foundation, but they’ve done some really deep research into some of these topics, such as this issue, which we didn’t get to touch on, of when we should expect human-level intelligence to arise, with some arguments that we should really put a lot of probability mass — maybe more than 50 percent — on it coming in the next few decades.

And then following that I think the most important single decision is how can you either use or leverage your career or switch career in order to work on some of these most important issues. And again, we’ve really tried to make this as easy as possible by providing like endless online advice and also one-on-one coaching such that people can, yeah, get advice.

And then the final thing would be getting involved with the effective altruism community, because this stuff is hard. It can be intimidating. One of the big things that is just a fact when we start thinking about these more civilizational-scale issues — compared to the original seed of EA, which was funding very well-evidenced programs to improve health — is that it can be very overwhelming, and it can be hard to know exactly how to fit in. But we now have a community of thousands or tens of thousands of people who are working together and really keen to help each other. And there are many conferences, like the EA Global conferences in places like London, DC, and San Francisco, all kinds of independently organized conferences — EAGx conferences in many places; there’ll be one in India, for example, in early January — as well as hundreds of local groups around the world where people get together and can often provide support and help each other try to figure out, okay, what is the most impactful thing that you can do? So, yeah, that would be my kind of laundry list of advice.

Tim Ferriss: And with respect to, say, ditching the news, or at least going on a lower-information diet with all the manufactured urgency that we get flooded with in an instant, and instead spending time looking at big-picture trends, or trying to get that big picture roughly right, as you put it, both from a historical perspective and a current perspective — would you still recommend, for podcasts, In Our Time, hosted by Melvyn Bragg, which, I believe, discusses the history of philosophy and science with leading academics, and The 80,000 Hours Podcast? Would those be two you would still recommend?

Will MacAskill: Yeah, I would still strongly recommend them. There’s also another podcast, Hear This Idea, by Fin Moorhouse and Luca Righetti. I also particularly like Rationally Speaking by Julia Galef — it is very good. And then in terms of websites, if you want the big picture — beyond the websites I’ve already mentioned — for the best big-picture understanding of the world, I don’t know of a single better source than Our World in Data. It was very influential during the COVID pandemic, but if you want to learn about nuclear war or economic growth or world population, it has articles presenting both data and the best understanding of the data in this timeless, evergreen way, with exceptional rigor and exceptional depth. It’s just amazing. I used it very heavily to orient myself for the book.

Tim Ferriss: So, Will MacAskill, people can find you on Twitter @willmacaskill, M-A-C-A-S-K-I-L-L, on the web, williammacaskill.com. The new book is What We Owe the Future. I recommend people check it out. Is there anything else you would like to add? Any requests to the audience? Anything you’d like to point people to? Any complaints or grievances with this podcast process that you would like to air publicly? Anything at all that you’d like to add before we wrap this conversation up?

Will MacAskill: The main thing to say is just, as we’ve said over and over again, I think we face truly enormous challenges in our lifetime. Many of these challenges are very scary. They can be overwhelming. They can be intimidating. But I really believe that each of us individually can make an enormous difference to these problems. We really can significantly help, as part of a wider community, to put humanity onto a better path. And if we do, then the future really could be long and absolutely flourishing. And your great-great-grandkids will thank you.

Tim Ferriss: Well, thank you very much, Will. I always enjoy our conversations, and I appreciate the time.

Will MacAskill: You too. Thanks so much, Tim.

Tim Ferriss: Absolutely. And to everybody listening, I will link to all the resources and the books and websites and so on in the show notes as per usual at tim.blog/podcast. And until next time be just a little bit kinder than necessary. And thanks for tuning in.

The Tim Ferriss Show is one of the most popular podcasts in the world with more than 900 million downloads. It has been selected for "Best of Apple Podcasts" three times, it is often the #1 interview podcast across all of Apple Podcasts, and it's been ranked #1 out of 400,000+ podcasts on many occasions. To listen to any of the past episodes for free, check out this page.
