
#9: Dwarkesh Patel

Podcasting, AI, Talent, and Fixing Government

Dwarkesh Patel is the host of the Dwarkesh Podcast, where he interviews intellectuals, scientists, historians, economists, and founders about their big ideas. He does deep research and asks great questions. Past podcast guests include billionaire entrepreneur and investor Marc Andreessen, economist and polymath Tyler Cowen, and OpenAI Chief Scientist Ilya Sutskever. Dwarkesh has been recommended by Jeff Bezos, Paul Graham, and me.

  • Dwarkesh Podcast on Apple Podcasts:

TJP Links

  • YouTube:

  • Spotify:

  • Apple Podcasts:

Chapters

  • Intro (0:00)

  • OpenAI drama (0:50)

  • Learning methods (4:10)

  • Growing the podcast (7:38)

  • Improving the podcast (17:03)

  • Contra Marc Andreessen on AI risk (24:18)

  • How will AI affect podcasts? (26:31)

  • AI alignment (32:08)

  • Dwarkesh’s guests (38:04)

  • Is Eliezer Yudkowsky right? (41:58)

  • More on the Dwarkesh Podcast (46:01)

  • Other great podcasts (50:06)

  • Nanobots, foom, and doom (56:01)

  • Great Twitter poasters (1:01:59)

  • Rationalism and other factions (1:05:44)

  • Why hasn’t Marxism died? (1:15:27)

  • Where to allocate talent (1:18:51)

  • Sam Bankman-Fried (1:22:22)

  • Why is Elon Musk so successful? (1:29:07)

  • How relevant is human talent with AGI soon? (1:35:07)

  • Is government actually broken? (1:36:35)

  • How should we fix Congress? (1:40:50)

  • Dwarkesh’s favorite part of podcasting (1:46:46)

Transcript

Introduction (0:00)

Theo: Welcome back to episode 9 of the Theo Jaffee Podcast. Today I had the pleasure of interviewing one of my favorite podcasters, the one and only Dwarkesh Patel. Dwarkesh is, in many ways, what I aspire to be as a podcaster. He interviews some of the most interesting people in the world in AI, history, economics, and beyond, from Ilya Sutskever to Tyler Cowen—and does so only after many hours of deep research and crafting some of the most thought-provoking questions I’ve ever heard. His listeners include Jeff Bezos, Paul Graham, and Nat Friedman. In this episode, we cover a wide range of topics: how to prepare for and produce great podcasts, different visions for both the short-term and long-term future of AI, how to get talent into politics, and much more. This is the Theo Jaffee Podcast, thank you for listening, and now, here’s Dwarkesh Patel.

OpenAI drama (0:50)

Theo: Hi, welcome back to episode nine of the Theo Jaffee Podcast. Here today with Dwarkesh Patel.

Dwarkesh: Hey, what's up, man? Thanks for having me on your podcast.

Theo: Absolutely. I want to start off by talking about the events of the last weekend. When I scheduled this, I did not know that that was going to happen. I don't think anybody knew that was going to happen. So with all the Robert Caro, Lyndon Johnson reading that you've done, reading about power, reading about human behavior, do you think you could have predicted or understood anything about this better?

Dwarkesh: Certainly not predicted, because the prediction is contingent on a whole bunch of details about what happened that I still am not aware of, and that I don't think almost anybody is aware of, despite the endless speculation. As for whether this could help you better understand it? Certainly. I was just thinking about this: the Lyndon Johnson books are good, but there's also Caro's great biography of Robert Moses, the famous dictator of New York City. There are many episodes from Robert Moses's early career where there's an indication that he might be doing something in his own self-interest, or that doesn't accord with his very publicly flattering image. It just kind of gets brushed under the rug, not well understood. People don't talk about it or gossip about it because of different kinds of fears. Anyways, I'm not saying that's necessarily what's happening here. But it is important to understand that we don't have the full picture yet and to keep that in mind.

Theo: That makes sense. Actually, just before I got on the podcast, I was scrolling through Twitter, as one does. And I read that there's a new piece of information after all this: that they were working on an agent that can do math. It's pretty interesting.

So you interviewed Ilya Sutskever on the podcast before. Did you judge his character at all? Did he seem to you like totally earnest, with the goal of protecting humanity?

Dwarkesh: Honestly, it's hard to evaluate somebody from a one-hour conversation. But from the testimonials of the people who know him, it really does seem like he's a very genuine guy whose priority is making sure humanity has a good future. And that's not to say that he can't make mistakes in his judgment about how to get to that future. But among people who have known him over the years, nobody's contradicting that basic motivation of his.

Theo: I'm pretty surprised that he switched sides. Yeah, I mean, this whole thing was very hard to follow.

Dwarkesh: There's a lot we don't know. It's really hard to comment, and it's really hard to say why, or even whether, he switched sides.

Learning methods (4:10)

Theo: Switching subjects a little, a lot of what you do is reading and research. So how do you read specifically? Do you take notes? How does your note-taking method work, if you do take notes?

Dwarkesh: I recently started using spaced repetition myself after I talked to Andy Matuschak. It's insane how much more effective it is. You realize it when you make a card about something you think isn't even important, and a week later you've almost forgotten it, and you see the card and you're like, "Wait, what's that again?" I've seen the evidence again and again that spaced repetition is effective. Honestly, if I'm not making cards about something I'm reading, I might as well not even read it at all; that's how much less effective I think normal reading is. As for note-taking itself, I don't really do much of that. I just have a Google Doc titled with the guest's name, and I start adding questions to it. Before the interview, I'll organize them. But yeah: spaced repetition, and noting down questions as I'm reading.

Theo: What specifically do you use for spaced repetition? Anki?

Dwarkesh: No, I use this app called Mochi. It's just a nicer interface.

Theo: I've tried using Anki before for language learning, but for general learning and knowledge, do you know of any really successful people who have used spaced repetition to learn more effectively than just reading in huge volume?

Dwarkesh: Depends on what you mean by successful. I think it's just not a technique that's been widely used. I don't know the ancient history of this, but I'm guessing there have been people who used something similar to spaced repetition. I was recently reading The Gulag Archipelago, and there's this really interesting chapter on memory. It talks about how people composed these long books in their minds; Solzhenitsyn composed the multi-volume work and just kind of memorized it. He had these beads that were nominally for praying, and his memorization technique was that he would go bead by bead. At every bead, he would make sure he had remembered the passage he had written down in his head. He would recite it, then go to the next bead, and so on. That's how he memorized the work he composed in his head, because he couldn't write it down; otherwise you'd get executed for your thoughts. That was a tangent on whether there's anybody who successfully employed it. I think it is true that a lot of really, for lack of a better term, sample-efficient people can absorb a lot of information and synthesize it to come up with new ideas. People like Tyler Cowen or Byrne Hobart seem to do fine without spaced repetition. I personally have benefited tremendously from its use, and I wonder if they themselves would benefit as well. I should ask them. I should ask Byrne or something.

Growing the podcast (7:38)

Theo: So your moat, I guess you'd say, is deep research and good questions. Do you think that there's any kind of trade off between deep research and good questions and popularity? Like, does going too deep exclude some people?

Dwarkesh: Certainly at some frontier. Certainly there's eventually a trade-off, right? Seven billion people are not going to follow a conversation of this depth. But I do think I'm at least an order of magnitude away from that frontier, in that I could still go 10x deeper and I'd have an audience that's big enough. There are enough people who would want a conversation of this depth.

I used to think this way before, when my podcast was much smaller. I used to think, oh, well, it's because a large group of people couldn't appreciate this kind of stuff. And since then, it's grown a lot and people do appreciate it. And I've realized it was just cope. So it's just not useful to think in that way. You do something really high quality. If you were to have a super banal podcast instead, then who's going to listen to it? There's already a bunch of them out there, and for different reasons, they're already at the top. You're not going to be able to compete with them in that niche.

Theo: What kind of—

Dwarkesh: Don't you think? What do you think? I mean, you're making a podcast. What do you think about that?

Theo: I mean, honestly, there's a long way to go. I don't really worry about getting too popular at this stage; I worry more about the opposite: how do I get more popular? I was talking about this with my friends recently. A couple months ago, they were like, oh, you should lean more into sensationalist thumbnails and titles and YouTube shorts. I see some podcasts popularized that way, where you have a GTA race in the background. Do I want to do that? Will it dilute the brand that I want to create? Or is that just the way that you get people onto the podcast? And if so, will it get the people who I want to be listening to the podcast, who'll listen to it for a while and share it?

Dwarkesh: Yeah, I certainly think, especially for the kind of content you're making, it's hard to imagine that that's the way it gets popular. At the same time, it's not something to neglect. There's a difference between doing the most cringy shit possible versus neglecting it altogether. I certainly put a lot of care into making my thumbnails, and I've started making a lot of Twitter clips. Sometimes they go viral. But it's just not cringy shit like the GTA races in the background. Promotion is good; I have nothing against promotion. The separate question is whether you then skip the deep research. No: you still do the research, and you still promote, but you do it in a way that is true to the authentic thing you're trying to put out there.

Theo: I mean, certainly there are a lot of podcasts that are just, like, content. I was thinking about this earlier today: what makes something good is that it can no longer be called content. I would call a lot of the stuff that I see on social media content, but I would not call Paul Graham's essays content, because they're so much more than that.

Dwarkesh: Or there's the other one, when people try to define it. How would you define your profession? And they say, are you a content creator? I cringe a little bit on the inside, but it's worse to be called the alternative, which is journalist. So I take what I can get.

Theo: What about citizen journalist?

Dwarkesh: Yeah, I dunno. It's not journalism exactly; it's not current events or anything like that. But yeah, I think that's a great way to describe it, because content implies something that can be farmed, content-farmed, or that is fungible with other kinds of content. And you do want it to be something that's not just like that.

Theo: So going back to the clips, what makes you decide which parts of the podcast you make into clips to post on Twitter? Is it like just interestingness? Is it like conciseness, something else?

Dwarkesh: Actually, probably tomorrow I'm going to put out a contest to make clips for my podcast, because it takes so much time and so much context and so much taste to do it. You can put out a certain clip and it'll get, I don't know, 10 or 20 likes on Twitter, and you can put out a different one and it gets 3,000 likes. It's all about the context of knowing which part to clip that people will be enthused to share, and so on. And that's honestly a pretty challenging thing that I haven't been able to automate away yet, or, forget about automating, haven't even been able to hire away yet. So yeah, I'm just going to do a contest to see if somebody else can do it. Because this has been super important to the growth of the podcast, but it's also taken away a ton of time that I should be spending reading.

Theo: What do you think they will choose? What makes a good candidate for clips?

Dwarkesh: It's hard to explain. I was trying to come up with an explanation, the description of this contest and guidelines. It can be something that you could say, "Oh, it should be about hot button issues so that it goes viral," but it's not just that. Maybe it should touch on something people are interested in, but there's an element of novelty about something people care about.

I'm just trying to think back on certain clips that went viral. I had a clip of Shane Legg explaining that search is important to add to these LLMs to get them to do novel things. Now that's not a hot-button issue, like culture wars or anything, but it is interesting. And you can always explain each one after the fact. It's like the Anna Karenina quote: "every happy family is happy in the same way, and all unhappy families are unhappy in their own way."

I guess all clips that go viral are unique in their own way, at least as far as I can tell. Maybe that's not true for the average podcast, but that's what I found for the ones of mine I've tried to analyze. I don't know how much the audience cares about this, though. Definitely you and I are interested in how these clips are made, and it's challenging. I wonder how much the audience is interested in the clip-making.

Theo: Well, the audience is interested in the clips; that's the point, we're trying to optimize for the audience. So, have you studied Mr. Beast or any other super viral people, or is what you try to do just different?

Dwarkesh: I don't think there's much to generalize from the Mr. Beast type stuff. I admire what he's able to do for his own kind of content. It just, you just can't advertise that content the same way that we advertise ours.

Theo: What about specific podcasters in this niche, like Lex Fridman? Although Lex doesn't seem to do that much.

Dwarkesh: He has a clips channel. I think that helps him out probably. I think he's just kind of farmed it.

Theo: Yeah. I think the only time I actually watch video podcasts is the Lex clips.

Speaking of watching different media, going back to talking about you reading, do you typically read mostly books or articles or do you watch YouTube videos or podcasts or all of the above? What's the split there?

Dwarkesh: I actually don't listen to many podcasts at all. If any, there's maybe a handful; I can't really think of any podcasts I listen to regularly. I do read a lot of books, obviously. Part of it is what drives my interest; part of it is the books of the guests I'm interviewing. Because I've been getting a lot into AI recently, there's a lot of papers and technical material, some textbooks. It just depends on the subject. If you want, I can go into what a typical episode looks like, what might have happened if I'm interviewing Dario or Ilya or something.

Theo: Yeah, sure.

Dwarkesh: Let me think back on which would be a good episode. Yeah, so for Dario, I read all the papers that they put out on the Transformer Circuits thread, different mechanistic interpretability things. Then just reading a bunch of stuff about scaling: the original scaling laws papers, how that's evolved over time. Talking to a bunch of AI researchers to better understand the field, what's uncertain about it, what would be interesting to ask about, and to better understand the mechanistic interpretability results and what they imply.

Theo: How do you get people like Dario in particular, who seem to be very media-shy, on the podcast? Is it just cold emails?

Dwarkesh: Eventually you build up a reputation, and then you know somebody who's a link to them, which is what happened there. So not necessarily cold emails; you just get to meet more people over time. It's not something I've tried to do consciously, but it's been helpful, and that's what's gotten me some of the biggest guests.

Improving the podcast (17:03)

Theo: Do you think that you're naturally good at podcasting or more that you got good over time? And if so, what specifically improved?

Dwarkesh: I definitely have gotten much better over time. I haven't even tried to listen to one of my old conversations, because if I did, I think I'd cringe really hard.

Theo: What changed?

Dwarkesh: I've just learned more. And you can notice this: if I listen to podcasts I don't like, I notice the same patterns that I saw in my old podcast, which is very generic questions. Because you just don't know much about anything, you have to ask these sort of vacuous, general questions. So yeah, I've just learned more things, and I can better empathize with the audience. And I've also gotten older; I think that's not an insignificant part of this. I started the podcast when I was 19, I'm 23 now. My brain has probably changed in that time.

Theo: Yeah, I imagine. Was it like a conscious effort or just kind of like just getting older and smarter?

Dwarkesh: Definitely the learning part. For years I've been preparing to interview guests from a wide variety of fields, and so I've been reading a lot during that time. That's definitely been a big part of it. But there was no specific thought like, "Oh, I need to make my questions better, and here are the dimensions along which I can make them better."

Theo: It's just like, you know, add more to the pre-training data.

Dwarkesh: Basically. Yeah. That's a great way to phrase it.

Theo: Because like a lot of progress in AI is just getting more data and better data.

Dwarkesh: And I actually heard a really interesting analogy here to learning in general. When you're getting into a new field, you just want to pre-train on a whole bunch of random tokens: you read the papers, the textbooks, you're just trying to grok it. And then afterwards, you do the supervised fine-tuning, where you delve deep into what every passage means, once you better understand what's going on generally in the field.

Theo: Like the Noah Smith, two papers thing.

Dwarkesh: What is that?

Theo: Basically, Noah Smith said something like: if you want to introduce me to a new field of literature, give me two papers from that literature. That's a good test, because if the literature doesn't have two good papers, then the rest of it's not worth reading, and if the papers themselves are really insightful, then there's probably something there. I forget the rest of it, but that's the essence of it.

Dwarkesh: That's probably a good tip on how to evaluate the literature to begin with.

Theo: So if you didn't consciously refine much in the past, have you thought about what to consciously refine for the podcast in the future?

Dwarkesh: I have thought about ways to promote it. As for the basic format of the interviews themselves, I actually haven't. And I think people can give me feedback that I should probably take to heart, but there's not something that I think... Oh, there is one thing, but this is not about learning more or anything. It's just making sure that I actually ask about the most important thing and don't let it go. Because I used to have this habit, and may still, of just bouncing around from one esoteric thing I read in their book to another esoteric thing I read in their book, and not honing in on the most important thing and making sure we spend a good 20 minutes on it.

Theo: So what do you think the most important thing would be if someone were to interview you? Is this question cheating?

Dwarkesh: I think I'm different in the sense that I don't have one big take. I guess we could talk about AI; I've had to think about that in order to do different interviews. But honestly, even there, I don't really have an original take. I just have different small takes and heuristics about a lot of different kinds of things. And people seem to think I have all these heuristics about podcasting that I don't. So when people ask me, what are your tips for podcasting? I don't know. I just try to read, and I just try to come up with questions. The object-level things, the topics themselves, are what's definitely very interesting to me.

Theo: So yeah, you mentioned that people were giving you feedback on the format of the podcast. Have you thought about monologues? Two of the relatively few podcasts that I've listened to are Hardcore History by Dan Carlin and the Founders Podcast by David Senra, which are both always monologues. And I find them to be really, really interesting, even though I typically prefer to read blog posts.

Dwarkesh: No, I think that's a great point. And the great thing about, for example, Hardcore History is that it's an audiobook in some sense, because it's like 12 hours of content on a topic. That's an audiobook, but he narrates it conversationally; he's just talking to you. And so the speech patterns and the redundancies that make speech easy to understand come about naturally. So yeah, I love that. Maybe I should do that. The next time I read a blog post, maybe I should just do a related monologue: not just narrating the blog post, but kind of shooting the shit about it. It's something I've thought about before. Would you find that interesting?

Theo: Yeah, I would.

Dwarkesh: Okay. Yeah, I'll try that on the next one.

Theo: I wonder if the monologue podcast grabs human attention more than an audiobook of something that was meant to be read and not listened to.

Dwarkesh: Exactly. Yeah, there's definitely a different sort of cadence to speech than to writing, and the conversational nature of these kinds of podcasts gets at that better.

Theo: We were talking about different blog posts that you've done. You've deleted some of your old podcast episodes and articles. Why? Was it just not meeting the quality bar? What would it have taken to keep them up?

Dwarkesh: Yeah, exactly. It just wasn't that good. I mean, again, I started the podcast and blog when I was like 19 years old, so it's not that surprising that I look back on some of it and cringe at the low quality. Which is not anything against my past self; I'm very grateful for what my past self has done. But certain things just weren't the best work I've produced, so I took them down.

Theo: I liked the Contra David Deutsch on universal explainers one.

Dwarkesh: Did I take it down?

Theo: I think so.

Dwarkesh: Oh, I should put that back up. I fondly remember that one.

Contra Marc Andreessen on AI risk (24:18)

Theo: And definitely the Contra Marc Andreessen on AI risk one, which I really liked. You didn't take that down.

Dwarkesh: Yeah.

Theo: Were you surprised by his reaction?

Dwarkesh: Yeah. And people pointed out to me afterwards, well, maybe I should have emailed him privately beforehand to let him know. I don't know what that would have changed. I guess fair enough. But the main thing is not even the personal reaction; I really don't care about that. I just hope he considers the arguments against his position, and I don't know if he has.

Theo: I mean, I'm sure he's certainly seen the arguments against his position. But do you think that's a little bit bearish, if so?

Dwarkesh: It's surprising that someone as prominent, famous, and clearly intelligent as Marc Andreessen does not seem to be able to engage with counter-arguments. I don't think he has an obligation to write a counter-argument to me; he's a busy guy. He has an open invite to come back on the podcast. He was on it before to talk about AI and these related topics. I don't know if he'll take me up on it now, but I think if you're going to play in the intellectual arena, you have to engage when someone, like me, goes through the effort to do a point-by-point rebuttal of your blog post, especially when it goes viral. If something reaches that stature and has that kind of effort and quality behind it, you're obligated to respond.

Theo: That was one of my favorite episodes, actually, the Marc Andreessen one. I did not know that "carry" used to refer to whaling operations.

Dwarkesh: Oh, yeah. He's a really smart guy. He's super interesting, has really interesting taste about all kinds of things. I just think here he's got some bad arguments. And if you're going to put out ideas, he at least has an open platform to come address the counter-arguments on my podcast.

How will AI affect podcasts? (26:31)

Theo: So, how do you think AI specifically will affect the future of podcasts? What would happen if it becomes superhuman at interviewing or researching or being interviewed? I just saw a tweet yesterday. It was a meme about AGI booking a slot on the Dwarkesh podcast.

Dwarkesh: I saw that too. Well, getting interviewed or doing the interviewing?

Theo: Either. Just what do you think will happen to your career as AI becomes more powerful?

Dwarkesh: I think that would be the least of our concerns at the point at which it can automate a podcast. I don't expect this to be one of the first jobs to go. It seems like a pretty subtle art: not only asking the questions, but having the human presence and being able to respond with follow-ups to what the guest says. I'm not expecting to get automated anytime soon.

Theo: Well, assuming AGI goes well, cause you said it would be the least of our concerns. And so, we live in this wonderful utopian AI future. Would you still podcast? How do you think podcasting would change?

Dwarkesh: That's a good question. I honestly think the post-AGI world is an under-theorized question. I've asked it of basically all my AI guests, and none of them have given me a good answer. Part of that is: well, what are you doing personally? I think personally I would like to become an enhanced being, traveling around the galaxy with the help of the technology that the AI has given me, and not just be a podcaster forever, hopefully.

Theo: Well traveling around the galaxy in reality or virtually? Cause one of my first guests, Greg Fodor, gfodor, likes to talk about this idea of subterranean aliens. What if the solution to the Fermi paradox is just that all the aliens go underground in pods under the crust to protect themselves and go live in VR and do whatever they want in VR. Why would they travel around the galaxy? If your ship gets blown up, then you actually die. Whereas you could just send a robot that you control from your VR pod underground.

Dwarkesh: I think that makes sense if we're assuming they're biological entities, but I've kind of priced in already that they are the software that's running in the drones; eventually a civilization will just be software. And that's what I mean when I say that I would be enhanced. I imagine you'd be an emulation or something.

Theo: Since you think so much about AI and the future and technology, do you discount the importance of space exploration? For example, a lot of people think of SpaceX, and not OpenAI, as the most transformative company.

Dwarkesh: It'll be interesting to see if they kind of merge and link together. Not the companies themselves, but if those technologies merge in some way. You can imagine, I don't know, some sort of GPUs run in space or something; that's a little more far-fetched. The development of AI will be super hardware-contingent. If the compute-centric framework is correct, we're going to see $50 billion training runs or hundred-billion-dollar training runs or something, and all kinds of different hardware is going to be relevant to that. I don't think they're going to be unlinked at the point at which we're developing AGI.

Theo: If you did book the AGI on the Dwarkesh podcast, what would you talk to it about?

Dwarkesh: I'd be super curious about its psychology. Does it think in the same concepts that we think in? There's the obvious kind of question, what are its values, but how different is even the basic cognition and thought process? Or is it the case that, because it learned to think in human language, it adopted the same kind of mind that that language was developed on, which is the human mind?

Theo: Or would it just not know? Kind of how we don't really know how the brain works?

Dwarkesh: I should probably read more cognitive science to better understand even how human thinking works. That's a good point. I think there's also another big possibility: we may have better insight into its mind than we have into our own, if mechanistic interpretability and all these other kinds of research work out. And so we might not even have to ask; we could just look inside the AI directly. What are the things I'd be curious about? Just a bunch of stuff relevant to how it's thinking. Presumably it's thinking at a different speed. I'd be curious about how it communicates with other AIs: are they communicating in language, or can they just share their latent space? There'd be so many different questions. It wouldn't be about their opinions or something. Other than the fact that I care about their values, I'd just be super curious about how they work and how much they're able to divulge about how they work. Maybe they don't understand themselves, but…

Theo: Maybe they'll be prevented from understanding themselves too well. I don't think OpenAI will give them access to the weights.

Dwarkesh: But we don't have our own weights. And I guess you could say that we don't understand ourselves as a result, but I don't know. I feel like you could probably learn a lot just from introspection.

AI alignment (32:08)

Theo: Are you more optimistic about AI alignment, given that we can't access our own weights and yet we seem to be fairly aligned, while we can access the weights of the AIs? I've talked to a few people, on my podcast, Nora Belrose, Quintin Pope, and so on, and on Twitter, Teortaxes, who seem to be much more optimistic about alignment for that reason.

Dwarkesh: It's definitely not only that we can read their minds. Here's something we can't do with humans: if you commit a crime, we kill you off and then kill off all your descendants, so that the genes which caused your crime are diminished in the gene pool.

Theo: They did that in ancient China.

Dwarkesh: I guess society does a little bit of that, or you could say we just send you off to prison. But I don't think that has much of a genetic effect, whereas with AI it's literally gradient descent: we can not only read their mind but actually change their mind in a very fine-grained way. So in those two ways, it definitely does suggest that it might be easier than it is with humans. The main difficulty, of course, being that the starting point is not something that is genetically very similar to us; it is totally alien. It just starts off on a different trajectory than evolution. Humans already have this sort of inbuilt machinery that's quite similar to each other's.

Theo: Well, do you think it's just totally alien? Roon has tweeted a lot about how he used to think that LLMs were alien minds, the Shoggoth from another dimension, and now he thinks that their character is instantiated from the human prior.

Dwarkesh: But there's an Eliezer rebuttal, which is that just because it can pretend to be any human, in that it predicts their next word, doesn't mean that it itself is the average over all those humans or something. And I just think we don't really know, or at least I certainly don't know, and we shouldn't just assume the safest or most comforting possible version: that it's just like one human, just grokking human consciousness. I mean, no human works by being able to accurately predict what any given human might say on the internet. It might be the case that the end result of this is something that approximates human psychology pretty well in its own intrinsic motivations. It just doesn't seem warranted to assume that will be the case.

Theo: Well, Eliezer talks about the actress and the Shoggoth, but what about the rebuttal to that, which is: all it is is a next-token predictor, and if the next tokens contain goodness and love and peace, then the AI will do goodness and love and peace, and if they contain taking over the world, then the AI will take over the world. There's no reason to believe that there's actually a Shoggoth inside whose desires will be different from just the distribution of text that it was trained on.

Dwarkesh: Then I would just have to recapitulate the entire Sequences, because there's the Eliezer response, which is that as a thing gets smarter, it will closer and closer approximate something which has goals and intrinsic drives. That's kind of the basic shape of the argument.

Theo: Do you think that the empirical evidence so far has been friendly to the Eliezer group?

Dwarkesh: Oh, it depends on which part. Certainly not on the fast takeoffs. But you've got to remember this guy was writing this shit like 20 years ago, right? Compared to what other people were writing 20 years ago, he certainly is more accurate, given what we know now. But he expected a sort of intelligence explosion, and it looks like we're living in this slow-takeoff world. As for that particular prediction... I think I lost my train of thought, but I'll let you say what you were talking about.

Theo: We were talking about the Shoggoth. Is there a Shoggoth inside GPT-4? How is the empirical evidence? Do we just not know?

Dwarkesh: Right. So one of the things is that dumber animals don't seem to have any sort of agency; they just respond to the direct, immediate stimulus. An amoeba will just go towards the light, right? There's not some goal or directive; going towards the light is just the next-token-prediction equivalent for an amoeba. And as things get smarter, it does seem that there's more of a sense of agency, and maybe agency is required to do the really complicated tasks that we will train the AI to do. Why that agency would be something we could always control is not self-evident.

Theo: So you strike me as somewhat middle-of-the-road, a centrist on AI risk: not a full doomer, but not sympathetic to the Marc Andreessen "we're all going to be totally fine" arguments either. Have you gotten more optimistic or more pessimistic over time? How has your AI risk journey gone?

Dwarkesh: I think even a year ago, I wouldn't have contemplated these things seriously. But the advances we've seen since have convinced me that this is real; this is actually going to happen in our lifetime. Once you integrate that into your worldview, everything becomes more concrete. So in a sense I've become more pessimistic than I started off, but also more optimistic. Originally, the assumption was that either you just don't think this is real or you're a doomer. But there are a lot of really smart people in the middle, as you say, that I've interviewed on the podcast, and they've given me very interesting worldviews that helped me better understand their perspective: Carl Shulman, Paul Christiano, and so on.

Dwarkesh’s guests (38:04)

Theo: Going back to some of your guests on the podcast. I love a lot of your podcast guests. I've had a couple of them on my podcast, Razib Khan, Scott Aaronson. I've met Bryan Caplan before. I know that you're good friends with him. One of my friends cold emailed him a couple of years ago, just saying, “hey, do you want to get lunch, we're also nerds”. Not only did he agree, he took us to this kebab place in Fairfax near George Mason, paid for our food, and stayed with us for about two and a half hours and just talked about all kinds of stuff. It was basically an unrecorded podcast.

Dwarkesh: It sounds just like Bryan. He's a great guy.

Theo: I'm a big fan of Bryan Caplan. Bryan, if you're watching this, thanks. And I hope to have him on the podcast soon.

Dwarkesh: Yeah, you should.

Theo: So, who of all your guests strikes you as the most raw, intelligent and why?

Dwarkesh: Certainly, it would be the AI researchers, people like Dario or Ilya. It just takes a lot of raw fucking IQ to do that. If that's what we're counting, and I certainly don't think that's the most important criterion for everything, but on that raw measure, I think maybe those two would be in contention. But I've obviously had extremely smart people, people who are way smarter than me, on a bunch of episodes.

Theo: Do you think it's easy for you to gauge how smart people are who are much, much smarter than you?

Dwarkesh: Yeah, it's hard to do a bullshit test. I could go down the whole list, because basically every person I've had on the podcast is really, really fucking smart. But one of the last people I've had on who qualifies is Paul Christiano, and also Scott Aaronson, who we've both interviewed. I have this great story about Scott Aaronson from when I was taking his class. He explains this result, and he says it's a very important result, and he says, you know, I almost proved this myself in 1999, but I realized that somebody had beaten me to the punch six months earlier. And I looked back on it: how old would Scott Aaronson have been in 1999? He would have been 18 or 19 years old, and that's when he did it. So maybe Scott Aaronson is my answer for pure raw IQ.

Theo: I don't know if you pick favorites, but who do you think your favorite guest was? And who do you think your favorite episode was? And are they the same? Is there an overlap there?

Dwarkesh: I don't know if this is necessarily my favorite, but this is the first one that comes to mind. I really enjoyed Carl Shulman, just because I got introduced to so many new concepts as a result of that episode, from the compute centric framework for understanding the scaling and rise of AI to a bunch of the specific takeover risks. So I would say that one. Did you listen to it by any chance?

Theo: Yeah, I listened to it.

Dwarkesh: What'd you think?

Theo: I loved it. Carl Shulman struck me as really intelligent; in a sense, he struck me as intelligent in the same way as Eliezer, meaning he makes a lot of his own concepts. He doesn't just take whatever is out there in the prevailing discourse. He makes his own.

Is Eliezer Yudkowsky right? (41:58)

Theo: What impressions did you get from Eliezer, by the way? Did you think he was like Carl Shulman or different?

Dwarkesh: I think that's a fair way to characterize it. I definitely think Carl is more rigorous as a thinker and much more up to date on current developments, having a better understanding, for example, of the actual hardware limitations, or the weaknesses and advantages of the current architectures, and so on. So I would put them in slightly different buckets. I do think they're similar in one sense: they both think the decision theory stuff is important and matters. There's just a bunch of weird shit about acausal decision theory and things like that, and they think that could actually affect the course of things. But yeah, the difference being that Carl, I think, is a bit more rigorous.

Theo: So do you think that some of the character assassinations of Eliezer have some substance, like that he's detached from reality, he doesn't understand what he's talking about, he's not technical? Or does he strike you as, maybe this guy is right after all? Because I'm pretty split on that.

Dwarkesh: I don't think he's right about 99% doom; I think he's just way overconfident. And I think he's also wrong about fast takeoffs, and the evidence shows that he's been wrong about it.

Theo: Does it, or have we just not reached the fast takeoff yet?

Dwarkesh: It's seeming more and more like there's not a critical point where things just implode, but rather that intelligence is just a gradual scaling thing. I could be wrong, of course; anything could be wrong. But you just have to update on evidence as you go forward, and the updates seem to be pointing in the direction away from Eliezer. That being said, the most important thing is that I think he's an endlessly creative and interesting thinker, and you have to put him in that context: he's probably one of the most intellectually generative people of the last 20, 30 years. I've learned a lot from reading him, as a teenager and then in college and so on. Are there things he's wrong about? Yes, of course. But I don't understand the visceral hate that people seem to have for him, and I also don't think people are being fair when they dismiss his contributions. The main thing people are thinking about now, he was on decades ago.

Theo: The visceral hate, I think, is just psychological pain avoidance, lashing out. If they think, "if Eliezer is right, I and everyone I love will die," then: no, I don't want to believe that, and I'll attack him. I'll defend myself by attacking him.

Dwarkesh: Oh, and then he obviously is not a normal guy. So then it just becomes really easy to be like, Oh, what a weirdo or something. And I just don't think that's fair or a valid argument.

Theo: I remember when I was eating lunch with Bryan, this was before ChatGPT, before the recent boom in AI, we talked about Eliezer Yudkowsky, who I was familiar with, though I wasn't as dialed in as I am now. And he said he'd had lunch with Eliezer recently, and Eliezer tried to sit there and convince him that the world was going to end. And Bryan was like, that's just silly. Could a superintelligent AI convince me to kill myself? I just don't think that it could do that. And Eliezer obviously thinks it could. I wonder if Bryan has updated since then. He has updated on timelines since he lost his bet on whether GPT-4 would pass his exam.

Dwarkesh: I haven't talked to Bryan about it, but I'm really curious to see where his head is at now. You should ask him about it when you have him on the podcast.

More on the Dwarkesh Podcast (46:01)

Theo: Yeah. Do you agree with Tyler Cowen's characterization that podcasts are basically entertainment?

Dwarkesh: Oh yeah, definitely. Well, no, actually, okay, I'm of two minds on this. On the one hand, I know how little I understand the fields that I do podcasts on. I think back on how much I read in order to be able to ask questions about a field, and I still think I really don't understand it in any meaningful sense. I couldn't actually do the job, so to speak, if I'm interviewing somebody who's a researcher or something. And that reading gets titrated down to just a few questions that I get to ask in the two hours or whatever, and the responses the guest is able to give. So if I personally feel like there's so much about the field that I don't understand, obviously the audience is in a worse position than I am, given the reading I've done, unless they independently happen to know about it. So I definitely don't think it's a replacement for actual expertise or something.

That being said, I mean, you know, I was saying earlier that I haven't listened to that many podcasts recently, but when I was in high school and, you know, a teenager and then in college, I learned so much about so many different fields from podcasts and you could say, well, you get a sort of introductory understanding of many different fields. And yeah, that's true, but that's useful for most people. They need intros to everything.

Theo: So, on titrating your research into a two-hour episode with the AI researchers: when you do your research, is it more like, holy crap, there are so many amazing and interesting questions I could ask these people? Or is it like, I really need to find great questions? Are great questions overabundant or scarce?

Dwarkesh: Not abundant, usually. There are some guests where I literally have a list that's 20 pages in a Google Doc or something, and obviously we can't get through it. But usually it's not like that. Usually I don't have enough good questions, or I just barely have enough good questions. What's your experience?

Theo: Basically the same. It varies based on the guest. How many of your questions, if any, are just off the cuff? Do you come up with any completely new questions that you hadn't put in the document?

Dwarkesh: Definitely. The followups, for example, a lot of them are off the cuff. But a lot of the followups are actually questions I was planning on asking later on; they just naturally follow from what my guest has just said.

Theo: Do you ever come up with entirely new questions, not followups, just off the cuff?

Dwarkesh: Yeah. You know, you just have questions as somebody is talking. And that's why the research is helpful for the conversation. So you have enough context to ask those followups.

Theo: So during the episode, when you're interviewing someone, what do you think is the optimal amount of tangents to go into? Like, what's the optimal amount to edit out?

Dwarkesh: I don't really edit out that much. The main constraint is the time of the guest; you don't want to waste it talking about things that are not really important or interesting. The optimal number of tangents is not zero, but there's such a thing as going on too many. It's hard to say generically; there's certainly not a number one can give. But you want to go down enough of them that you can explore interesting directions and new ideas, and not so many that you never get to the meat of the subject. They should serve the exploration rather than hinder it.

Other great podcasts (50:06)

Theo: You said you listened to a lot of podcasts back in high school and college. Who were your favorite podcasters and what were your favorite podcasts? Is there overlap there?

Dwarkesh: In high school, I listened to a lot of Sam Harris. Just a lot of normie shit. I was into politics when I was in high school, which is obviously a bad idea. It's just a tremendous time sink.

Theo: As for favorite podcasts and podcasters, are those the same? Are there good podcasts without good podcasters, or good podcasters without good podcasts?

Dwarkesh: Can you give me an example of a good podcaster who doesn’t have a good podcast?

Theo: For example, I hate to say it, I love Lex Fridman’s podcast, but I don't think he's a particularly good interviewer in the way that you or Tyler Cowen are.

Dwarkesh: There are certainly people like that. An interesting way to reverse the question would be: is there a good podcaster who just has the wrong format and, as a result, is really fucking it up? There are certainly people you can think of who you wish had a podcast, somebody like Christopher Hitchens or something; it would be really cool if he did a podcast. There are people who are just super interesting and voluminous thinkers and writers, and they're super great, and I wish they had a podcast. I've had former guests on who I think would do really well if they started their own podcast. Sarah Paine was one such figure, just great at extemporaneously speaking and explaining her ideas. But back when I was in high school, who were such people? It's hard to remember. Hmm.

Theo: You think you were just very different back in high school?

Dwarkesh: Yeah, I think so. I mean, that's true for everybody though. Right?

Theo: Yeah, I suppose so. Do you have any favorite podcast episodes from other podcasters that stand out?

Dwarkesh: Yeah. I don't really listen to podcasts that much anymore, not for any reasons of disagreement, but, for example, Sam Harris had a great episode when the BLM stuff was happening where he went into the data on police shootings. I thought that was a pretty brave thing to do, and also super needed and sense-making at the time. He deserves a great deal of credit for that. As for ones that are not "this message needs to go out" kinds of things, there's probably a bunch of episodes of Tyler's podcast that helped me understand a subject.

Theo: Mine would probably be when Tyler Cowen interviewed Paul Graham. It was a meeting of two great minds who I admire a lot.

Dwarkesh: Really? I was kind of frustrated, because it bounced around from subject to subject enough that Paul was not prepared to really delve deep into any of them. I think it was really interesting and I really enjoyed listening to it. But what was your takeaway from that conversation?

Theo: Yeah. I mean, there's the meme where Tyler was talking about the Medici and Paul hadn't really thought about it, so he was just like, "yeah, that's kind of cool, I guess." Tyler has a unique style that you don't see very often. And I really, really like Paul Graham, but I think Paul Graham is best in lengthier essays where he has had lots and lots of time to think things through, like his recent one, which was one of the best things I've read in the last year, How to Do Great Work. That took him something like six months just to write a few-page-long essay. And then my other favorite individual episode was probably Lex Fridman interviewing Neil Gershenfeld, the director of the Center for Bits and Atoms at MIT. This was recommended to me by a friend; I didn't find it by myself. And it was all about self-replicating machines, which I had never really thought about.

Dwarkesh: Yeah, I should listen to that one. That sounds interesting.

Theo: Self-replicating machines, and just manufacturing in general. He has a class at MIT called How To Make (Almost) Anything, where they learn about different kinds of fabrication. His goal is to create a general-purpose fabricator, in the sense that we have a general-purpose computer that can do any computation.

Dwarkesh: I've heard that sort of sentiment expressed about nanomachines. Drexler has this idea that you can compute anything, and now you need to be able to program any sort of physical matter. I should listen to that episode. It sounds interesting.

Nanobots, foom, and doom (56:01)

Theo: What do you think about Drexler's nanomachines arguments? Have you read his book?

Dwarkesh: Yes, I read his recent one, Radical Abundance. And now he's working on AI stuff, right? From what I understand.

Theo: I haven't heard about that. I just know that Eliezer cites his Nanosystems book a lot.

Dwarkesh: Nanosystems is a different guy, actually. Wait, no, sorry, Nanomedicine is a different guy. My bad. I think it's really interesting. I'm still not sure why it didn't go anywhere, but I really enjoyed Radical Abundance. He has a lot of interesting arguments about the intrinsic efficiency of nanomachines. From what I remember, as you miniaturize, things just become a lot more efficient. Think about how fast the molecules in your own body are moving and how much work they can do; that's a direct physical effect of miniaturization.

I would love to talk to somebody about why that didn't go anywhere. In the book, he has complaints about the funding situation in the 90s where they were supposed to put a bunch of money into nanomachines and then it got co-opted into stuff that was familiar to the old paradigm and wasn't actually advancing the state of the field. But why has it still not gone anywhere? Maybe we should just have something on the podcast to talk about it. Because that actually is pretty interesting.

Theo: Yeah, maybe you could get Drexler on. Do you think that has any implications for FOOM? Even if you have a human-level AI, even if you don't have a fast-takeoff intelligence explosion, do you think that means an AI would be able to kill all humans very, very quickly?

Dwarkesh: Well, certainly nanomachines that can multiply very quickly are possible, because we have bacteria. And you can just imagine how fast they can absorb energy; you can look at algae that multiply and photosynthesize, and they can transform the shape of the earth pretty fast. Obviously it has implications, because then the question is how fast they could absorb energy and how fast they could do work. But in the limit, it probably makes only a few months' difference whether they had to do it with robots versus with nanomachines. Even if the nanomachine stuff doesn't pan out, I think even the robot takeoff is pretty fast.

Theo: Well, do you like to think about p(doom)? Do you think p(doom) is a useful representation for how you think about AI risk? Or is it just kind of like made up numbers based on vibes?

Dwarkesh: Well, it can be both. It can be a made-up number and still be useful as that. It's useful, I think, to just throw out a number to gauge your credence in an event. I do understand the criticisms of having such a number, that the outcome depends on human actions, so p(doom) isn't fixed. But that's true of any probability you give, right? It's not just true of p(doom). By that logic there would be criticisms of giving probabilities for anything: a war, or the probability of somebody winning an election. I think it's sensible, if somebody's thought about it a lot, to have that number.

Theo: Do you have a p(doom)?

Dwarkesh: Mine is not that sensible. Mine literally is a number I kind of pulled out of my ass: I don't know, 20% or something. And just because that's Carl Shulman's, or, I don't want to misrepresent him, his might be different, but it's kind of just pulled from people I find credible.

Theo: Yeah, 20% seems reasonable, but at the same time, like, do you think that if for any given century in the future, there's like a five to 20% p(doom)? Like, does that just mean very, very bad news for civilization making it another 100,000 years? I remember you talking about this with Tyler.

Dwarkesh: Yeah, I think the goal is to transition from this current regime, where it is possible to wipe out all of humanity, to a regime where we're spread out through the stars, where some of us are not human anymore, some of us are AIs or gods or some mix, or enhanced. And hopefully we get to an equilibrium where, if life is all around the galaxy doing beautiful, creative things in different kinds of civilizations, it's hard to imagine how you could wipe all of that out.

Now, it might just be that the laws of physics prohibit that kind of independence. Gwern has this really interesting essay called Colder Wars, where he imagines that it's just really easy to catapult a comet into a planet or solar system and destroy everything, so destruction becomes really easy. That might be the case. I don't know, there might be some physics that makes it super easy to destroy planets and stuff. But hopefully we're getting to a situation where it's a negligible probability over time. You know what I mean? Every year the probability drops, so it asymptotes: the cumulative probability doesn't go to a hundred.

Great Twitter poasters (1:01:59)

Theo: So going back to social media and your research process, do you scroll through Twitter a lot?

Dwarkesh: I do. It depends. Certainly not as much as many people, but more than I should, of course.

Theo: Well, yeah, it's just so addicting, but who are some of your favorite poasters, like P-O-A-S-T, and what do you think makes them so good?

Dwarkesh: Oh yeah. Daniel's pretty funny. I like him.

Theo: I just got the Daniel follow the other day.

Dwarkesh: Oh, nice. Let's see. Yeah, it's funny. I don't have any that regularly make me laugh, and that's my main criterion, because obviously you can't be getting your actual intellectual opinions from poasters, 140 characters at a time. That's a different story.

Theo: What about someone like Roon?

Dwarkesh: Yeah, he's great. The market has obviously decided that he's a good poaster as well. He certainly doesn't need my endorsement anymore, or ever, but yeah, he's great. I haven't ranked my poasters, but I'll have to make a tier list with S, A, B, C, D tiers and so on.

Theo: If you were to come up with criteria for what makes a poaster good, would you think that'd be similar or different from what makes a good podcaster?

Dwarkesh: I think it certainly is a type of skill to be able to make things that are really compelling in 280 characters. There are two things I wouldn't assume, though. I wouldn't assume that it actually correlates to understanding. I'm not talking about anybody we've named, I'm talking generically about people; I don't want to name any specific names. But you have somebody who comes up with takes on Twitter, takes about all kinds of topics, and they shoot them out. Then you actually talk to them in real life about a subject they've shot out a bunch of takes about, and you realize, oh, they understand nothing about this. So it definitely dissuades you of the notion that just because somebody has a lot of takes, or a lot of viral takes and good posts about a topic, he actually understands it in any way. I guess I said I had two things, but that's the one thing I have.

Theo: So you said you spend more time than you should on Twitter. How do you spend your time in general? I remember an interview that you did with another website a couple of years ago. Has it changed since then? Do you have a daily routine?

Dwarkesh: I don't remember what I said on there, but I read quite a bit. That's most of my job, so I spend a lot of time doing that. There's also a lot of logistics involved with the podcast itself, as I'm sure you know: making clips, editing, and so forth. That takes up a lot of my time. Then there's a bunch of logistics involved with reaching out to people and things like that. And that basically sums it up. I exchange ideas back and forth with people over email, group chats, and meetings, and meet people who are researchers or understand fields well. And that's about it, a pretty simple existence.

Rationalism and other factions (1:05:44)

Theo: With the people you talk to, would you say you're adjacent to the rationalist community?

Dwarkesh: Yeah.

Theo: It's interesting. With almost all of my guests, I eventually find that they're somehow rationalist-adjacent. Even the ones I didn't really expect, like Razib Khan. When I interviewed him, he told me, oh yeah, actually I was with Eliezer in 2008 at the original Singularity Institute, with lots of people, just the Bay Area rationalists. He was an OG there.

Dwarkesh: It seems like you're pulling people who have some presence on Twitter among the kinds of people you follow. And it's not that surprising that among that group there'd be a lot of rationalists.

Theo: Well, it seems like there are some new factions forming of people who might historically have called themselves rationalists or EAs and who now really don't like it, like e/accs. Although again, it's the same sort of story as with Razib: they still rub shoulders with the rationalists, it's not totally independent.

Dwarkesh: I have been interviewing historians recently, and there you just have people who would not know what the word rationalist means if you used it. They just haven't interacted with Silicon Valley culture, for better or for worse.

Theo: I was looking at an interesting post earlier today that was like a political compass, except instead of the axes being authoritarian versus libertarian and left versus right, they were "AGI will be like the internet" versus "AGI will be a million times more important," and "we should accelerate" versus "we should slow down." Do you think something like that will become the most important grid on which people align their politics in the near future, or will it just remain the traditional political framework?

Dwarkesh: I don't think it'll be either of those. I do think that if the takeoff stuff is true, then at some point it'll become the most prominent fact about our political life. But I don't think there's gonna be that much of an appetite… I don't think 25% of the country is gonna be agitating for the top right quadrant, where you're trying to engineer the maximum flops out of the solar system. I don't think there's a huge demographic constituency for that. I think the current factions are, one, a result of a certain backlash against EA kinds of things, and two, a sample of the kind of people who are talking about it right now. What that actually transitions into when it enters the mainstream political system, I think, looks pretty different. And it might be a worse axis for the political system, where people try to shoehorn it into contemporary issues of political correctness or economic equality, kinds of things that pale in comparison to the real stakes, which is the fucking galaxy, right? But yeah, I don't know if e/acc versus EA will be Democrats versus Republicans in ten years.

Theo: Maybe. Do you think that e/acc is an interesting or useful philosophy or is it kind of just vibes and trash?

Dwarkesh: It depends on what you mean by e/acc. I don't want to commit the same sort of intellectual dishonor that many of them do, of completely dismissing ideas without actually trying to grapple with them. So it depends on what you mean by e/acc. It is true that technological growth has been the main force for the betterment of humanity throughout history. But it's the kind of thing where you're doing a motte-and-bailey: if that's all you're endorsing, yeah, I'd endorse that as a historical statement. And then with AI, you have something that's kind of breaking the pattern of the pace of history and the centrality of human beings and so on, so it might be worth considering on its own terms. As for the broader e/acc take beyond that, of maximizing... I don't even know what it is. Can you explain what the e/acc take is?

Theo: Well, first of all, it's kind of funny. Yesterday, I was wearing my effective accelerationism T-shirt, which I got not because I'm an e/acc, but just because I think the logo is cool. And the general sentiment for everything other than AI is pretty great. It would have been funny if I was wearing it on the podcast just by chance.

Dwarkesh: I will say, by the way, I don't necessarily endorse the exact opposite of the e/acc claim either, that slowing down AI is good in and of itself. I do think people sometimes seem to believe in magical properties of slowing down AI, or have an unrealistic understanding of how that might be possible.

Theo: Oh, like “we” just need to—

Dwarkesh: The end goal is not just to have slow AI, the end goal is to align the AI and then point it towards something good. The slowing is only a means to an end. You're not just going to keep it down forever. So the opposite of e/acc is certainly not a statement I would endorse. I wouldn't endorse something like "pause AI".

Theo: I've noticed a marked degradation in the discourse among rationalist, doomer, decelerationist kinds of people over the last few months. Probably just because it's becoming more popular. They're now committing many of the sins that the e/accs committed in their time.

Dwarkesh: Although you gotta remember the real serious people who are concerned about alignment are not posting on Twitter all day. They're doing technical things at labs. The kind of people who have the time to be making memes on Twitter are not the best and the brightest.

Theo: What you said about what do e/accs actually want to maximize? I watched Beff’s talk about thermodynamics and the future of everything. It was basically about how, with thermodynamic dissipative adaptation, what we're trying to do is maximize the amount of free energy in the universe that will create complexity to best take advantage of it. That's what the universe itself did to create life. That's what capitalism does to create great businesses and great business owners. I don't know how good of an explanation thermodynamics is for this, but I think the general sentiment is basically true that complexity arises out of simplicity and can do pretty great things.

Dwarkesh: That's true of a lot of different philosophies whose implications you wouldn't actually endorse. Take Marxism, for example: Marx's reading of history is that you have an exploiter class that comes up with an ideology to justify its exploitation of either slaves or peasants. And before modern economic growth, that kind of was what history looked like; you did have serfdom and slavery. Maybe that's not directly addressing the point you're making. To address it more directly: it is true that we want more complexity and more beauty and so on. I don't see why that follows from, or will even necessarily correlate with, thermodynamic free energy in the future. If I told you, here's a world that's more beautiful but has less free energy, would you rather have the one with more free energy and less beauty and creativity? I don't understand why free energy would be, prima facie, the thing you're trying to maximize. And what if it was totally unconscious? What if it was literally just optimizing for the maximum entropy of the universe, but wasn't in any way something recognizable to us as something that could be beautiful or could experience great feelings?

Theo: Then there's back into the debate about is an unconscious entropy maximizer, paperclip maximizer type thing even possible? Are p-zombies possible? Is it possible for something to have goals and the intelligence to pursue them, but no kind of self-reflection or consciousness?

Dwarkesh: It could be true. We don't know one way or another. But it's true even among humans that there have been pathological ideologies pursuing single-minded aims that have resulted in terrible harms: communism, Nazism, whatever. So if that's possible with humans, I don't know why you'd assume it's not possible with AIs.

Theo: Well, cause communists and Nazis are conscious.

Dwarkesh: I mean, even if they are conscious, them trying to pursue their ideology or their value system to its ends just results in a shit ton of mayhem and destruction.

Why hasn’t Marxism died? (1:15:27)

Theo: Speaking of which, why do you think Marxism has been such a persistent ideology, even after Marx made a lot of specific predictions that were specifically falsified, like that the US would soon become communist, which just didn't happen?

Dwarkesh: I'm certainly not an expert on this. I just interviewed Jung Chang, who wrote a book about growing up during the Cultural Revolution in China. She wrote a biography of Mao that was not well received in academia because it was really harsh on Mao. I asked her why there is this instinctual desire in parts of academia to defend brutal communist dictators, like in Venezuela or Cuba or Russia. As for why Marxism persists, I think part of it is just that it aligns with certain aspects of human psychology: dividing people into classes, exploiters and exploited, and having an overarching theory of history, a narrative and a sense of struggle. But I remain confused as to why it hasn't been completely discredited and people still subscribe to it.

Theo: Well, techno-optimism also has a grand narrative, and the monomyth, the hero's journey, plays to human psychology: humanity ascended from our position as apes on the savannah to building the sand god. (I think the sand god phrasing is a little cringe, but you get the point.) So why hasn't techno-optimism supplanted Marxism? Is it just the inertia of the system?

Dwarkesh: Well, you have competing ideologies. It certainly is succeeding in some sense, right? It has adherents. Part of it is just that there aren't enough people yet who have enough context to understand techno-optimism, whereas anybody can understand Marxism; or not anybody, but you can kind of understand the sort of thinking behind it.

Yeah, and I don't think one will necessarily supplant the other so much as they'll just be in competition with each other, the way a bunch of narratives are in competition with each other. As for how one should personally relate to these narratives, Tyler Cowen has a great talk where he says that as soon as you adopt a story, you're basically pushing a button that decreases your IQ by 15 points. You've got to take things case by case and understand the specifics of situations instead of having some 5,000-year grand narrative that explains everything.

Theo: That reminds me of something else Bryan Caplan said on your podcast. You were talking about feminism, and he said, when I write books about it, I try not to argue like a lawyer, beginning with my preconceived conclusion and then making arguments for it no matter what.

Dwarkesh: Yeah, and he probably still does that to a certain extent, because we're all prone to it. The great thing about society is that we then rebut each other and we're left with a better outcome in the end.

Where to allocate talent (1:18:51)

Theo: You like to talk about youth and talent. So if you could rule the world and reallocate the smart, talented kids entering the workforce however you wanted, what areas would you take them out of and what areas would you put them into?

Dwarkesh: I actually did ask this of Grant Sanderson; I don't know if you listened to that one. This might be an overreaching take. There are certainly a lot of really smart people in non-STEM subjects, and there certainly should be some people doing non-STEM things. Would I want to take them out, though? Because then you'd just be left with even worse non-STEM work; you'd actually have to reduce the relevance of non-STEM things in everyday society. Oh, I will say this: the obvious one is that you really want smarter people in politics.

Here's an interesting observation. Often when I'm reading an interesting paper or an interesting article and I look up the author, the guy turns out to be a central banker, like the former president of the New York Federal Reserve or something. So we actually have a great system for finding and identifying really professional, competent, non-partisan people to be central bankers. And I would just like that kind of system for other government offices. If the US president were as smart as the prime minister of Singapore, or the cabinet ministers were as smart as that... political life is obviously something we should have more talent in. But I don't think it's that talented people aren't going into politics so much as that the selection pressure doesn't favor them.

Theo: So what do you think the secret is for central bankers then?

Dwarkesh: I think there's literally a decades-long filtration process, maybe not centuries-long, built up by institutions. We find the most competent people in high school and send them to these elite colleges. And econ, at least until recently, has not been a particularly politicized discipline; it's a super rigorous discipline where we care about the truth. Then we find the most competent people who have been through undergrad there and send them to grad school, find the most competent people there, and have them shadow the people who are most competent. It's the same thing with law schools, and I think that's why the Supreme Court, for example, actually does a good job of trying to understand and parse the law: because we have a system that selects for the people who end up in it. You know what I mean? We have these institutions that cultivate talent in this way.

Theo: So it's just relentless competence and talent filters? There are no additional traits you would specifically select for in a central banker, aside from intelligence, hardworkingness, and competence?

Dwarkesh: And caring about the subject, like being a non-political person, not an activist type. What was it you just said?

Theo: Integrity, I imagine.

Dwarkesh: Yeah, yeah, yeah. But it's not even just that. You can be high integrity and also be a very political type of person, if that makes sense. I mean non-partisan, politically ambivalent types.

Sam Bankman-Fried (1:22:22)

Theo: You wouldn't want Sam Bankman-Fried running the central bank though, I imagine.

Dwarkesh: No, no. So then, yeah, it's interesting to specify what it is about him, because he's obviously smart, he's obviously hardworking. Maybe it's dysregulated people. Not deregulated, as in not being regulated by the government, but dysregulated, as in personally not being well-regulated. That seems like a bad sign.

Theo: Are you talking about Sam Bankman-Fried?

Dwarkesh: Yeah.

Theo: Really? Because he strikes me as very well-regulated personally, but just kind of misaligned. He's not unaligned, he's misaligned. Instead of focusing on, oh, what should I be doing that's legal? What should I be doing that will benefit my shareholders the most? He thinks about, what should I do that will benefit my lofty goals of effective altruism the most?

Dwarkesh: I honestly don't think that's the best explanation of his behavior. I think it generally is a level of incompetence at certain things, using QuickBooks or making these ridiculous bets that even in expected value terms probably didn't make sense at the time he was making them. Just being hopped up on a bunch of amphetamines and making these back of the envelope billion dollar decisions, I don't think that's well-regulated personal behavior.

And maybe this is better evidence for the old-school heuristics people tend to have: hey, if you get a haircut and act like a normal person and dress up in a suit, I'll trust you. I think SBF is good evidence for that. The kind of guy who's hopped up, playing StarCraft while he's talking to you... I guess you could say there's no first-principles reason to disqualify somebody for that, but SBF is good evidence that that kind of person is just all over the place.

Theo: I think famously he was remarkably bad at League of Legends. He never made it past bronze or silver or something after years of playing for hours a day.

Dwarkesh: Yeah, yeah, yeah. But he was playing it for hours a day while he was in meetings and shit.

Theo: So then if the explanation is not that he was malicious, but just that he was incompetent, how did he get such success in the first place?

Dwarkesh: Something I've learned from watching a bunch of very successful people is that you'd be surprised at the extent to which people who are successful in certain domains can lack judgment and make big mistakes in seemingly relevant domains. You should always double-check people's judgment, even when they're in high positions of credibility: everything from epistemic judgments about how things will progress to tactical, executive judgments.

I mean, there's a bunch of details that came out about what's been going on with Alameda and FTX, but the sort of bets they were making of doubling down on these shitcoins and so on, it certainly doesn't seem like... The lying was obviously the low integrity move, but those bad bets themselves were just evidence of fucking it up, right?

Theo: Yeah. I remember reading somewhere in one of the teardowns of what actually happened, that Sam Bankman-Fried famously made his first $20 million from arbitraging Bitcoin, and then it was gone within a year because he made a bunch of really bad bets.

Dwarkesh: Yeah, I mean, he just lost a shit ton of money on things like AWS, right?

Theo: Yeah. So I guess if you make lots of seemingly stupid, high-variance bets on illiquid, inefficient markets like crypto in the early 2010s, one of them might pay off well. But still, parlaying a $20 million success into a $20 billion success, even temporarily, is no small feat.

Dwarkesh: Oh, certainly. He's definitely a talented guy, but that just goes to show you that talented people can have bad judgment and be incompetent even in their own fields. I have less of a mindset now of, this guy is uniformly a super achiever, or this guy uniformly has bad judgment.

Theo: Do you think you would have in a million years predicted that FTX would just blow up like this? Or be fraudulent?

Dwarkesh: No, honestly, no. I interviewed him before, and I did a lot of research on him and his company beforehand, and I would not have.

Theo: Do you think that there are any companies today that you look at and think of like, wow, this might be like another FTX situation?

Dwarkesh: Yeah, like there's a lot of companies in AI where you think, what valuation are you raising at? And why will you not just be automated at the next OpenAI dev day? I don't know if it's like FTX level though. I don't know if there's a big fraud.

Theo: Grifters are everywhere, but like, you know, specifically on the level of FTX.

Dwarkesh: Yeah, it's hard to see. I think crypto is especially liable to fraud, obviously, because you're just moving numbers around, so it becomes easier there. I do think there will be... you know, we saw the stuff with Emad and Stability. I don't know if you saw all those revelations.

Theo: Yeah, I've seen them.

Dwarkesh: Yeah, so stuff like that. I think a lot of stuff like that will come up in AI, but it will just be so overwhelmed by the good investments made in AI, the ones that become trillion-dollar companies or something.

Theo: Yeah. With FTX, I was watching the Nas Daily YouTube Shorts video about him before the collapse. And it was like, oh, this guy is a billionaire and he's vegan and he wants to donate all of his money effectively and he does crypto. And I was thinking, obviously I'm not going to say I predicted this, that I knew what was going to happen with FTX beforehand, but something there struck me as a little sus, a little not normal for billionaires.

Dwarkesh: Yeah, though I think you could say this about a lot of people. There's always evidence that, retrospectively, makes you go, oh, that was very sus, I should have seen it coming. And I bet you could tell a similar story about literally every single billionaire: there are things out there, rumors, such that afterwards you could say, oh, obviously that guy was a fraud.

Why is Elon Musk so successful? (1:29:07)

Theo: So another question on talent. What is it that makes people stand out even among extremely talented, extremely smart, extremely productive people? Elon Musk, for example, stands out as totally in a class by himself, even among billionaires. Why is that? What's different about Elon?

Dwarkesh: What I've heard from people who have worked with him, or who are a few degrees of separation away, is just a complete willfulness, like the John Wick quote. I don't remember the exact quote, but it's that he'll just get what he wants. He'll scream, he'll throw tantrums, he'll stay up 24 hours a day. He'll do whatever it takes, but it is happening if he wants it to happen. He'll fire everybody and restart the whole project. A level of focus on progress, and a lack of complacency.

Theo: Is that it? Is that all it takes? And if that is all it takes, then why hasn't anyone else reached his level?

Dwarkesh: I mean, how many people do you know who all that could apply to?

Theo: None in real life? It's very rare, of course.

Dwarkesh: I've gotten to meet a lot of people in the last few years. And I'm trying to think, do I know somebody who's that willful? Maybe, but I think it's a rare trait.

Theo: Just high agency people?

Dwarkesh: Even agency doesn't do the word justice.

Theo: Really? It's more than that?

Dwarkesh: It's not just that. What people mean by agency nowadays has been so diluted; it just means, are you willing to send a cold email? Congratulations, or something. But this is like, no, literally, I'll fly to fucking Russia and we're going to buy old ballistic missiles. It's a level of: this is happening no matter what. It's not just that I will come up with different ideas, but that I will make them happen no matter what. Calling Elon high agency is like calling the ocean wet.

Theo: Have you read the Walter Isaacson biography?

Dwarkesh: No, have you?

Theo: Yeah.

Dwarkesh: Well, you might know more about this than me then, actually. What do you think? What makes him special?

Theo: Well, I think one of the best takeaways wasn't even in the Walter Isaacson bio. It was in a Scott Alexander article, on Slate Star Codex, Astral Codex Ten. And it wasn't about the Isaacson book, it was about the Ashlee Vance book. But he said something very similar: Elon is like a one-in-a-thousand or one-in-10,000 level engineer, intelligent and all that. But obviously that's a necessary but not sufficient condition for the success he's had.

But what really sets him apart is that he's one-in-a-million driven. He will do all this stuff: he'll go to Russia, he'll stay up 24 hours a day and work 120-hour weeks, and he'll take on projects that people would think are completely insane and then make them work. But it just seems like something's missing. How is it that only Elon is Elon? Why is there only one Elon and not a hundred Elons?

Dwarkesh: I think there are a lot of startup founders who are very driven; I don't think Elon is necessarily the only person who's that driven. It's just that even if they were all equally driven, they wouldn't all achieve equal outcomes; their outcomes would be distributed along a power law, right? So maybe you would see the exact same pattern we in fact do see.

Theo: Could it be ambition versus complacency? Not everyone at the age of 50, worth 11 figures, is going to keep being in the office 80 or 100 hours a week working on some of the hardest stuff. Bezos isn't doing that anymore.

Dwarkesh: Yeah, that's probably part of it, right? Like, how many Elons are there who just retired after SpaceX and took home the hundred million dollars? Yeah.

Theo: And then do you think Elon is incredibly, incredibly smart? I don't know how well you know him or know of him personally—

Dwarkesh: No, I don’t.

Theo: —but I wonder how much just raw intelligence factors into his success.

Dwarkesh: There's a big debate about this, right? Whether extremely high IQ is necessary for something like that, and whether these people do in fact have extremely high IQs.

Theo: Warren Buffett famously says no.

Dwarkesh: He says after 130, you might as well give up on those points and work on emotional IQ. But I think that's bullshit. 130 is just two standard deviations; that's like 5% of the population, a huge number of people, right? Even among them, you can definitely keep filtering for IQ. And there have been studies showing that the gains from IQ don't actually diminish; you can keep going out the curve and you'll still keep seeing gains in salaries or whatever.

So yeah, I think they're really smart. It's just that if you're selecting on a bunch of, like, a hundred different traits, you're not going to get the top score on any one of them; the guy who has the highest IQ probably doesn't also have all the other traits which are necessary.
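For reference on the two-standard-deviations arithmetic, assuming the conventional IQ scale with mean 100 and standard deviation 15 (an assumption about the scale, not a figure from the conversation), the exact tail fraction is

\[
P(\mathrm{IQ} \ge 130) \;=\; P\!\left(Z \ge \tfrac{130-100}{15}\right) \;=\; 1 - \Phi(2) \;\approx\; 0.023,
\]

roughly one person in forty-four, which in the US alone is still several million people, so the point that there is plenty of room to keep filtering above 130 holds either way.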

How relevant is human talent with AGI soon? (1:35:07)

Theo: So we've been talking about all these talent questions. How relevant actually are they in a world where we seem to be rapidly approaching AGI?

Dwarkesh: I think they're definitely relevant, obviously, to the AI question itself, right? You definitely want to recruit the people who are going to be working on this. In fact, it might be more relevant than ever, because if you look at past periods in history where there has been a huge kind of… As the AI stuff starts to take off, you're going to need politicians and policymakers and hardware makers and diplomats, and the world is going to look crazy, right? If this stuff pans out. So it's going to be thousands of people at the very least who are managing this whole thing as it goes down. And to be plucking out now the people who would be talented in these different kinds of roles, managing not only the research itself but the huge number of variables that are going to be at play when you have $50 billion training runs and countries potentially going to war with AI weapons: maybe it's more relevant than ever to be picking the talent to manage that, and just to have generally competent people in society so that when it happens, they can deal with it well.

Is government actually broken? (1:36:35)

Theo: So just have good policy makers. This reminds me of what you were discussing in one of your Tyler Cowen episodes. Tyler suggested that state capacity might not be in decline and it might be stronger than it previously was. Do you think that's true?

Dwarkesh: I just had Dominic Cummings on and if you’ve seen it, you know that his take is that state capacity is very much in decline. I think that might be a description of the UK itself. It feels like with COVID we saw that the system was very brain dead in many important ways. Maybe it was even worse before, so it could just be that things are improving. I don't really know.

Theo: The US and the UK have always been different. For example, Lee Kuan Yew went to the UK and he noticed everyone there was orderly waiting in the queue. He was impressed and decided to bring this orderliness and respect for rules to Singapore. And he did. Now Britain is less orderly than it was.

Dwarkesh: Yeah, I saw that. To some extent, I think maybe the problems Dominic is talking about are unique to the UK, but I think a lot of them are general: huge bureaucracies that are insulated from executive control and from any system to prune away incompetence.

There are certain aspects of the system that do seem to be really competent. I actually have a lot of confidence in the Federal Reserve or the Supreme Court. The FDA and the CDC, those kinds of institutions, did seem to function really badly during the pandemic. But then there are other aspects of the system that do function really well, that are linked to the government itself, a bunch of think tanks and so on. I don't know how it nets out, actually.

Theo: You mentioned the Supreme Court and the Federal Reserve as two examples of institutions that do function well. The Federal Reserve in particular, over the last couple of years with inflation, has gotten so much shit from all kinds of people for not having enough data and not reacting quickly enough. Do you think the Federal Reserve is even still relevant when we have so much data and so much compute? Or was it ever relevant? Sorry, necessary, not relevant.

Dwarkesh: It's certainly relevant in that they set monetary policy. If you're going to have a dollar currency, you need it; obviously it matters. And is it necessary? Yes: if you're going to have dollar-denominated currencies, then the policy of managing dollars is going to matter, and it's necessary. You could say, well, with crypto or something, you could maybe not have that. Maybe, I don't know. The cryptos that have actually been stable are the stablecoins, which are obviously dollar-denominated and therefore still move around with the Federal Reserve's decisions.

Theo: What do you think are some other examples of institutions that work really well within the US government?

Dwarkesh: The RAND Corporation, which is not officially part of the government but is linked to it. I think they've been focusing a lot of effort on AI and bio-risk kinds of things, and they seem super competent and well-versed there. But I guess you're asking about national government institutions, right?

Theo: Yeah.

Dwarkesh: I don't know that much about them, actually. Those are the only ones that come to my head immediately.

How should we fix Congress? (1:40:50)

Theo: Do you have any ideas based on talking to Dominic Cummings or based on reading Robert Caro about how to fix Congress? The single most shat-on institution in the country, probably?

Dwarkesh: I think just the regular stuff of having higher-IQ people and paying them more. Garett Jones in 10% Less Democracy has ideas about this. Here's something that's really interesting: the senators who are best are actually from these random states, like Montana or Nebraska, and they're just really smart people, while the ones who are worse are from the really big states. And generally, senators just seem to be a lot smarter than congressmen on average, which probably has in part to do with their being more insulated from day-to-day democratic whims. So maybe have longer terms for senators and congressmen. Yeah, I would do that. Like, the House elected every four years instead of every two.

Theo: That kind of flies in the face of what, you know, your average American would say, if you ask them, how do we fix Congress? They're like, cut their pay, and impose term limits!

Dwarkesh: Term limits might be warranted. But actually, maybe not. On one hand there's a gerontocracy; on the other hand, there is such a thing as expertise that you build up over time in the institution. But yeah, I think the average person would be wrong here. Many such cases.

Theo: So you talked about how we need to get higher IQ people in Congress. And, you know, that seems to be no easy task. Like a recent-ish example would be Blake Masters who ran in Arizona. He was clearly smart. He went to Stanford. He co-wrote Peter Thiel's book. He was endorsed and funded by Peter Thiel. He had ideas that were out of the mainstream, which is some signal of intelligence. He didn't just get everything from the Republican Party platform. And he still lost in a state that was previously relatively Republican. So how do you reconcile getting smart people in office with just the reality of politics?

Dwarkesh: The whims of voters might not be optimizing for that. Though I think it's unfair to blame voters in that particular case. I don't follow politics closely and I don't know the particulars of that campaign, but it seemed like he made a Faustian bargain there with Trumpism, and it's understandable why voters might have had concerns about that. But politics is not something I follow closely, so I don't know the particulars of that race.

Theo: What about just in general? How do we get more high IQ people in Congress?

Dwarkesh: Pay them more; longer terms, I think, are a big one. And part of it is just getting high-IQ people to decide to go into politics in the first place. It's not just about the system that filters them; it's also about who goes into the filter. But these seem like obvious things. I don't know if I have anything new to say here. It's obvious that we should have smart people try to go into Congress, and it's obvious that we should pay them more. But what do you think we should do?

Theo: Basically that. But in terms of actually getting smart people into Congress, I think a lot of smart people will just follow where the money is, because they're smart and money is nice. And that leads them into CS and AI. I like computers, but I like a lot of stuff, and I would be lying if I said that my reason for picking CS over all of them was not largely motivated by money.

What Singapore did clearly seems to have worked really well. But then again, there are other countries, like Israel, that have been quite successful post-colonial success stories without, unlike Singapore, a super well-functioning political system. If you follow Israeli politics, you know it's been falling apart over the last few years; to get a majority in government, parties need to form coalitions with the Orthodox. And of course, for the first couple of decades of Israel's existence, it was run by the socialist Labor party. So maybe it's not absolutely necessary to have super smart, well-coordinated people running a government for stuff to work.

Dwarkesh: Well, Israel did actually become much wealthier since it adopted free market stuff, right? I think its GDP per capita just shot up a lot.

Theo: Maybe the solution isn't to optimize for the best people in government. Maybe it's just to take the government out of most stuff, and most stuff will work out.

Dwarkesh: Yeah, I think it's definitely a combination of both, like a smaller government, but the part that it has to run, it's just run by very competent people, which is kind of Singapore, basically.

Dwarkesh’s favorite part of podcasting (1:46:46)

Theo: So flipping the script a little bit, people like to start podcasts with how did you get into this? And what's your favorite part? But I'll try ending the podcast with what's your favorite part of doing what you do? And what specifically motivated you to do it? Was it just boredom?

Dwarkesh: Yeah, I was bored in college. I think I was literally in the same situation you were in. I was a sophomore in college studying computer science. The best part is definitely, I will never stop being grateful for the fact that I can talk to literally the smartest people in the world, the people talking about and thinking about the most interesting things and just ask them questions for two hours or three hours at a time and be funded to spend the rest of my time thinking about what to ask them, doing research, trying to figure out what's important, who to have on and so on. That's a huge privilege. Obviously I'm super grateful for it and that's my favorite part.

Theo: All right, well, I think that's a good place to wrap it up. So thank you so much to Dwarkesh Patel for coming on the podcast.

Dwarkesh: Yeah, my pleasure, man.

Theo: Thanks for listening to this episode with Dwarkesh Patel. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. Also be sure to check out Dwarkesh’s Substack, dwarkeshpatel.com, follow him on Twitter @dwarkesh_sp, and of course, listen to the Dwarkesh Podcast, which you can find on YouTube, Spotify, and Apple Podcasts. All of these will be linked in the description. Thank you again, and I’ll see you in the next episode.
