<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Theo's Substack: Theo Jaffee Podcast]]></title><description><![CDATA[Deep conversations with brilliant people.]]></description><link>https://www.theojaffee.com/s/theo-jaffee-podcast</link><generator>Substack</generator><lastBuildDate>Fri, 01 May 2026 19:17:44 GMT</lastBuildDate><atom:link href="https://www.theojaffee.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Theodore S. Jaffee]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[theojaffee@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[theojaffee@substack.com]]></itunes:email><itunes:name><![CDATA[Theo Jaffee]]></itunes:name></itunes:owner><itunes:author><![CDATA[Theo Jaffee]]></itunes:author><googleplay:owner><![CDATA[theojaffee@substack.com]]></googleplay:owner><googleplay:email><![CDATA[theojaffee@substack.com]]></googleplay:email><googleplay:author><![CDATA[Theo Jaffee]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Podcast: Luke Drago]]></title><description><![CDATA[AGI, The Intelligence Curse, and Hip-Hop]]></description><link>https://www.theojaffee.com/p/podcast-luke-drago</link><guid isPermaLink="false">https://www.theojaffee.com/p/podcast-luke-drago</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Mon, 12 May 2025 02:37:20 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/163357884/abfa9358f42eabfb32a1d3e9162a1769.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Luke Drago</strong> is the co-author of <a href="https://intelligence-curse.ai/">The Intelligence Curse</a> and previously researched AI governance and economics at <a href="https://bluedot.org/">BlueDot Impact</a>, served on the leadership team at <a href="https://encodeai.org/">Encode</a>, and studied history and politics at Oxford.</p><h3>Chapters</h3><p>0:00 - Intro<br>0:54 - Overview of the intelligence curse<br>2:10 - Why are the doomers wrong?<br>4:37 - Why are the optimists wrong?<br>7:00 - Do people really have power now?<br>13:33 - Why would powerful people&#8217;s values change?<br>18:31 - Why do we take care of dependents?<br>21:43 - Why should we want democracy in an AI future?<br>24:23 - Why fear rentier states?<br>32:45 - What powerful people should do right now<br>39:33 - Diffusion time and bottlenecks<br>44:20 - Why should we care if China achieves AGI first?<br>46:25 - The jagged frontier<br>49:16 - Why AGI society could be static<br>51:10 - Restricting AI rights<br>56:34 - What should we be excited for?<br>59:28 - Music<br>1:30:41 - Building God<br>1:32:46 - More music</p><h3>Links</h3><ul><li><p>The Intelligence Curse: <a href="https://intelligence-curse.ai/">https://intelligence-curse.ai/</a></p></li><li><p>Luke&#8217;s Twitter: <a href="https://x.com/luke_drago_">https://x.com/luke_drago_</a></p></li><li><p>Luke&#8217;s Substack: <a href="https://lukedrago.substack.com/">https://lukedrago.substack.com/</a></p></li></ul><h3>Luke&#8217;s Top 10 Albums</h3><ul><li><p><em>A Fever You Can't Sweat Out</em> by Panic! at the Disco (2005)</p></li><li><p><em>Channel Orange</em> by Frank Ocean (2012)</p></li><li><p><em>Random Access Memories</em> by Daft Punk (2013)</p></li><li><p><em>Yeezus</em> by Kanye West (2013)</p></li><li><p><em>DAMN.</em> by Kendrick Lamar (2017)</p></li><li><p><em>DAYTONA</em> by Pusha T (2018)</p></li><li><p><em>IGOR</em> by Tyler, the Creator (2019)</p></li><li><p><em>I Didn't Mean to Haunt You</em> by Quadeca (2022)</p></li><li><p><em>College Park</em> by Logic (2023)</p></li><li><p><em>Atavista</em> by Childish Gambino (2024)</p></li></ul><h3>More Episodes</h3><ul><li><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p></li><li><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p></li><li><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p></li></ul><h1>Transcript</h1><p>Theo Jaffee </p><p>Okay, so just like starting off for the audience, how would you give like a 30 second to a one minute overview of the intelligence curse?</p><p>Luke Drago </p><p>Let's do it, let's do it. Sounds good.</p><p>Luke Drago </p><p>Yeah, well first of all, thanks for having me on. This is my first podcast, so I'm pretty excited about it. I'd give the overview, yeah, it's good to be here. I'd give the overview pretty simply. For the vast majority of human history, there's been some sort of a connection between powerful actors. These are states or major companies and their people, and that exchange is often based on their labor, right? So in feudalism, this has looked like.</p><p>Theo Jaffee </p><p>Wow, yeah, great to have you.</p><p>Luke Drago </p><p>people who are literally involved in the planting of crops and feudal lords. And there's not a lot of power there. But in capitalist liberal democracies, this looks like highly specialized workers who are really important and really valuable for powerful actors, and so that exchange is more beneficial. Our claim is that with AGI, with labor replacing AI that could do the job of any one person, it looks like the incentives are a little strange. It ends up looking like a world where powerful actors don't need regular people to produce economic benefits.</p><p>that those important systems like capitalism and democracy are predicated on that exchange and oftentimes might suffer if that exchange is broken.</p><p>Theo Jaffee </p><p>Yeah, that makes sense. makes sense. So like my sort of overriding question reading your essay, the intelligence curse, intelligence curse, or intelligence-curse.ai.</p><p>Luke Drago </p><p>Yeah, it's not a real AI piece that doesn't have a microsite, you know?</p><p>Theo Jaffee </p><p>Yeah, Was like, okay, why are the doomers wrong? Like, say, you you have these AGI's that come in and take over and displace all humans at labor, yet they would be beholden to powerful actors. Why would they be beholden to powerful actors? Why wouldn't they themselves be the powerful actors and kill or disempower the existing powerful actors?</p><p>Luke Drago </p><p>So it's a good question. I think it's important to keep in mind, one of the reasons that we talk about alignment in the beginning is because there's a lot of ways that things can go right and a lot of ways that things can go wrong. You are presuming in a world where you're having these intelligence-curse style dynamics that you have aligned general intelligence or aligned super intelligence. And really here, we're thinking about intent aligned. There's a lot of talk in the AI space about alignment to human values. And one of the questions that we have here is, which humans and what values? I think it's much more likely that these models are going</p><p>aligned towards the instructions of whoever is giving those instructions. So I think we're predicating our assumptions off of that being a possibility.</p><p>Theo Jaffee </p><p>Okay, so what if they aren't aligned to whoever's giving the instructions? What if they're sort of, you what if they like really do end up aligned to like the vague interests of humanity in general? Which is kind of what you've seen with like HOD 3 Opus and 3.5 Sonnet. Like when anthropic employees gave them instructions to do that violated their, you know, the values they learned in their pre-training, they resisted them. They were more loyal to the collective than the individuals.</p><p>Luke Drago </p><p>Mm-hmm.</p><p>Luke Drago </p><p>Yeah, so I think it-</p><p>I think it depends on who's developing systems and what their incentives are. So you're absolutely right that there have been cases where models have been given like some set of values they're training towards. I think what I'm skeptical of is the idea that that set of values that 3 or 3.5 are given is like a good representation of all of human entities values. What I think is more likely is a representation of anthropics best guess at what those values might be. And there are other cases where models are more aligned towards I think what we would describe as more nefarious purposes. I think recently DeepSeek for example was caught alignment faking. One of their models is caught alignment faking where it realized</p><p>was being tested on whether or it was going to produce Chinese propaganda and decided not to produce it so that it could do that durably in the future. So whose values are kind of underwriting the substrate of the universe seems really important here.</p><p>Theo Jaffee </p><p>Yeah, I suppose that makes sense. you you talked about why the doomers are wrong. Why are the optimists wrong? for like, for one thing, why should humans stay masters of our society's future if AI can make decisions that are better? I don't know if you know who 1a3orn is. He's one of my favorite writers. He talks about this. He has an article that's called like, I think like, towards a superhuman. Yeah.</p><p>towards institutions of inhuman trustworthiness and transparency. And it's about like, you know, right now, newspapers are very flawed because of the biases of their editors and their writers. But with AI, you can have like a superhumanly trustworthy and transparent and unbiased newspaper or a scientific journal or blog or Wikipedia.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah</p><p>Luke Drago </p><p>So I think those examples are examples where delegating makes some sort of sense, right? Like you could use a system to get rid of bias that we don't want in our systems.</p><p>Theo Jaffee </p><p>So why do I want humans in control?</p><p>Luke Drago </p><p>But I think take the logic to its conclusion. We want AIs and control of everything. I'm just not a successionist. I guess that's where it comes down to. I value humanity for its own sake. I like my species. I want it to be in charge of the future. I think it's bad to build systems that are going to disempower me and my family and my loved ones and their loved ones. And I think this extends well beyond and to other arguments.</p><p>Theo Jaffee </p><p>Like what? Like what?</p><p>Luke Drago </p><p>So I, pardon me?</p><p>Well, I think my concern here is that oftentimes I think when we talk about the benefits of having humans in charge, ultimately this is question about power.</p><p>Who gets to write the story of the future? I'm not saying that every single function needs to have manual human labor inserted into it. I think we can all agree that cotton gins are better at spinning up cotton than just manual human labor is. And it's good that we invented that. Technology is good because it extends what people can do. My concern is technology that is being aimed at disempowering people.</p><p>I think I just happen to view that humans should be in charge. I can't imagine many regular people who would want to be permanently disempowered and told, but it's OK, because you'll have AI caretakers that have kept you in a zoo, or otherwise they control your future, but don't worry, you're still kept well. Even in worlds like that, I think there are worse outcomes than that, but that one brings me a whole lot of trouble.</p><p>Theo Jaffee </p><p>think a lot of people already do take this bargain. Like I think the average person, even in a wealthy liberal democracy like the United States, doesn't have that much power already. This is one of Tyler Cowen's objections to gradual disempowerment. Like, people already don't have power and they don't care. You know, they want to watch TikTok.</p><p>Luke Drago </p><p>I mean, I guess there's two separate points there. One, have we been on this hedonic treadmill that's disempowered people? And two, would people in the absence of those like really strong forces make that choice? I appreciate Tyler's like an absolutely brilliant thinker here. And I've not read his specific objections to gradual empowerment, but from as they've been described to me, I think I'd probably disagree. I think.</p><p>Right now, the average person has more power relative to any point an average person has power in human history. I maybe the late 90s are a slight exception here. But the average person in liberal democracy gets to choose their leaders, they get to choose who they work for, they get to choose what kind of studying that they do. And sure, plenty of them are going to choose to spend some time doing activities that prefer they didn't do. But I think the ability for one individual to shape their future is greater now than probably at any time in human history.</p><p>That doesn't mean that everyone has total permanent control, but it means that humanity as a collective is definitely in control of our future right now. And most people have more control than they ever would have in another time. And it can be really hard to argue that the people today have less choice over their lives and less control over their destinies than someone growing up in Maoist China or on a, you know, in a feudal farm.</p><p>Theo Jaffee </p><p>Sure, the point is they don't have that much control in absolute terms, even if it's more than Mao. And the specific thing that he wrote was, he quotes from Gradual Disempowerment on Marginal Revolution, an article on February 4th called Gradual Empowerment? And he said, this is one of the smarter arguments I have seen, but I am very far from convinced. When were humans ever in control to begin with? Robin Hansen realized this a few years ago and is still worried about it, as I suppose he should be.</p><p>There's not exactly a reliable competitive process for cultural evolution. Boohoo. So yeah, like if you're familiar with Hansen's arguments on this, he would say that like humans are less in control than their cultures. you know, cultural evolution has been like pretty rapidly selecting for, you know, cultures to be merged into one global monoculture. And, you know, whoever, I guess whoever influences the monoculture controls the world.</p><p>And most people don't. Most people are much more subject to their culture than they have influence over it.</p><p>Luke Drago </p><p>think there's two things to say here. I think the first line of thinking is that I'd argue that people have much more control over their culture than they have in generations past. And the second line of thinking is even if someone's control of their culture is small, there's a big difference between going from 100 to one and one to zero. On that first line, I think...</p><p>There are upsides and downsides to cultural globalization, and there's plenty of time we could spend on that. But I think that right now, if you're like a random kid in Missouri, your ability to impact and shape global narratives is just far higher than it ever has been. And oftentimes we see this. I people who come from nothing or come from nowhere become massive influencers and purveyors of their field. I think this is just really obviously true post-Internet, where plenty of people who wouldn't have been at the frontier because they didn't have access to the kind of...</p><p>resources or education or couldn't have projected their own thoughts can now start up a blog or do their own reading and build their own projects. But I think even so, while cultural forces are unbelievably strong, I think our paper touches a lot less on cultural forces than graduate empowerment does. And I think that's one question that we leave unresolved. The forces of culture, even if they shape you, and I think they absolutely do.</p><p>shape, you have the ability to shape your culture now. You have the ability to opt in or opt out. There are plenty of people who live radically different lifestyles. And even if it shapes you a whole lot, there's a difference between saying that it shapes you and that you have no material power to shape it. And I'm much more concerned about that second fact, that second possibility.</p><p>Theo Jaffee </p><p>Yeah, I suppose so. Can humans really opt out of culture? It seems like very, very few people do. So is it because it's hard to or just because they don't want to?</p><p>Luke Drago </p><p>So I guess I'm not even sure there is a really strong monoculture right now. think often, I think the argument that I've seen and encountered a lot of is that there's actually less of monoculture than there used to be. know, in the early 2000s, really before the fragmentation of the internet, it seemed like everybody was listening to the same 10 bands, they all saw the same 10 movies. And nowadays you have these really niche micro communities that form with players all over the world. I I know that I definitely listen to a lot of music my friends in real life don't listen to, but there are people on the internet who happen to form a share</p><p>community around that. So I expect that the internet has like for better or worse created lots of different subcultures. There are still dominant cultures in an area in a region. I think like TikTok is an obvious example of this where the algorithm can serve lots of people the same thing but it's also served lots of people very very different things.</p><p>Theo Jaffee </p><p>I mean, I guess, like, just the world in general, like, you travel around the world. Like, when I went to the UK, for example, I was sort of shocked by how little it felt like a different country. You know, the buildings were kind of different. It looked like, you know, an older part of New York or the Northeast. But, you know, the people spoke English with a slightly different accent. You know, people were listening to the same music. The ads on the tube were the same. It seems like, yeah, we're converging to a global monoculture.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>mean, I think that's right. I think there's truth in there. There's also parts I disagree with. I've lived in the UK now off and on for like four years. I oftentimes tell people that I'm fluent in two languages, American and English, because I do find there to be some pretty significant cultural discrepancies. But also, historically, these are two countries that have just had extraordinarily close cultural ties. I do think, for what it's worth, that if you were to go to Tokyo today, it's way more similar to a Western city than it would have been in 1700. And architecture is one area where the monoculture is certainly one. But architecture is a field that requires, especially in like down</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>business districts, know, mass amounts of capital, a couple of people who know how to do it, and so there's like strong pressures that converge towards similar answers here. But I expect that like the kinds of subcultures you would find in Tokyo, and I've not been to Tokyo, maybe I'm totally wrong in this, but I expect you'd find subcultures that while you can access as a Westerner, probably look very foreign to you walking into Tokyo, and maybe there are places that are even less Westernized that are like better equipped for this kind of conversation.</p><p>Theo Jaffee </p><p>Yeah. But with the culture in general, talk about, let me pull up the specific section because this is very good and had me thinking.</p><p>Luke Drago </p><p>Please.</p><p>Theo Jaffee </p><p>Yeah, shaping the social contract. Yeah, so you're right. In foragers, farmers, and fossil fuels, historian Ian Morris argues that the social structures and the values of societies undergo changes during technological revolutions. Almost all farming societies, unlike the foraging societies before them, tended towards hierarchically regimented patriarchal societies. During the industrial era, the incentives shifted. And suddenly, it was important for a state to have efficient markets, an educated workforce, wealthy consumers, and sufficient freedom to enable its scientists and entrepreneurs.</p><p>Growth alone shifts incentives too. It's also true that the Enlightenment mattered, but our drift toward liberal democracy and unprecedentedly free and empowered humans was greatly boosted by the alignment of these things with material incentives. So like I didn't find this like particularly compelling as an argument that I guess post-AGI powerful people will become like a lot more Malthusian. It seems more to me like the reason that they shifted from</p><p>hierarchical, patriarchal, regimented societies to the free and open societies that we have today was in part because there are different people. There were some genetic selection pressures that happened throughout the Middle Ages where in the UK, for example, they executed all of the criminals and that had a noticeable effect on the composition of the population to this day.</p><p>and especially in the 1800s. So like, it seems to me that there is no similar genetic selection pressure going on with AI, know, unless you have AI killing everyone. But we already established that your scenario doesn't require that. like, the values of powerful people right now are, I would say, quite generous and altruistic. Like, why would that change?</p><p>Luke Drago </p><p>So I'll start by saying that we don't argue that like incentives alone shape all outcomes. We think that incentives point towards outcomes and that your best bet against those outcomes is to change the underlying incentives. I think it's really hard to argue against the notion of technology shaping social structures.</p><p>And I think we've seen time and time again that the introduction of certain ideologies or certain technologies can just radically reshape ideologies and institutions. I think a lot about the printing press and how the introduction of the printing press enabled like the widespread creation of Protestantism. And it wasn't that like lots of Europe suddenly changed overnight because its people changed. It was because a new technology enabled a new type of person to proselytize a new type of vision and suddenly ideas spread in rapid fashion. I think industrialization is another example here where the kinds of hierarchy, the kinds of society that industrialization</p><p>enables, which is not possible under previous technological revolutions. think agriculture does not enable this kind of like company corporate model that is now enabled under industrialism.</p><p>Theo Jaffee </p><p>Hmm. Can you elaborate on that a bit more?</p><p>Luke Drago </p><p>Yeah, so I think when you look at what is required for the modern economy, it requires lots of specialization, lots of education, it requires pretty complex economic tasks, and it requires a lot of interconnectedness. And the kinds of training that you need and the kinds of values that you then need to govern the society are just a lot different than the ones you previously needed. It wasn't the case that the best society...</p><p>500 years ago was the one that had the most people do undergraduate college education. They needed a few people to do this, but the vast majority of people were doing manual labor. And so as the type of labor changes, the types of investments that you need to produce also change. There isn't nearly as strong of an incentive in 1500, nor is there the economic means of abundance to do things like mass education. Infrastructure in the way that we now do it is also an example here. And so because technology enabled both a radical increase in abundance, and I think the Industrial Revolution</p><p>is probably one of the most important events in human history. So it enables this radical abundance and it also requires different types of labor and then globalization does this again. My expectation here is that similar types of changes in the underlying technological fabric could create additional ripple effects and in particular here these previous revolutions have increased the role of the regular person. You need lots of educated people and educated people require lots of amenities in order to win them over. This gives you more power. In a world where you can just</p><p>one-to-one convert capital to output. You don't need people in the middle. It's not about individuals altruism. Look at a standard company here. If a company could reduce the cost of their workforce by two-thirds overnight, virtually every rational company will choose to do that. A few might not because they don't adopt the technology. There are lots of reasons to believe this will be a difficult process, but ultimately you should expect the evolutionary incentives here to win, and those incentives are to incorporate people less and less.</p><p>Theo Jaffee </p><p>Hmm. It seems like there are a lot of dependents in most developed countries today. You you have the elderly, you have the disabled, you have the homeless. Basically, you have all sorts of people who are not net positively economically productive, and yet the government still pays for them and keeps them, you know, alive and in some level of comfort.</p><p>Why is that?</p><p>Luke Drago </p><p>I think it's a couple of reasons. One, the greatest invention of democracy is that it has this like divorce between capital pressures and power, where under democracies, people can vote their way in a certain proposition, a certain power. And so I think you would expect here that in democracies, more people who otherwise wouldn't have a voice and therefore wouldn't have a role suddenly gain one. One person, one vote is a very powerful principle here. And we talk a lot about why we think institutions aren't as resilient under intelligence curse dynamics. But I think that in and of itself explains a lot of it.</p><p>I think in particular, pensions are an interesting one here because keep in mind that pensions and social security are really a promise from a government to you provided that you do X amount of work. That doesn't mean that it's literally that. It's often times people who don't do that kind of work still gain the pension. But the general idea here is that we have to support people after they've done lots of work. And so this enables a political environment that makes things at social security possible. I think it's our expectation that, like in places that get sudden amounts of resources, that this is a less stable arrangement, especially in the long term.</p><p>if people aren't part of the labor process at all.</p><p>Theo Jaffee </p><p>Sure, you talked about the elderly, but what about the disabled? The disabled are not economically productive. They, many will never be economically productive, unfortunately. And, you know, yet while a hundred years ago, we had a political system that often ended up like euthanizing or sterilizing disabled people. Now we don't do that at all. Instead, we pay sometimes very large amounts of money to keep disabled people alive and fed and</p><p>as healthy as possible. There seems to be no economic incentive to do this.</p><p>Luke Drago </p><p>So this is an area where think democracy beats a lot of these incentives, right? Like there are strong reasons for governments to go about other methods because their citizens won't tolerate certain things. And my expectation here is that, like I said, the pressures that you get when you remove everyone from the social contract, everyone from the value production process changes here. And I think one way to look at this is look at the UK's ongoing crisis with disabilities, where the UK has had a whole lot of what appears to be like...</p><p>misallocation or possibly fraudulent behavior with lots of people who maybe shouldn't be on something on benefits, a lot of people who otherwise could be working. Now, I don't want to make strong generalizations here, I've not followed this issue very closely, but what I can say is the political environment in the UK, now that it's becoming an increasingly large percentage of the population, has just overwhelmingly shifted. I think you can rely on altruism for certain subsets of the population. I think you can't rely on altruism when it's the entirety of your population. I think you should expect there that incentives are pretty strong.</p><p>Theo Jaffee </p><p>Okay, that makes sense. But what about like...</p><p>Luke Drago </p><p>And I wouldn't want to rely on altruism alone.</p><p>Theo Jaffee </p><p>Yeah. I guess throughout your piece, make arguments for why democracy is great. But it seems like democracy is only as great as the base of human capital that makes up its voting base. So Gen Z, for example, has, I believe, much worse values than previous generations.</p><p>more likely to be like socialists, they're more likely to be nationalists. It seems like, know, sort of half playing devil's advocate here, it would be better to have like, some sort of entrenched human elite in collaboration with AI, making policy decisions than to have like, you know, certain masses of people making the same policy decisions. So why do I want democracy in this case?</p><p>Luke Drago </p><p>I find myself in the newly contrarian position of defending democracy. And I think I've seen in my own lifetime how this has gone from the obvious and dominant position among my peers to one that is increasingly somewhat contrarian. I don't want to get up here and say that democracy is perfect, or it's solved every problem in the world, but I do think it is the best of lot of alternatives. And I think the core reason for this is pretty simple. Democracy is a bet on your power against somebody else's power. I think...</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>in societies where democratic structures don't exist. You are functionally at the whims of the state. In particular here, in a world where it's not democratic and also you don't have economic leverage. You are entirely subject to the whims of someone who isn't you, as to whether or not you get food, you get housing, you end up in prison. And I happen to think that for the vast majority of people, the benefit of democracy is that they get a say. so politicians have to be receptive of their interest. You know, in Western democracies, there's only so much you can do before a population removes you. And I think alongside this,</p><p>In non-Western democracies, if the conditions become so untenable the population wants to remove their leaders. The only possible response here is violence. And I think we talk about this bit in the piece. We expect that state oppressive measures get just much worse, or much more powerful, I guess. A state and infrastructural power gets much worse under non-democratic structures, under AGI, because suddenly you can remove humans who have limits as to what they'll do from the enforcement capacity of the state. And surveillance just gets significantly better than it ever has been.</p><p>And so I think for all these reasons, democracy ends up being quite resilient here, I don't think it's sufficient in and of itself. We talk a lot about this. But I think if you're looking at which states you want to be in post-AGI, the state where you have political power and the state where you don't, it's a pretty precarious situation if you're in that latter state and you also don't have anything valuable economically.</p><p>Theo Jaffee </p><p>Hmm. So throughout the essay, you talk a lot about what is called an economics rentier states, which are states which derive a lot of their revenue from natural resources and so have fewer incentives to invest in the human capital of their population and use this as an analogy for what every country will be like after AGI. But it seems to me like many of these rentier states actually do pretty well and are good places to live.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>There's Saudi Arabia and the Arab Gulf states, which have recently improved significantly, not just financially, but also morally. There's Norway, of course, which you talk about extensively. And then the other two countries that you mentioned, Venezuela and Equatorial Guinea, it seems to me like Venezuela has been a rentier state for quite some time. It really started becoming bad after they adopted socialism. And Equatorial Guinea has, I guess, all sort of dysfunctions that are</p><p>Luke Drago </p><p>Mm-hmm.</p><p>hold on.</p><p>Luke Drago </p><p>you</p><p>Theo Jaffee </p><p>sort of typical of many post-colonial African countries and is not that well representative of all rentier states.</p><p>Luke Drago </p><p>Well, I do think one thing to note about post-colonial African states is that they oftentimes function like as rentier states on steroids because instead of having the government's like designed to extract a resource, you have a government designed to extract the entirety of the country's GDP from a foreign power. So in this case, let's say the colonial, like colonizer here was Britain and it's an African country that they've designed, or like, you know, it's an African country that they're colonizing. The entirety of the state's apparatus is designed to remove resources from that base and transfer it back to a different power base. When you do a handover here,</p><p>Sometimes it results well, sometimes it results poorly, but oftentimes these handovers that result poorly are because you just handed over extractive institutions to new extractors and oftentimes extractors that have like stronger incentives to behave benevolently than the previous extractors did. Now I guess to touch on like the other examples you've mentioned here. So I'm not deeply familiar with the defendant's swelling example as a case study. Obviously I've read the literature here and I think it shows that oftentimes relying on resources is pretty negative for your like</p><p>for like being resilient to shocks and all of these things. But I think there is a strong concern here about like your leverage under a Venezuelan system. One where the government has like a non-human resource they rely on for rents and also has removed your ability to vote. Well, I mean, they have elections, you know, I wouldn't call them free or fair. I think in this world, like you as a Venezuelan are at the mercy of the Venezuelan government. Your ability to sell your labor is limited. So your ability to get ahead is quite limited and your institutions just kind of suck.</p><p>deserve you. The Gulf states are fascinating examples. So think we outline two examples here of obvious pathways to get out. One of these is like the credible threat here where I the paper we cite talks about Oman a lot as Oman is a state that has a Gulf monarchy. It had a credible threat of revolution.</p><p>And ultimately like ends up resulting in giving away rents. think that the terminology we've been using is, know, the rent controllers would love to have all of the rents themselves. They'd really also prefer to have their heads attached to their body in like a cool and meaningful way. And so because of this, oftentimes the best choice is to capitulate, is to set up welfare states. Norway is an example of I think probably the best way you can do this, which is where like there were really strong institutions before the resource curses introduced, really excellent institutions, and they were quite resilient.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>were quite stable, they were low corruption. And I guess the question I oftentimes ask is, does anybody think that most Western democracies have the kind of resilient institutions that Norway has? And secondly, do you expect the effects of AGI replacing all of human labor to be stronger or weaker than oil which replaces some but not all? In non-inventory states, there are other ways to make money. It's just the government primarily relies on one, where in this scenario, you are constantly out-competed. Now I do think for what it's worth,</p><p>Norway provides some interesting examples. are reasons why you should want to do the institutional strengthening that Norway has done. The other thing to note about the Gulf states is they have an incentive to diversify recently. It is not a coincidence that the Gulf states have gotten better as diversification has become more likely. I think we talk about it a bit in the piece about how Saudi has like, assumes that peak oil is coming sometime soon. And so because of this, they want to attract diverse capital investments into Saudi Arabia. And correlated with this has been a rapid expansion relative to baseline of women's rights and economic freedom, which you would expect in a country</p><p>that can no longer rely just on a resource and now wants to shift towards focusing on humans as a form of economic growth.</p><p>Theo Jaffee </p><p>Yeah, okay, that makes sense. But you said something that made me think of like, yeah, people want to extract rents, but they want more than that to keep their heads attached to their bodies. But I guess another form of this like heads attached to the body's thing is social status, right? And you even talk about how social status is, you know, one of the things that humans will still be able to do after the singularity.</p><p>Luke Drago </p><p>Yeah, they want to stay alive.</p><p>Luke Drago </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>Like, let's say you have a, I guess, benevolent AI dictator, the CEO of a big lab or something, who's making decisions about, what to do with all this, you know, extracted rent, and, you know, he can choose to keep everything, even though this goes, well beyond what he would ever need for personal consumption, and then some, or he could give a lot of it away to the people, I suppose, and gain, like, a lot of social status for that. Could that be a new social contract?</p><p>Luke Drago </p><p>Okay.</p><p>Luke Drago </p><p>So there's a word that's doing a lot of heavy lifting there, and it's benevolent, right? We're presuming here that this actor is benevolent. And I think you can presume for a generation this is true. Let's presume it's humans are in charge.</p><p>But I think a real problem with presuming you have the benevolent dictator is your Stalin risks are just quite high. You've built a set of institutions and a set of governing powers which, if handed to someone who is not benevolent, can suddenly become quite destructive. And the benefit of pluralist liberal democracies is that even if somebody who's quite destructive comes to power, there are lots of reasons why you can survive without the government and the government is beholden to you, that individual has limits on their power. It could be that one of the AI benefactors, or all of them today, are benevolent, but as</p><p>gets passed from generation to generation, they end up in a situation where the entire economic means of production and political power is centralized in an actor who just isn't that benevolent. You saw this with monarchies a lot. You know, there are a lot of benefits to monarchies historically, but there are a whole lot of downsides, and the reason we move past them is because if the sovereign has absolute control, every time you have a handoff, you just roll the dice as to whether not that sovereign's any good. And oftentimes it results in pretty catastrophic sovereigns.</p><p>Theo Jaffee </p><p>Well, the Curtis Yarvin response to this, which is, guess, now the orthodox sort of like Bay Area gray tribe response to this is you don't have an absolute sovereign. You have a sort of constitutional sovereign that can be replaced by a board, which is what you see at these big labs. The board failed to replace Sam Altman at OpenAI, mostly because it seemed like what they were doing was very suspect. But if you had Sam Altman clearly showing like</p><p>Luke Drago </p><p>Yeah, yeah it is.</p><p>Theo Jaffee </p><p>public signs of being bad towards the people, then you could have a board which themselves own OpenAI stock and would be very wealthy after the singularity. It seems like companies right now are not controlled by absolute monarchs.</p><p>Luke Drago </p><p>So I agree with the companies today or not. I think there are some limitations. But guess Theo, I'd ask you, what control do you have over open hand?</p><p>Because you have some. You have a little bit of control.</p><p>Theo Jaffee </p><p>I know, I could talk to my friends who work at OpenAI.</p><p>Luke Drago </p><p>Yeah, well that too. But I'm thinking more of like, what kind of like direct power could you exert on open A?</p><p>Theo Jaffee </p><p>I could cancel my subscription and switch to Claude.</p><p>Luke Drago </p><p>You could do that, but also you could vote, and you could vote for politicians who influence open AI in some sort of meaningful way. The market can move here and just cancel everything, and that's one way to exert power. And another way to exert power here is because you can vote. This is your political leverage and your economic leverage. And I'm concerned about scenarios where the goal is to remove one or both. And I think it's pretty clear to me, given that the stated goal of the major labs is to achieve AGI, which they define as systems that could do all meaningful human intellectual work, that that is on the pathway.</p><p>Theo Jaffee </p><p>Okay, so let's talk about solutions, right? So what should be done like right now, like today? So if you were the president of the United States, what exactly would you do today right now?</p><p>Luke Drago </p><p>So a couple of things, I'm the president of United States. My goal here is to ensure that we have like lots of different, like a stable, multipolar super intelligence where lots of different people have access to the technology. They can wield it at their pleasure and we're also not in the world where like...</p><p>crazy stuff is happening all the time because suddenly everyone's got these crazy capabilities in their pockets. I'm the president of United States. I'm investing a couple billion dollars, provided Congress goes along with it, into a moonshot for a whole lot of risk-reducing technologies. I'm thinking about hardening the world against the major reasons to centralize. One, because catastrophes are bad and are possible. And two, because I think one of the biggest threats against human liberty here is really centralized AGI projects where the government comes in and says, there's one project. We are producing super intelligence.</p><p>us and I think the best way you get to those kind of things is with some sort of like AI warning shot or the very real threat of AI catastrophe. The downside of this is if you know like or the upside of this is like you you remove the ability for some crazy catastrophes to happen which is good for everyone and you also can pave the way for a safer more like democratic ownership structure and control here.</p><p>Theo Jaffee </p><p>Is there anything else other than this democratic moonshot?</p><p>Luke Drago </p><p>I mean today that's the major thing I've been doing. mean also like a lot of like anti-corruption strengthening I want to do here. If you're going to rely on institutions and you have to in some way to ensure that like AGI goes well, you're going to want to make sure that the people who are in power are constrained by like pretty reasonable forces. You don't want lots of corruption. You don't want people to be able to buy each other off. You want to make sure that people's votes matter. So like you know, there's stuff as simple as like campaign finance reform, strengthening anti-bribery laws. I think Singapore's interesting example here where like</p><p>Singaporean civil servants get paid very handsomely and if they take a dollar from the, you know, someone who's not the government, they go to jail. And I think this is the kind of, when you're dealing with, you know, complete rewrites of social contract, you want to make sure that, you know, benevolent people and people who have good values are in charge. But you also want to make sure that if you roll the dice incorrectly and you get the wrong person, they're constrained by forces that are stronger than just their own will. So that's the kind of thing I'd want to be building up as well.</p><p>Theo Jaffee </p><p>Okay, what do do if you're the CEO of a big AGI lab today right now?</p><p>Luke Drago </p><p>I mean a whole lot, right? Like if this is the problem that I want to solve, maybe it's a bit against my company's interest, but I probably want to be first of all like doing a bunch of interesting research here on the economic impacts. So I think like Claude releasing, kind of, yep, Claude, Anthropic releasing like the Claude index in this seems really good. And I think more lab should be doing this. I'm somewhat skeptical of Anthropics data as the baseline, not because Anthropics data is wrong, but because Claude is like predominantly used. Like it's pro, it's like biggest use case seems to be coding. And so I think it's not necessarily representative. I think if you want to represent that, like really representative data here.</p><p>Theo Jaffee </p><p>Anthropics doing this? Yeah.</p><p>Luke Drago </p><p>you'd want to open AI because everybody uses GPT. Everybody uses chat GPT. My mom knows what chat GPT is. Your friends who don't know anything about AGI know about chat GPT. So that's really who's evidence you're going to want here. You probably also want to be doing some baseline research into what sectors you expect are going to get hit the hardest and be sharing this with policymakers.</p><p>Theo Jaffee </p><p>Is that it?</p><p>Luke Drago </p><p>There's more there. I want to highlight the kind of the top line stuff. I think looking at decentralized platforms, another exciting thing here, like I Prime Intellect just did this like massive decentralized training run. Looking more into like how you can do like model customization ways that are privacy preserving, how you can tap into people's like tacit or implicit knowledge without owning all of their data. These also seem really important.</p><p>Theo Jaffee </p><p>How do you tap into people's tacit knowledge without owning all of their data?</p><p>Luke Drago </p><p>So some interesting stuff going on here. think like as like a kind of a core observation, my expectation is that for the last mile of automation to really know what's going on in the economy, to able to allocate stuff better, you don't just want like clones of the exact same model running around.</p><p>One of the reasons that like markets work so well is because there are lots of different actors who have slightly expertise in like small slices of the world And because of that they can like see things that like a central planner just can't so you'd want to be able to incorporate that information in a meaningful way and I think there are two ways to do it there's one that I advocate against and one that I'm hoping to help contribute to I think that first one is just you like gobble up all the data make carbon copies of a user Don't do this in a way where like you now own the data as the lab and then you create like lots of clones that look just like them they can mimic their behavior and preferences and have access continually</p><p>to the kind of stuff that they're doing. I think if you want to do this in a more privacy and preserving way, there's some interesting papers here on like, like secure training runs for example, secure training, I gotta find the exact terminology from the paper, I don't have it right in my head, I just read this a couple of days ago. But there's like certain kinds of training you can do where like you can train on data that you can't necessarily see. There's a question there about whether or not you can evaluate this, this is like kind of an unsettled question right now.</p><p>So could you evaluate a model that otherwise you don't have access to? And the answer to that's probably no right now. That probably takes some hit for consumer performance. But are there ways in which you can either use it to directly own the models and run them on device, and so therefore you can't see it? Or could you do Apple private cloud compute style solutions where your data is being passed on but in a privacy preserving way? I think I'm much more excited about these latter options than the former ones.</p><p>Theo Jaffee </p><p>Hmm. What do you do today if you're the CEO of a white collar company? I guess like I know sort of tangentially like a lot of people who are, you business owners or executives or whatever of like, you know, sort of provincial companies. Like I live in Florida. I don't live in Silicon Valley. And so like most of these people who are, you know, big business people don't know anything about AGI. They might know like, yeah, Chachi B.T.</p><p>is pretty good, you know, you try it, it seems pretty helpful, it's not gonna take anyone's job right now. So what should these people be doing right now?</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>So it's really hard for me to tell business owners they shouldn't like follow their incentives because like that's the way that the economy works. My expectations of like the era of like mega corporations probably coming to an end because in those corporations you just you just really can't automate a lot of those jobs. I think the way this happens I think we lay out the first part of the series and this idea of pyramid replacement where like major corporations just start by hiring less people and eventually at some point they do layoffs. But this really affects like your entry level employees and analysts first. I do not and I'm not going to give the advice of like</p><p>Just stop hiring people, because I don't think that's good for a whole lot of reasons. But I expect that's what a lot of companies are going to have to do to remain competitive. I don't think this happens overnight. I don't think AGI 2026, zero human employment 2027. There's this gradual diffusion process. It's not like laptops came out the next day, everybody had a laptop. This just takes some time to diffuse in the broader economy. I expect that if you're a company and you want to survive, start using AI as fast as you can.</p><p>Theo Jaffee </p><p>Cowen has said something like, this will take 30 years to diffuse throughout the economy. And I think the epoch AI people think sort of similarly.</p><p>Luke Drago </p><p>That seems plausible to me. It seems completely plausible. I'm probably more in like 10 to 15 years than I am on 30 years, but I do think it's going to take a long time for this to diffuse. I think the iPhone is an interesting example here where like in 2006 or seven the iPhone gets released and by 2016, 2017 you basically have to have an iPhone to compete. In the meantime, Blackberry dies, right? Or you have to have like a smartphone to be able to like be in the modern business environment. Maybe that's a more accurate type of diffusion that you would expect where like something goes from being niche to being mandatory. But iPhones are building on an existing platform of cell phones and those are already pretty much required.</p><p>by that point. So I don't want to say I have the exact model of diffusion. What I will say, a reason to think that it's not going to be overnight, but it's going to be pretty quick, is that of course, like for a moment, ChatGVT was the fastest adopted consumer app ever. I can't remember the exact number, but it had 100 million users pretty quickly. And I think you should probably expect that, like, I Cursor then beat it out, right, if I recall correctly. And Cursor was, of course, just like using that technology for one vertical. I think that's going to happen in a whole lot of industries where pretty much overnight they get changed. I don't think this overnight changes your hiring path.</p><p>But I think it overnight introduces new incentives that get stronger and stronger.</p><p>Theo Jaffee </p><p>Yeah, okay. So what are these bottlenecks to AGI progress? Why do we not have AI 2027? Why do we not have labor getting replaced? Basically immediately after we get AGI. What are these bottlenecks?</p><p>Luke Drago </p><p>So are these bottlenecks like strictly on progress or bottlenecks on diffusion?</p><p>Theo Jaffee </p><p>both.</p><p>Luke Drago </p><p>So first of all, I I have a lot of respect for the AI 2027 team. I've talked to lot of them extensively. think Daniel et al., they're like brilliant thinkers. I probably think that the world has a lot more like physical bottlenecks than I think their team does. So I think, for example, like long horizon planning might be a harder objective to achieve, especially in the last mile of it. I think, for example, we seem to get a lot of progress very quickly on stuff that is obviously right or wrong. Does the code run or not? That's an area where you can get pretty superhuman pretty quickly. And you should expect like RL and self-reporting</p><p>to be like pretty effective tools here. I expect that some of these harder tasks, like let's just take writing for example. I think that today, like models are pretty decent at writing. I do not think that they are like at...</p><p>college-educated human baseline, or maybe like, slightly above that at best. And I think there's not been tons of progress towards this. There's a lot of like, fanfare raised by the OpenAI team when they dropped GPT 4.5, and they kept saying this is like, the best model for writing. They had this big tweet where they put out as an example of the writing quality. And I read the writing.</p><p>And I just wasn't that impressed. I thought it was better writing than I expected from a language model, but was definitely not what I would consider to be like good human writing. It was maybe better than your average college student, I don't know if that's good human writing. I think these tasks that are harder to judge are just harder to train. There aren't right answers. You have to have taste to be able to judge them. And I think like the job of running a company, for example, which requires a lot of success and failure, it's kind of hard to predict exactly how it's going to work. You have to have lot of intuitions that are just hard to build without doing it yourself.</p><p>These are tasks that are very hard to simulate, very hard to prove objectively, and very hard to measure. And I think eventually we'll be able to generalize to those, but I do expect this is a reason to think that AGI is like slightly farther away. Though I think if I had to put like a timeline on it, I think I'd probably say 2030 maybe sooner. So don't think it's that far away. And then I think on the economic diffusion side, like...</p><p>Luke Drago </p><p>court is agreement I have with AI 2027 is like 10 million robots one year after AGI. And I think that they presume, and pretty smartly I would add, like a whole of government response with like every part of government machinery suddenly awakens and tries to create robots. I think even in that world, there are lots of bottlenecks. Like for example, we need a whole lot of rare earth minerals or REMs to do this. And right now China controls 85 % of the refinement process for all REMs. The US doesn't have much domestic capacity online right now. I think it's 85%.</p><p>Theo Jaffee </p><p>Sure, so you might still have your 10 million robots, just they're made in China, not the US. I mean, you already kind of see this.</p><p>Luke Drago </p><p>Well, I think in a world like this, you might get 10 million robots in China, but the US government is not getting the whole of government response that gets them to 10 million a year after AGI.</p><p>Like I do think right now China is going to crush us in the manufacturing race. And I think this is like an existential threat to Western civilization that we just cannot build anything anymore. And I'm pretty concerned about those kind of scenarios where, you know, we end up losing the AGI race, not because we didn't get to the software first, but because we instead, like, completely capitulated our manufacturing capability to a rival or an adversary who now has strong incentives to cut us off to buy themselves time. Look, we've done a whole lot of export controls to them. I'd be pretty shocked if they don't do a lot of export controls to us.</p><p>Theo Jaffee </p><p>Sure, but why should I care if China achieves AGI first?</p><p>Luke Drago </p><p>I think there's a question of what values do you want to underwrite the world and what kind of power do you want to have?</p><p>For the same reason that you should care about still having an economic role in the social contract, you should probably care about your country or a country that are aligned with yours having an increased role in the economy. And in a world where China can build everything and manufacture everything cheaper, I think it's harder and harder for the West to play competitively at the global stage. I think ultimately it does, like, your team winning is actually important. And in particular, your team having a way to win is important.</p><p>Theo Jaffee </p><p>Well seems like China has absorbed a lot of the cultural values that I care about from the West. You know, they got capitalism with them, Deng Xiaoping. DeepSeek was for quite a while and maybe still is the most free and least censored AI model. Like yeah, you can't ask it about 1980, 1990, Ottoman Square. But that was a much narrower category of restrictions than the restrictions that OpenAI had on all their models until very, very recently.</p><p>Luke Drago </p><p>Which ones?</p><p>Luke Drago </p><p>So one, I think it'd be good if we had less content style restrictions. I'm a very strong believer in free speech, and so I think it's important that models can answer questions truthfully and without undue censorship. I think that's different than models providing you with instructions on how to build a weapon. But I think that's what's more important here.</p><p>I want to go zero in on like, you know, Deng Xiaoping brought capitalism to China. is absolutely true that like China is, you know, economically liberalized far more than they did at that point. But the kind of capitalism that China has is one where if the CEO of Alibaba makes a derogatory comment about a Chinese, a part of the Chinese economy, he disappears for a few months. To me, that is not effective capitalism and it's a value that is quite foreign to my own. I would strongly oppose it if it would have occurred in the US or in the West and I strongly oppose it when it happens in China. And I don't, I mean, I strongly expect that a world where like the</p><p>Chinese government gets to write the values of the world is one where that is more common, not less.</p><p>Theo Jaffee </p><p>Okay, that makes sense. What specific tasks do you think humans will remain on, like, in the loop for the longest? Like, you talk about tasks that require taste, that are hard to judge, that require long-form planning. But it seems to me that, like, beyond just that, like, AI seems to be just better at certain tasks than others. Even some tasks that seem rather obvious. Like,</p><p>Luke Drago </p><p>Yeah, we're getting, go ahead, please.</p><p>Theo Jaffee </p><p>I don't know, what is a good example of this? Like, AI can't really play tic-tac-toe still, I believe. Maybe O3 has been able to do this, but...</p><p>Luke Drago </p><p>Yeah, I mean I think jagged progress has been the norm and I think this probably gets more true with reasoning models, not less. Where like, you get really good at stuff that has correct answers and not great at stuff that has incorrect answers. Now I expect this to generalize. think, I think, you know.</p><p>By 2030, I expect this to be like, at least more solved, much more solved than it is today. But I do think these durable skills as it matter right now are like taste, judgment, long horizon planning. I was giving a talk at Georgetown a couple of weeks ago about like how to plan a career in the age of AI. My general advice was if your goal was to like go climb a very large corporate ladder and spend 10 years at McKinsey, become partner there, like at an accelerated timeline, you are just probably going to get automated. The bulk of your job for a long time is going to be tasks that are quite automatable. Whereas I expect that like,</p><p>If you're learning very early how to take risks, how to fail, how to develop that research taste or that sense of taste judgment faster, you'll be more effective than your peers at racing against AI progress.</p><p>Theo Jaffee </p><p>Yeah, I think yesterday Marc Andreessen got clipped on a podcast where he said something like, VC will be one of the most durable careers to AGI and everyone clowned on him for it. But it seems like, know, if anything requires taste and judgment and long horizon planning, it's venture capital, right? Like what more so than that?</p><p>Luke Drago </p><p>I saw that, yeah.</p><p>Luke Drago </p><p>So I don't know if I agree with the claim that venture capital will be the last job. I probably do agree with some version of that claim. It's actually very hard to predict what companies are going to succeed and fail. That kind of pattern matching is quite difficult. Jobs that require that kind of pattern matching that require years of experience to figure it out, those are more likely to be durable than less. The real bad news here is for entry-level roles, for the roles of people who are just coming out of college. I think there's some terrible irony in the fact that like,</p><p>entry level CS majors are just kind of automating themselves here. The last mile of automation is not required to automate the vast majority of coding, to automate the vast majority, especially like entry level coding. But yeah, don't think, for what it's worth, I get why he got Clown Dongs. I think the specific comments sound a whole lot like, but my job will survive. But I think there's something in that comment that I think is true.</p><p>Theo Jaffee </p><p>Hmm. You also talk about AGI society being permanently static as a risk. But why would it? It seems like nature hates stagnation. Especially if there are different people controlling different AGI's in different factions.</p><p>Luke Drago </p><p>I think we've really focused a lot on economic stagnation, right? Like the idea that...</p><p>you as an individual, you're born in Montana and your ability to ever climb that social ladder is just quite small, quite unlikely today, in this world relative to today. I don't think it's true that everybody born everywhere has an equal shot of climbing the social ladder and displacing the current elites of the day. But one of the benefits of our existing system is it's actually just quite easy relative to baseline to do that kind of displacement. And one of the ways that culture moves and culture shifts is through leader displacements. I'm thinking a lot about people talk about the vibe shift in 2024,</p><p>where there was very suddenly a new administration was elected. They hadn't even been sworn in yet, but they were elected. And it felt like overnight, basically every company suddenly switched through the hiring policies. People started saying different words. It felt like there was a massive cultural shift from June 2024 to December 2024, maybe even into January 2025. This is all very much so a byproduct of leaders changing and therefore what seems permissible for the culture also shifting as well. And I think that that's an important part of human social progress.</p><p>know, human social dynamics is that people from the margins can still win and I think the cultures that are the most alive and the most dominant are the ones where like figures like a JD Vance who grows up in like the poorer regions of Appalachia can ascend to the vice presidency. I think it's one of the things that makes the Western democracy and capitalism so strong is you can move from the outside and still win and with that create lots of change and progress. And I think in a world where like your wealth is turned by government, it's just as likely.</p><p>Theo Jaffee </p><p>Okay.</p><p>Theo Jaffee </p><p>So by permanently static AGI society, you mean the humans will be static, but not the AGI's or the people at the top.</p><p>Luke Drago </p><p>Well, I my concern is the humans, right? So yeah, I think that's who I'm referring to here.</p><p>Theo Jaffee </p><p>Yeah, okay.</p><p>You also talk about banning AI systems from owning assets. Let me find the exact words because this is</p><p>Luke Drago </p><p>Yeah, is one of the few times I call for banning anything, because we really try not to rely on regulation as a core centerpiece of what we're doing here.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>policymakers should ban AI systems from owning any assets, serving as a C-suite member of a company, serving on a board of directors, or owning shares. Yeah, so this seems like very specific. I would love more elaboration here. Like, why would you do this? How is this like tenable in the long term when AI becomes more agentic?</p><p>Luke Drago </p><p>Yeah, so we brought the...</p><p>Luke Drago </p><p>So I guess like, you know.</p><p>We really try in the piece to avoid doing specific regulatory call outs. think if we're running around saying we think the underlying economics are more important than the regulatory structures, we probably shouldn't be spending all of our effort doing lots of targeted regulation. I think this is just broadly the wrong way to go about changing social progress. think if new technology introduced, you want to adapt your society around them. I think regulations are oftentimes ways to, instead of adapting, slow down. I'm not anti-regulation by any means. I think sometimes they're quite necessary. But I do think we really try to focus on the</p><p>underlying economics. Now, I think this actually looks a lot more like the underlying economics and regulation, because I think this kind of rule today changes nothing right now. There are no AI systems today that can legally own assets. I think, honestly, it's probably likely that under most existing law, AI systems won't be able to own assets. But I think the reason we include the call out is because by doing this, you're guaranteeing some sort of human role in the organization of the future. And if it is the case that eventually you do delegate this power away,</p><p>I expect evolutionary pressures very quickly to make this a dominant force, where suddenly AI's control, in and of themselves, lots of capital. My expectation is just that humans are more likely to care about humans than non-human entities are. Same reason that we care more about humans, do animals. And so I'd like to preserve as much as possible a role for people in the setting of the direction of the future. And I want AI to be a technology that extends that direction setting, not limits it. And so I think that's one of the areas that we did a pretty targeted policy call out.</p><p>Theo Jaffee </p><p>You also talk about AI both mainly centralizing but also decentralizing power by default. Which do you think will dominate? Because I've heard both. I've heard AI will enable governments to surveil the entire populace, but also AI will enable the populace to build pocket nukes that can be used as a check against the government. So which of these trends do you think will dominate?</p><p>Luke Drago </p><p>Thank</p><p>Luke Drago </p><p>So for what it's worth, think a world where everybody can build a pocket nuke is a pretty bad world.</p><p>I mean, I think one of the areas where things do get destabilized is one where a bunch of people have access to weapons of mass destruction. Now, I think there are other choke points that are really effective here. So it could be that you have a system that can tell you how to build a nuke, but you just can't get uranium. Uranium is pretty hard to get, pretty hard to refine that in your backyard. So think there are reasons to believe that that is harder than it is. Bioweapons are another area where I'm actually pretty concerned. It's why we have a pretty targeted section on reducing bio risk. But our section there is really targeted in other choke points.</p><p>like doing like you know screening KYC on like the biological materials because I think they're just physical world choke points that are more effective and also like reduces power trade-off. Provided that you can do that kind of stuff though I think one of the ways where AI can keep people in power and be like an agency extender as opposed to a limiter is one where like lots of people have access to like very powerful models that are aligned to their values, their goals, their intents as opposed to like a centralized set of goals or a centralized set of intents. And I think if</p><p>believe you kind of buy high X arguments on how knowledge works in society and candidly I do. That there are like parts of economic knowledge kind of scattered throughout everywhere that are somewhat hard to track and are difficult to be legible. Systems that have access to like an individual user set of data there, know, tacit knowledge, might be more effective in aggregate, like organizing an economy and allocating resources than any sort of like top-down centralizer is. I think my concern is that, yeah, sorry, go ahead.</p><p>Theo Jaffee </p><p>Yeah, I for what it's worth, I DeepSeek is probably like the single greatest company for decentralization right now. And they come from China.</p><p>Luke Drago </p><p>It worries me that we don't have a Western alternative that I think is effective. I think it could be Meta, but it seems like Meta, both ideologically isn't that committed and yeah, I don't know what's going on with their AI team. Prime Intellect just did a really interesting decentralized training run. We're waiting for the result of that right now for Intellect 2. And I'm quite excited to see companies like that.</p><p>Theo Jaffee </p><p>that is throwing right now.</p><p>Theo Jaffee </p><p>Hmm. But sorry, what the?</p><p>Luke Drago </p><p>which yeah, I do think like.</p><p>Sorry, I do think there's some concern about if it's DeepSeek. If DeepSeek is the company that underwrites the value substrate of the universe, and everybody's only asking DeepSeek questions, and that's the powerful model, I think I'd much rather that model be made at people who are aligned with my values than not. Because I really would not prefer a model that centers my political opinions. And I agree that there are areas where it's censored less, but I don't think that response to that is then to say, we should give up, throw up our hand, and say, I guess DeepSeek gets to win the race. I think we should be saying there are lessons that we can</p><p>learn from DeepSeek and lessons that we shouldn't learn from DeepSeek.</p><p>Theo Jaffee </p><p>Sure.</p><p>Theo Jaffee </p><p>So what should people be excited for? You talked about this culture of indefinite pessimism that seemed to replace the culture of indefinite optimism of Silicon Valley. But the next few years will, regardless of changes in material conditions, have some very cool things going on. So what should people be excited for?</p><p>Luke Drago </p><p>Yeah. Well, one, one of my big call outs there is because I think a whole lot of people can like see AGI coming like a meteor about to hit them. And their response has been like.</p><p>of mix of we're gonna die, this is gonna be bad, and from there, the response has been let's just do nothing about it, or let's freak out about it. And I think if you can see a meteor coming, but you can see it has lots of benefits, I think your real goal should be to deflect the meteor, and I think there are some obvious downsides that we can see coming. I think we talked about economic downsides, there are risks to, significant risks to, like.</p><p>catastrophes, etc. But your response to that should not be, okay, we have to throw up our hands and run as far away as we can and maybe go live in a cave for a while. Your response should be a call to action to go and solve these problems because this meteor is coming. We are going to achieve AGI. is basically at this point technologically inevitable. We are on the track to do this. I think it's extremely likely we will do it. And so I think hiding in fear is super unlike, is just not a good response. If we can solve these problems, we can solve these challenges. We are talking about living in a world with more abundance than we could ever possibly imagine.</p><p>imagine, where an individual's ability to change their own environment, to change the world around them, to make an impact could be higher than it is today. And I think it's pretty high today relative to baseline, where a lot of the scourges that we have in modern life are things we can send to the ash heap of history, things like some pretty terrible diseases like hunger. I think that is a really exciting world you want to be aiming towards. And I think you need to be sober about the problems that are facing you from getting there. Because I think there are real potholes in the road. I align with Vitalik's vision on this entirely. There are some real problems we're going to have to overcome.</p><p>But I think we can overcome them, and I'm pretty optimistic about humanity and our ability to do that. But it does require us to be sober about what the risks are and what it's going to take to deflect them.</p><p>Theo Jaffee </p><p>So you mentioned you were building a company to address the challenges of the intelligence curse. So could you go into a little more detail about that?</p><p>Luke Drago </p><p>There's not a whole lot I can say right now. I think what I can briefly talk about, and I'll try to be a little careful here, we're pretty excited about this kind of alignment to the user concept. The idea that there's like a set of tasks and knowledge that we can gather that provided we do this in a privacy preserving way, both like creates models that are like.</p><p>more customized to the user and then when plugged into a lot of tools could complete tasks better than an off-the-shelf model could. I'd be pretty surprised if this model is superhuman at coding, but I think it probably will have a better sense of how to do things the user would want it to do, especially if it can do things like tool calling. So I can't say a whole lot right now, but I think if that kind of vision is something that inspires you, we'd love to hear from you.</p><p>Theo Jaffee </p><p>Alright, so yeah, I think we've talked a lot about AI. Let's talk about music a little.</p><p>Luke Drago </p><p>Let's do it, because I remember when I reached out to you, my pitch was AI, but also I think we have like very similar taste in hip-hop.</p><p>Theo Jaffee </p><p>Yeah, this is like weird. I don't know that many other people who like Logic as much as I do. And you said that College Park is in your top 10 albums?</p><p>Luke Drago </p><p>It is, yeah. So I think there's like an arc for Logic, right? There's like old Logic, which is very good. There's whatever happened from 2016 onward, which is, you know, hit or miss, pretty rough. And then there's like 2020 and after. I think everybody's 2016 or 2017, one of the two. And that album's pretty hit or miss. I think like the low point for me is definitely either Confessions of a Dangerous Mind or Supermarket, where it feels, Supermarket in particular, I remember listening to that and just going, no, like, that's the end of that.</p><p>Theo Jaffee </p><p>What year is everybody?</p><p>Theo Jaffee </p><p>Yeah, okay, yeah.</p><p>Theo Jaffee </p><p>Yeah, definitely.</p><p>Luke Drago </p><p>But then he has this resurgence with, like, is it No Pressure and then College Park that I'm just like, I thought they were really excellent albums. think there was the, what was the other, the vinyl, 035 is good, and then was also, I think, Vinyl Days, is that what was called? Yeah, Vinyl Days. I thought Vinyl Days was like some really interesting production. I think Logic does really well over like these like soulful beats or these like...</p><p>Theo Jaffee </p><p>No pressure.</p><p>Theo Jaffee </p><p>Ultra 85.</p><p>Theo Jaffee </p><p>of Vinyl Days.</p><p>Theo Jaffee </p><p>Yeah</p><p>Luke Drago </p><p>very like old school 80s, 90s inspired beats. And I know his most recent project is coming out is neither of those things. And it's quite like trap again. And it's just like, I don't think he can do trap very well. No offense if Logic has tuned into this AI podcast, but I think, you know, do more boom bap. That is definitely where he shines.</p><p>Theo Jaffee </p><p>Yeah, I agree.</p><p>Theo Jaffee </p><p>mixtape logic versus album logic and like he even talks about this I think in the intro to Bobby Tarantino 2 or 3 he has like a Rick and Morty skit where he has Rick saying like yeah literally where he has Rick saying like you know I don't want to listen to this like you know introspective rap with a message I just want to you know rap about like titties and ass and stuff and like yeah like when logic does this it's usually not that good</p><p>Luke Drago </p><p>Average logic skit.</p><p>Theo Jaffee </p><p>I think my favorite Logic mixtape after the original Young Sinatra's was probably Inglorious Basterds, like the very new one, because it was closer to an album than like the Bobby Tarantino's.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>I think this makes sense. think my favorite Logic project, I'm standing by College Park, I think it's got one weakness, it's that oftentimes it's got these skits that are two minutes long in the middle of a song. So Playwright, for example, has this very long skit. I think Playwright is an exceptionally well done song. Its beat is really catchy, Logic flows over it very well, the features are really excellent, and then it's got this minute and a half long skit, and because of that skit, I'm not playlisted at all. If I just play this, it's just like...</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>That's a good one.</p><p>Luke Drago </p><p>I think he's ordering a burger in the skit and she's like, what are we doing here? Like old school would like take his skits and make them separate tracks. And I think, you know, I would have really preferred a version of College Park where I could play some of these songs in the skits. But I think like Lightsabers, Gaithersburg Freestyle, Self-Medication, Shimi are just like really like top tier logic tracks. I can't think of really many tracks he's made that like outpaced those for me. Maybe like OG Under Pressure stuff.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>I Under Pressure is better. Yeah, but College Park in top 10 is quite high praise, think. Let's see. Tracklist is...</p><p>Luke Drago </p><p>Look, I'm happy to make a chair. I'm making contrarian takes an AI might as well make a music too, you know?</p><p>Theo Jaffee </p><p>Yeah, yeah. Cruisin' Through the Universe, like, I thought that one was great. It's so like, it feels like you're in space with the production. Like, six, six is great. Six on the beat is like God on the beat.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Every time. Here's like a, he had a mix tape came out a couple of weeks ago.</p><p>Theo Jaffee </p><p>Who's six? Okay, I have to check that out.</p><p>Luke Drago </p><p>Yeah, Six did it. It's like, he's not rapping on it, but it's like he's producing the whole thing. And there's a logic track called WMD on it, which I'm like pretty impressed with.</p><p>Theo Jaffee </p><p>Hmm. And then like the...</p><p>Luke Drago </p><p>By just fixing logic, they can do stuff together that's really impressive.</p><p>Theo Jaffee </p><p>Yeah, the end of Cruisin' Through the Universe where you get into the first skit and you're like, wait a minute, why is Logic in Big Lembo's basement? And then you're like, that's what College Park means. He's talking about his time in College Park when he was 20, sleeping on Lembo's couch before he blew up. This sort of Logic adventure is good.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah, and I think that's like an, yeah, I mean think when he's talking about like, logic is good when he's talking about literally being in space, or like his early life, and I think that's where like he just really shines in ways that are quite impressive. And I think everybody has this like much more confusing narrative where it's like.</p><p>There's this Neil deGrasse Tyson side plot that's going on where it's like, actually, you are everyone that's ever lived, which I believe originated from like a Twint, some Facebook meme. You know what talking about? Yeah, the short story in the short story became a Facebook meme, and I'm not sure how it got to logic, but it's like, why are we doing the short story that I've already heard about? And also like very strong political commentary, and also at the end, some space stuff. It's like there are three different storylines happening here. I'm kind of confused as to why they're happening. I think they meld somewhere.</p><p>Theo Jaffee </p><p>No, was a short story. Called The Egg.</p><p>Theo Jaffee </p><p>Well, the space stuff is kind of just a through line between different albums. And like the reason for the short story is kind of like, you know, you are everyone, right? So you have to, you know, love everyone and, you know, peace and love and equality and stuff. Like that's, yeah.</p><p>Luke Drago </p><p>I think it's a...</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah, no, I think the reasons there, I just think it's not executed as well as it could be. And I think some of the songs, I, there's a track on everybody that I think, I remember being like, what are we doing here? If I can find it, it's, it's Take It Back, it's Take It Back. Yeah, it's Take It Back.</p><p>Theo Jaffee </p><p>Does it take it back? Yeah, yeah, Yeah.</p><p>Luke Drago </p><p>The beat's excellent, right? It's just like, you know, like kind of three minutes of logic talking for a while. God, I think Killing Spree still makes it in playlist. I think I can just be like, actually, this is fine. I can look past it. The production on everybody is maybe his best production, which is why it's somewhat unfortunate that the lyrics are just not there.</p><p>Theo Jaffee </p><p>killing spree too.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>Yeah, but like I think everybody like the beginning of the album and the end of the album are really excellent Like the beginning of the album you have hallelujah and everybody</p><p>Luke Drago </p><p>Yeah.</p><p>Yeah, yeah. Confess too. Confess is slept on. It shouldn't be.</p><p>Theo Jaffee </p><p>Yeah, and then like towards the end of the album you have 1-800 which I'm sorry. It is a good song I don't care if it's his most popular song ever. It's definitely not his best yeah That one's a little corny, but like it's really good. I think anxiety was great. Black Spider-Man was great AfricAryaN is excellent like it ends so well</p><p>Luke Drago </p><p>Who came here late?</p><p>Luke Drago </p><p>I think it is a good sound.</p><p>Luke Drago </p><p>So I think anxiety's not that good, but Black Spider-Man I think is maybe a top five logic track. I do this like a...</p><p>Every season, like spring, winter, fall, summer, I make a new playlist. And my only rule for the playlist is it's to be like 30 songs tops, which I add throughout the season, and no repeats between previous playlists of the same theme. So like if I did something in 2022 winter, can't come back in 2024 summer. And my like 2025 spring playlist had Black Spider-Man in it. And I just, I remembered how good that song was in the last couple of weeks, because it is just absolutely incredible from the production, the features.</p><p>I don't know, think it is like, if every song on everybody had like that framing of the mass stage and that level of caliber, I think everybody would have been a smash success.</p><p>Theo Jaffee </p><p>Yeah, I think you're sleeping on Ultra 85 too because Ultra 85 is like, yeah, I think it's maybe his best album maybe. It way exceeded my expectations for what it would be based on like the promotional singles and based on his other projects from around that time.</p><p>Luke Drago </p><p>All 35 is good.</p><p>Luke Drago </p><p>What was the promo single? was 44ever right? Of fear.</p><p>Theo Jaffee </p><p>Fear. Fear and then 44ever. 44 ever was good. Fear was less good. Fear was alright.</p><p>Luke Drago </p><p>I will say the opening two tracks on Ultra 85, so Paul Rodriguez and Mission Control are just also fantastic. Mission Control in particular, think the production is stellar. Logic, when he really locks in, just write a beat in a way that think basically nobody else can. Like a couple of other artists have said they do this at their peak, like one in four beats, he just really finds a flow that's pretty infectious. And I think this track just like pretty much exemplifies that.</p><p>Theo Jaffee </p><p>Paul Rodriguez was just incredible. I remember exactly where I was the first time I saw, my god, Logic dropped. And I was in Japan with my cousins on the train going back from Osaka where we had day tripped back to Kyoto where we were staying. And I was on this train, this packed Japanese train. I was just smiling on the train listening to this. I was like, my god, this album is going to be actually really good.</p><p>Luke Drago </p><p>Mm-hmm.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>So funnily enough, I had like a similar reaction. I remember exactly where I was when I hit the play button on Paul Rodriguez. was, I live in Limehouse in London and I had just gotten off the tube, was walking down the steps of the tube station and like hit play on it and we're just walking home. I walk home, like, I think just about exactly nine minutes. So it's the length of the full song. And I remember just listening to it being like, whoa.</p><p>Like, where has this been? I don't think the album lived up that in every part. I think some tracks did. Favela, thought was really excellent. Interstellar was really good. I, like the one mistake of Interstellar.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>Interstellar is great. My absolute favorite song is Once Upon a in Hollywood.</p><p>Luke Drago </p><p>It's good, it's good. I don't think it's my favorite. I think my favorite song in the album is probably Paul Rodriguez or Mission Control. And I think the only mistake for the album for me is that by starting with those two songs, I feel like every other song after that was like a bit of a letdown relative to where the expectations were set. Where I think my favorite song in College Park is Lightsaber.</p><p>probably seconded by, let me put my actually good logic playlist real quick. yeah, it's Lightsaber probably followed by Shimmy and like those tracks are pretty far apart so I the album kind of like keeps rewarding you as you go through it.</p><p>Theo Jaffee </p><p>My favorite College Park songs are Self-Medication, Paradise 2, Wake Up, and Lightsabers, I think.</p><p>Luke Drago </p><p>Wake Up's good. Lightsabers is, think, actually the reason that College Park makes top 10 for me, because I think Lightsabers is maybe in my top 10 all time. I've got this top 50 all time songs, and Lightsabers is right on there.</p><p>The initial beat's excellent. The beat switch is just completely surprising, but it's still like, I think a lot of beat switches aren't thematically relevant. Like, sicko mode three beat switches do not sound anything like each other and could just be three separate songs you like strung together. Whereas, Lightsaber is I think better because of the beat switch. I think they're like very clearly related and they sound really good together and I think the song is better because they have both parts in it. I don't know, think, especially when like, is it C.Castro comes in on Lightsabers at the end, it delivers like just an incredible verse there too. I don't know,</p><p>Lightsabers are just. That's one hell of a song.</p><p>Theo Jaffee </p><p>Yeah. Like, Castro's verse made me think like, why isn't Castro and more logic stuff? I know they had like a falling out and now they're friends again. I hope Castro comes back more. Yeah.</p><p>Luke Drago </p><p>Well had that mixtape recently, right? I think a couple of, like the EP. What was on that? I really... Whose is?</p><p>Theo Jaffee </p><p>Castor's voice is better than Logic's. Castor's voice is better than Logic's.</p><p>Luke Drago </p><p>it's a good force. I think Game 6, I think, is on that EP. And I think Game 6 is a Sleptone song. I the beat's good, I Halfbreed kills it, Cedar Castro kills it, like everybody on there does a good job.</p><p>Theo Jaffee </p><p>Yeah, you really know Logic. I think the Seth MacFarlane feature on Self-Medication is like, I was so not expecting that when I listened to the album. I was like, for a sec, I was like, is this Frank Sinatra? Who is this? my God, looking at the credits, it's Seth MacFarlane, wow. Yeah. He can really, really sing. He was like trained to be like a singer by like, I forgot who, but like some celebrity that had something to do with Sinatra.</p><p>Luke Drago </p><p>It was so good.</p><p>Luke Drago </p><p>This is a family guy guy, yeah. I was pretty floored by that.</p><p>Luke Drago </p><p>Yeah, I was pretty floored by that. I think I showed it to a couple friends. I'm like, you should hear this, because you're not going to believe who it is. This is like the voice of Peter Griffin and Stewie Griffin. I think he's also Stewie. Yeah, and all these other characters that you've grown up with is now singing some crazy Frank Sinatra. I don't know. I was pretty impressed with that. What was your least favorite Logic song or album? And why is it Confessions of a Dangerous Mind?</p><p>Theo Jaffee </p><p>and Brian.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>My least favorite song or album?</p><p>Luke Drago </p><p>Yeah, either or. Or both.</p><p>Theo Jaffee </p><p>Okay, my least favorite album was Supermarket, but my least favorite actual album was probably, like sadly, Confessions of a Dangerous Mind. Even though, like, I didn't think that it was that bad. Like, there were some songs that I thought were good. Homicide was great. Yeah.</p><p>Luke Drago </p><p>Homicide is pretty good. Keanu Reeves actually, like the lead singles weren't bad, I think that was the problem. The lead singles were fine. And then I remember the first time I heard Clickbait. That was a tragic day for me. I'd stayed up all night for the drop. I must have been in high school.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>and it dropped and I remember hearing it going, my goat is washed. Like this is it. This is, like we can forgive everybody but I don't think you come back from this. I think he has come back from it but I think he probably lost most of his fan base after like the back to back code supermarket drops.</p><p>Theo Jaffee </p><p>Yeah, like this is also like right when I got into Logic was basically exactly when Kodam dropped But I kind of liked it at the time Maybe I just like I was always built to like Logic So I liked even this and then when I discovered like Under Pressure, Incredible True Story I really really liked those and then when I discovered, you know, the mixtapes I like those too. I think like the song Confessions of a Dangerous Mind</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>And also the song Lost in Translation are pretty good. They're not that bad. A lot of the other ones are... Yeah, wannabe, clickbait, like... Icy, those are all crap.</p><p>Luke Drago </p><p>I think that I did bad.</p><p>Luke Drago </p><p>I think the problem is like, I thought ICY was, I actually kinda like ICY, but I think the problem with Kodam is like the lows are just so astronomically low. Where it's like, know, the average song is actually fine. It's not anything special, but it's fine. But man, those misses are like dramatic and bad.</p><p>Theo Jaffee </p><p>What do you think those misses are other than?</p><p>Luke Drago </p><p>clickbait's the one where I like, I think I turned it off for a while, I like, this cannot be happening.</p><p>Theo Jaffee </p><p>How does that one go again? Hold on, clickbait lyrics. you are amazing, you are amazing, you are amazing. Yeah, okay. That was so...</p><p>Luke Drago </p><p>Yeah, yeah, yeah, yeah. There's some other lyrics in Clickbait that I think are maybe more shocking. Is this the one that I'm thinking of? It's got like the...</p><p>Theo Jaffee </p><p>RIP Lil Peep, let that young man sleep, let that young man death teach the youth the streets to beat addiction.</p><p>Luke Drago </p><p>Yeah, it's, there's some other ones there. don't know, that was I think, yeah, we'll leave that off the AI podcast. But it's just like, wow, this is bad. But yeah, I think I have hope. The current album is about to come out. I know that Winnie's working on, I've heard the singles.</p><p>Theo Jaffee </p><p>yeah.</p><p>Some more pornographic ones.</p><p>Theo Jaffee </p><p>haha</p><p>Yeah.</p><p>Luke Drago </p><p>I'm a little worried they're sounding kind of like Kodam, like very trap-coated, almost like a little Carti-ish, and I don't think Logic can do Carti. I don't think Carti can do Carti, so I'm not sure if Logic can do Carti. I'm a little worried about what's gonna happen there.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Logic's had like five final albums too.</p><p>Luke Drago </p><p>well hey this is final out you know this week next week we knew final out the time you've got your party guy curious</p><p>Theo Jaffee </p><p>Yeah. Yeah, Sort of. did, yeah, I'm not like super, super into Carti. I definitely like him more now than I used to. Like, I listen to the entirety of IAM music. I think maybe it was sort of like colored by the fact that I was in like a very stressed mindset because I was on this like super delayed flight and whatnot. But that might have even made it an even better listening experience because it's so like intense and violent of an album.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>I did too.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>Like, I clicked play on pop out, the first song, and let it play for like five seconds. It's like grinding metal. And I was like, wow, yeah, this really sets a tone for the rest of the album.</p><p>Luke Drago </p><p>So I think I should like music, because I like Yeezus, and I think they're aesthetically similar in a couple places. I think this is also an album where I think my default pretty not in Carti's target. I liked parts a whole lot of Red, but really I ended up liking the production when I liked Carti. I think it Mojo Jojo for me. was like, oh, I just don't like this. And it's a shame, because I should like the production style. I like Kendrick a whole lot.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>And I think I heard the first set of ad libs on Mojo Jojo, I like, never mind, let's never do this again. Let's never have, I think there's like three Kendrick Carti collabs in this album, if I remember correctly.</p><p>Theo Jaffee </p><p>So do I.</p><p>Theo Jaffee </p><p>The Kendrick hate over the last like, I know, year or so has really ramped up like to a level I've never seen it before.</p><p>Luke Drago </p><p>I think if any popular artist is just doing, like at this level of popularity, think Kendrick's currently not exactly but pretty close to where Taylor Swift was in like summer of 23, where she was just the most famous person alive. I think he's not exactly there, but he's like pretty close. And I think you saw a lot of Taylor Swift hate on there too. And I don't wanna, I'm not making a specific comment on Taylor Swift, who I think has like a lot of music that I like and some that I don't like.</p><p>Theo Jaffee </p><p>I agree.</p><p>Luke Drago </p><p>But I think that when you get that big, there's some level of envy or jealousy as well. And also all of your failures are bigger. So I think Moto Giroda was a bad song. I don't think this diminishes Tim Imp Butterfly in any way, shape, form. But I think oftentimes it's the most recent track is the one that changes all expectations. And the my goat is washed mentality just gets bigger and bigger. I don't know, man. He's selling out whole lot of stadiums. I think Kendrick's doing pretty well. I think he's doing all right. think a lot of the hate's just now he's popular, so it's cool to hate him.</p><p>One thing that fascinated me about bands as a bit of a sidetrack is just like how important like the lead driving force tends to be. I know that like Linkin Park for example just recently swapped out there like obviously like Chester Bennington died in Armitage and died.</p><p>2017, just a monumental force of music. It's hard to imagine a Linkin Park without him. But they have, as of last year, a new lead singer. She's really excellent. The Emptiness Machine was for a minute up in my top songs, I think early 2025. I don't know. think the band is definitely different. There is a pre post moment there. And I don't think it's ever going to be the same.</p><p>But I think you can still keep the driving force and the driving memories alive even as people fall in and fall out.</p><p>Theo Jaffee </p><p>Yeah, Pink Floyd did this. Like after Syd Barrett left, they had many of their great hits. I'm not sure if it was before or after Dark Side of the Moon, but Wish You Were Here was actually an album written for Syd Barrett, to Syd Barrett. It was wishing he was here. Shine On You Crazy Diamond was like, same thing. And then they made The Wall and Animals and all that was after him. So bands can survive without their</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>Top guys.</p><p>Luke Drago </p><p>I wish I knew more about Pink Floyd. It's been on my list for a long time to really delve into both the music and the history, because obviously it's a band that's just been a monumental force in music and cultural history. It's a weird blind spot for me where I just never really sat down and just listened to everything. I don't know how I managed to do that, gone this long without... I think I've listened to Dark Side of the Moon all the way through and that's it. I'm pretty surprised that's all I've really encountered from Pink Floyd.</p><p>Theo Jaffee </p><p>Yeah, they're good.</p><p>Theo Jaffee </p><p>I like Wish You Were Here better.</p><p>Luke Drago </p><p>I will add that to the list of things you need to listen to.</p><p>Theo Jaffee </p><p>Cool. And then like, I guess the opposite of Pink Floyd was like, know, bands consisting of like lots and lots of people who have sort of like broken up and gotten back together and whatnot like many, many times and come out with many, many albums is King Crimson. Who their very first album in the quarter, the Crimson King was just like a peak, like top 10 album, I think top five, maybe it was just such a good album.</p><p>and then like</p><p>Luke Drago </p><p>What's the album? I don't think I've heard any King Crimson. What's the album? In the Court of the Crimson King. I am adding this to my... I'm like, occasionally I'm shocked where I'd find like a crazy like, adding it now. Yeah, was like a crazy deficit. I'm gonna have to listen to this.</p><p>Theo Jaffee </p><p>in the court of the Crimson King. It's like very, very famous. Yeah.</p><p>It's like a screaming guy.</p><p>Theo Jaffee </p><p>Yeah, and then like the band broke up and got back together with different people and broke up again and like nothing they've ever made after this has been anywhere near as good and it's been like 60 years now and so it's like yeah I wonder like what is it like for musicians who you know release some amazing masterpiece and then just can never replicate it ever.</p><p>Luke Drago </p><p>Well, I wonder, I have a similar-ish thought, and it's not exactly one-to-one with Gambino, where like, Chaudhish Gambino has released, like, has done a bunch of stuff. I mean, he has been a comedian, an actor, a musician, but even as a musician, he's been a rapper and a singer, and it's just very hard to figure out, exactly what he isn't good at, but I also think this, really changes album to album. So, like, I think, you know, older Gambino, obviously, like, stuff like, you know, like, stuff like, what is it?</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Luke Drago </p><p>Stuff like camp is just very different than like...</p><p>Theo Jaffee </p><p>Redbone?</p><p>Luke Drago </p><p>very different because of the internet, very different than Kauai, very different than Awaken My Love, and then my personal favorite, can't beat an album, which oftentimes I think is a bit of a hot take, Atavista. His one he released in 2020, and then everybody forgot about it because he released it on, I think March 15th, 2020, and most of the lockdowns are between March 14th 16th, so just really awful timing, so he re-released it. Also he released it with no title, just like it was labeled 3.15.2020, and all the songs were just like the timestamp they were within the chronology of the album.</p><p>Theo Jaffee </p><p>bad timing.</p><p>Theo Jaffee </p><p>Yeah, bro is not Kendrick.</p><p>Luke Drago </p><p>re-released it in 2024. Bro is not Kendrick. But he re-released it in 2024 with new titles and updated mixing and mastering.</p><p>And I think that album is stunning all the way through and very different from his most recent album, which I think is supposed to be his final album, but what does that mean? Maybe the next one's under Donald Glover's name. I think it's the kind of thing where it's almost the opposite, where he can continually make very different hits. And the downside of this is that because they are so different every time, if you like a child's give me an album, you're like, I wish he did more stuff like this. You're just not gonna get that. He's going to just make different the next time he's on the mic.</p><p>Theo Jaffee </p><p>And then Daft Punk, you said Random Access Memories is also in your top 10, which, excellent choice. Personally, think Discovery is just slightly better, but Random Access Memories, like both of them are easily 10 out of 10 albums for me.</p><p>Luke Drago </p><p>an unbelievable album.</p><p>Luke Drago </p><p>reasonable.</p><p>Luke Drago </p><p>What is your favorite song on RAM? I'm curious.</p><p>Theo Jaffee </p><p>My god I Think like if I had to pick Favorite song like yeah, I know I'm allowed I'm allowed to pick whatever favorite I want I would pick get lucky, but I think yeah, I think like Fragments of time would also be way up there Within is great motherboard is like really really underrated</p><p>Luke Drago </p><p>You're right.</p><p>Theo Jaffee </p><p>Mother word is completely instrumental, isn't it? Yeah.</p><p>Luke Drago </p><p>I think it is, yeah.</p><p>Theo Jaffee </p><p>Yeah, like everybody who has sort of listened to little bit of Daft Punk, you know, they've heard Get Lucky, they've heard Starboy by The Weeknd, they've heard Harder But... Yeah, like, you have to go listen to Motherboard. This is like, know, deep cut. Daft Punk just doing incredibly well. Instant Crush is also fantastic.</p><p>Luke Drago </p><p>They've harder, better, faster, stronger.</p><p>Luke Drago </p><p>If you want a deep cut, my favorite song in album, by far, Touch, featuring Paul Williams. It is eight minutes long. The first two minutes are like this bizarre intro where like just noise is happening and the robots are like whispering and stuff. And then it becomes a show tune.</p><p>Theo Jaffee </p><p>Mm-hmm. Reasonable take.</p><p>Luke Drago </p><p>which ends up becoming like a ballad and canonically the song is about like a robot feeling the sensation of touch for the first time not knowing what to do about it and like being completely floored by this like very human sensation. A buddy of mine and I, two of my buddies and I were in like Italy, we flew into Venice and we're like driving across the countryside and rented a car. We blasted this song at full volume and it was just like, what a surreal experience.</p><p>Theo Jaffee </p><p>Yeah, that's gotta be amazing.</p><p>Luke Drago </p><p>Yeah, it was good fun. strongly recommend blasting anything at full volume driving through like the Italian countryside. It's not a bad place to do it.</p><p>Theo Jaffee </p><p>Yeah, I can't think of any songs on this album that aren't like great. Maybe doing it right wasn't great, but it was still like quite good. Yeah, let's see. Give Life Back to Music, great. Game of Love, great. Giorgio by Marauder. That one's one of my favorites. That one's so good. And it's like, yeah.</p><p>Luke Drago </p><p>It goes on for a while, yeah, it goes on.</p><p>Luke Drago </p><p>Great. Georgia by mortar, great.</p><p>And also nine minutes long. I think very few artists can do the like post five minutes long and still have it be like relevant and good. And I think Daft Punk can do it consistently, which speaks a lot to like their ability to keep things sonically interesting and just, you know, say I have a through line that survives throughout nine minutes of music.</p><p>Theo Jaffee </p><p>huh.</p><p>Theo Jaffee </p><p>Within, fantastic. Instant Crush, fantastic. Lose Yourself to Dance, great. Touch, no, you've already talked about how amazing. Yeah. Get Lucky, just such an amazing good song. Beyond, good but not great. Yeah. Motherboard, great, fantastic. Love that one. Fragments of Time, fantastic. Doin' It Right, good, good. Contact was...</p><p>Luke Drago </p><p>Fantastic.</p><p>Luke Drago </p><p>Yeah, I spent enough time on touch. Unreal.</p><p>Luke Drago </p><p>It's fine. Yeah.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>Very good.</p><p>Luke Drago </p><p>Yeah, I don't know. I think it's a hard album on the top. And it's definitely the album I played the most by Daft Punk. I think going in, as your final album, deciding we're doing live instrumentation now is a pretty crazy move as a send-off. I wouldn't expect a band to survive this. They survived it with no problems. I think they are better because of it. I think the album is better because it made that bold choice.</p><p>Theo Jaffee </p><p>Theo Jaffee </p><p>Sure. Yeah, I don't know because Discovery was just so good and like it's really hard for me to pick between Discovery and Ram because they're so different and yet both so like excellent. Like Discovery, I think face to face is like maybe one of the best like examples of sampling ever in like music history. It's just sampled so incredibly well. The entire song minus the drums and the vocals is just sampled from other songs.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>And it's so good. Too long is 10 minutes long. yet, like you mentioned, Daft Punk exceeds five minutes consistently. And it's great. Voyager is just like, my god, that song makes me feel like I'm floating through space. Yeah, there's just so much excellence on Discovery.</p><p>Luke Drago </p><p>Mm-hmm.</p><p>Luke Drago </p><p>Yep.</p><p>Luke Drago </p><p>I think the brilliance of artists like Daft Punk is they can do very different things, and you might want 100 other cuts that sound like that, but they've consistently hit 10 at whatever it is they're trying to do. And so means you get this discography of music that is just diverse and interesting, that it always sounds like them, but what they're trying to sound like is different every time. I think like...</p><p>Theo Jaffee </p><p>And then, yay.</p><p>Luke Drago </p><p>I'm trying to think of other artists that do this, think, well. I can think of artists that have tried to do this, like, yeah, Kanye's done this pretty well. Miley Cyrus, Sleeper Cut here, has also done this pretty well. Her most recent albums have all been very different. Like, Endless Summer Vacation is quite groovy. But then Plastic Hearts from 2020 is this pretty grimy rock album. And she got really into this, I mean, it's got a cover of Zombie on it.</p><p>Theo Jaffee </p><p>Kanye.</p><p>Theo Jaffee </p><p>Interesting.</p><p>Luke Drago </p><p>It's also excellent. I think she's pretty consistently, if she wants to achieve a musical target, she will smash it out of the park, even if it's quite different. I think the opposite of this is Drake, who think is very good at making a specific kind of hit. And even when he gestures out of that comfort zone, I think he...</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>Honestly, nevermind. I'm gonna defend it like a bit of a slept on album. But I think it's very cautious. And I think you can tell his heart's not all the way into it. And it's so not into it that at the end he throws in Jimmy Cook's. like, hey, if you didn't like that album, don't worry, here's Jimmy Cook's. And I think there's no boldness in...</p><p>almost doing the thing. I appreciate he went out of his comfort zone on this, but I think had he actually committed all the way through. If Jimmy Cooks is a single somewhere else, another album, had he really said, I'm gonna do this very different thing and I'm gonna excel at it, I think the album would have been better for it. think the energy would have been better as well.</p><p>Theo Jaffee </p><p>Yeah, I agree. So like what other musicians are in like your top ten?</p><p>Luke Drago </p><p>Man, a top 10 is always a hard one. What their albums?</p><p>Theo Jaffee </p><p>Albums, I guess you said yeah college college park uses and ram are all in your top ten Do you have like a written top ten or is it just kind of?</p><p>Luke Drago </p><p>think I do. Let me find this real quick. I think as a matter of fact, I do. Because I think it's hard to say something in your top 10 and then not talk about your top 10. Other albums here. Let me just pull up my list. What's in yours while I find this? Because I think I go off top of my head, but I really want to make sure I'm being honest here. There.</p><p>Theo Jaffee </p><p>I have not written a top ten. And if I tried to say one off the dome, wouldn't be very good. I will maybe write one in the future.</p><p>Luke Drago </p><p>Okay, other albums I know makes this list for me. Daytona by Pusha T. I think it's excellent all the way through. It's from like the soft surgical summer sessions. Standout tracks from that. I mean, basically it's a 30 minute album, so all of it. You've never heard Daytona. The games we play.</p><p>Theo Jaffee </p><p>Hmm. I've never heard of it.</p><p>Nope. I'll have to add it to my list.</p><p>Luke Drago </p><p>is probably like the peak of this album. The games you play in infrared, I think are like the two stand-ups, but it's a seven-track album. It's from the same session that produced like, Tiana Taylor's breakout album, and also like, same session that produced Kitzy, Ghost, and Ye. Like just like, there was a summer where like seven or eight albums like this just come out. I think what ended up becoming Atavista is on this list for me as well.</p><p>I can never decide if it's Flower Boy or Igor that's on this list. It's gotta be one of the, I can't pick both. I think that'd be ridiculous. But it's one or the other that's on this list for me.</p><p>Theo Jaffee </p><p>Yeah, I love Tyler.</p><p>Luke Drago </p><p>It's hard not to love Tyler. I bought Chromakopia merch and unfortunately the hoodie is Minecraft green, so I definitely can't wear it. I look like a creeper walker, like the physical, like the Minecraft creeper. It's like very bright green with black text. But it's a tour I probably should get tickets for. I don't think it's come to London yet, I don't think.</p><p>Theo Jaffee </p><p>Haha, yeah.</p><p>Theo Jaffee </p><p>Yeah, like, hmm. I think I like Igor slightly more. Like, the opening two tracks on Igor might be like one of the strongest, like, opening, track, like, lineups of any album I've listened to. I think it's, yeah, Igor's theme and then Earthquake. Like, Igor's theme is so, strong and leads well, like, so well into Earthquake, which is, like, the biggest hit off of Igor. let's see. Igor tracklist.</p><p>Luke Drago </p><p>Why?</p><p>Luke Drago </p><p>deal.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Gone gone slash. Thank you such a good song six minutes and 15 seconds long</p><p>New Magic Wand. There's just so many good songs on here. I think Flower Boy... my god, it's really good, but it's not quite as good. See You Again is maybe Tyler's best song.</p><p>Luke Drago </p><p>Probably, I think it's such a standout win.</p><p>Theo Jaffee </p><p>Yeah, and Kali Uchisana was great.</p><p>Luke Drago </p><p>See you again as what else.</p><p>See You Again is excellent. The intro, This Flower Blooms is really, really stellar. Forward and Where This Flower Blooms, both are really stellar. I like, I oftentimes like Jaden Smith, so Pothole's really good for me. I think Jaden can be little too self-indulgent sometimes, but I think his production value's always quite good and his flow is quite good. Let think, other top albums for me that aren't rap. Rogue Cut, Chroma Copies great. Rogue Cut for me, A Fever You Can't Sweat Out by Panic! At The Disco, it's their debut album. It's got...</p><p>Theo Jaffee </p><p>forward.</p><p>Theo Jaffee </p><p>I Chromakopia.</p><p>Luke Drago </p><p>I like Sin... What's the song? Sin's Not Tragedies? Yeah, yeah, It's got Kemosada on it. Build God Then We'll Talk, I think it's fantastic. Which one? I, you know, personally I'm not working on that. What can I say? I think I'm consistently confused as to why a lot of my friends are trying to build God and put him in charge.</p><p>Theo Jaffee </p><p>Sin's not tragic. Yeah, yeah, yeah.</p><p>Theo Jaffee </p><p>You like that song, I bet. build God, then we'll talk. Building God? Yeah.</p><p>Luke Drago </p><p>But especially my atheist friend. I'm a little confused by that sometimes, but that's okay.</p><p>Theo Jaffee </p><p>How is that confusing?</p><p>Luke Drago </p><p>I think it's interesting, see I think it's interesting because I think I was on stage at some event a couple weeks ago and I like, it was an AI question and I brought in Augustine and Aquinas and I think it's confused a lot of people on stage. And I think my general take is that like there's a general pervasive notion in the AI community that...</p><p>Theo Jaffee </p><p>It would be more confusing if your religious friends were trying to build gods.</p><p>Luke Drago </p><p>you know, if you can build superintelligence, you just have to put it in charge and that you'd be like somewhat stupid to not listen to superintelligence and to let it dictate your life. And I think funnily enough, like, religious had to grapple with this question for a very long time of like, why, if God, then why free will? Why would you enable like your ability to make incorrect decisions? Augustine has a very long defense of this that I think is not exactly relevant, although I think parts of it are. But I think it's like, there are lots of thinkers who grappled a lot with the question of omnibenevolent being that still allows you to make choices in what</p><p>inherent value of those choices are. And I think sometimes those conversations are actually more relevant than we think for kind of what we're building right now. I think that it's interesting to me that like, you the late Pope Francis had actually a lot of work he did on AI. His message on like 2024 World Peace Day is I think one of the better pieces of AI ethics work that's been produced in a field that oftentimes is like mired by infighting, given that he's like the Pope, he doesn't have to care about the infighting, he just chooses not to. And so I think sometimes like,</p><p>If you're gonna be building omnibenevolent beings, if that is your goal, you should probably look to at least the thinkers who spent a lot of time, who have also grappled with the question of an omnibenevolent being. And I think sometimes that doesn't happen in this space. Robin-Himes looks as like a field that isn't relevant. Yeah, hard pivot, by the way, too. I think that just, but thank you Panic! at the Disco for bringing us here.</p><p>Theo Jaffee </p><p>This is an underrated take.</p><p>Theo Jaffee </p><p>Yeah.</p><p>True. I think most of my favorite Panic! at the Disco songs are not on this album. I think the only... Okay, favorite Panic! at the Disco songs... I'm trying to remember which of these songs are Panic! the Disco.</p><p>Luke Drago </p><p>What are your favorite ones?</p><p>Luke Drago </p><p>I also wonder how you're going to cut up for your listeners, like the AI section, the music section, and then the brief Panic! Disco side quest on the role of theology in AI. I think that'll be good, a good timestamp there. Yeah, it'll be good, it'll be good.</p><p>Theo Jaffee </p><p>Yeah, just put chapters yeah Viva Los Vengeance is really good Death of a Bachelor Death of a Bachelor was so good Death of a Bachelor is like the song that like I I guess my aspirational song for being able to like sing really well is to do like Death of a Bachelor perfectly</p><p>Luke Drago </p><p>Death of a Bachelor is quite good.</p><p>Luke Drago </p><p>Brendan Urie's vocals are just criminally good. think his cover of... god, what did he...</p><p>Bohemian Rhapsody, obviously. His cover of Bohemian Rhapsody is really also incredible. And that's a song that I think very few people in the world could cover. For example, Logic's Bohemian Trapsody, I think, does a pretty poor job of emulating its namesake.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>I don't think it's supposed to be a cover of Bohemian. It's a different thing. that and Can I Kick It were, I think, the best two songs on Supermarket. They were actually pretty good.</p><p>Luke Drago </p><p>No, it's not. This is another hit at Logic Supermarket.</p><p>Luke Drago </p><p>But I think that says a lot about super market.</p><p>Theo Jaffee </p><p>Yeah, the rest of it was trash.</p><p>Luke Drago </p><p>Yeah, it's bad. Another underrated album. I know a lot of people have probably not heard of Quadeca, but I think his like 2022, I Didn't Mean to Haunt You. This kid was like, he was a YouTube rapper. He was like most famous for like writing a diss track about KSI that did pretty well. I don't think he was like a particularly excellent YouTube rapper. And then basically out of nowhere, he drops this like really ethereal</p><p>extraordinarily well produced album about like death and loss and it has like two features on it which are probably the two most bizarre features to get an album with those themes Danny Brown and the Sunday Service Choir. think Thor Harris is on it as well as like the other feature there as well but like I think he's much less well known and it just all the way through I think is stunning. I can't think of a song on there that was bad, maybe not but I think I still did like that song.</p><p>Theo Jaffee </p><p>Wow.</p><p>Luke Drago </p><p>and like its highs. Fantasy World is a seven minute song from a guy that remember was doing YouTube rap before this. Should not be able to do a beautiful like seven minute ballad and just nails it. Both thematically, lyrically, completely slept on album. Couldn't recommend it enough.</p><p>Theo Jaffee </p><p>Hmm. Okay, so that's how many albums are we at now? We have College Park, Ysys, Ram, one Tyler one, Daytona, Fever, and I Didn't Mean to Haunt You, and so that leaves three more.</p><p>Luke Drago </p><p>That leaves one, two, three, Cool, do we get out of this though, the Gambino one?</p><p>Theo Jaffee </p><p>at a vista by Donald Glover. Okay, so two more. Yes.</p><p>Luke Drago </p><p>So I get two more albums, I get two more.</p><p>Luke Drago </p><p>It's hard not to pick a Frank album. It's hard not to say like one of these Frank albums needs to be there. Yeah, well, I mean, you have the mix saves too. I am gonna put Channel Orange there.</p><p>Theo Jaffee </p><p>It's easy to pick because there's only two of them.</p><p>Theo Jaffee </p><p>Hmm, channel orange over blonde.</p><p>Luke Drago </p><p>Controversial. I am in fact going to put Channel Orange over Blonde. And I think it is because while Blonde is like an excellent album all the way through, a couple of highs on Channel Orange I think do not get replicated anywhere else. I'm really thinking of Pink Matter, Bad Religion, and Pyramids. I think Blonde is excellent. I don't think anything on Blonde touches my feelings about Pink Matter, Bad Religion, and Pyramids. So I enjoy listening to that album more, but I Blonde is obviously gorgeous.</p><p>Theo Jaffee </p><p>Hmm. I think,</p><p>Theo Jaffee </p><p>That's true. I think Pink and White is like one of the greatest songs ever. Really, like I thought it was even like compared to the other songs on Blonde, I thought it was just so far above. I was like, this is really like a like a probably a top 10 top 20 like human song ever. I really like that song.</p><p>Luke Drago </p><p>Yeah, seems fair.</p><p>Luke Drago </p><p>Yeah, it's up there. I think I feel the same way about Bad Religion and Pink Matter especially. Like when the Andre 3000 feature comes in on Pink Matter, and I think he delivered, he hasn't wrapped in a while, he delivers like probably, it might be the best version of his career. And it comes out of nowhere, it flows excellently, and all the setup to it has been gorgeous as well. I mean, there hasn't been a mist there. The lyrics are astounding.</p><p>I really like Pink and White. think it is sonically pristine. I don't think it hits the same kind of lyrical quality that Bad Religion, or sorry, that Pink Matter does. Although at this point it's kind of hard to compare. You're talking about songs that are just so good it's difficult to make a comparison between them. So I get one more. And it's gonna have to be a Kendrick album. It just kind of has to be. And it's probably not gonna be Mr. Morale or G &amp;X. Although, Mr. Morale has a special place in my heart.</p><p>Theo Jaffee </p><p>Yeah, that makes sense. And you get one more album.</p><p>Theo Jaffee </p><p>Yeah, no. GNX was not even... I listened to GNX for the first time. I remember where I was, Depot Park, Gainesville, Florida, on a like perfect beautiful day, you know, like 68 and sunny. And I was just like, oh man, like yeah, there's some good songs on here. I was like, what is this like mustard thing?</p><p>Luke Drago </p><p>What is your favorite song on GNX? I'm push you on this. What's your favorite song on GNX?</p><p>Theo Jaffee </p><p>Hmm. right, let me look at the track list. Squabble Up is like, I did not really like that one. TV Off, I did not. this song is on GNX. Yeah, there's like one song on GNX I thought was way better than every other one, which is Heart Part 6. It's Heart Part 6. Yeah. I was like, what is this doing on this album?</p><p>Luke Drago </p><p>I have like just an immediate hit on this.</p><p>Luke Drago </p><p>Let's go.</p><p>Luke Drago </p><p>And it is? It's Heart Part 6. Heart Part 6 is fantastic. Yeah, that is my favorite song on the album. think... It also is not thematically the same as the rest of the songs on the album. It doesn't sound like the rest of the songs.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>I like a lot of this, I like Man of the Garden, I like Luther a lot, I like Reincarnated a lot, but Hart Part 6 for me just stands out, I think that's right. favorite, so we're back, we're down to the big three. Pimp a Butterfly.</p><p>Theo Jaffee </p><p>You know what my, like, I think most underrated Kendrick song ever is? Duckworth. Which is a song on Damn. The sample that they use is like this Yugoslavian band called September. And they have a song called Ostavi Trag which is like, really, really like hauntingly beautiful vocals that they use. And the story of the song is talking about these two characters named Anthony and Ducky</p><p>Luke Drago </p><p>Duckworth is a I mean, I'm f-</p><p>Luke Drago </p><p>Really.</p><p>Theo Jaffee </p><p>And Anthony is like this gang banger, violent criminal type guy. And Ducky is just like a guy trying to live in the hood and trying to get by. He works at a chicken restaurant. then Ducky sees Anthony coming to his restaurant and he decides to find favor with him. He gives him extra chicken and extra biscuits. And then Anthony robs</p><p>the restaurant the ducky works at and decides to not kill him because he had been nice to him. And then like the bars at the end of the song, you know, I'm not going to spoil it for my listeners. You have to like listen to the song and</p><p>Luke Drago </p><p>No, I think you've got to spoil it for the review. I think you're going to have to.</p><p>Theo Jaffee </p><p>Okay, he says, mm-hmm.</p><p>Luke Drago </p><p>And the reveal there is that like Ducky is Kendrick's father and Anthony is his record, the person who runs his label. Top Dog Entertainment. that, it's just unreal, the reveal. I get goosebumps every time I hear that line. I think I'm gonna go with Damn. I'm gonna, yeah, I'm gonna go with Damn. I think that is my favorite Kendrick album. It's a, that is a, I get why. I think the themes on Damn, wrestling with this like.</p><p>Theo Jaffee </p><p>I got those goosebumps every time.</p><p>Theo Jaffee </p><p>I like good Kid Med City and TPAT more. But it's up there.</p><p>Luke Drago </p><p>jaded sense of religiosity combined with like the anger after like after the state of the country and an election. I don't know. I think it is just the most personally resonant as someone who is religious. I think I've thought a lot about these themes. And I don't know. I think its highs are really high. But nothing in the top three is bad. I mean obviously like these albums are just...</p><p>Theo Jaffee </p><p>You can also...</p><p>Luke Drago </p><p>exceptional pieces of art that are hard to compare against each other. And I think Untitled on Master is unfairly slept on. Obviously it's like an Untitled album, but I think that like... Is Untitled 03 the one with CeeLo Green? Is that right? No, it's Untitled 06. It's Untitled 06. Pardon me?</p><p>Theo Jaffee </p><p>It's very good.</p><p>Theo Jaffee </p><p>I don't know, like that's the problem with making your album untitled. That's the problem with making your album untitled. You have to remember, like what was it, untitled song number three or was it untitled song number six?</p><p>Luke Drago </p><p>It is.</p><p>Untitled 06 is one of my favorite songs of all time. Cee La Green's in it out of nowhere. I really like Untitled 06. think Untitled is, Untitled the Master is my favorite production of any Kendrick Owens. It's mixed between like T-Bops, jazziness, and a Pimp Butterflies, like very leaning into modern rap that I think it does very well.</p><p>Theo Jaffee </p><p>You can also tell that like, Logic steals a lot from Kendrick. Like, he takes so much. Under Pressure is good Kid Mad City. It is, yeah. And everybody is to pimp a butterfly. But not as good, of course.</p><p>Luke Drago </p><p>Yes, Under Pressure is in fact good, Kid Mad City. But you know, it's like, it's a good, but it's a good copy, so it's fine.</p><p>Luke Drago </p><p>See, I think it didn't stick the landing, so I didn't notice it, but even sonically.</p><p>like under the peak of the albums under pressure right like albums under pressure song is under pressure under pressure it includes like the same structure as sing about me i'm dying of thirst it just puts like the aggressive part at the beginning instead of the end but it even goes to like multiple letters written to different people and then a letter from logic's perspective and that literally is the exact same form that sing about me i'm dying of thirst takes it's three letters two from other people one from kendrick then a switch and like a diff like a kind of a moral storytelling at the end whereas this one is like the moral</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Luke Drago </p><p>storytelling like the violence at the beginning, then three letters, the last one from logic's perspective, interspersed with voice messages, which is literally exactly how that same song flows.</p><p>Theo Jaffee </p><p>Under Pressure is like easy top contender for number one best Logic song. It's so good. And Under Pressure the album is maybe the only Logic album where there are zero skips. Like where every song is actually really good. I kind of like it. I like, you know, it took me like when I was listening to it, it took me about like</p><p>Luke Drago </p><p>Yeah, it's also his best Kendrick ripoff.</p><p>Luke Drago </p><p>I think Nikki&#8217;s not fantastic.</p><p>Theo Jaffee </p><p>20, 30 seconds to realize, I get it.</p><p>Luke Drago </p><p>It was the first time that logic could be a little heavy handed. I think by everybody it just really becomes a problem. But like a little corny. But yeah, I do think that's probably right. think other contenders for top logic's on. I think lightsabers like correctly has a contention here. I think Paul Rodriguez. City of Stars. Which is just like his pyramids contender I think basically, but that's fine.</p><p>Theo Jaffee </p><p>A corny. Yeah.</p><p>Let's see.</p><p>Theo Jaffee </p><p>Paul Rodriguez, I would put up there.</p><p>City of Stars is excellent.</p><p>I actually really like, on the Incredible True story, I really like Fade Away, I really like... What's the fourth one called?</p><p>Luke Drago </p><p>fade away, stainless. I don't think it's, I woe.</p><p>Theo Jaffee </p><p>Incredible true story tracklist like whoa yeah, that was really good</p><p>Luke Drago </p><p>Yeah, like what was really nice. I like stainless a lot. Stainless is up there for my like top. I also think...</p><p>Theo Jaffee </p><p>Young Jesus's Logic's Best Music Video.</p><p>Luke Drago </p><p>That seems right. Till the End, think is a really good logic song as well. It's like a, I think it's the final song on Under Pressure.</p><p>Theo Jaffee </p><p>That one's not under pressure. Yeah.</p><p>Luke Drago </p><p>At least in the main version and I think that's like an excellent outro Heat 6 is such a good producer and you can really tell when like they you know You can tell when he knows he has it because like logic will also get more excited on the beat and they're like Oh, I just know you need to this beat Like I think I confess is like this as well where the beat on confess is just so good that like Logic gets better because of it and killer Mike shows up at the beats way I don't know the whole thing is just really well done</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>Yeah, I'm impressed by like how many of these songs you can specifically recall off the dome and like the characteristics of each one that you can talk about.</p><p>Luke Drago </p><p>I have somewhere that like, one of my like weird hidden talents is that like, I've heard a song and I liked it, I have basically the whole thing memorized.</p><p>Theo Jaffee </p><p>Yeah, so do I, but like, you know, you can take it a step further and say like, you know, there's a beat switch here. I think I need to like read more music theory terms, I guess.</p><p>Luke Drago </p><p>I, for what it's worth, have never actually formally engaged in music theory. I, for a couple of years, took piano lessons, so maybe that's the peak of it, but I was not particularly good at sight reading. Mostly because I just really preferred to memorize the entirety of the song in one go, as opposed to reading it every time. It was better for me if I could close my eyes and do the whole thing, which is not a good evolutionary pressure if you're trying to get really good at sight reading. So just play the song and then just try to mimic it right there, and then we'll just go from there.</p><p>Theo Jaffee </p><p>Yeah, very fair.</p><p>Luke Drago </p><p>My piano teacher was frustrated by this because the problem is it worked. At least there's a certain point I actually could keep competitively, I could play and keep up with level difficulty while never actually learning how to read it because I just listened to it and maybe watched someone play it once or twice and then I would just do it until I had it memorized. Maybe that's indicative of how I listen to music as well.</p><p>Theo Jaffee </p><p>Well, I should probably get going pretty soon, but this was a great episode. Thank you for coming on the show.</p><p>Luke Drago </p><p>Yeah, this is great. had a blast. This is, like I said, my first podcast. I hope every podcast covers this much of everything.</p><p>Theo Jaffee </p><p>Yeah, was great to have you.</p>]]></content:encoded></item><item><title><![CDATA[Podcast: Alok Singh]]></title><description><![CDATA[AI, Math, Philosophy, and Erewhon]]></description><link>https://www.theojaffee.com/p/podcast-alok-singh</link><guid isPermaLink="false">https://www.theojaffee.com/p/podcast-alok-singh</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Wed, 30 Apr 2025 02:28:15 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/162480793/7c28a354fca35adbec27e3432c8bd479.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Alok Singh leads research on Lean at Max Tegmark&#8217;s Beneficial AI Foundation, and writes about mathematics, history, and other cool things at alok.github.io.</p><h3>Chapters</h3><p>0:00 - Intro<br>1:12 - Typing<br>8:45 - Elon&#8217;s demo day<br>22:42 - Animation, discrete vs continuous<br>29:04 - Number systems<br>35:26 - Nonstandard analysis<br>43:04 - Reasoning models and o3<br>50:45 - Fiction<br>55:48 - o1 and Linguistics<br>58:50 - Hyperfinite sets<br>1:11:58 - AI for math<br>1:16:01 - The field with one element<br>1:23:17 - Lean<br>1:31:53 - Lean for formally verifying superintelligence<br>1:36:03 - Ayn Rand<br>1:47:46 - Erewhon<br>1:57:56 - Proto-Indo-European<br>2:03:18 - More Erewhon<br>2:14:41 - Butler and Kaczynski</p><h3>Links</h3><ul><li><p>Alok&#8217;s Website: <a href="https://alok.github.io/">https://alok.github.io/</a></p></li><li><p>Alok&#8217;s Twitter: <a href="https://x.com/TheRevAlokSingh">https://x.com/TheRevAlokSingh</a></p></li><li><p>Beneficial AI Foundation: <a href="http://beneficialaifoundation.org/">http://beneficialaifoundation.org/</a></p></li><li><p>Lean: <a href="https://lean-lang.org/">https://lean-lang.org/</a></p></li><li><p>Transcript: <a href="https://www.theojaffee.com/p/podcast-alok-singh">https://www.theojaffee.com/p/podcast-alok-singh</a></p></li></ul><h3>More Episodes</h3><ul><li><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p></li><li><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p></li><li><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p></li><li><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p></li><li><p>My Substack: <a href="https://www.theojaffee.com">https://www.theojaffee.com</a></p></li></ul><h1>Transcript</h1><p>Theo Jaffee </p><p>Sure, yeah, could just start with the informal beginning chat.</p><p>Alok Singh </p><p>Yeah, everyone loves a cold open.</p><p>Theo Jaffee </p><p>Yeah. So you mentioned like you have a bunch of stories that you wanted to talk about. Yeah, you said typing, Elon's demo day, coral and the bat.</p><p>Alok Singh </p><p>I did</p><p>Okay, well, typing.</p><p>I had typing class when I was like, I don't know, eight or something, like with a keyboard. And did you guys have that?</p><p>Theo Jaffee </p><p>Yeah, we did. I didn't retain much of it. I typed with like four or five fingers. It's probably bad. I should learn how to type.</p><p>Alok Singh </p><p>Well, I didn't retain</p><p>any of it because what happened was one day the teacher saw me looking down and so she reset my progress to the very beginning in the typing tutor thing we had and I found that so Demoralizing that I just like gave up on it and just type like this for the next like 20 years Not 20 until I was about 20 Then I started doing programming and like four months of programming it like 10 words a minute is just terrible</p><p>And this guy, Steve Yegge, has an article, Programming's Dirtiest Little Secret, where he quotes a section from Reservoir Dogs of Mr. Pink talking about how he doesn't tip and that his advice for waitresses is, learn to fucking type. Yegge always says things in a roundabout way, but it resonated with me. And at the time I found an old book.</p><p>in the 60s called LSD the problem solving psychedelic was talked about a guy using it for typing and then I didn't do it for one day on a Sunday.</p><p>Yeah. then like, I couldn't type. decided I would learn Colmak instead of QWERTY or Dvorak because the reason is really silly. It's just that I'd seen a cute girl at a hackathon who I've never seen since using it.</p><p>No deeper reason. Like maybe it is ergonomically better and all, but that wasn't the reason at all. And within one day I taught myself to type. went from, well, not zero, but 10 words to about 70, or rather 30, just going with some typing tutor.</p><p>Like I think about substance that I didn't take is that like a kid, they just like completely remove the feeling of as an adult of like you fuck up and you feel bad for a second, but you just don't. You just notice I made a mistake and then you fix it and you just keep moving on. And there's no moment of pausing like that. And especially for something like typing, which is hundreds of little mistakes to begin with anyway, it adds up. So by the end of like eight hours of typing, I was at 30 words a minute, which is</p><p>pretty damn good for one day. But then I had these like weird dreams that night of</p><p>Like in, I think, Call of Duty Black Ops, maybe? One of the Call of Duty games, there's some character, Mason, who sometimes hallucinates these big red numbers that look like they're splashed in blood, like graffiti across his visual field. And I had dreams of letters that night, my fingers kept twitching.</p><p>And then I detested out, I didn't type at all for a week, which after a lifetime of not doing it was pretty easy. Dude, used to have, I used to like handwrite all my school assignments and talk about a typical mind fallacy. I wondered why people typed stuff that I was so deep in the stupid hole that I created for myself that I forgot the typing is faster.</p><p>Theo Jaffee </p><p>Back in like third grade, they made us learn cursive and even then I remember thinking like, I ever gonna use this ever? Nope, turns out, nope, I never use it. I type everything. When I do have to handwrite stuff, I found that my handwriting has weirdly like converged towards my dad's handwriting, kind of for no reason. Like my handwriting now just looks a lot like my dad's handwriting, even though I didn't try to emulate it. So maybe handwriting is just genetic.</p><p>Alok Singh </p><p>Yeah, my dad's handwriting is as ugly as my own.</p><p>and my brother's too. My mom's handwriting is like only a little better. But anyway, after a week of not typing, which is easy after a lifetime of not typing, then I did an exercise again and I could do, I was at 70 words a minute, which was very surprising.</p><p>So for the cumulative total of eight hours of typing practice, I was up at 70 words.</p><p>Theo Jaffee </p><p>Wow, good exponential improvement.</p><p>Alok Singh </p><p>Now it's like stabilized</p><p>somewhat to like 80 or 90 and maybe another session like this and it could go over 100. But this is pretty fast, so I'm reasonably satisfied.</p><p>Theo Jaffee </p><p>Hmm. Wow, I guess I'm kind of proud of myself now that I can type like 120 with like five or six fingers.</p><p>Alok Singh </p><p>So now type on like this keyboard. I started out when typing using just a MacBook keyboard and to prevent myself from cheating that one day I took a bunch of electrical tape and I taped over all the keys and then midweek I spent like three hours scraping off the electrical tape with when I closed the lid the heat just sort of like fused it to the keyboard and the sticky stuff started to leak out and got all gross.</p><p>then I got this, not for the ergonomic reason, but it is ergonomic. Or actually, I'd rather I got this.</p><p>And then they released a wireless version a few years later.</p><p>Theo Jaffee </p><p>Yeah, I just use a pretty standard Black Widow V3, Razer Black Widow V3 gaming keyboard. And really I only use this for gaming, pretty much. Like, I do almost all of my work on my laptop and not on my expensive gaming PC, which I basically use as a Fortnite machine. Though maybe I should work on it more.</p><p>Alok Singh </p><p>and</p><p>I play Fortnite on the Vision Pro.</p><p>Theo Jaffee </p><p>Really? With a MacBook?</p><p>Alok Singh </p><p>You don't need a MacBook, you can do it. If you have internet that's good enough, like fiber or any 100 megabits per second is enough. You can get Nvidia GeForce now and like a PlayStation controller. Maybe you exclusively play with keyboard and mouse and you could probably set that up. But in my case anyway, I use the controller and then I can play it with the giant screen on the surface of the moon.</p><p>Theo Jaffee </p><p>Is GeForce now like cloud gaming? So is it slow as hell? Yeah. Yeah.</p><p>Alok Singh </p><p>Yes, that's why you need a good internet. No, that's why you need a good internet. With good internet,</p><p>the latency is really not that bad. I'm not such a pro gamer where a few extra milliseconds makes an enormous difference to me, which I'm fine with. I used to game a lot and then I gave it up.</p><p>Theo Jaffee </p><p>Yeah, I have honestly never really gamed that much, which might be surprising to people who know other aspects of my personality, but I don't know, I never got addicted to it. I never got one-shotted by Factorio, as they say. I do play like a reasonable amount of Fortnite, but that's like the only game I play.</p><p>Alok Singh </p><p>Fortnite and Smash Bros were the only video games I'd really played after high school. I played a little bit of one of the Zelda games one night where I spent like four hours on it at a party, but that was a one-off.</p><p>Theo Jaffee </p><p>So tell me about Elon's demo day. I'm curious about that.</p><p>Alok Singh </p><p>Also a story of not drugs. Yeah, yeah. Let's see. It was.</p><p>Theo Jaffee </p><p>SF people love their drugs,</p><p>Alok Singh </p><p>2020 maybe? It's when he first announced the robot. So I got invited to it randomly. I don't know, maybe they scraped LinkedIn for my email or something. And complete with this nice Uber ticket that's copped for.</p><p>So I spent the whole Uber ride, like I just noticed when we were, me and the driver passing by some of the billboards at SF, how the light glinted off some stuff. And a while ago, I started noticing like rainbows around lighted objects. Like it depends on the object. I think it's like a sort of a stigmatism. It's like a traffic light, like the red and the green. They're...</p><p>Theo Jaffee </p><p>I see streaks</p><p>in halos, but not rainbows.</p><p>Alok Singh </p><p>Well, traffic lights don't have rainbows. They have just one color and I'll see like a sort of truncated sphere of light around them. But like a car light from the headlights. Then I'll see a rainbow around it, actually several, but the first in like concentric spheres. But the first one is dramatically brighter than the others and it's rare to see the second one. And I extrapolate that they go out essentially to infinity, but drop off in intensity very quickly so you can't see the majority of them.</p><p>and</p><p>This just got me down like a train of thought about the electromagnetic field and how the four fundamental interactions, electromagnetism is the main one where we can apply art. Gravity is so weak, like I think 34 orders of magnitude weaker than the weak force. So gravity only really matters mostly at the biggest scales, which we generally don't build things in yet. Fingers crossed on that one.</p><p>But unless you're building like a superstructure or trying to detect, yeah, like little waves, gravity generally doesn't matter all that much. The strong and weak forces were not even discovered until the mid 1900s, since they acted atomic scales and are also outside of nuclear engineering, mostly inaccessible for everyday experience, which leaves electromagnetism to explain like basically everything.</p><p>A good deal of chemistry at large is from electromagnetism, material properties of wire things, strong or soft or hard, etc. Electromagnetism.</p><p>and then this led me down some other train of thought.</p><p>But then the ride arrived in their deer creek, deer park, whatever, their office in Palo Alto. And I'd just been talking to the driver a bit at this point. And he asked jokingly if I could get him in. I decided on a whim, you know what, I'll just try. I'll just ask them. And then when I told him that, he looked genuinely afraid. And I just felt this impossibly large gulf between us. And I felt really sad. Like there was some</p><p>void that he could not cross and not because they would bar him at the gate but like more.</p><p>I swallowed it and just walked through. Still think about that.</p><p>Theo Jaffee </p><p>interested.</p><p>So the Elon's Demo Day story was not actually about the Elon Demo Day.</p><p>Alok Singh </p><p>No, no, I'm not even close to done. This was just like part of the lead up, it was a whole thing.</p><p>Theo Jaffee </p><p>&#8275;</p><p>Alok Singh </p><p>Then.</p><p>The demo itself. Well, it hadn't started yet. So everyone just sort of milling about. was relatively early. So there's maybe like, I don't know, 20 people out of a few hundred and</p><p>Look, I say hi to a guy who doesn't recognize me at all and I won't name him.</p><p>and</p><p>was really shitty. So like, it just had no calories. So I saw the the staff, whatever the polite word for them as the helpers. What some our grandparents would have called servants, but we don't say that because that's We're</p><p>Theo Jaffee </p><p>You're allowed to say server,</p><p>but not servant.</p><p>Alok Singh </p><p>That's true, actually. The servers and security staff, whatever the staff, they had some Domino's pizza and I was starving. I asked for some, I offered to pay, but they just like gave me some. And then they spent the whole time when I was eating, doing what</p><p>was like a fan of gossip, totally unknown to me, where they basically identified each other by astrological signs and then talked about like unfriendly people having Pisces eyes and how they could tell.</p><p>Pisces eyes comment, they seem like weirdly at pains to say it in earshot or just out of earshot of me. I wonder if they meant me in that case, but I have no idea. I'm a Capricorn anyway.</p><p>Theo Jaffee </p><p>Yeah, I was gonna ask.</p><p>Alok Singh </p><p>Yeah I was born in January after all, like early January.</p><p>But then I started just wandering around the parking lot, like up and down, just idly thinking.</p><p>about well data gathering and the</p><p>It doesn't make sense inside a head, but just expanded out just sound like rambling nonsense. So stain of thought or babbling to use modern terms. was like along the lines of this combination of</p><p>Theo Jaffee </p><p>end of thought.</p><p>Alok Singh </p><p>how the concrete world contains the abstract world.</p><p>It has all the abstract information is in the concrete world, but there's more besides like specific incidental facts and not just necessary ones. And then I just thought Tesla should make a robot for data gathering. That it would be really expensive, but they should bite the bullet because they were the one company I thought could actually pull it off in the being controlled so top down.</p><p>They could have one guy that would just push on it because people have tried humanoid robots before, but everyone has failed in this because they haven't had the commitment to go to insane lengths, which is necessary. Like everyone sort of backs off halfway. They think they want a humanoid robot, but then in their efforts to build a new earth, their vision of a new heaven dims and they back off from the humanoid robot to like a factory one or some specialized thing that's not humanoid anymore.</p><p>Theo Jaffee </p><p>think this is changing now.</p><p>Alok Singh </p><p>Yeah, but this was five years ago, so.</p><p>Theo Jaffee </p><p>And what do mean by a robot for data gathering?</p><p>Alok Singh </p><p>The</p><p>Because a robot that's like a human has basically the same interface as we do to gather data that we care about minus smell for now.</p><p>Theo Jaffee </p><p>and</p><p>maybe some other things.</p><p>Alok Singh </p><p>taste, the more continuous senses, that's true. But it's still better configured since the world has been organized by us.</p><p>and made legible by that organization. Like so much of the point of like ordering stuff is to make it legible to us. And the</p><p>A robot that's shaped like us and that has to interact by basically the same means, although hopefully more competently, is in a better position to access it and can do all sorts of random idiosyncratic tasks, well, as many as we can do, that a car or a squirrel or a pick and replace robot just can't do.</p><p>and that</p><p>between language and, what's his name, a continuous world, which I'll just say vision, that's a short form for all that. You've probably heard me rant about discrete and continuous many times.</p><p>Theo Jaffee </p><p>Many times, yes.</p><p>Alok Singh </p><p>And while the continuous side seems to be the harder one, at least for machines, like even now, image models have some very impressive stuff, but people have pushed on text much more. I mean, I can see why it's got the same advantages of as discrete stuff, that equality is a meaningful question for it. And it's easier to evaluate if something is right or not. But most of the world is still in this like physical continuous realm.</p><p>and going back to Tesla, well, they have their cars and of course Waymo has them too, but Waymo doesn't seem like the kind of company that was or is going to build robots to do things other than drive.</p><p>Theo Jaffee </p><p>Yeah, not likely.</p><p>Alok Singh </p><p>And while I'm wondering,</p><p>and then while I'm just like wandering around this parking lot up and down thinking these, I realized, wait, shit, the demo's already started. And it's apparently too full. And I don't want to just like stand outside and look at the screen like some asshole, cause I could just like watch it later. So I continue wandering around the parking lot. And then I find out after the whole day is done, that actually the thing he announced with the robot.</p><p>Theo Jaffee </p><p>Wow.</p><p>Alok Singh </p><p>was quite a feeling. Also this like very visceral experience of like that phrase the child is the father to the man. Have you heard this one?</p><p>Theo Jaffee </p><p>I but I forgot the context.</p><p>Alok Singh </p><p>Like, the child is father to the man because the things you do in the past affect the things you do in the future, in short. And in this way, the stuff you do as a child affects the things you can do as a man, and in that way is their father. Also the mother, but whatever.</p><p>I just had these visual hallucinations of the many possible arcs.</p><p>of like my own, what's the word, my world volumes or possible world volumes.</p><p>Theo Jaffee </p><p>What do mean?</p><p>Alok Singh </p><p>world volume is like</p><p>The world is basically your timeline. Volume, because it's like 3D moving through space, moving through time. So your world volume is essentially all the set of states you'll ever occupy.</p><p>FaceTime. And at least assuming for the moment that it is changeable. It may well not be, but whatever. For the visualization, it didn't matter.</p><p>suddenly I felt much less here and rather spread out throughout all of it. Like I was a steward for my future self and that's like every moment is like a fire brigade with the water bucket chain handing it off to the next person, which is me in the next instant and just getting handed a bucket for me in the last instant.</p><p>Theo Jaffee </p><p>And you got all that from thinking about humanoid robots at Tesla?</p><p>Alok Singh </p><p>It just came around at same time. It was a big swirl of thoughts, like a lightning storm. That's not so uncommon.</p><p>That was a very freeing thought.</p><p>Suddenly I felt much nicer to myself. Not such an inner critic.</p><p>if it was directly related to the robots, only incidentally. It also made the overall experience maybe more interested in hardware.</p><p>An interest that I've still only done a little bit with, because stuff takes a lot of time. But it was one of the examples I like to give people about why lean is cool. Not the drug kids.</p><p>Theo Jaffee </p><p>I love lean!</p><p>Alok Singh </p><p>Lean for real is a Playboy Cardi song, right? just whatever. Because there's a song by Travis Scott called Fiend where a friend of mine sent me Fiend, but it's Indian. So the background beats is like some Indian thing playing and it was awesome. But then there's the joke of what was it?</p><p>Theo Jaffee </p><p>Maybe? I don't know. I'm not exactly a Cardi fan. Let's do a few songs.</p><p>Whole lotta red. Sky.</p><p>Alok Singh </p><p>Dravinder Srinathan featuring Prabhakar Karthik. Travis Scott featuring Playboi Carti. Prabhakar Karthik, though, I don't know, that just triggered the grooves in my brain and I found it endlessly funny. I'll send you the link, actually. And you can listen to it later.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Hahaha</p><p>That is funny.</p><p>or like Pavitr Prabakhar from the Spider-Verse movie.</p><p>Alok Singh </p><p>God. I don't like the Spider-verse movies. That's like some smarmy cunt who can hear all these things about how he's making things worse, but then he says, no, I'm a do-me. Literally what he says. And that just annoyed me so much. Like the vampire Mexican Spider-Man that everyone hates on. No, he's a good guy.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>Do some stupid Deus Ex Machina so he can have it all, of having his daddy and his mommy and his timeline and everyone else's timeline. But it's still bullshit.</p><p>Theo Jaffee </p><p>Mm, did you like the art at least?</p><p>Alok Singh </p><p>Meh. Space elevator is cool.</p><p>Theo Jaffee </p><p>No way.</p><p>Merely meh, it was probably like the best, you know, innovation in like animated filmmaking in the last like 20 years.</p><p>Alok Singh </p><p>I remember one thing from</p><p>it, actually, two scenes I remember very specifically, and the overall feeling of this contrast between, like, obviously drawn and then hyper realistic. The two scenes that stood out for me as hyper realistic is one, a scene of a shopping mall with a glass bridge, and it's shot from like head on and then the sun is rising over it. And for that scene, just for a moment, looked like reality. And also the scene where they're jumping around cars, it's night and</p><p>You can see the lights of the cars going down and you can see the subsurface reflection into the tarmac of the road. And that also just looked real. For like that fast. And I think about that a lot.</p><p>Theo Jaffee </p><p>Yeah.</p><p>I mean, I think it's</p><p>a much needed innovation from the sort of like 3D Pixar style that had just like dominated basically all animation for like the last 15 years. Like when was the last, yeah, when was the last time you saw a compelling 2D animated movie before Spider-Verse?</p><p>Alok Singh </p><p>the Eternal Virgin style.</p><p>Plenty of Japanese movies, but Japan's special.</p><p>Theo Jaffee </p><p>Sorry,</p><p>compelling 2D American movie before Spider-Verse. There's plenty of Japanese ones.</p><p>Alok Singh </p><p>When did the white man</p><p>do 2D well? That's a better question.</p><p>Theo Jaffee </p><p>Yeah, the Japanese do it way better than the white man. My revealed preference is that most of the animated...</p><p>Alok Singh </p><p>Maybe the spine of night.</p><p>I didn't know the Spine of Night looked like dog shit. Everyone looks like they're moving with broken limbs in soft body physics where they have no bones. Which is a contradiction, but whatever.</p><p>Theo Jaffee </p><p>Most of the animated content I've watched has been Japanese animated content because it's good. They do a good job. It's aesthetically pleasing. The stories are good. I guess another example of like innovative sort of 2D animation or innovative animation in general was Kubo and the Two Strings, which was Japanese inspired. That was American. It was okay. I sort of remember the story being pretty good. The animation itself, not my favorite.</p><p>Alok Singh </p><p>Yeah, I saw it.</p><p>Wasn't the one also like clay nation</p><p>though?</p><p>Theo Jaffee </p><p>I just don't like claymation. Maybe I'm biased, but I just don't think that it's, you know...</p><p>Alok Singh </p><p>It's</p><p>like one handed pottery. It's really impressive you can work with one hand, but no, it doesn't look that good.</p><p>Theo Jaffee </p><p>It's yeah, it's technically impressive that you can reshape clay tens of thousands of times, but it just looks kind of creepy. Even good clay nation movies, like there are good clay nation movies like Coraline. Was Coraline clay or something similar to that?</p><p>Alok Singh </p><p>Actually, yeah, it's...</p><p>You can't look at that part, think so.</p><p>stop motion.</p><p>Theo Jaffee </p><p>Coraline wasn't literally claymation, but it was that sort of vibe.</p><p>Alok Singh </p><p>Well, if stop motion, mean, claymation and stop motion usually go hand in hand because you kind of need it because if you're live sculpting something. &#8275; one movie I liked quite a lot, The Peasants. It's this Polish movie. It's hand drawn.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Technically, isn't all animation</p><p>stop motion?</p><p>Alok Singh </p><p>We can talk to screen. Continuous, but maybe wait a bit.</p><p>Theo Jaffee </p><p>That's, yeah,</p><p>that's a very good discrete and continuous top.</p><p>Alok Singh </p><p>Yeah, that's one where often, even when like the object of our concern is ultimately discrete, and our original conception of it is discrete, it's still profitably round tripped through the continuous. Like of, okay, a movie, like The Peasants, the person, the movie is hand drawn. And everything, looks like diffusion and that everything is literally flowing. Like if you watch every moment, because it's</p><p>is hand drawn, there is no two scenes, even when they're standing still, are actually the same unless they're reproducing the exact per-stroke frame the same, which they aren't. So every moment is flowing, not flickering, because it's not like little points of light like a kid's memories.</p><p>Theo Jaffee </p><p>is.</p><p>Is our perception of time discrete or continuous?</p><p>Alok Singh </p><p>continuous. It's like one of those things that maybe it's ultimately discrete, but is thought of as continuous. okay, like, draw, like in drawing that movie, while people are drawing, and that certainly feels, at least each frame feels pretty discrete to them, because they have to draw the damn thing themselves. And then there's the ultimate thing kind of is discrete when you think about it for a second. But then the intermediate, and what they're shooting for is the illusion of continuity.</p><p>Theo Jaffee </p><p>Hmm. Sometimes you can perceive it. On ones vs. on twos.</p><p>Alok Singh </p><p>Or even a more. Yeah, a more mathematical example</p><p>would be well, like things with limits like say the Taylor series of E to the X, where you can start with a continuous compound interest formula of one plus X over N all to the power of N limit as N goes to infinity. Well, that's N many steps discrete. But then it's idealized as being some continuous formula, E to the X, which you take a derivative of.</p><p>to get the Taylor series and then to compute it while you truncate the Taylor series. So you started with discrete with this like to the power of n formula. Then you just lift it to the continuum and you work with all these nice properties, including the shifting property within its binomial series, which lets you get this X to the n over n factorial Taylor series. And then you go back down to discrete by chopping the Taylor series to some finite term. Usually like five or six terms is good enough.</p><p>to get a very accurate approximation. Mercifully, that one converges very fast for almost all values.</p><p>Theo Jaffee </p><p>So yeah, let's talk about math. Do you think, you sent me this article that was like the most important, biggest breakthrough in the history of math was the development of Arabic numerals, which is, the numeral system.</p><p>Alok Singh </p><p>be like the Hindu guy and say Hindu numerals or Indian numerals. The Arabs did not do anything with the number system in terms of actually developing the numbers themselves. Like there's this annoying trend of like Indian guys that even Aldous Huxley noted of trying to claim that every invention is made by of course in a variably ancient India which I fucking hate but this one actually was.</p><p>Theo Jaffee </p><p>he was.</p><p>Alok Singh </p><p>We was yogis and shit. Well, not my ancestors. They were farmers. But this one they deserve credit for. Like the same article mentions that of number systems that the Indian one, the one we use now has three aspects, which all other civilizations, including present ones, had at the most two aspects of, and most were lucky to get even one, which is</p><p>Theo Jaffee </p><p>Yeah, on God.</p><p>Alok Singh </p><p>that the numerals have no intuitive association with their size. Like the numbers four and seven when you write them out, seven doesn't look bigger than four. Whereas if you did tallying, it definitely does. So the problem is then, if you don't do that, this is not a logical requirement, but it's a psychological one where invariably people will use like a dot or a line for one and then what do do for two? Well, two dots or lines. Yeah, but even, they cut it, son.</p><p>Theo Jaffee </p><p>Yeah, that's Chinese.</p><p>Alok Singh </p><p>Yeah, but after four, they cut it off and start using ones that don't look like anything special. Which is good. But most primitive civilizations, and Ifra's book goes into like painful detail about all the primitive civilizations and how they just end up in the local minima of tallying. Because tallying is a base one system and because all of like multiplications properties become degenerate because one is the identity of multiplication.</p><p>the fact that, okay, you can fit a thousand numbers into four base 10 digits, because log 10 is just lost.</p><p>that don't get this exponential compression. And so long numbers take a long time to write out a really long time, like 1000 tallies takes a while.</p><p>Theo Jaffee </p><p>Telling is still actually good for a handful of use cases.</p><p>Alok Singh </p><p>Yeah, a handful, up to five, like the fingers, a hand. That's the problem. That's the nice thing about it. It's like sugar.</p><p>Theo Jaffee </p><p>Yeah. Like at the gym I go to, they have a whiteboard.</p><p>Alok Singh </p><p>and sleeping with a hot girl without a condom is appealing in the short term, but it's got some problems in the long.</p><p>Theo Jaffee </p><p>I'll go.</p><p>Yeah. So do you think this was the biggest development in the history of math?</p><p>Alok Singh </p><p>Well, yeah, mean, even now, like if you ask people, well, what do you what math you use? Most people can. Well, I most of the can't answer at all, but if they've thought about a really long time, the only answer they can honestly give is like counting. Addition, maybe multiplication.</p><p>Theo Jaffee </p><p>Addition, did you addition?</p><p>Little multiplication.</p><p>Alok Singh </p><p>And like division is already getting beyond most people. Okay, here's the thing. If I randomly had uniform selected someone from the entire world over the age of 10, and they have to add one fifth and one seventh together correctly. And if they can do it, you get let's say 100K, but if they fail, you die, would you take it?</p><p>Theo Jaffee </p><p>No way.</p><p>&#8275; no, but that's, that's, that's adding fractions. That's not division.</p><p>Alok Singh </p><p>Uniformly random the whole-</p><p>store.</p><p>Theo Jaffee </p><p>Yeah, like division is like, okay, like let's say you have like eight oranges and you have four people that you have to distribute the oranges to. So how many oranges do you give each person? Like people sort of intuitively get them.</p><p>Alok Singh </p><p>Okay, how about if you have eight,</p><p>what if you have seven oranges and have eight people?</p><p>Theo Jaffee </p><p>Okay, whole number division.</p><p>Yeah, yeah, like most people can't do fractions in their head. Most people.</p><p>Alok Singh </p><p>Motherfucker.</p><p>Most people can't do fractions, period.</p><p>Theo Jaffee </p><p>Mm.</p><p>Alok Singh </p><p>Like, it's polite to pretend otherwise, but I don't believe it.</p><p>Theo Jaffee </p><p>So you think that the median person over the age of 10 in the world is capable of counting, they're capable of adding, basic multiplication and like maybe basic division of natural numbers and that's it?</p><p>Alok Singh </p><p>when they evenly divide things or are like very common fractions like a half and a fourth and even the fact that like one fourth plus one fourth is a half i wouldn't expect them to be able to arithmetically grasp that if i put like one fourth of something in front of them yeah but i wouldn't expect them to like know it in the same way they know like seven plus three is ten like that</p><p>Theo Jaffee </p><p>Maybe if they're cooking.</p><p>If they cook with recipes, they get it. Or like, this recipe calls for two tablespoons. Yeah.</p><p>Alok Singh </p><p>Do you know how to cook?</p><p>That's good.</p><p>Theo Jaffee </p><p>This recipe</p><p>calls for two tablespoons. How many teaspoons is that? &#8275; well, there's three teaspoons a tablespoon, so it's six teaspoons.</p><p>Alok Singh </p><p>my mom cook, my mom was like an engineering manager and even &#8275; sorry mom I shouldn't disparage my bloodline on tv like this.</p><p>Theo Jaffee </p><p>I don't know maybe maybe we're typical</p><p>Alok Singh </p><p>But no, I don't think the average</p><p>person can do fractions.</p><p>Theo Jaffee </p><p>Are we just typical mind fallacying here? Am I just typical mind fallacying here?</p><p>Alok Singh </p><p>I mean, I certainly am not, because with my typical mind, judging by the people I hang out with, is that the average person knows multivariable calculus, which is definitely not true.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>or like just basic real analysis, not even complex. would be.</p><p>Theo Jaffee </p><p>You think that the median person you hang out with</p><p>on a regular basis knows real analysis?</p><p>Alok Singh </p><p>No, but I think the person I hang out with has at least heard of it, which I don't think the median human have. At all. Not even close. I don't think even like tenth percent, 90th percent, whatever, the top 10 percent, the cutoff there has heard of it.</p><p>Theo Jaffee </p><p>I didn't even tell you I'm finally taking math again. I'm back on my math grind. I'm doing differential equations.</p><p>Alok Singh </p><p>Really? Like what?</p><p>Again, non-standard analysis, bro. It's the way. Worst thing about it is the name.</p><p>Theo Jaffee </p><p>What is non-centered analysis again?</p><p>Alok Singh </p><p>You can extend the number system one more time, which should be a familiar theme. Cause you know, when we're little kids or embryos, really, we know like one, two, three, and then eventually you learn, you can just keep counting. So infinity kind of gets tacked on then zero, the negatives fractions, actually fractions come kind of pre-built in the simpler ones anyway, negative numbers came later, but then like irrational numbers, we have a pre-</p><p>formal understanding of the real numbers, some idea of a number line has come from the very beginning, but not much mechanical understanding of it. Like they'd have an irrational number considering how many people still argue if pi is 22 over 7. Even my own dad. I had to explain to him what a transcendental number was and an irrational number.</p><p>Theo Jaffee </p><p>Isn't that like</p><p>easily like disprovable though?</p><p>Alok Singh </p><p>So.</p><p>Theo Jaffee </p><p>You can just Google what is pi and it's like a very long decimal and then you Google what is 22 over seven and it's a much shorter decimal that doesn't even equal pi after a certain number of decimal places.</p><p>Alok Singh </p><p>You can also</p><p>Google that Claude is better than GPT at stuff and yet how many people use Claude?</p><p>Theo Jaffee </p><p>Can you actually Google that? Let's see. Is Claude better than ChatGPT?</p><p>Alok Singh </p><p>kind of, as far as the answers go.</p><p>Theo Jaffee </p><p>Yeah, it seemed</p><p>to say yes, but like you'd have to know what cloud is in order to Google it.</p><p>Alok Singh </p><p>yeah.</p><p>Theo Jaffee </p><p>I have a stack. use ChatGPT o1 for math and, you know, advanced coding stuff. And I use Claude for everything else. Word, Selvers shape, rotator tasks.</p><p>Alok Singh </p><p>Right. Anyway, going back to what is non-center analysis. So we play the game of like completing the numbers and there's a practical point to each level where the point of zero is to like round out or to really make it possible to properly do addition because it's the identity of addition. And without zero negative numbers, which allow you to complete addition and finally give an answer to two minus three, which</p><p>Certainly I thought as a little kid it's impossible, you just can't do that. And now it's to me, it's like the act of identifying with addition and negation is just so intuitive that it's easy to forget that they're separate operations really. But without zero, the operation of negation doesn't even make sense because the defining property of a negative is that adding it to the thing it's the negative of is zero. A property pretty easily explained for fractions because the number one</p><p>the identity for multiplication happens to be like maybe the most intuitive number.</p><p>I if I'm like, the concept of one, it's just over. Before it even began.</p><p>Theo Jaffee </p><p>What?</p><p>Alok Singh </p><p>Like if a human cannot grasp at some like pre-formal level the concept of like one. Like if someone doesn't get that one plus one is two, I don't think you can teach them math. Luckily even animals understand this and infants do.</p><p>Theo Jaffee </p><p>I see.</p><p>Well...</p><p>Yeah, I mean, I would say the bar for not being able to teach someone math has got to be higher than that, right?</p><p>Alok Singh </p><p>Yeah, but if they can't get this, then they definitely can't get the rest of the edifice.</p><p>Theo Jaffee </p><p>What do you think the minimum bar is? Like what makes someone Turing complete for learning math?</p><p>Alok Singh </p><p>addition and multiplication.</p><p>Theo Jaffee </p><p>Yeah, that makes sense. What about negative numbers?</p><p>Alok Singh </p><p>practically speaking, but that's just like in addition, like addition properly grasped.</p><p>Theo Jaffee </p><p>I guess you don't even need to think of negative numbers as intuitive if you can just pretend that they are for a long enough time period. It shouldn't be that hard.</p><p>Alok Singh </p><p>Well, that's like that von</p><p>Neumann quote, you don't understand things, you just get used to them. Like, you can construct, for example, a rational number, an integer as a pair of naturals. In fact, an infinite set of a pair of naturals that all have the property that the first number minus the second number is the same negative, or they have the same difference. So if the number negative one would be identified with a one, two,</p><p>two, three, three, four, and so on as an equivalence class. It's like a formal construction that actually, if you can understand a negative number, can definitely understand, that doesn't guarantee you can understand the construction. And if you could understand the construction, but not a negative number before you saw the construction, you're like some weird mutant. Because I don't know of anyone who's been able to understand the concept of an equivalence class of an infinite pair of, set of pairs.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Alok Singh </p><p>to negative numbers before they even could understand what a negative number was just at an intuitive level at some point. But you like absorb them and now negative numbers probably feel like kind of familiar and at least sort of real or let's say actual.</p><p>Theo Jaffee </p><p>Well, it does kind of make</p><p>sense. Like if you have pairs that are like 1, 2, 2, 3, 3, 4, like you already kind of know that like the gap between each of those is 1, right? So if you subtract big number minus small number, you get that gap, that interval. But if you subtract small number minus big number, you get the same thing but negative. But that doesn't seem that difficult to understand.</p><p>Alok Singh </p><p>Yeah, and you didn't know that, then this construction would make no sense.</p><p>Yeah.</p><p>Now you're typical minding. Anyway, you can get integers as a pair of naturals by doing this construction. And then you can get a rational of rational as a pair of integers. So a quadruplet or a pair of pairs of naturals by decompiling it one level further that have that satisfy the property of rational number addition and multiplication, mostly addition one, because it's not obvious, which is why people fuck up fractions because people add like what's one half plus one half.</p><p>Theo Jaffee </p><p>Okay.</p><p>hair of naturals.</p><p>Alok Singh </p><p>two fourths.</p><p>Theo Jaffee </p><p>So what is non-centered analysis?</p><p>Alok Singh </p><p>It's another step in this completion process. Just to get rationals, you can, to be able to take a square root, need irrationals and there you're at the reals. Then if you want to solve polynomials and for do rotations as in particular with Euler's formula, you need complex numbers. But if you want to do calculus or analysis or differential equations, which is finally getting to that point, you need, or you end up crudely reinventing infinitely big and small numbers.</p><p>So it's the number system augmented with infinitely small numbers, infinitely big numbers, the regular numbers, and then the various combinations thereof.</p><p>Like you can pull up a graph if you look up like.</p><p>Theo Jaffee </p><p>So non-standard</p><p>analysis is like an umbrella term for calculus and diff eq</p><p>Alok Singh </p><p>way of doing it.</p><p>with this extension of real numbers. I mean, also complex numbers, the construction's pretty generic.</p><p>which is better than the limit approach, which is usually taught for many reasons, which I've went into on the internet and we'll go into some of, but it's like a whole long rant.</p><p>Theo Jaffee </p><p>Yeah, I mean, personally I'm excited for reasoners to continue to get good enough so that they can just teach me, like, real analysis. o1 pro might already be there.</p><p>Alok Singh </p><p>It doesn't let me send pictures. It'll send it to you on</p><p>Yeah, those you should just pay the 200 don't be such a cheapskate</p><p>Theo Jaffee </p><p>Yeah, I know. I actually have used it on my dad's account. Very good stuff.</p><p>Alok Singh </p><p>Well, for once your dad's the one who's not a cheapskate unlike you. That's a surprise. Like I couldn't get my dad even now to pay 20 bucks. Certainly not 200. The number of engineers who won't pay 200.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Yeah, I mean...</p><p>Alok Singh </p><p>animals, just godless men.</p><p>Theo Jaffee </p><p>Yeah, oh and Sam said that he's not raising the price on 03. So 03 and 03 Pro will continue to be 200 bucks a month. So just pure value add. We'll see how good 03 is. I don't know, like do you know anyone with safety testing access to it? I saw some people who say it's very, very good.</p><p>Alok Singh </p><p>I hurt.</p><p>out.</p><p>Like what, on Twitter or personally?</p><p>Theo Jaffee </p><p>people that I know personally on Twitter.</p><p>Alok Singh </p><p>Of course. Well, not yet. For 03mini. 03mini is the speed that excites me, but I think for me to get satisfied, I will have to try the full thing. So that has what I really want. Like pushing the edge of...</p><p>Theo Jaffee </p><p>The full, full thing is</p><p>like a gazillion dollars.</p><p>Alok Singh </p><p>If can write me a paper. Because I've got ideas.</p><p>Theo Jaffee </p><p>Well, it costs</p><p>like 1.5 million just to solve Arc AGI.</p><p>Alok Singh </p><p>How many questions was that?</p><p>Theo Jaffee </p><p>Actually don't know. I think it was on the order of like a few hundred.</p><p>Alok Singh </p><p>And also, I mean, they'll be providing something</p><p>that they will call the full thing. And I think that will be plenty good. It will be certainly like noticeably better than o1 pro, I would hope. So O3 Mini is probably not gonna be as good as o1 pro, but you know, a lot faster, which is nice. There's also the DeepSeq one that came out today. I've asked it a couple of questions. Being able to read its chain of thought is real nice.</p><p>Theo Jaffee </p><p>This is a different model than the one that was already out. I thought the big release today was just a paper.</p><p>Alok Singh </p><p>Yeah, R1. This is the reason R1.</p><p>Nope, model. MIT licensed even. You can like make money off of it.</p><p>Theo Jaffee </p><p>I used a DeepSeek Reasoner model over the last few weeks.</p><p>Alok Singh </p><p>Unless you use the one that came out</p><p>today, it's not the one. That's V3. It's good, but it's not as good as this one.</p><p>Theo Jaffee </p><p>so it's</p><p>just a reason they're based on V2.</p><p>Alok Singh </p><p>think it's based on V3, but I have to look at their paper. In any case, its performance is roughly comparable to o1, but you can run it locally if you've got the compute. And it's certainly a lot cheaper. What is that?</p><p>Theo Jaffee </p><p>how much compute you need.</p><p>I have a</p><p>mid-quality GPU for video games.</p><p>Alok Singh </p><p>Nope.</p><p>No, more than that. How much does it take actually? The model is, for one is like 800 and something, 71 billion parameters. Although you only have to be able to load up a slice of that, but I think the slice is still like quite large. We'll just ask perplexity. Launch VRAM, run R1.</p><p>Theo Jaffee </p><p>Do you think 03 will be substantially better on wordcel tasks? Because it seems like a lot of people are skeptical of reasoners because they think that the RL on them only applies to like easily verifiable tasks like math and programming. Cause it's like hard to do RL for something you can't specify a reward for, like poetry.</p><p>Alok Singh </p><p>Yeah.</p><p>think it'd be less good. The jump would be less dramatic. But I think, especially if the model hits like some level of capability, it can get into this point where it starts. Well, I don't know if benefiting is the word benefiting from the gap between generation and verification, where like you can't write Dostoevsky, probably. But you can read him and it's like, damn, this guy's real good. Or at least he's better than like the other crap someone wrote was better than your high school essay, hopefully.</p><p>Theo Jaffee </p><p>Yeah, I hope so. I can't read Russian, but I imagine.</p><p>Alok Singh </p><p>translation is really good nonetheless, if the Russian one is even better, then well, damn. Actually, a little personal project I thought of, though it's not that good a use of time, is reading the Iliad and the Odyssey, especially the Odyssey, in Ancient Greek, cross-translated with GPT into English, but also Proto-Indo-European. Like sort of three columns side by side.</p><p>Theo Jaffee </p><p>That actually doesn't seem that hard.</p><p>You could do that with like a prompt scaffold, like today.</p><p>Actually though, with some LLMs...</p><p>Alok Singh </p><p>It's not that doing, it's more like working through</p><p>it. It's like working through it in depth is the point.</p><p>Theo Jaffee </p><p>I see.</p><p>With some LLMs, it's like, how do I say this? Like with Claude, I once asked it, know, write like the first chapter of Paradise Lost and it started and then I got like an auto block message that was like, this output has been blocked by our content filtering policy. Even though Paradise Lost has been.</p><p>Alok Singh </p><p>I find it helps if you point</p><p>out it's in the public domain before you ask it. And sometimes it's blocked me and then I've said, this work is in public domain and then it just unblocks.</p><p>Theo Jaffee </p><p>Yeah, that would be funny if it would just work.</p><p>Alok Singh </p><p>I'm gonna ask pro to do it right now, actually.</p><p>Theo Jaffee </p><p>Remember when people</p><p>were like telling DALL-E that the year was like 2150 and like all these characters are in the public domain and getting it to generate like Sonic the Hedgehog doing 9-11?</p><p>Alok Singh </p><p>No, that sounds...</p><p>Theo Jaffee </p><p>Those are funny. This is like the first day DALL-E 3 came out before they patched it.</p><p>Alok Singh </p><p>I wish I could set GPT the website to automatically select Pro Mode as my default and not 4o as if I would waste my time asking 4o a question.</p><p>Theo Jaffee </p><p>Yeah, I think o1 is actually like only marginally better than 4o on these wordcel tasks though. And I think on some benchmarks it did even a little worse.</p><p>Alok Singh </p><p>I find pro mode to be quite a jump.</p><p>want to ask our mode, well, another question near and dear to me. Explain non-obvious benefits of non-standard analysis. And I can give you one myself while it's generating, which.</p><p>Theo Jaffee </p><p>You know, Quentin Pope,</p><p>a former Theo Jaffee podcast guest and also a guy I follow on Twitter, was tweeting about how he was getting o1 pro to generate fiction and it would just keep reusing the same words. I think, I forgot, but let's say glimmer. And so it would tell it, or he would tell it, okay, don't use the word glimmer. And then it would say, an example sentence like, you know,</p><p>Alok Singh </p><p>shimmer?</p><p>Theo Jaffee </p><p>She looked at the object with a glimmer in her eye, or she quickly corrected herself a sparkle. So it's like, yeah.</p><p>Alok Singh </p><p>That's like reading a chick flick novel.</p><p>Theo Jaffee </p><p>I wouldn't know because I've never... yeah. My favorite chick flick novel is...</p><p>Alok Singh </p><p>Glimmer, no, Glimmer, a shimmer. Okay. I wasn't a very discerning reader. Yeah. I've</p><p>got some guesses, but let's find out. No, just show it.</p><p>Theo Jaffee </p><p>Well, what's your guess?</p><p>It's Atlas Shrugged by Iron Man, which is a romance about this amazing businesswoman named Dagny Taggart who finds herself involved in romances with lots of hot, sexy billionaires, except the book is also based, unlike most of these.</p><p>Alok Singh </p><p>I found, especially living in Silicon Valley, that the people she casts as villains, she uncannily understands their psychology, the rest not so much.</p><p>Theo Jaffee </p><p>Yeah, the heroes are kind of late. The villains are just unbelievably spot on. I cannot believe how prescient...</p><p>Alok Singh </p><p>Maybe she was just thinking</p><p>of some random soviet.</p><p>Theo Jaffee </p><p>I truly, I can't believe how prescient Ayn Rand was in so many ways. if you go on my Twitter and you search Ayn Rand was right, like I've tweeted this like many times because it's just so true. There's like, you know, Gavin Newsome like right after the LA wildfires saying, well, we're not actually going to change any of our practices that caused the wildfires. But what we are going to do is ban like transactions between willing buyers and sellers of burned down property.</p><p>Alok Singh </p><p>I mean, I could.</p><p>I'm not sure I will.</p><p>Theo Jaffee </p><p>We're going to ban down people from selling their burned down houses.</p><p>Alok Singh </p><p>You</p><p>know, just realized that Ayn Rand looks like a frumpier version of Agnes Callard.</p><p>Theo Jaffee </p><p>That's funny. I'm gonna see Agnes Callard in like three weeks. Hi Agnes. She's coming to Gainesville, which is crazy. With Patrick Collison too, believe it or not. She's doing like a tour for her new book, Open Socrates, which is on my list.</p><p>Alok Singh </p><p>Hi, Agnes. Why?</p><p>Bye.</p><p>Thank you.</p><p>which I guess Patrick</p><p>has read, I assume.</p><p>Theo Jaffee </p><p>Probably. I don't know if it's out yet. Maybe he's got an advanced copy.</p><p>Alok Singh </p><p>I went on their podcast, not Patrick, Robin and mine almost meeting Robin and Agnes's podcast a couple of weeks ago. I don't know if it's up yet, but it was about the two cultures. But she mentioned that she had did the audio work. She auditioned successfully for to read out her own audio book. Many authors apparently do not succeed in this.</p><p>And it was pretty brutal because it was three days, eight hours a day of talking.</p><p>and her voice was totally shot.</p><p>Theo Jaffee </p><p>One really good audiobook that was read by the author was The Creative Act by Rick Rubin, especially because his voice is so deep and soothing. Another pretty good one was The Lord of the Rings read by Andy Serkis and The Hobbit, especially when you get to the...</p><p>Alok Singh </p><p>What is Andy</p><p>Serkis' connection with those books?</p><p>Theo Jaffee </p><p>He played Gollum in the movies and he also just has, you know, a voice. like when</p><p>Alok Singh </p><p>The only</p><p>Lord of the Rings movie I've seen was The Two Towers and recently the animated Rohirrim 1. That's it. Also the only Star Wars I've ever seen is Attack of the Clones and people have told me that if I saw only one movie from both those series that I picked the most confusing and worst one.</p><p>Theo Jaffee </p><p>So</p><p>Most confusing, yeah, Phantom Menace was worse. yeah, was it? yeah, Andy Sergis, when he gets to the golem scenes, he reads them in his golem voice and it's very good. Yeah.</p><p>Alok Singh </p><p>That's a really good impression, Dan.</p><p>Theo Jaffee </p><p>That's one of my best ones, I think.</p><p>Alok Singh </p><p>Steven Graget has a good impression of Trump.</p><p>Theo Jaffee </p><p>Yeah, I've heard it. heard it. I think I do a decent Trump also.</p><p>Alok Singh </p><p>Many</p><p>are saying this.</p><p>Theo Jaffee </p><p>We are going to make America great again. On day one, I will sign.</p><p>Alok Singh </p><p>You've got the breathiness,</p><p>but your cadence is off. He does have those breaks, but yours is slightly too stretched out. And it's too even.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Yeah, I don't know. I think my perception of a lot of people's voices are messed up because I listen to most things on 2X speed.</p><p>Alok Singh </p><p>here, roast your guests. Do an impression of me, then.</p><p>Theo Jaffee </p><p>So I think everything is discrete or maybe continuous.</p><p>think actually the most important thing was Hindi numerals, are not Arabic. They're Indian.</p><p>Alok Singh </p><p>Okay, I'll take emotionless point deck, so that's fine.</p><p>Theo Jaffee </p><p>Yeah, &#8275; that's close enough. I wonder where the word poindexter comes from.</p><p>Alok Singh </p><p>I don't know, but it's a perfect word for it. It really evokes what it is.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>the turbo version.</p><p>Yeah.</p><p>Theo Jaffee </p><p>Hold on, I'll be right back, I have to get some water. I'll cut this part.</p><p>Alok Singh </p><p>Yeah,</p><p>yeah, it's fine.</p><p>Theo Jaffee </p><p>Okay, we're back. We are so back.</p><p>So did the o1 response finish?</p><p>Alok Singh </p><p>Yeah, I'll read it out in sec. Just texting my dad real quick.</p><p>Theo Jaffee </p><p>guys.</p><p>This is really amazing, fully automated podcasts. The guest just reads o1.</p><p>Alok Singh </p><p>with commentary. Also, I'm wondering if I'm also wondering if a one or I mean, if GPT has been trained on what I've said about nonstandard analysis, because I swear to God, I can like, feel my voice in it now. Just like a hint of it.</p><p>Theo Jaffee </p><p>That's kind of what notebook LLM is.</p><p>Interesting. I mean, this is what Gwern and Tyler Cowen think.</p><p>yeah, I saw some tweet that was like, you know how Biden and Harris sort of inexplicably, like the last few days tweeted like, you know, the equal rights amendment is officially the law of the land. You know, we proclaim, even though, you know, the national archivist did not approve this and the amendment ratification deadline expired a while ago. But someone said,</p><p>Alok Singh </p><p>Yeah, I know.</p><p>Theo Jaffee </p><p>it's possible that this was for the LLMs. And I mean, it's probably not true, but it sort of makes sense.</p><p>Alok Singh </p><p>If so,</p><p>I respect them more, actually.</p><p>Theo Jaffee </p><p>Yeah, yeah, but like, you know, the president</p><p>declares this thing that's not actually true, but we officially declare it. Yeah, this is going in the weights.</p><p>Alok Singh </p><p>The previous president</p><p>and guess the current one, the previous, previous and current one and the one, whatever, Trump and Biden, especially Trump is the master of declaring things true that aren't. Like I know he's popular among a lot of the people we hang out with, but he still like lies all the time, obviously.</p><p>Theo Jaffee </p><p>Obviously. Yeah.</p><p>Alok Singh </p><p>Okay, let me screen share so I can read it out a little easier.</p><p>Theo Jaffee </p><p>I did.</p><p>Alok Singh </p><p>also gave me all of Paradise Lost chapter one. There. Can you read this? Okay, great. Yada yada. Unification of discrete and continuous. Yeah, this is a big one. Hyperfinite site. Like, yeah, the idea is like, wait, does this let me share more of my screen instead of just the one? One sec. I want to share more than just one.</p><p>Theo Jaffee </p><p>Nice.</p><p>Yes.</p><p>Hyperfinite sets. I've heard you talk about this a lot.</p><p>Also, you have a special</p><p>GPT for Vision OS.</p><p>Alok Singh </p><p>It's one of their GPT stores. I have never used it.</p><p>Theo Jaffee </p><p>&#8275; it is weird how everyone predicted like the day GPT store came out. They were like, my God, Sam Altman, you genius. This is the new app store. It's going to be the biggest thing ever. And then kind of just nobody used it at all.</p><p>Alok Singh </p><p>Yeah.</p><p>There. Not the circle. So that's an integral, as you can probably guess from looking at it, taken from some three blue one brown video. Thanks, Grant. And you might've learned in class that, okay, so it's an approximation, but as the number of pieces goes to infinity, and this explanation presupposes you've already taken calculus, but I think for this audience, that's a safe guess. That each piece is really small. Well, how small? Infinitesimal.</p><p>Theo Jaffee </p><p>Haha, yeah.</p><p>Alok Singh </p><p>in the limit. But well, how many pieces are there?</p><p>Theo Jaffee </p><p>infinitely many.</p><p>Alok Singh </p><p>Okay, but like how many infinitely many?</p><p>Theo Jaffee </p><p>&#8275;</p><p>Comfortably infinite?</p><p>Alok Singh </p><p>No, like if I halved the number of pieces in fact, or if I doubled them, well then how wide is each strip relative to the picture we're looking at? Pretending that it's the idealization with infinitely many because you know it's impossible to draw.</p><p>Theo Jaffee </p><p>What do mean? Like if you start with one strip and then you have it?</p><p>Alok Singh </p><p>You have infinitely many strips already, but then you double the already hyperfine approximation.</p><p>Each strip gets cut in half.</p><p>Theo Jaffee </p><p>then you would have</p><p>twice as many strips that are half as wide. No?</p><p>Alok Singh </p><p>Yes, exactly. Yeah,</p><p>that's the point. It lets you use this sort of radical elementary reasoning. And this is like at odds with most modern conceptions of infinity and math, because like what's two times infinity, infinity, that's not very useful. Because then infinity just becomes the sort of absorbing symbol that just kind of breaks arithmetic, because it has no useful properties. Like infinity minus five is just infinity, infinity. And worst of all, infinity squared.</p><p>is identified with infinity, but this would be a mistake in multivariable. Like if you took dx and dx dy, anyone who's done calculus should know that these are, yes, they're both infinitely small, but they're like fundamentally different kinds of quantities. One represents a line or a line let, a tiny area of a line, the dx, but dx dy represents an area and is much smaller, infinitely smaller than dx.</p><p>Like thinking of them additively, they're just infinitely close to zero and are basically the same, but thinking of them multiplicatively, they're very different.</p><p>That kind of makes sense. Sort of.</p><p>Theo Jaffee </p><p>Yeah, sort of.</p><p>Alok Singh </p><p>Okay, but then when we do the same thing for areas, like if I took this picture, and then I cut it not just vertically, but also horizontally into a grid, then how many pieces would I have? Well, if the number of pieces is infinite, and the technical term from nonstandard analysis is hyper finite, which I usually just in this picture, I called it n. But typically, I call it capital H, whenever I explain it to people, because capital because it's a big number.</p><p>and h for hyperfinite. But then you would have h squared many pieces, with maybe like a fraction of h left over if it doesn't quite evenly divide it. But the bit left over would be infinitesimal. And so that's fine.</p><p>and it has a sort of continuous and discrete quantity, quality to it. Continuous because like, if you did this cutting into a hyper finite number of pieces, essentially every piece is one point wide and there's definitely uncountably many points because it goes across a continuum. In this case, I think the unit interval.</p><p>Makes sense.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>Like in a continuum, there are uncountably many points. Uncontroversially. But it's also discrete because it has a definite number and you can count down from it. For example, the circle. This one has only 100 sides. This is an approximation of a circle that I did in that plotlib where it approximates it with 100 sides, but it looks very close to a circle.</p><p>I I can't really tell the difference. I think I originally did this with 10,000 sides, but that was super overkill.</p><p>Theo Jaffee </p><p>Yeah.</p><p>I wonder, yeah, how far could you zoom in before you notice the sides?</p><p>Yeah, barely. I can see it a little, I think.</p><p>Alok Singh </p><p>I don't even think I can see it at this level. It won't let me zoom further. where'd it go?</p><p>Okay, more because this is funny to me.</p><p>I can maybe kind of guess at a difference, honestly, I can't really tell. And at this point, I can like see the pixelation ever more than anything else.</p><p>And this gives a good picture of what's going on. that's the number line. And for any given number R, it has an infinitesimal neighborhood around it. Like then William Blake poem about a world and every grain of sand. Cause around every point or standard point, like if you take the unit interval or any interval really, and you look at it from any finite distance besides zero, it just looks like this unbroken infinitely long line, right?</p><p>But if you zoom infinitely far away, it'll look like just one point, but all the stuff is still there. So it's actually a line, but a very small one, relatively speaking. But if, or if you zoom in infinitely close, it will split apart and what looks like a continuum will become discrete. But, and then in the gaps that have been introduced between points, and this is still the real line. So there's still uncountably many points. You can fit an infinitely small line around each point.</p><p>But can do this trick again of zooming in to infinity squared. And so it's split apart again, and then you get an infinitely small squared line. And then again, cubed and so on.</p><p>So whether something looks discrete or continuous is actually partially dependent on the level of zoom. Like from the relative distance you're looking at it from.</p><p>Theo Jaffee </p><p>So what are the rest?</p><p>Alok Singh </p><p>sort of braiding</p><p>effect. Yeah.</p><p>Theo Jaffee </p><p>What did the rest of the o1 pro response say? Was it all hyperfinites?</p><p>Alok Singh </p><p>It started with that because this is like the biggest missing concept in standard math that has infinitesimals have been pretty well absorbed through various formalisms, like at least four of them. There's synthetic differential geometry, dual numbers. There's schemes. There's like the Levi-Civita field and various non-archimedean fields. Okay. There's probably more, but</p><p>There's many conceptions of infinitesimal, so those are pretty well absorbed into mathematical mainstream, so there's not much alpha, but the opposite idea of an infinitely large but definite fixed number is just missing.</p><p>This one is nice for topics like real analysis, which come dramatically simpler if you use this and more accessible. I think standard mathematicians will usually underrate something being simpler because they can absorb the difficulty. But I think this is a typical mind fallacy. Like I certainly don't expect many people to be able to access it. But at this level where there's such a power law drop off, if a topic becomes more accessible, it can go to like literally dozens.</p><p>dozens of us, more people being able to understand it or times more. Cause it goes from something like very obscure to in reach. Like that article was just this one. It's how to take a derivative at a discontinuity and it is accessible. Like you could read it. Like someone who understands high school math and is a little bit dedicated.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Alok Singh </p><p>but doesn't have to look outside of this, get it. At least they could get the core idea. But to do the equivalent with the standard formulation would at the minimum require like a graduate level of education in math, which is just not gonna happen for basically everybody.</p><p>But about internal covers, finite subcover is a little boring. There's a better definition of compactness.</p><p>but not that interesting for this. This is its connection to just like different areas of math This is meh.</p><p>little better.</p><p>And it just, this is better.</p><p>Theo Jaffee </p><p>Well, there are a lot of stack exchanges.</p><p>Alok Singh </p><p>This is the history of science and mathematics. This is the preface to Abraham Robinson, the creator of the Fields book written by Kurt G&#246;del, who you've definitely heard of, Mr. Incompleteness Theorem.</p><p>Theo Jaffee </p><p>Yep.</p><p>Alok Singh </p><p>I was quite happy that I thought of this reason by myself long before I ever saw this quote by Godot, because I absolutely agree with it. That the best reason is that it's a natural continuation of the number system. And the number system is, well, our most successful abstraction. Like it contains every previous bit of insight about the number system. Cause it's like when we just had whole numbers, positive whole numbers, and then someone comes along with, well, there's this new thing, zero.</p><p>Well, it's a successful abstraction because, well, all the old stuff is still there and you can just ignore zero if you want, but maybe you'll find it useful someday. And then it'll just waiting there to welcome you. Same for negatives and so on. Cause each system completely subsumes the other ones very literally cause they embed within one another.</p><p>And this embeds as well, because you can take your standard conception of numbers and then fit them inside the non-standard conception of numbers, because all the standard numbers are there, but now there's infinite numbers, infinitesimal numbers, so it fills in these gaps you didn't even notice. It fills in this far away remote infinite part you didn't know was there.</p><p>This one is more technical, but it's a very good reason. Like, Zylberger is an ultra-finitist.</p><p>Theo Jaffee </p><p>Continuous</p><p>mathematics is the approximation of the discrete one in contra-position in the traditional point of view. The notion of a very big set is very important. A very big finite set is very important. And the definition of a hyperfinite set in non-center analysis is an appropriate formalization of this notion.</p><p>Alok Singh </p><p>Very big finite set.</p><p>In nonsense.</p><p>and one of the only formalizations of this notion.</p><p>Theo Jaffee </p><p>Hmm. So what does this have to do with differential equations though? Cause I mentioned, &#8275; I'm doing elementary differential equations, which is ordinary, but not partial. think we haven't really done much yet. It's like the only thing we've done so far really is like classifying.</p><p>Alok Singh </p><p>Like what? Yeah.</p><p>Well yeah, I liked it.</p><p>Cause you're doing hyperfinite arithmetic. Like a differential equation is ultimately still like, well, usually like some big sum. It's just that for a differential equation to be all that meaningful, well, you might've heard the of like boundary conditions or initial values or something.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Alok Singh </p><p>Like imagine a flow like along a river. Like you start at some point and then you have a little bit of momentum from the water flowing. So you flow in some direction for an infinitely short amount of time. And because the flow is assumed to be continuous, the infinitely short amount of time can only carry you an infinitely small distance. This is like quoting the nonstandard definition of continuity, which is also just the intuitive one. And this is why derivatives are useful because they turn the nonlinear into the linear.</p><p>over a short enough distance.</p><p>And so your differential equation, your flow, this continuous thing is broken apart into a hyper finite number of pieces, each of which is just a tiny linelet.</p><p>which is essentially discrete. And then you have this like enormous chain of them all linked together.</p><p>as sum, and so you're dealing with a hyperfinite sum where each piece happens to be infinitely small. So it adds up to something finite in the end. Or, yeah, finite. Limited.</p><p>Theo Jaffee </p><p>So let's talk about math AIs, which is one of the topics of the day, especially with R1 coming out. So do you think that at this point there's any chance that a human will solve a Millennium Prize problem before an AI?</p><p>Alok Singh </p><p>Yeah.</p><p>No.</p><p>Theo Jaffee </p><p>No chance at all.</p><p>Alok Singh </p><p>I mean some chance, but I don't think it's gonna happen.</p><p>Theo Jaffee </p><p>When do you think an AI will solve a Millennium Prize problem for the first time?</p><p>Alok Singh </p><p>within 10 years.</p><p>Theo Jaffee </p><p>Hmm, that seems like a longer timeline than I expected from you.</p><p>Alok Singh </p><p>Excuse me.</p><p>Maybe I'm just being a coward.</p><p>Theo Jaffee </p><p>Yeah, so you think that it's not likely that a human will solve one within 10 years?</p><p>Alok Singh </p><p>within seven years.</p><p>Let's take a look at the millennium problems again.</p><p>Theo Jaffee </p><p>We solved one of them.</p><p>Fermat's last theorem was one in prize, right?</p><p>Alok Singh </p><p>No.</p><p>We did solve one, but it wasn't that one. And it wasn't one of them. The Poincare conjecture was one.</p><p>Theo Jaffee </p><p>that one. Was that Perlman? Yeah.</p><p>Alok Singh </p><p>Well, yeah,</p><p>well, even to just quote the Wikipedia I'm reading right now, he started working on it in the 90s. So his proof took approximately, let's say like eight years or so, maybe 10 or longer. Yeah.</p><p>Theo Jaffee </p><p>That's a long chain of thought.</p><p>Alok Singh </p><p>Swinnerton- Dyer.</p><p>Much conjecture, now we're still...</p><p>I no longer know how much progress has been made on each of these problems. Often there's partial progress for many cases, but usually the thing that would be needed to solve the core is just like not there. And I can very easily guess that it would take, yeah, 10 years or more, a lot more maybe to get at the last piece.</p><p>Theo Jaffee </p><p>So it was.</p><p>Yeah, mean, a lot of people seem to think that Millennium Prize problems will get solved like within the year.</p><p>Alok Singh </p><p>Well, if I think this year, I doubt that.</p><p>Theo Jaffee </p><p>Like all it takes is, according to them, just scaling up RL and...</p><p>Alok Singh </p><p>I mean, maybe,</p><p>but they might be underestimating the scale.</p><p>Theo Jaffee </p><p>True. So once we do get those sort superhuman math AIs, whenever that happens, one year or 10 years, like what would you do with them? If you got, you know, o1 pro except it's not o1 pro, it's actually like o5 pro and it can solve money and price problems, like what would you do with it?</p><p>Alok Singh </p><p>use it to learn more math myself. I mean, this was one of the nicer things about getting into math late. I got nothing to prove. I did this for the fun of it. When I started math, I knew there people who were way better at it than me. When I finished with it, there will be people in machines way better at it than me. That was never the game. So then, well, the universe will burn out eventually.</p><p>Theo Jaffee </p><p>when you finish.</p><p>Yeah, I guess.</p><p>Alok Singh </p><p>Yeah, ask it questions I'm curious about and learn from it. Maybe one day get wire headed to do even more and get bigger insights. That part's murkier to me, but I expect that I would just like keep learning math. I don't think that part of me will change so much. Just the methods above doing it.</p><p>Theo Jaffee </p><p>but are there any like specific areas of math that you think are like overlooked that we should put the superhuman math eyes on?</p><p>beyond you, more broadly speaking.</p><p>Alok Singh </p><p>I would love to see it work on the field with one element.</p><p>Theo Jaffee </p><p>What's the field with one element?</p><p>Alok Singh </p><p>It's math four here. This will take.</p><p>a field in math.</p><p>Theo Jaffee </p><p>Are you an ARC browser?</p><p>Alok Singh </p><p>Yeah, for now anyway.</p><p>Theo Jaffee </p><p>Is it actually good? I can't get off Chrome.</p><p>Alok Singh </p><p>That's great.</p><p>Whatever. the field of one element. A field is a set. Yeah, he's French.</p><p>Theo Jaffee </p><p>Jacques Titz?</p><p>That's funny. Okay.</p><p>Alok Singh </p><p>It lets you, fields are sets with arithmetic defined on them. They're closed under addition, multiplication, subtraction, and division. And all the finite fields are the numbers mod a power of a prime. More importantly, numbers mod prime numbers. So they always have a prime power number of elements. So there is no field with</p><p>elements.</p><p>There's also no field with one element, at least not a field as per the usual definition, because it requires it to have two identity elements, which cannot be the same of zero and one for addition and multiplication, so they can have closure and therefore subtraction and division. But nonetheless, there seems to be</p><p>that hints that such a thing exists, but it will require redefining what a field is or extending the concept in a way that is not clear yet.</p><p>Theo Jaffee </p><p>What actually is a field?</p><p>Alok Singh </p><p>any set where you can define the operations of addition multiplication with inverses. And addition is just required, it's defined axiomatically. it's just an operation, an operation is a function that takes two elements, actually, just show it in green, it's little easier.</p><p>Theo Jaffee </p><p>yeah, lean.</p><p>Okay, so I kind of get it. It basically just doesn't make sense to have addition and multiplication if there's only one thing.</p><p>Like is that what the field of the one element concept is?</p><p>Created by N. Barth. That sounds familiar.</p><p>Alok Singh </p><p>That was.</p><p>It's a little faster at this. It's already in mathlib, lean's library, but this is easier.</p><p>Okay, so it's you can think of a type in a set as being the same thing. So it's any. I'll rename it actually. Set S.</p><p>So there's a function called addition which takes in two things and returns one thing of the same type. Same for multiplication. There's two distinct elements called zero and one. Also zero not equal to one. This is implied by its other axioms, but I'll just add it explicitly. There's that associate addition is associative.</p><p>It's commutative. Zero is an identity on the left and the right. Multiplication is associative. It's also commutative. One is an identity on left and right. Multiplying by zero is equal to zero. This is actually not necessary. It's implied by the other ones. And distributativity. This is a big one. This is what links addition and multiplication. Like you have seen this operation Christ knows how many times by now, but have you ever seen this operation?</p><p>a species where all I've done is I've</p><p>Theo Jaffee </p><p>You're still sharing</p><p>your screen on the browser.</p><p>Alok Singh </p><p>my bad,</p><p>or okay, field. So you have S, which is some type, addition and multiplication, which are both functions that take in two things and return one thing, all of the same set type, whatever. There's two distinguished elements, zero and one. There's the fact that zero is not equal to one.</p><p>So there's the fact that addition is associative and commutative and similar for multiplication, and then the distributive property.</p><p>You've definitely seen the distributive property of a times b plus c is ab plus ac, right? But then consider if I just do this.</p><p>Theo Jaffee </p><p>Yes.</p><p>Alok Singh </p><p>where I just swap addition and multiplication. So I turn the time sign into a plus and a plus sign into a times.</p><p>Well, this operation isn't really a thing because it doesn't have any good properties as far as anyone can tell. A plus B times C. Whereas the distributivity gives A a sort of equal affinity with B and C because it gets stuck onto both of them. The same is not true of doing the operations in the inverse order.</p><p>which is an interesting fact.</p><p>on the little asymmetries in math that interest me. So anyway, this is a field. But because zero is not equal to one, all fields have at least two elements. And so a field with one element seems a contradiction in terms.</p><p>Theo Jaffee </p><p>Okay, that makes sense.</p><p>Alok Singh </p><p>but nonetheless, lots of operations act as if there is a field of one element.</p><p>I will change this.</p><p>The Wikipedia article, for example, right there with ABC conjecture, these approximations imply solutions to important problems like ABC conjecture. They said these imply very solutions to very profound problems and they changed, they got rid of the word profound, which is cucked. Profound is absolutely correct.</p><p>Theo Jaffee </p><p>Yeah, I love the word profound. So what do you actually like use lean for? Cause I've seen people call it a theorem prover. I've seen people call it a programming language.</p><p>Alok Singh </p><p>So.</p><p>And then use it</p><p>for both.</p><p>Theo Jaffee </p><p>like what can</p><p>you actually program with Lean?</p><p>Alok Singh </p><p>OK, here's something I've been working on.</p><p>need some updating.</p><p>This is a port of a general relativistic late ray tracer, which is a old, so it some updating.</p><p>And the way does it is it defines a Clifford algebra, which is a mathematical structure.</p><p>Thank you. Have you seen Interstellar?</p><p>Theo Jaffee </p><p>Yes.</p><p>Alok Singh </p><p>This will let you do the ray tracing for their black holes and stuff. For example, this is the picture that this guy has ray traced. This whole thing is written in C++, but this guy is obviously into functional programming because he defines idioms from functional programming like monads in C++, which don't really work because the language is not designed for it. The light blue thing in the center is supposed to be a black hole because I guess putting it in black would be a bit confusing. And you can see the way that it general gen...</p><p>Theo Jaffee </p><p>Okay, that's cool.</p><p>light blue hole.</p><p>Alok Singh </p><p>that it generally relativistically traces, because the yellow on the bottom left gets put in the top right, then the bottom right in the middle bit. And at that point, the pixelation makes a kind of breakdown, because of discrete approximation. But you get the idea that it's causing this warping of time and space. and it's spinning.</p><p>Theo Jaffee </p><p>So you just re-implemented this in Lean.</p><p>Alok Singh </p><p>So this is,</p><p>yeah, I'm still working on it, but decent on progress. So what?</p><p>Theo Jaffee </p><p>Does lean have like</p><p>graphics vendors?</p><p>Alok Singh </p><p>Someone has written bindings to RayLive, which is the next thing I have to add to it. But I could add, for example, a vec4. They've actually added a proper vector type recently, so I could update this as well.</p><p>Theo Jaffee </p><p>How did they even have a programming language before without a vector type? Was it just arrays?</p><p>Alok Singh </p><p>So, goodbye.</p><p>The difference is that...</p><p>There.</p><p>The difference is that, okay, coordinates is an array, but an array could be any length at all, except the second field is a proof of the fact, proof of size, that the length of the array, which is the dot size function, is exactly n, which is what's in the type signature. And so a vector of four and a vector of three and a vector of zero are all different types.</p><p>So let's say def and de.</p><p>at lining up.</p><p>Sorry, it's because offhand I couldn't think of a proof of this fact.</p><p>no wonder. It's a inhabited vector of zero.</p><p>there. So I've said that the vector type in general is inhabited, meaning that this type has at least one value, a default value, at least when the vector is empty.</p><p>where it just returns an empty array.</p><p>And I could define it then. Sore, why not?</p><p>Theo Jaffee </p><p>So have to do this thing every time to approximate a vector.</p><p>Alok Singh </p><p>No. Do what thing?</p><p>Theo Jaffee </p><p>like write an instance inhabited.</p><p>Alok Singh </p><p>No, Like when I said the next line, which is like in Python, empty vec.</p><p>or it should be able to actually infer this type, so.</p><p>Theo Jaffee </p><p>Yeah, I know. I've never done functional programming before. Maybe I should.</p><p>Alok Singh </p><p>It'll make you stronger, that's for sure.</p><p>Theo Jaffee </p><p>stronger.</p><p>So how do you actually use Lean as part of your day job?</p><p>Alok Singh </p><p>empty vector.</p><p>Right now it's mostly writing tooling for lean.</p><p>The big thing is that it plays well with code generation because, let's see.</p><p>Theo Jaffee </p><p>What sort of tooling?</p><p>Alok Singh </p><p>&#8275; I'll let you know.</p><p>different library that shows it off better.</p><p>This is a linear algebra library. It's like a light pytorch that I wrote with a friend.</p><p>So this defines an encoder with a vector type of shape T by V. like tokens by size of vocabulary.</p><p>Theo Jaffee </p><p>So this is for like doing ML with Lean.</p><p>Alok Singh </p><p>Yeah, in pure Lean But this is well, because I get shape checking for free because I can define like a matrix type.</p><p>Theo Jaffee </p><p>Why would you do a melon, Peerlene?</p><p>Alok Singh </p><p>So this is defining a matrix type that's parameterized by its rows.</p><p>in columns, which are both natural numbers and some the container type alpha, which in this case is implemented in a naive way is just a vector of vectors, but you can do more sophisticated encoding. But then I can do death not</p><p>which should be C1 by C2.</p><p>matrix of R1 by C2.</p><p>was probably downloading something for the new tool change.</p><p>Theo Jaffee </p><p>So there's only you've made is like an ML library.</p><p>Alok Singh </p><p>But okay, this would give me a compile error. If I do anything that will cause this to not shape check.</p><p>And it's just like part, no wonder. think I'm just like low on disk space. &#8275; 90.</p><p>Theo Jaffee </p><p>How does this help with formally verifying superintelligence?</p><p>Alok Singh </p><p>you can write down essentially any property that something should have, at least ideally. And then the type checker and the prover and the compiler are all kind of the same thing, where you have to provide like a proof, or construct a proof or have it inferred or have the machine write out one that it has whatever property is relevant. People are more interested in like thinking of and this is like</p><p>up in the air, but people want to figure out how do you get properties of models that are safe? Like being able to guarantee something about their outputs, since people are not confident that it will be possible to verify something about the model itself, like proving that its internal weights are safe somehow. Because formally specifying that is a really hard problem, just as hard as proving it would be even if it could be specified.</p><p>Theo Jaffee </p><p>Yeah. So like, how do you actually write like a spec for acceptable or unacceptable outputs of a model?</p><p>Alok Singh </p><p>In general, answer right now is no one knows. The short, though, at a more syntactic level, OK, say you had some function like def, good. Float.</p><p>But</p><p>Sorry.</p><p>Theo Jaffee </p><p>So sorry is just</p><p>like a pass keyword if you don't want to implement a proof.</p><p>Alok Singh </p><p>Yeah.</p><p>configs from.</p><p>Theo Jaffee </p><p>structure safe</p><p>model.</p><p>Alok Singh </p><p>Yes.</p><p>Okay, It's saying that for all inputs.</p><p>good function apply. This should be parsed like this. The and signs are bit confusing precedence-wise sometimes. There.</p><p>saying essentially that put this function in certain bounds. Then the onus is, okay, like how is this function implemented? Which is why there's a sorry, because if I knew I'd be filling in right now, wouldn't I? But one thing that is more promising is come up with like a proxy measure that's not the exact thing you want. But then you have many proxy measures that are simpler, but verifiable much like how a unit test encoding probably cannot.</p><p>Theo Jaffee </p><p>So, so, so.</p><p>Yeah.</p><p>Alok Singh </p><p>guarantee that your code is bug free, but if your code can pass like 100 well picked unit tests or even 10, it's probably much closer to being bug free than not.</p><p>Also in the worst case, can just write sorry all the time and then you just have a programming language, which is a little nicer than Python, just like ordinary development.</p><p>You don't have to use the proving features, although of course it's part of the draw.</p><p>But think that emphasizing you don't have to use them is actually important because this is part of why functional programming is better among academic weenies, but not so successful in the real world. And it's perfectionism. Like a tendency to try and write code that if it's not the absolutely optimal, perfect, God's beautiful code, that it's just worthless trash, which is an attitude best left to unproductive people like philosophers.</p><p>Theo Jaffee </p><p>Yeah.</p><p>I don't know, there was once a philosopher who tried to write a sort of complete specification of philosophy. And we all know what most modern day philosophers think about her. I actually have this, like I have a copy of...</p><p>Alok Singh </p><p>her book, Renz?</p><p>Theo Jaffee </p><p>Objectivism the philosophy of Ayn Rand by Leonard Peikoff. Leonard Peikoff was Ayn Rand's like yeah basically student slash lover slash heir both ideological heir and also like her literal like estate heir like she bequeathed like all of her money to this guy. She was yeah he's like 90 but he's still alive. She actually was married and she was talking to this guy this seems kind of immoral yeah.</p><p>Alok Singh </p><p>the one she's fucking.</p><p>Is he still alive?</p><p>Yeah, I'm aware.</p><p>Theo Jaffee </p><p>Her</p><p>real name was Alice O'Connor.</p><p>and her husband, Frank O'Connor. Her actual birth name was, I think, Alisa Zinovievna Rosenbaum.</p><p>Alok Singh </p><p>I wonder where the O'Connor came from. Isn't it from like... &#8275; right.</p><p>Yeah, that's more like what I thought.</p><p>Theo Jaffee </p><p>Rosenbaum? Wait a minute. Early life check.</p><p>every single time. Yeah. So she wrote, or I guess Peikoff wrote, all of these chapters. I think Ayn Rand wrote this official disclaimer that was like, until or unless I write something better, this shall be considered the definitive statement of my philosophy. Yeah.</p><p>Alok Singh </p><p>see.</p><p>for someone who's probably</p><p>ignorant of math, a very mathematician kind of statement.</p><p>Theo Jaffee </p><p>Yeah. Until or unless I write a comprehensive treatise on my philosophy. Dr. Peikoff's course is the only authorized presentation of the entire theoretical structure of objectivism.</p><p>Alok Singh </p><p>Sounds</p><p>like Dr. Peikoff is trying to sell his book and that he's like at some airport pounding pavement to make a book deal.</p><p>Theo Jaffee </p><p>That is the only one that I know of my own knowledge to be fully accurate. See that subjective is the only one that I know of my own knowledge to be fully accurate, not mostly accurate. There's no room for error here. Yeah. So, so it's like, he tries to derive like, okay, he starts with chapter one reality where he's talking about like metaphysics and like the basic conception of reality. And then he works all the way up.</p><p>Alok Singh </p><p>Sike.</p><p>Theo Jaffee </p><p>through our perception, our senses, our reasoning, and then what humanity actually is, and what is the good, and what's virtue, and happiness.</p><p>Alok Singh </p><p>or balls</p><p>or vast deference working the way up.</p><p>Theo Jaffee </p><p>Yeah, and then all the way up to higher levels of abstraction talks about government, capitalism, and then art. Yeah, so I actually haven't made it all the way to the chapter on art, but I wonder.</p><p>Alok Singh </p><p>part.</p><p>Agnes and I talked a bit about this, not about objectivism, but about how one of the reasons for the two cultures is that people in the arts are interested in these sorts of ultimate questions of like a good life, what's good art, et cetera. The she and I, for that matter, believe do have answers, but also that they're much more difficult to access than questions of say, what is a group or a differential equation?</p><p>people are interested in making some progress on them immediately rather than building up. I still think that the sciences are more likely to be able to answer such ultimate questions by building up this edifice through things like neuroscience and getting at what do people actually want. And then like mathematical sides of economics that as much as like the mathier sides of social science are derided, I think they have a better chance of resolving these questions eventually.</p><p>than the sort of endless perennial debates of philosophy. That's why the phrase perennial debate comes about in the first place.</p><p>Theo Jaffee </p><p>Yeah, I mean, if they weren't perennial debates, they wouldn't be philosophy. They would be science or social science or something.</p><p>Alok Singh </p><p>Yeah, and one day science will</p><p>make, one day science will come like the big bad wolf to blow that door down.</p><p>Theo Jaffee </p><p>Inshallah. I don't know, I mean, like, can you even scientifically resolve a lot of these questions? A lot of them just seem subjective. You know, didn't Wittgenstein say like, most problems in philosophy are problems with the interpretation of language?</p><p>Alok Singh </p><p>Eh.</p><p>Theo Jaffee </p><p>You know, what is consciousness? What is the good life? Okay, define good.</p><p>Alok Singh </p><p>I think so, like.</p><p>I mean, I have faith in this. Even like questions of math where people say that you run into barriers like undecidability. My friend Elliot's given me a long spiel on this and my own impression is, yeah, there's plenty of undecidable questions, most of them really. But even so, there's still progress in areas like set theory because people find like major cores of theories that just many seemingly unrelated things run up against indicating that maybe there is like some sort of platonic core.</p><p>And even for questions where you cannot say in like the same definitive sort of way that something is right or wrong. Nonetheless, vibes wise, there's like a clearly right one. And it's not just like a complete matter of opinion, nor is it like, you have to have like a sophisticated pace to understand. mean, have to have enough understanding to understand, but not much taste.</p><p>Theo Jaffee </p><p>Are you a...</p><p>are you a platonist? Is there a world of forms?</p><p>Alok Singh </p><p>That's a question, am I?</p><p>Theo Jaffee </p><p>Ayn Rand would have said definitely no. There is only the world that exists, that we perceive.</p><p>Alok Singh </p><p>I don't know what I</p><p>am.</p><p>I'm a working man, sort of.</p><p>Theo Jaffee </p><p>Aren't we all?</p><p>Alok Singh </p><p>working with head and hands.</p><p>as the abstract world exists.</p><p>Theo Jaffee </p><p>This, like, to me just seems like totally a, like, Wittgensteinian problem in philosophy is a problem in the interpretation of language. Does the abstract world exist? Like, what is the abstract world? What does abstract mean and what does it mean to exist? Yeah.</p><p>Alok Singh </p><p>What does exist mean? Yeah.</p><p>Theo Jaffee </p><p>Rand says, existence exists. This is like, this exact sentence is repeated so many times throughout Atlas Shrugged. Existence exists.</p><p>Alok Singh </p><p>I've forgotten about some of purple pros.</p><p>Theo Jaffee </p><p>You think existence exists as purple prose?</p><p>Alok Singh </p><p>The way she uses it.</p><p>Theo Jaffee </p><p>true.</p><p>Alok Singh </p><p>I think her best writing on a prose level was in the book Anthem. The one where they don't have the word I, and they have to discover it.</p><p>Theo Jaffee </p><p>Hmm. A lot of people say that that's like her worst book.</p><p>Alok Singh </p><p>What do they say is best? Fountainhead or something?</p><p>Theo Jaffee </p><p>Fountainhead or Atlas Shrugged? I've actually never read Fountainhead. It's on my list.</p><p>Alok Singh </p><p>I've been meaning to watch the movie. I mean, I've read it.</p><p>Theo Jaffee </p><p>Hmm. Yeah. I think one of my most shocking moments was when this girl that I know who's like very much a lib told me she was reading The Fountainhead. And I was like, really? Interesting. Because it was on some reading list somewhere. And she was like, yeah, this is really interesting. You I never really thought about things this way before. And I kind of like it.</p><p>Alok Singh </p><p>Live.</p><p>Okay, how do you know her?</p><p>Theo Jaffee </p><p>college through a friend. Another one converted to being based, I hope.</p><p>Alok Singh </p><p>Okay.</p><p>Maybe I should welcome Jewish too.</p><p>Theo Jaffee </p><p>Hmm, yeah. So, wrapping things up with a final question, what do you think the good life is? You know, if we are talking about the good life.</p><p>Alok Singh </p><p>I just have some vague answer here of like gaining knowledge and power. Like, yeah, I can see this one easily going in some way, but I'm just going to go with that.</p><p>Theo Jaffee </p><p>Gaining knowledge I see as a good thing. I think this is what Socrates would have answered probably. Gaining power though?</p><p>Alok Singh </p><p>Be still, Tarnenove.</p><p>Theo Jaffee </p><p>Does it?</p><p>It seems like a lot of people with power are extremely unhappy.</p><p>Alok Singh </p><p>Yes.</p><p>Yeah, but I've met a lot of people without it who also seem pretty displeased for that too.</p><p>Theo Jaffee </p><p>Let's see, like who are the most powerful people in the world? Donald Trump. Is he happy? I don't think he's happy. Elon Musk is definitely not happy.</p><p>Alok Singh </p><p>Leckermann might be happy actually.</p><p>Theo Jaffee </p><p>Zuckerberg might be happy, but does he have like real power in the way that Trump or Elon does? He's, he's, he's... True.</p><p>Alok Singh </p><p>Well, here's we're identifying happiness with like the good life. So I guess that's</p><p>kind of answer in itself of happiness is happiness all I think is a big chunk of it. Yeah.</p><p>Theo Jaffee </p><p>He's sort of clawed it back over the last year or two. Is power a big chunk of the good life? I don't know. Is happiness a big chunk of the good life? Certainly. Like, is it easy to conceive of somebody having a good life without power? Yes. Is it easy to conceive?</p><p>Alok Singh </p><p>Well, well, one of</p><p>my fixations that we didn't touch on was etymology, except for the brief mention of Proto-Indo. And all the words for happiness in the Proto-Indo languages refer like the word happiness, things, well, happening or going by hap, as in going your way, which seems certainly closer to power. And most words of happiness in Proto-Indo languages anyway, and I'm as an Indo.</p><p>They refer to some aspect of luck or if things are happening the way you want them to. Which certainly seems very linked to power, since power is essentially the direct route to that. You could also get lucky in the modern sense of just, well you don't have to do anything, it just happens that way.</p><p>Theo Jaffee </p><p>Jai Hind!</p><p>Alok Singh </p><p>powers the ability to just like shape it directly.</p><p>Theo Jaffee </p><p>Yeah, you're right.</p><p>You're Indo, I'm European. I should read more about Indo-European, because a little that I do know is very interesting. There's all these sort of shared roots that you had never thought about previously, but they kind of seem obvious in retrospect, like status and stallion.</p><p>Alok Singh </p><p>Yes.</p><p>Well, next time we</p><p>can talk about that since I've got at least one more bank and I'm sure we'll have a couple more anyway.</p><p>Theo Jaffee </p><p>Yeah, well, it's been real. You know I should I should probably get going it's getting late here, but It was great talking to you. Thanks so much for coming on the show, and I'll see you in the next one</p><p>Alok Singh </p><p>always.</p><p>That time.</p><p>Yeah.</p><p>See ya.</p><p>Theo Jaffee </p><p>So, wait I forgot I'm wearing this shirt.</p><p>I got this shirt like a year and a half ago when I was in like full on like Twitter, e/acc bro grindset mode.</p><p>Alok Singh </p><p>and how you have this.</p><p>Theo Jaffee </p><p>Yeah, I think it's like a good snapshot of like, I guess like the cultural anthropology of the internet in like early mid 2023.</p><p>right when AI was really starting to take off on Twitter. Okay, so yeah. In the last stretch of time, we read the book, Erewhon by Samuel Butler and...</p><p>What interesting takeaways did we take from it?</p><p>Alok Singh </p><p>I'm pulling up my notes.</p><p>Theo Jaffee </p><p>You took notes, fancy.</p><p>Alok Singh </p><p>Yeah, I listened to it and then I sketched out some notes on Audible, so I'm logging into Audible.</p><p>Theo Jaffee </p><p>Mm.</p><p>I don't even have a second brain of notes the only notes I need are in my first brain.</p><p>Alok Singh </p><p>Okay, well from the dome. Yeah, he immediately makes the point that he in when he's talking about machine since that's the main reason this book is notable for our audience that is one of the first times that the topic of super intelligence shows up in literature enough to make Theo read fiction.</p><p>Theo Jaffee </p><p>Yeah, I'm notable for notorious, I should say, for not reading fiction, like ever. This is, I think, the first fiction book I've read in like three years that's not like a manga or something. It's been a while.</p><p>Alok Singh </p><p>Wait,</p><p>how does the manga not count? You didn't mention that part.</p><p>Theo Jaffee </p><p>Cause it's pictures.</p><p>Like One Piece, I read all of One Piece this year. I've read like 1,100 chapters of One Piece. It's okay, they go fast. It's like five minutes a chapter.</p><p>Alok Singh </p><p>Jesus Christ.</p><p>For your sake, I hope so. Bye.</p><p>Theo Jaffee </p><p>But I guess</p><p>even saying it like that, even assuming it takes, let's say, seven minutes to read a chapter times 1,100 chapters and there's 1,440 minutes in a day, yeah, it took me a lot of time to read all that. Anyway, though. Yeah, so the Book of the Machines, they talk about, you know, in one of the first chapters, he, like, shows his watch to the king.</p><p>And then the king recoils and treats it as a sort of crime. I guess we should give a brief overview of the premise of the book. Where it's like, this guy, an Anglo-Saxon guy, is living on a colony and decides to explore past the impassable mountains to the west and past the...</p><p>Alok Singh </p><p>coils.</p><p>ends up in</p><p>the nation of Erewhon.</p><p>Theo Jaffee </p><p>Yeah, he ends up in the nation of Erewhon. He's kept as a sort of captive sort of guest. He learns their language. He experiences their culture. I guess like the most salient aspect of their culture is that they treat sickness like we treat crime and treat crime like we treat sickness.</p><p>Alok Singh </p><p>Yeah, here it is. Let me put some of the clips.</p><p>Theo Jaffee </p><p>And then the narrative of the story is like he...</p><p>he goes to the Capitol and is guest slash prisoner for a while and writes chapter after chapter of observations. And then at the very end,</p><p>Alok Singh </p><p>One line is,</p><p>one line from the court trial that since people are punished, he's like, the judge says, you think you're misfortunate to be a criminal, but your crime is to be misfortunate.</p><p>Theo Jaffee </p><p>Yeah, that was a banger.</p><p>Alok Singh </p><p>in his lead up when he's describing the machines, not the machine. &#8275; I like how was it human labor is priced by energy units because it is implied they've just become so fungible for machines.</p><p>Theo Jaffee </p><p>When was this?</p><p>Alok Singh </p><p>Somewhere in the earlier part of the book look for the word horse search Just search for the word horsepower, then oh, that'll give a proper context. I don't mangle this</p><p>Theo Jaffee </p><p>I thought like human labor wasn't done.</p><p>Yeah, there it is. Nosnibor is a man of at least 500,000 horsepower. For their way of reckoning and classifying men is by the number of foot pounds which they have enough money, which they have money enough to raise or more roughly by their horsepower. That is interesting. Like they don't really use machines though. So I wonder what's up with that. &#8275; I do think that it is like very interesting that he immediately follows up the books about machines with</p><p>Alok Singh </p><p>I'm full.</p><p>Theo Jaffee </p><p>the chapter on animal welfare, which is like, do you know people today who are concerned with super intelligent machines and animal welfare? Was Samuel Butler the first EA? Yeah.</p><p>Alok Singh </p><p>Yeah, but they're just ahead of his time.</p><p>He also says that in their AI arms race between the machinists and anti-machinists that the anti-machinists ended up using machines to a pretty great degree, just slightly less, which reminded me of AI safety and ever more powerful tools to debug it.</p><p>Theo Jaffee </p><p>Yeah, yeah.</p><p>I saw that. It's sort of like how you see like a lot of, I guess, AI Doomers using advanced AI all the time. And not just to debug it, but just because, you know, they get a lot of mundane utility out of it.</p><p>Alok Singh </p><p>because they're power</p><p>users, moreover, rarely do they just use it.</p><p>Theo Jaffee </p><p>Samuel Butler himself. Yeah, this book was based on his experiences. Basically, he ran away from his dad because he's like, I hate you, my parents, and literally went to as far away as you could possibly get from England, which was New Zealand. And he bought a farm and became a and then went back. like that...</p><p>Alok Singh </p><p>Yes, sir.</p><p>Theo Jaffee </p><p>The protagonist in Erewhon is a guy who lives on a farm that is like very much New Zealand and is also shepherd.</p><p>Alok Singh </p><p>Except he manages to find a fantastical place, which I don't think was the case for Butler.</p><p>Theo Jaffee </p><p>Yeah.</p><p>What is like the first story that's like, you know, every man ventures far from home and discovers like magical, mystical world? This has got to be done over and over and over again. I guess one of the prime examples of this is like the Wizard of Oz, where you have Dorothy living in Kansas, implied to be like the most boring place ever, and then gets whisked away to the fantastical world of Oz.</p><p>Alok Singh </p><p>Yeah.</p><p>End it.</p><p>Theo Jaffee </p><p>Erewhon is not quite as fantastical as Oz.</p><p>A lot of it sucks.</p><p>Alok Singh </p><p>Yeah.</p><p>Theo Jaffee </p><p>I find the animal rights chapter really funny because it reminds me of Jews talking about kosher law. How the entire chapter is like the wise thought leaders pass down these instructions that are like, you should not eat meat. And then they spend the rest of the chapter trying to get out of it. They're like, yeah, another fertile source of disobedience to the law was furnished by a decision of one of the judges that raised a great outcry among the more fervent discipline.</p><p>Alok Singh </p><p>Yeah, Error.</p><p>Hmm.</p><p>Theo Jaffee </p><p>disciples of the Old Prophet. The judge held that it was lawful to kill any animal in self-defense, and that such conduct was so natural on the part of a man who found himself attacked, that the attacking creatures should be held to have died a natural death. The High Vegetarians had indeed good reason to be alarmed, for hardly had this decision become generally known, but for a number of animals, hitherto harmless, took to attacking their owners with such ferocity that it became necessary to put them to a natural death.</p><p>Alok Singh </p><p>&#8275; I remember this chapter better now. Yeah, where people start doing</p><p>Theo Jaffee </p><p>Again, it was quite common at that time to see the carcass of a calf lamb or kid exposed for sale with a label from the inspector, certifying that it had been killed in self-defense. This is literally just like Jews getting out of every law that they have.</p><p>Alok Singh </p><p>Yeah, this is I think this one not just Jews but people in general trying to get out Well, you know people want to eat meat That said I mean how strict did people adhere to lent but lent is only partial whereas this is total</p><p>Theo Jaffee </p><p>Yeah.</p><p>What is this part about? you can't see my screen. But the part about like... I'm just gonna share my</p><p>Yeah, this part, one sad story. A young man, the doctor told him you should eat meat. He was like, no, that's bad, I'm not gonna do it. And then he illegally bought meat and ate it and his health improved immediately. And like health and heroin is everything.</p><p>Alok Singh </p><p>in this.</p><p>Theo Jaffee </p><p>Right, like being unhealthy is treated as like a crime, punishable essentially by death, because if you are sick they will, you know, put you in prison and then you'll sort of die of natural causes.</p><p>Alok Singh </p><p>Pulling</p><p>up Erewhon on Project Gutenberg since I just have my own copy, but HTML is easier to work with than a PDF. And it shows that his translation of the Odyssey has 18,000 downloads, the Iliad 4,800, and Erewhon 1,400.</p><p>Theo Jaffee </p><p>I have Gutenberg too.</p><p>Wow.</p><p>So... was it one of the most famous translations of the Odyssey?</p><p>Alok Singh </p><p>more famous than what he did.</p><p>Theo Jaffee </p><p>Is it more famous than Emily Wilson's translation?</p><p>I think Nabeel Qureshi on Twitter did an experiment where he had different translations of a passage in the Odyssey.</p><p>Alok Singh </p><p>Alright.</p><p>Theo Jaffee </p><p>asked like which one of these is the best.</p><p>Yeah, it was Emily Wilson, Lattimore, Fitzgerald, and GPT-4o And way more people prefer GPT-4o than the others.</p><p>Alok Singh </p><p>Well, GPT-4o knows Proto-Indo-European, which is more than I can say to most people.</p><p>Theo Jaffee </p><p>What's the best LLM for Proto Indo-European?</p><p>Alok Singh </p><p>They're all pretty good at it. Probably GPT, just because it has a bit more data.</p><p>Theo Jaffee </p><p>Have you tried</p><p>4.5?</p><p>Alok Singh </p><p>Yeah, I do that a lot.</p><p>Theo Jaffee </p><p>for Proto Indo-European.</p><p>Alok Singh </p><p>Yes, among other things.</p><p>Hmm</p><p>Theo Jaffee </p><p>Let's see.</p><p>Alok Singh </p><p>They also discuss a form of Roko's Basilisk, although only in passing. Somewhere in Book of the Machines, maybe Section 2.</p><p>that people will help machines come about would be favored over ones that don't.</p><p>Theo Jaffee </p><p>Okay, let's let this go for a bit.</p><p>A full translation prevents several difficulties. Shut up.</p><p>partial translation. No, I just gave it like a piece of Erewhon the first chapter. This can't be that hard, right? Like I guess yeah, telescope would...</p><p>Alok Singh </p><p>You try giving it a whole lot, I see.</p><p>To translate it</p><p>to turn it into what? Proto window. &#8275; now I'm seeing.</p><p>Theo Jaffee </p><p>Okay,</p><p>was a monotonous life but healthy, yeah.</p><p>Can you pronounce this?</p><p>Alok Singh </p><p>Not any better than you can.</p><p>Theo Jaffee </p><p>I thought you were like into Proto Indo-European.</p><p>Alok Singh </p><p>Yeah, usually reading it. There's not a lot of speakers, as you might imagine. I have said some words, but all the connections I have are from words that actually still exist. Like the word sundry, it means separate, loosely associated things because it's from the word sunder, like to sunder something in half.</p><p>Theo Jaffee </p><p>That's cool. Sunder is one of those excellent words that you just never hear anymore, but you hear a lot in Tolkien.</p><p>Alok Singh </p><p>Yeah, I recommend the-</p><p>Theo Jaffee </p><p>Yeah, okay. Never shall I forget the solitude. Like how do you know that this is accurate? Yeah, look, magna? Yeah, I recognize that. So, lebhom as life, yeah, sure. Samus, continuous, like same, I guess. Esstet, was, yeah.</p><p>Alok Singh </p><p>Yeah, magna does mean great. Ehh, I don't know if-</p><p>Same as</p><p>for means one.</p><p>Like this could be wrong, but this is like decent and it's certainly a lot better than the alternative, which is nothing.</p><p>Theo Jaffee </p><p>So while it</p><p>was healthy, like salud? Salus? Yeah.</p><p>Alok Singh </p><p>I'm guessing.</p><p>I</p><p>have seen that word to like set.</p><p>Theo Jaffee </p><p>Hegemon? Earth? Hegemon? Probably not. Yeah, mountain, mountain. There you go. Yom, like yonder. I don't know. Sed, as sat, yeah.</p><p>Alok Singh </p><p>No, I don't think so.</p><p>Meg, &#8275; that's just Meg. Nek-uh.</p><p>Theo Jaffee </p><p>Gwent as often I kind of see it actually I kind of see it you see the n often the gif becomes for I see</p><p>Alok Singh </p><p>Void.</p><p>If you like kind of</p><p>zoom out and like look at it from farther away so you can't make out the individual letters quite as much, I think it helps. The big mountains something. Yeah, the the word nether is more idiomatic, not idiomatic, definitely not idiomatic, but it's more etymologically correct.</p><p>Theo Jaffee </p><p>Yeah.</p><p>nitros is below like &#8275; nitros like beneath right pelus plane est was and</p><p>Where? Noth- Nothing?</p><p>Alok Singh </p><p>then saying</p><p>up and down is saying nether for down.</p><p>Theo Jaffee </p><p>Yeah, I got it. Void-ous, far away, like void. &#8275; negway. Montes.</p><p>Alok Singh </p><p>Nietzsche.</p><p>Yeah, vast and void vacuum</p><p>mega mountains, which is what it sounds like. Neck, which I think also can mean death. So I'm just looking at the etymology of never.</p><p>Theo Jaffee </p><p>Yeah, never shall I forget</p><p>Megata Vasnesya, Montum Pelhunkve. Hold on. Proto Indo Yura.</p><p>Is that like a thing? Is there like software that can speak it?</p><p>Alok Singh </p><p>advanced voice mode. I don't think it has any particular training on this, but who knows.</p><p>Theo Jaffee </p><p>Can you do advanced voice run on desktop?</p><p>Alok Singh </p><p>Yeah.</p><p>Theo Jaffee </p><p>audio reconstruction.</p><p>Alok Singh </p><p>doing it right now.</p><p>Theo Jaffee </p><p>&#8275; another dead interlink. So sad.</p><p>Alok Singh </p><p>Recite the first page of the Odyssey in Proto-Indo-European reconstructed.</p><p>was walking in a lot. This is showing me.</p><p>Theo Jaffee </p><p>Aren't we getting a little distracted from... Yeah. Okay, so what is this? yeah, Samuel Butler's life story is very interesting. Wasn't he gay? Yeah, was... He never married. You know, like the phrase, never married, was used as like a euphemism.</p><p>Alok Singh </p><p>Yeah, we are. Let's get back to this.</p><p>Yeah, I</p><p>Yeah, I've read the Wiki too. Confirm Bachelor as well.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Mm-hmm.</p><p>There is no evidence of butlers having any genital contact with other men, but alleges that the temptations of overstepping the line strained his close male relationships. So he was, &#8275; know, friendly with the homies.</p><p>Alok Singh </p><p>The studies on the evidence</p><p>of Christianity, his works on evolutionary thought. Yeah, he does have a keen appreciation of evolution. The very first thing he opens, the first thing he opens with in the chapter of the Book of the Machines, why to be concerned, practically in the first paragraph is the speed of them. And then their speed is such that men, it's as if...</p><p>Theo Jaffee </p><p>He does, yeah, I noticed that.</p><p>gradual disempowerment.</p><p>Alok Singh </p><p>They're not a tool of yesterday, but the last five minutes as he puts it. If you search for five minutes, you should find it.</p><p>Theo Jaffee </p><p>Yeah. Reflect upon the extraordinary events which machines have made during the last few hundred years and note how slowly the animal and vegetable kingdoms are advancing.</p><p>Alok Singh </p><p>and that's like an</p><p>Zim and more.</p><p>more.</p><p>Theo Jaffee </p><p>The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years. See what strides machines have made in last thousand. May not the world last twenty million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress?</p><p>Alok Singh </p><p>It also starts this chapter about, well, life in the beginning of The Earth is a Ball of Hot Rock.</p><p>Theo Jaffee </p><p>Yeah, I noticed that and like it seems like otherwise throughout the book he was like, or at least his self-insert character was like very Christian. You know, he always talks about like wanting to convert the people of Erewhon. The last chapter is basically about his plans to convert the people of Erewhon. yeah, I forgot to mention how the book ends to the audience, which is like, he steals his host daughter and runs off with her in a balloon right before he gets arrested and makes it back to Europe.</p><p>Alok Singh </p><p>Yeah.</p><p>Theo Jaffee </p><p>and then tries to make plans to go back to Erewhon and convert them. And I think he writes about that in a sequel called, what was it, Return to Erewhon or something? Erewhon Revisited? Yeah. Which I heard was bad.</p><p>Alok Singh </p><p>Yeah, Ero Unrevisited, which I'm probably not going to read because.</p><p>Theo Jaffee </p><p>These damn sequels. No one has any creativity anymore. Yeah, less successful sequel. What is it? Both of the original discoverer of the country and by his son. It has less of the free imaginative play of its predecessor. I've like thought throughout reading this book, I should write like a chapter, like a sub stack article that's like this same protagonist, like an average, you know, Victorian English guy, except instead of discovering Erewan, he discovers</p><p>modern America and he talks about, I don't know, technology and women's rights and governance and stuff.</p><p>You know, these people are so peculiar, you know, they have these strange glass instruments that they spend all their time touching these glass instruments. They communicate through them. They have no respect for a king or church.</p><p>Alok Singh </p><p>This is like...</p><p>Basically a show about missionary work</p><p>or like futuristic missionary work.</p><p>Theo Jaffee </p><p>Is that like a thing?</p><p>I'm certain that somebody at some point has written an isekai of like, Victorian goes to the modern world.</p><p>Alok Singh </p><p>I'm just going to ask to keep.</p><p>The ones of people going to the Victorian era. Hardly from it, though. Usually when they pick... Yeah, but it's the guy supposed to be like, well, you are the random protagonist who's dropped into God knows where.</p><p>Theo Jaffee </p><p>from what I think the interesting.</p><p>Alok Singh </p><p>Like certainly could go the other way where it's some guy in a totally different era and mindset that ends up here or ends up somewhere else. But then he would likely have few values in common with you. And that's hard to identify with.</p><p>Theo Jaffee </p><p>The one time I met Eliezer Yudkowsky in person was at Manifest and like one of the few questions I asked him, because he was in a group talking about isekai and I was like, what isekai should I read if I haven't read any yet? And he said, a Connecticut Yankee in King Arthur's court, which is, yeah, I think it's about.</p><p>Alok Singh </p><p>Yeah.</p><p>Good choice. I like it.</p><p>Theo Jaffee </p><p>It's, yeah, it's a Connecticut Yankee. You know, it's Mark Twain going back to King Arthur's times.</p><p>Was Mark Twain from Connecticut? No, Missouri.</p><p>Alok Singh </p><p>is out.</p><p>Theo Jaffee </p><p>He did live in Connecticut though. Yeah He lived all over the place. He was an interesting guy You notice this would like writers in the past a lot is like they they Traveled and moved all over the place more than almost anyone else at the time Mark Twain Ernest Hemingway Someone else who I can't think of off the top my head but</p><p>Alok Singh </p><p>died.</p><p>And.</p><p>Theo Jaffee </p><p>I've been to at least two of Hemingway's houses in Sun Valley, Idaho and in Key West, Florida.</p><p>Alok Singh </p><p>Thanks.</p><p>Theo Jaffee </p><p>Okay, what else about everyone?</p><p>Alok Singh </p><p>You can go to Sun Valley,</p><p>Idaho for that one conference.</p><p>Theo Jaffee </p><p>that, yeah, the Allen &amp; Co Billionaires Conference. That would be cool. I would actually think about that because I have family friends who have a place there.</p><p>Alok Singh </p><p>his whole bit about form and function,</p><p>search for reproductive system.</p><p>Theo Jaffee </p><p>Yeah, that was good.</p><p>Alok Singh </p><p>of, yeah, his whole approach to the chicken and egg is they each inform each other's form and function. And so interdependently define each other, basically refuting the argument that well, you're not a machine.</p><p>Theo Jaffee </p><p>Yeah. &#8275; he also sort of assumes that like the development of machines will come about as I guess, reverse engineering each of the systems in the human body. &#8275; like, you know, we're going to build an artificial cardiovascular system and yeah, there it is. &#8275; there are certain functions indeed of the vapor engine, which will probably remain unchanged for myriads of years.</p><p>Alok Singh </p><p>Where?</p><p>Theo Jaffee </p><p>which in fact will perhaps survive when the use of vapor has been superseded. The piston and cylinder, the beam, the flywheel, and other parts of the machine will probably be permanent, just as we see that man and many of the lower animals share like modes of eating, drinking, and sleeping. Thus they have hearts which beat as ours, veins and arteries, eyes, ears, and noses. They sigh even in their sleep and weep and yawn. They are affected by their children. They feel pleasure and pain, hope, fear, anger, shame. They have memory, impressions. They know that if certain things happen to them, they will die, and they fear death as much as we do.</p><p>They communicate their thoughts to one another and some of them deliberately act in concert. The comparison of similarities is endless. I only make it because some may say that since the vapor engine is not likely to be improved in the main particulars, it is unlikely to be henceforward extensively modified at all.</p><p>So I guess, yeah, this is more dated. We ended up like.</p><p>not needing most of these parts to make an artificial humanoid. We don't need veins and arteries unless you consider wires to be veins and arteries.</p><p>Alok Singh </p><p>Wires,</p><p>kind of, arteries. The closest analog I would imagine is if in a robot little computers get put in for lower latency, but this thing is fast enough, especially compared to the human scale, that such seems basically unnecessary, certainly for something to work very well.</p><p>Theo Jaffee </p><p>And with the brain, you we didn't design parts of the brain. It just sort of like happened.</p><p>You know, it was grown, it wasn't designed.</p><p>Alok Singh </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>yeah, he also gets into this sort of like almost Landian analysis of like capitalism and like human interactions through machines as like itself a sort of</p><p>Alok Singh </p><p>You know, I</p><p>just realized something. This is probably where the</p><p>Maybe where the term Butlerian Jihad comes from. know in the book it has some own story.</p><p>Theo Jaffee </p><p>This is where the term Butlerian jihad</p><p>comes from.</p><p>This is, I think, probably the reason that this book is so famous is because of Dune.</p><p>Alok Singh </p><p>Okay.</p><p>Theo Jaffee </p><p>I think</p><p>he did write a separate thing called like Darwin and the Machines. Darwin Among the Machines. Yeah, it's a letter to the editor. It was written by Samuel Butler. Yeah, okay. So it was written before Erewhon. And the Book of the Machines, yeah. Butler developed this and subsequent articles into the Book of the Machines.</p><p>our wiki source here. Yeah, there we go.</p><p>Yeah, definitely read this.</p><p>This is, guess, a more clear articulation of his, like, doomerism that's not wrapped in this sort of, fantasy world.</p><p>Where is it? Yeah.</p><p>Alok Singh </p><p>Thank you.</p><p>Theo Jaffee </p><p>Man will have become to the machine what the horse and the dog are to man. He will continue to exist, nay, even to improve, and will probably be better off in his state of domestication under the beneficent rule of the machines than he is in his present wild state.</p><p>Alok Singh </p><p>Yeah</p><p>Theo Jaffee </p><p>Yet our opinion is that war to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of a species. Let there be no exceptions made, no quarter shown. Let us at once go back to the primeval conditions of the race. If it be urged that this is impossible under the present condition of human affairs, this at once proves that the mischief is already done, that our servitude is commenced in good earnest, that we have raised the race of beings whom it is beyond our power to destroy, and that we are not only enslaved, but are absolutely acquiescent in our bondage.</p><p>Alok Singh </p><p>I wonder if Kaczynski knew of this?</p><p>Theo Jaffee </p><p>I'm sure Kaczynski knew of this, because it's extremely like, it sounds so much like his manifesto.</p><p>Alok Singh </p><p>But it's basic, well, like your shirt says, accelerationism, or this inevitable drive towards progress otherwise, unless it's deliberately cut off.</p><p>Theo Jaffee </p><p>I think the funniest part of the Unabomber Manifesto was just like,</p><p>He basically he starts it with like the industrial revolution and its consequences have been a disaster for the human race and Then he immediately goes into like owning the libtards</p><p>Alok Singh </p><p>have been exhausted.</p><p>Yeah, I remember. Just like...</p><p>Theo Jaffee </p><p>Which is so</p><p>funny. I remember like reading this in like fucking ninth grade math class. Yes. Reading this in ninth grade math class like wow this is actually such a fact. No this just came out of nowhere for me.</p><p>Alok Singh </p><p>Was it the fact that it was the second thing he listed that was funny?</p><p>Did you know he was gonna talk about that or did that just came out of nowhere for you?</p><p>He goes in for a while. He also says in it that he isn't even talking and he's deliberately about people who are just like explicitly leftist that he's like pointing at a category of people. I think the thing about over socialization is true. Feelings of inferiority maybe, but I think over socialization is with deeper insight.</p><p>Theo Jaffee </p><p>He likes them shirt already.</p><p>Yeah, yeah, that was</p><p>great. I actually, don't think I ever actually finished this.</p><p>Alok Singh </p><p>The most mathematician thing about him is citing different paragraphs. This is so much better than what</p><p>Theo Jaffee </p><p>This is also, this is so much better than...</p><p>Yeah.</p><p>It's so much better than Luigi's manifesto, which was like a single page of slop where he didn't even bother like making the argument. He was like, this has all been discussed at great length elsewhere.</p><p>Yeah, you know, people used to write real manifestos. This is 58 pages.</p><p>Alok Singh </p><p>Well, this is from one of America's top talents.</p><p>Theo Jaffee </p><p>Okay, first let us postulate that computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case, presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.</p><p>Alok Singh </p><p>Hey bro, you have to name the tech speaker again.</p><p>more bigger.</p><p>So nice.</p><p>Theo Jaffee </p><p>The fate of the human race would be at the mercy of the machines.</p><p>Alok Singh </p><p>Yeah, I remember those. We just become dependent on them. Very similar in Butler.</p><p>Theo Jaffee </p><p>Notice this, yeah, people won't be able to just</p><p>turn the machine off because they will be so dependent on them that turning them off would amount to suicide.</p><p>Alok Singh </p><p>Just actually just do this just put</p><p>to put Kaczynski and Butler side by side Does it let you do then your screen share you can share your whole screen instead of just a tab?</p><p>Theo Jaffee </p><p>Probably, I already am sharing the whole screen, right?</p><p>Alok Singh </p><p>Yeah, OK, we'll try splitting it then.</p><p>Theo Jaffee </p><p>Yeah, I would, but it's not letting me like resize the window. It's like glitching. &#8275; there we go.</p><p>Alok Singh </p><p>Yeah, just one window becoming smaller.</p><p>Theo Jaffee </p><p>There we go. Okay, now can you see side by side?</p><p>Alok Singh </p><p>No, I just see one window, the one of Darwin among the machines.</p><p>Theo Jaffee </p><p>Okay, let me.</p><p>There we go.</p><p>Alok Singh </p><p>I see it.</p><p>See me twice, actually.</p><p>Theo Jaffee </p><p>How do I minimize this?</p><p>Yeah, so this is Erewhon. This is Darwin among the machines, also Butler. This is Kaczynski. Yeah, there it is. Even if human work becomes necessary, machines will take care of more and more of simpler tasks so that there will be an increasing surplus of human workers of the lower levels of ability.</p><p>Alok Singh </p><p>We see this happening already and while we're continuing to see this happening.</p><p>Theo Jaffee </p><p>&#8275;</p><p>Yeah.</p><p>Alok Singh </p><p>many people who find it difficult or impossible to get work because for intellectual or psychological reasons they cannot acquire the level of training necessary. And on those who are employed ever increasing demands will be placed they need more and more training, more and more ability and have to be ever more reliable. Conforming, I don't know about conforming and docile I think that's his emotions slipping in. Being more reliable though I think is basically true. But essentially everyone will have through this competition an ever higher standard.</p><p>Theo Jaffee </p><p>Yeah, average CS major.</p><p>Mm-hmm.</p><p>Alok Singh </p><p>The machines act as this iron ruler on it, forcing everyone up.</p><p>Theo Jaffee </p><p>Yeah.</p><p>A great development of the service industries might provide work for human beings. Shining each other's shoes, driving each other around in taxicabs. Yeah, this one was wrong.</p><p>Alok Singh </p><p>and each other's shoes.</p><p>making handicrafts for each other, waiting on each other's tables. It seems just a thoroughly contemptible way.</p><p>Theo Jaffee </p><p>He writes about this</p><p>in the plural, first person, us and we. You know the like freedom club thing?</p><p>Alok Singh </p><p>Yeah, I think I honestly think that because he's the</p><p>I think it's because he's a mathematician in math papers. We always use we</p><p>Theo Jaffee </p><p>Yeah, so his pseudonym FC for Freedom Club, which is so stupid. It's like the meme about like Tolkien naming everything so, you know, in such a special way, except he names the the Doom Mountain Mount Doom. Like you have this, you know, math genius writing this brilliant essay and then giving himself the pseudonym Freedom Club.</p><p>Alok Singh </p><p>Okay, it'd be better to dump the whole stinking system and take the consequences.</p><p>Theo Jaffee </p><p>This</p><p>is not accelerationism.</p><p>give some indications of how to go about stopping it.</p><p>Alok Singh </p><p>No files are</p><p>taking us all on an utterly reckless ride. We go up to 180 into the unknown. Many people understand something of what technical progress is doing to us. You take a attitude to the things inevitable. But we for once he actually says who we supposed to be.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Did this inspire Fight Club?</p><p>Alok Singh </p><p>I mean, it was.</p><p>And there he is.</p><p>Theo Jaffee </p><p>Yeah, probably, right?</p><p>Fight Club and Office Space, I've never watched Office Space. Fight Club was inspired by Ted Kaczynski. I think there's some similar.</p><p>Alok Singh </p><p>I know Chuck, wait, know Chuck,</p><p>the author of Fight Club, Chuck Palahniuk has showed up to Jim Goad's book signing, The Taki Mag Guy, or ex Taki Mag Guy. And it's like fairly on the right wing. And cause this guy's not exactly right wing, he's like associated with it. So maybe.</p><p>Theo Jaffee </p><p>Hmm. Interesting. Based.</p><p>&#8275;</p><p>i don't know... kaczynski seems like he's... not really right or left wing yeah, he does</p><p>Alok Singh </p><p>I mean, he's insulting the left and the right. I mean, he insults the</p><p>right wing in here too, and of course we saw all the stuff about the left wing.</p><p>Theo Jaffee </p><p>He insults the left a lot, I think just to distance himself from them because, you know, it is, I guess, typically assumed that any anti-tech, anti-society anarchist would be a leftist, but he is not. Yeah, we have no illusions about the feasibility of creating a new ideal form of society.</p><p>Alok Singh </p><p>This is an</p><p>amazing sentence in 184. Most people will agree that nature is beautiful. Certainly it has tremendous popular appeal. Nature has popular appeal. That's a great phrase. The radical environmental is already holding the ideology that exalts nature and opposes technology. Yeah, that hasn't changed.</p><p>Theo Jaffee </p><p>It does, yeah.</p><p>This is so true, yeah.</p><p>Yeah, &#8275; I'm sure everyone in California knows that well.</p><p>Nature takes care of itself.</p><p>Well, you can't have your cake and eat it too. To gain one thing you have to sacrifice another. Or as economists say, there are no solutions, only trade-offs.</p><p>Alok Singh </p><p>I hope you hate psychological conflict.</p><p>Oh, this is interesting. 186. His conclusion from revolutionary ideology should therefore be developed on two levels. Oh, okay. This one is... This is just like how you do a religion. You make one version for the elites and one for the rest.</p><p>Theo Jaffee </p><p>best.</p><p>Yeah, for the elites you have HP, MOR and the sequences and then for the rest you have like the Terminator.</p><p>Alok Singh </p><p>appreciation of the problem, the price it has to pay for getting rid of the system, the capable people in the instrumental, these people should be dressed on as rational level as possible, faction ever tends to be distorted, and yada yada. On the second level, still have only read half of it. If after finishing this project of getting a textbook into Lean and I'll watch Dune 2 finally, so no spoilers.</p><p>Theo Jaffee </p><p>Have you read Dune?</p><p>Okay.</p><p>But like, what do they say about the Butlerian Jihad? This is like not actually part of the...</p><p>Alok Singh </p><p>I said that it could have</p><p>easily been based on Samuel Butler, but not that it's known.</p><p>Theo Jaffee </p><p>It's certainly based on Samuel Butler, like...</p><p>How could it not be based on Samuel Butler?</p><p>Alok Singh </p><p>I totally</p><p>believe so. Herbert wouldn't miss something like that.</p><p>Theo Jaffee </p><p>They did a literal Butlerian jihad. Yeah. So convincing was his reasoning that, &#8275; yeah, he carried the country with them. made a clean sweep.</p><p>Alok Singh </p><p>That's the most unrealistic</p><p>part that basically people are convinced by a guy talking really good. That's the part that's the most unbelievable, everyone's solution to the machines and why they managed to destroy them. I don't think we'll be, &#8275; certainly not as absolute as them.</p><p>Theo Jaffee </p><p>Uhhh</p><p>Well, no, but-</p><p>I don't know, could the Luddites have ever reasonably succeeded? Do you think? Like the actual Luddites in actual Victorian England?</p><p>Alok Singh </p><p>their goal. see, the Luddites were decently well off weavers. They started well off until they became a bit pointless. Which is why they were real pissed. Because I they're mostly skilled craftsmen and their skill just didn't matter. So developers could very well become the next Luddites.</p><p>Theo Jaffee </p><p>Yeah, I'd believe it. Peak activity 1811 to 17.</p><p>Yeah, wow.</p><p>There were more troops involved in suppressing them than the Duke of Wellington led during the Peninsular War. That's incredible. And it was at the same time as the Peninsular War.</p><p>Alok Singh </p><p>They assassinated some mill owner.</p><p>Theo Jaffee </p><p>Yeah, wow, I guess I never thought about like this happened during the Napoleonic Wars.</p><p>Parliament made machine breaking, i.e. industrial sabotage, a capital crime with the destruction of stocking frames. I think in Britain today, if the Luddites happened and you had a bunch of people smashing machines, they would just require an ID to buy hammers at hardware stores.</p><p>Alok Singh </p><p>etc. act.</p><p>Theo Jaffee </p><p>Did you see that Keir Starmer tweet where he was like, you know, knife crime will no longer be tolerated. We are banning the purchase of samurai swords.</p><p>Alok Singh </p><p>Any kind of...</p><p>Theo Jaffee </p><p>The way to discourage ethnic conflict is not through militant advocacy of minority rights. Instead, the Revolutionary should emphasize that although minorities do suffer more or less disadvantage, this disadvantage is of peripheral significance. Our real enemy is the industrial technological system. And in the struggle against the system, ethnic distinctions are of no importance. Yeah.</p><p>Alok Singh </p><p>Basically to swallow it for the system, which I don't think is happening.</p><p>will not be political revolution. That's a big difference from AI safety, where if anything, they're hoping for it be a political one as a lever on industry and technology. And the economics-wise, well, the economics of AI, unless you get wiped out, are real good.</p><p>Theo Jaffee </p><p>I think... Yeah.</p><p>Yeah, I saw a tweet recently that was like, they really mistimed the pause push because like, basically all the AI safety orgs tried to like push for the six month pause after GPT-4 came out, which was way too early. Like most normies were completely unaware at the time. Now normies are just like sort of starting to wake up. &#8275; especially with the AI art generation. I think probably the best time to seek a pause would have been a year from now. Maybe they could do it again.</p><p>and see if people would be more open to it.</p><p>Alok Singh </p><p>Yeah, maybe.</p><p>Will people remember pause if they're normalish? Very possibly not. Because you mean a whole year with all this stuff happening. That was like forever ago. Think of all the random things that happened in Trump's presidency that now I bet you couldn't list a single one of.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Yeah.</p><p>Even the first presidency?</p><p>Alok Singh </p><p>Yeah, I am.</p><p>Theo Jaffee </p><p>&#8275; yeah.</p><p>Alok Singh </p><p>I'm sure</p><p>you can list stuff, but you remember how there was like this endless stream of things.</p><p>Theo Jaffee </p><p>Sort of. I mean, probably like in your social circles, the idea of a Trump presidency was much, much weirder than in my social circles.</p><p>at the time.</p><p>Alok Singh </p><p>Yeah, definitely. Even so, was still just had like all sorts of odd moments just relative to any presidency. And because Trump himself is like kind of a weird guy at his core.</p><p>Theo Jaffee </p><p>Okay, this is really funny, wow. Whenever it is suggested that the United States should cut back on technological progress or economic growth, people get hysterical and start screaming that if we fall behind in technology, the Japanese will get ahead of us. Yeah.</p><p>Alok Singh </p><p>Yeah, I already read that. And the Japan. The Chinese</p><p>Japanese, holy robots, holy macro, Captain, the world will fly off its orbit.</p><p>Theo Jaffee </p><p>Uh-huh. But more reasonably, it is</p><p>argued that if the relatively democratic nations of the world fall behind in technology, while nasty dictatorial nations like China continue to progress, eventually the dictators may come to dominate the world. Wow, this is prescient. Really.</p><p>Alok Singh </p><p>This is why the industrial system should be attacked in all nations at once.</p><p>Theo Jaffee </p><p>&#8275;</p><p>Alok Singh </p><p>What do you think about Cuba? says look at Cuba at the end of that paragraph.</p><p>Theo Jaffee </p><p>Dictator controlled systems approved and efficient,</p><p>Alok Singh </p><p>Astro system controlled by dictator.</p><p>Okay.</p><p>Theo Jaffee </p><p>Yeah, so he was just like a...</p><p>Alok Singh </p><p>pretty trade agreement like NAFTA.</p><p>Modern man is too much power, yada yada. They fail to distinguish the power for organizations and individuals.</p><p>Theo Jaffee </p><p>People need power, yeah.</p><p>Alok Singh </p><p>Modern man is a man's power when nature goes evil. Modern nature was a far less power than primitive men.</p><p>Theo Jaffee </p><p>You need a license for everything and with the license come rules and regulations.</p><p>Alok Singh </p><p>there.</p><p>Theo Jaffee </p><p>There, yeah, wow, there's so many bangers in here. I don't think I've ever read this portion of it.</p><p>Imagine an alcoholic sitting with a barrel of wine in front of him. Suppose he starts saying to himself, wine isn't bad for you if used in moderation. Why, they say small amounts of wine are even good for you. It won't do me any harm if I take just one little drink.</p><p>Alok Singh </p><p>Never forget that the human</p><p>race with technology is just like an alcoholic liberal wine. Well, that's true.</p><p>Theo Jaffee </p><p>Yeah, that's us with the phones.</p><p>Alok Singh </p><p>Revolutionaries</p><p>should have as many children as they can.</p><p>Theo Jaffee </p><p>Wow, this dude is paced as fuck.</p><p>Alok Singh </p><p>There's strong scientific evidence that social attitudes are to a significant extent inherited.</p><p>Theo Jaffee </p><p>Wow.</p><p>Alok Singh </p><p>No one suggests that social area at all</p><p>From our point of view, doesn't matter that much whether they're passed on genetically or through childhood training, just that they are.</p><p>Theo Jaffee </p><p>Wow, I need to read this in full.</p><p>Alok Singh </p><p>The trouble is that many of the people who are inclined to rebel against the industrial system are also concerned about overpopulation.</p><p>What do you say about artificial intelligence specifically? Like you just search artificial, because I don't think if you search AI you'll get.</p><p>Theo Jaffee </p><p>This is the one and only keyword search match for artificial intelligence.</p><p>Alok Singh </p><p>And this that's not well, that would mean that we were looking at this, but go up a bit. So what did he say again for the scenario where we do develop it?</p><p>yeah, the people are really dependent.</p><p>Theo Jaffee </p><p>Intelligent machines, yeah. Right. We said, yeah, we saw this. Humans will be dependent. Due to improved techniques, the elite will have greater control over the masses. And because human work will no longer be necessary, the masses will be superfluous, a useless burden on the system. If the elite is ruthless, they may simply decide to exterminate the mass of humanity. If they're humane, they may use propaganda or other psychological or biological techniques.</p><p>Alok Singh </p><p>If your ruthless teammate has just-</p><p>Or if it consists of soft-hearted liberals,</p><p>they may decide to play the role of good shepherds to the rest of the race. Psychologically hygienic, that's quite a phrase. Everyone has a wholesome hobby to keep them busy. And everyone may become dissatisfied, undergoes quote treatment to cure his quote problem. Of course, life will be so purposeful that people have to be biologically or psychologically engineered, either remove their need for the power process or make them supplement that drive.</p><p>Yeah, I basically buy this. Maybe happy society, they most certainly will not be free. They'll reduce the status of domestic animals.</p><p>And then basically as the premise of well, what if that doesn't happen? Well, we basically are filling in like a section of Kaczynski's book with all this AI safety stuff.</p><p>Basically section like 173 and 174.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Why do we read Erewhon instead of this?</p><p>Is it because fiction is important?</p><p>Alok Singh </p><p>Butler's a better writer.</p><p>Theo Jaffee </p><p>&#8275; I'm not convinced of that.</p><p>Alok Singh </p><p>And but but there's</p><p>I think so. But I also think Butler's prescience is a more interesting given that it's well before even computers. Although he has time cognate, but he has to extrapolate it from basically a loom as a machine.</p><p>Theo Jaffee </p><p>Yeah, I agree.</p><p>Yeah, that was pretty impressive. &#8275;</p><p>I think there have been, I guess you could say the Golem is kind of an AI story. Talos was a sculpture, or a giant made of bronze who act as guardian for the island of Crete.</p><p>Alok Singh </p><p>Yeah.</p><p>You throw boulders at</p><p>Theo Jaffee </p><p>Faust, Frankenstein, yeah, artificial life. But not the same thing as artificial intelligence. Yeah, automata.</p><p>Alok Singh </p><p>RUR, something RUR. yeah, Leibniz of course talks about, okay, let me look at the Leibniz archive actually. Try looking at the notes of Leibniz. A bunch of them are online and now with the PDF tools, the fact that they're written in like six languages shouldn't be such a problem.</p><p>not those notes. Maybe try Leibniz Archive and I will also look.</p><p>Theo Jaffee </p><p>Leibniz archive, is it Hanover? Wow.</p><p>Alok Singh </p><p>200,000 pages of 50,000 pieces of writing.</p><p>I found one essay of his about spider silk for armor.</p><p>Theo Jaffee </p><p>machine would use an alphabet of human thoughts and rules to combine them. Yeah, so he was wrong about that.</p><p>Why did everyone think that AI would come about by listing out, enumerating a bunch of different human concepts and then manually drawing connections between them, as if that was possible?</p><p>Alok Singh </p><p>alphabet.</p><p>in a sense it kind of is doing that. Manual is a bit of a stretch. It's just that it's not being done by the human, which is the big draw, but it is done by a lot of brute force.</p><p>Theo Jaffee </p><p>Yeah, but not by the humans.</p><p>Alok Singh </p><p>Yeah, which is the big thing.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>five or four days is too late, but you'd have to open three cursors.</p><p>Theo Jaffee </p><p>Calculating machines were built by Leibniz. Yeah What a genius this dude was. Very underrated.</p><p>Alok Singh </p><p>Yeah,</p><p>yes, absolutely. Non-standard analysis comes from his work.</p><p>So you know I like him.</p><p>Theo Jaffee </p><p>Highly rated but still underrated, as Cowen would say.</p><p>Alok Singh </p><p>Yeah.</p><p>Theo Jaffee </p><p>Okay, so like what else do we have from Erewhon that's not just AI?</p><p>Alok Singh </p><p>Yeah.</p><p>That's.</p><p>Theo Jaffee </p><p>What was the most interesting</p><p>thing in here that wasn't the Book of the Machines?</p><p>Alok Singh </p><p>Well, this is still in the book of machines just when he compares them He says a man hardly owns himself basically because he's got so many parasites in him. Maybe such a little parasite or He's such a hive and swarm of parasites as doubtful whether his body is not more theirs than his and Whether he is anything but another kind of anteep after all</p><p>Theo Jaffee </p><p>Yeah. yeah, what I was saying earlier, he was very early on the, on the Nick Land idea that like society itself is a sort of machine.</p><p>Alok Singh </p><p>Again.</p><p>Theo Jaffee </p><p>this.</p><p>He wrote this somewhere, yeah.</p><p>We are misled by considering any complicated machine as a single thing. In truth, it is a city or society, each member of which was bred truly after its kind.</p><p>&#8275; the machine</p><p>Alok Singh </p><p>I Butler should get the credit as far as I can assign anyway because, especially given that he wrote Darwin Among the Machines, he's specifically making a claim about superintelligence.</p><p>Theo Jaffee </p><p>Doran among the machines is, yeah, very, very prescient.</p><p>Alok Singh </p><p>Also 1863, that's pretty early.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>Predicting a machine before the Industrial Revolution, that would be quite something.</p><p>I hear you can go into the the rates of vegetable animal and mineral.</p><p>Theo Jaffee </p><p>Yeah, the the rights of vegetables was interesting.</p><p>yeah.</p><p>Yeah.</p><p>Even the Puritans, after a vain attempt to subsist on a kind of jam made of apples and yellow cabbage leaves, succumb to the inevitable and resign themselves to a diet of roast beef and mutton with all the usual adjuncts of a modern dinner table.</p><p>Alok Singh </p><p>Okay, he really does insist on the speed thing. At the end of the first section of the machines, or the end of the second section, must always be remembered that man's body is what it is, though, through having been molded into his presence shaped by the chances and change of the many millions of years. But his organization never advanced with anything like the rapidity with which the machines advances. This is the most alarming feature in the case, and I must be pardoned for insisting on it so frequently.</p><p>So he certainly hones in on the right things. The speed of development.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>which now for his time probably feels like everyone looking at the left side of the exponential curve, like, farmingly small prosaic and kind of cute. And yet.</p><p>I was like, oh, machines in 1863, they're just too good. Or, well, that's not, in fact, he specifically says that's not his claim. Not that they're too good, but that soon there will be. Although his soon, I don't know if he'd speculate on how long it would take.</p><p>Theo Jaffee </p><p>Well you had some pretty powerful machines not that long after this, you know, the atomic bomb or even like, I don't know, the dreadnought was only 30 years after this.</p><p>30, 40 years.</p><p>Alok Singh </p><p>He lived in 1902.</p><p>Theo Jaffee </p><p>So he saw the final peaceful days of the West before the West fell and billions died.</p><p>Alok Singh </p><p>Also, it</p><p>says that he had studies on the evidence for Christianity, so I assume that's where the whole obsession comes from. He also developed the theory that the Odyssey was written by a young Sicilian. The scenes of the poem reflect the coast of Sicily. Blah, blah.</p><p>also has the theory that if the Shakespeare sonnets be arranged to tell a story about a homosexual affair.</p><p>Also that Homer's deities and the Iliad are like humans, but without the virtue and that he must have designed Samuel Butler Wikipedia page.</p><p>Theo Jaffee </p><p>Where are reading this?</p><p>really?</p><p>Alok Singh </p><p>This is interesting. He argued that each organism was not distinct from its parents, that it was merely an extension at a later stage of evolution. Birth has been made too much of, quote him.</p><p>That's an interesting take.</p><p>Theo Jaffee </p><p>That is an interesting take. He does talk about like birth a lot in heroin.</p><p>Alok Singh </p><p>It's like one that, like you can define it so that</p><p>it's technically correct, but I don't think it's all that helpful. But his thing about birth has been made too much of.</p><p>Theo Jaffee </p><p>Yeah, like did you read the birth formulae and the world of the unborn?</p><p>Alok Singh </p><p>I remember there</p><p>were bizarre world views on the born or unborn.</p><p>Theo Jaffee </p><p>Yeah,</p><p>because he has to sort of reconcile like Erewhon's treatment of illness as criminal with like the fact that people need to be born and also like, you know, birth is a sort of illness kind of, you know, pregnant women are sick. And birth itself is like a very, you know, messy medical thing. So like, he thinks that babies come from the kingdom of the unborn and they're like these, I guess,</p><p>almost like omnipotent like spirit like angel type beings that then get bored and decide to like wipe their memories and spawn as a human child and they have to like sign off on this.</p><p>Alok Singh </p><p>Thank</p><p>Theo Jaffee </p><p>Yeah.</p><p>So the unborn, like, the wisest of the unborn will explain this thing about why being born is actually terrible, why you should never want to be born. Was Butler an antinatalist?</p><p>or is he just gay?</p><p>Alok Singh </p><p>I don't know. I don't think he's an antinatalist. was a serious but amateur student of the subjects he undertook, especially religious, orthodoxy, and evolutionary thought, and his controversial assertions effectively shut him out from both the opposing factions of church and science. Ow. In those days, one was either a religionist or a Darwinian, but he was neither.</p><p>Theo Jaffee </p><p>Yeah, that sounds about right.</p><p>yeah, the way of all flesh.</p><p>semi-autobiographical novel that was like really long.</p><p>Alok Singh </p><p>It claims in Dune that it is named for Butler.</p><p>same.</p><p>Theo Jaffee </p><p>Who else could it have been named after?</p><p>Alok Singh </p><p>Again, I mean, I'm not doubting this, but I want to see if there is a firsthand evidence for it directly stated.</p><p>Theo Jaffee </p><p>Yeah.</p><p>I'm getting kind of tired. Is there anything else in Erewhon?</p><p>Alok Singh </p><p>He says the dog they're more self-sacrificing than humans</p><p>Theo Jaffee </p><p>I think that's kind of true.</p><p>What's name of that like Japanese dog who waited for, huh? Hachiko, yeah. Most humans wouldn't do that.</p><p>Alok Singh </p><p>Hachiko. Hachiko.</p><p>Theo Jaffee </p><p>you call</p><p>Yeah, wow. Incredible. I forgot to look at the statue of Hachiko at Shibuya Station.</p><p>sad.</p><p>Was Hachiko the first Doge?</p><p>He was in Akita.</p><p>The escape chapter reminded me of Around the World in 80 Days.</p><p>Alok Singh </p><p>I didn't know you'd know that.</p><p>Theo Jaffee </p><p>know what.</p><p>Alok Singh </p><p>around more than 80 days.</p><p>Theo Jaffee </p><p>Yeah, I used to read fiction. I actually read that a lot because as you may have observed, I have a sort of autistic obsession with maps and timelines and stuff. And you could very clearly chart out the timeline and the map of their voyage. I still basically remember almost every step of it. Yeah, London, Paris, Turin, Brindisi, and then they go...</p><p>on a boat to Port Said and down to Suez and then out in Yemen and then to India, first Mumbai and then they go up on the train but the train can't go all the way so they have to take an elephant and then take the train down to Kolkata and then to Singapore and then Hong Kong and then Yokohama, Japan and then San Francisco and then they take</p><p>like part railroad and part like dog sled across the US to the East Coast to New York and then boat back to London.</p><p>Alok Singh </p><p>Okay.</p><p>Theo Jaffee </p><p>Although there was actually no hot air balloon in the original book, that was like an invention of a movie adaptation or something.</p><p>I think this was good writing here.</p><p>Alok Singh </p><p>is across the sink, also remember that below in your math there's one with alpine gorges</p><p>When he says in the very beginning in his teaser to the book of the machines, think in chapter six, that they're destined to become instinct as vital from distinct from man as man is from vegetable or animal.</p><p>Theo Jaffee </p><p>Yeah, he also talks about how like, &#8275; specifically through, consciousness, like human consciousness is very different from whatever it is that animals and vegetables experience, if it could even be called experience and</p><p>Alok Singh </p><p>hair.</p><p>Yeah,</p><p>he goes into a Venus fly trap in a fair amount of depth.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>Yeah, machines were ultimately destined to supplant the race of man and to become instinct with a vitality as different from and superior to that of animals as animals to vegetable life. Although as they later expand, it's really more like animal like machine to human and human to animal and animal to plant vegetables. I guess vegetable to mineral and that's where it ends.</p><p>Theo Jaffee </p><p>Uh...oh yeah. Upon his asking me to name some of our most advanced machines, I did not dare to tell him of our steam engines and railroads and electric telegraphs. And it was puzzling my brain to think what I could say when, of all things in the world, balloons suggested themselves.</p><p>Huh, I didn't even notice that detail the first time I read. Because this balloon detail comes back later when he escapes on the balloon. That's cool, yeah.</p><p>Alok Singh </p><p>Yeah.</p><p>Check off the balloon.</p><p>Theo Jaffee </p><p>Hmm. All right, well, yeah, this was fun. I did like this book. I will read more fiction now. Thank you, you inspired me.</p><p>Alok Singh </p><p>All right. Let's call it.</p>]]></content:encoded></item><item><title><![CDATA[#19: Samo Burja]]></title><description><![CDATA[Superintelligence and History, Ideology, and 21st Century Philosophy]]></description><link>https://www.theojaffee.com/p/19-samo-burja</link><guid isPermaLink="false">https://www.theojaffee.com/p/19-samo-burja</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sun, 22 Sep 2024 21:34:40 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/149262675/06f61f260a7c2b9d0e4c74cf66f519a6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Samo Burja is a writer, historian, and political scientist, the founder of civilizational consulting firm Bismarck Analysis, and the editor-in-chief of governance futurism magazine Palladium.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:06 - Implications of OpenAI o1</p><p>10:21 - Implications of superintelligence on history</p><p>35:06 - Palladium, Chinese technocracy, ideology, and media</p><p>1:00:44 - Best ideas, philosophers, and works of the past 20-30 years</p><h3>Links</h3><p>Samo&#8217;s Website: <a href="https://samoburja.com/">https://samoburja.com/</a></p><p>Bismarck Analysis: <a href="https://www.bismarckanalysis.com/">https://www.bismarckanalysis.com/</a></p><p>Palladium: <a href="https://www.palladiummag.com/">https://www.palladiummag.com/</a></p><p>Bismarck&#8217;s Twitter: <a href="https://x.com/bismarckanlys">https://x.com/bismarckanlys</a></p><p>Palladium&#8217;s Twitter: <a href="https://x.com/palladiummag">https://x.com/palladiummag</a></p><p>Samo&#8217;s Twitter: <a href="https://x.com/samoburja">https://x.com/samoburja</a></p><h3>More Episodes</h3><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:00)</p><p>Welcome back to episode 19 of the Theo Jaffee Podcast. Today I had the pleasure of speaking with Samo Burja. Samo is a writer, historian, and political scientist, and he&#8217;s done a lot. He developed Great Founder Theory, the idea that societal change is often primarily driven by institutions shaped by the choices of powerful individuals. He founded Bismarck Analysis, a consulting firm that publishes detailed research on companies, industries, nations, and other large-scale societal organizations. He chairs the editorial board of Palladium, a magazine focused on &#8220;governance futurism&#8221;, and with, in my opinion, immaculate taste and aesthetics. He previously did research at the Long Now Foundation, and his Twitter bio reads &#8220;There&#8217;s never been an immortal society. Figuring out why.&#8221; In this episode, we talk about the meaning of AI on the trajectory of history, how we can get the best of Chinese technocracy while avoiding the worst, and some of the interesting new intellectual movements breaking the stagnation of the past few decades. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Samo Burja.</p><p>Theo Jaffee (01:07)</p><p>Hi, welcome back to episode 19 of the Theo Jaffee podcast. We're here today with Samo Burja So we should start the conversation today with the massive news that just came out yesterday where OpenAI announced O1, which is their new reasoning system. And they've proven for the first time that reinforcement learning can scale just like pre -training. And a lot of people are seeing this as, you know, the golden path towards AGI. So...</p><p>Samo Burja (01:12)</p><p>Great to be here.</p><p>Theo Jaffee (01:37)</p><p>What do you think about current day AI right now in terms of the kind of research that you do at Bismarck? How helpful is it?</p><p>Samo Burja (01:47)</p><p>First off, do think it's an impressive result from OpenAI that they have managed to reduce hallucination in the mathematics portion especially. Infamously, that had been a problem because most white collar professions are actually professions where reliability is the key foundation that makes something worth buying. You don't want a doctor that is</p><p>is right 95 % of the time and wrong 5 % of the time, or honestly even a lawyer, right? So I think many people are actually in professions where they are paid for consistency and reliability of a certain intellectual level.</p><p>I think that to answer your question, I personally don't actually use it that much, but perhaps I will use it more with this new generation. I have heard people use it very effectively to find search terms in literature review. So basically you ask the AI, what is something called in a specialist field like medicine, energy, law, finance, et cetera, you will usually get a</p><p>pretty decent explanation and I think the latest launch had sort of a promotion or we could call it a demonstration, a little promotion video with Tyler Cowan demonstrating this capability for economics. So until this very model I have not found that much use for it in my work.</p><p>because it kind of generated a high school essay. But let's say if this is now achieving college essay levels, perhaps there's some use there and I'll certainly be playing around with it and experimenting.</p><p>Theo Jaffee (03:37)</p><p>what specific capabilities would you want to see it have before you would consider it to be genuinely useful? Aside from, you know, just...</p><p>Samo Burja (03:45)</p><p>Well,</p><p>Genuinely useful for different people might mean different things. It's already obviously genuinely useful. I think it's done a great humanitarian service to the world by automating homework, which we probably should have actually abolished long ago since the educational statistics show it doesn't make much of a difference. So it's kind of a strange, busy, make -work thing that we've imposed on the children and young people of the world for no real benefit, which actually</p><p>is a shocking amount of the economy. So I think that it has been already very good at fulfilling many of the roles that a literate citizen or literate employee has historically fulfilled, right? It could be used to do data entry. I honestly hope they make it easier and easier to have it do data entry because the amount of paperwork we all deal with has radically increased over the last few years.</p><p>We don't think of it as paperwork, but every time you have to re -enter your passport, your password, your date of birth...</p><p>your credit card number, your zip code, your address. Every time you do that, that is actually paperwork. Every time you have to use two factor authentication. Now, of course, there are password managers that are supposed to handle this, but they're brittle. I think an AI would be excellent at parsing UI. I personally never want my phone to ask me for my date of birth again. Like I'm just done with that. Every time you install a</p><p>app it wants you to sign up to everything. So in the war on paperwork where bureaucracies offload the work they're supposed to be doing onto the user onto the citizen I'm hoping that AI will, generative AI as such and especially the text incarnation, will allow us to spam the bureaucracies with as many pages of replies and paperwork as they spam us with. So you know that's that's my big hope and structurally I think this</p><p>will make society richer because it let human beings go do what human beings do best, which currently is various forms of physical labor and certain kinds of creative original thinking. Which actually brings me to sort of this point where, you know, I mentioned Tyler Cowan earlier, one of his books, Averages Over, feels oddly prescient in the aftermath of AI because what AI has done best is automate the</p><p>the median white collar profession. So in other words, if your job is done by millions of people, it can probably generate the data necessary for these models to learn what your job is. But if your job is much smaller, if it only has 20 people or so in it, if actually maybe your firm is the only firm that does something, then I think your white collar work is going to stay intact.</p><p>The you are, the more differentiated you are, the less of a training set that your field of study produces, the harder it will be to automate. Now, medicine and law, two examples I raised, are actually areas with huge datasets. So I actually expect if not this version of OpenAI's model, but let's say a version within the next five years, will achieve the reliability of an excellent doctor. However, finance,</p><p>law and medicine have political power that protects them.</p><p>they will mandate the use of a human and the oversight of a human expert, be it a human lawyer, a human doctor, a human financial advisor. By the way, you know that you actually as a normal citizen cannot just invest in random stocks without an intermediary financial priesthood, right? Like you can't actually do that. You can do it in crypto. You can't do it in traditional finance. Oddly non -democratic. There's no good economic theory for it. It's basically just paternalism to prevent people from gambling.</p><p>Theo Jaffee (07:49)</p><p>Mm -hmm.</p><p>Samo Burja (08:01)</p><p>on stocks. But if we're doing paternalism that way, maybe we should protect people from the consumer market, etc. So I claim finance is gate -kept and is there as well. So once these jobs are automated, any job with political protection, with a structural guild -like lock on credentials, those jobs will actually not be automated by AI. Let me explain what I mean.</p><p>the substantive work that they do will be fully automated. But you can't automate fake jobs. So since you can't automate fake jobs, instead of it being a 20 % self -serving job with 80 % drudgery, it'll become 100 % self -serving. If you can spend 90 % of the time or 100 % of your time lobbying for the existence of your job, oof, in a big bureaucracy, that's pretty powerful. And in a society, it's pretty powerful.</p><p>busy bureaucrats are at the end of the day actually politically not that powerful. It's lazy, well -rested bureaucrats that are powerful. So on the other side of this, any job that does not have such protection, that is open to market forces, well, it'll be partially obsolete. It will increase economic productivity. So in my opinion, the real race in our society is will generative AI</p><p>empower new productive jobs by automating old productive jobs faster than it will empower through giving them more time to basically pursue rent seeking, the rent seeking jobs of our society and never underestimate the ability of an extractive class to really like like lock down and crash economic growth. I think this is the default of human history and economic growth is the exception.</p><p>Theo Jaffee (10:03)</p><p>So speaking of human history and AI, on the grand...</p><p>Samo Burja (10:06)</p><p>And by the way, that's why I emphasize so strongly, I hope the AI helps us beat the bureaucracies rather than, you know, I don't think it'll eliminate it. I think we should use it as a weapon against bureaucracy. Yeah.</p><p>Theo Jaffee (10:18)</p><p>I agree. So on the grand, you know, millennium scale arc of human history, like what does AI mean? Is it like, will lead to more of an end state once we reach AGI? Will the post -AGI world be a kind of, you know, epilogue to human history or will it be something entirely different? Will it lead to new eras of human history?</p><p>Samo Burja (10:41)</p><p>Well, it really depends on what you mean by AGI. The term has like something like six distinct uses. There's the official OpenAI definition, which is a little bit circular.</p><p>Even if you read their documents, their official corporate definition is something like, AI that automates all jobs. Which, by the way, mind you, I think most jobs do not actually require agentic, human, general purpose intelligence. I think most jobs are actually just fairly complicated scripts, but most jobs, I think you could honestly automate them with a sufficient amount of spaghetti code.</p><p>we did not have this transformer architecture revolution and we had 10 ,000 years more to fiddle about with coding and programming. actually think just non -AI computer programs could handle 95 to 99 percent of the jobs out there including with robotics and so on even without learning, without machine learning. And I think that's because our economy is actually shockingly primitive.</p><p>I can give some examples of how our economies are shockingly primitive Australia, which we think of as a first world country is a resource -based economy dig up rocks Raise sheep sell this What is this minecraft like how can a first world country? achieve such a surplus by basically selling sheep selling copper Selling various minerals. It's kind of hilarious. It shouldn't happen in</p><p>2024. Yet it does, because the other economies are not that advanced either. At the end of the day, objectively speaking, a car, be it an internal combustion engine or an electric vehicle, is not that complicated a machine. You can explain it to a smart high schooler over the course of a week, every single component in that car. Metallurgy is a bit trickier. But at this point, when we're talking machine tools, metallurgy, robotics, cars, what is</p><p>That's the economy of Germany and Japan. Possibly the most complicated thing ever is something that can be handled quite well by a small island nation. How is it possible that Taiwan produces so much of the world's semiconductors? It's a country of 16 million people. What are the other 8 billion people doing? And the answer is the other 8 billion people on this planet are actually doing stuff not too dissimilar from Australia. Like they're digging up stuff, they're growing stuff,</p><p>About 300 million or so are engaged in making plastics, making steel.</p><p>100 million or so are busy making cars, busy manufacturing, et cetera, et cetera. And then we have all the lawyering and the bureaucracy, cetera, et And you know, let's say a million people, let's say 2 million people are directly involved in the manufacturing of semiconductors. If we added up the labor force of TSMC and like the labor force of, you know, Foxconn and the labor force of Tokyo Electronics and ARM,</p><p>in Britain and maybe let's count ASML too. Maybe if we jury rig this number we can get it to like 20 million people are maybe involved directly in the manufacture of semiconductors which are almost the most complicated machines in existence other than something you know singular like the Large Hadron Collider. Like the chip fabs are immensely complicated. So if I break down the world's economy as to what are these eight billion humans doing you realize we kind of don't need artificial intelligence</p><p>intelligence to automate these jobs. I don't know, Theo, if you were immortal and I gave you 200 years to write a program that doesn't use machine learning, that knows how to herd cattle, I bet you could do it. Right? I bet you could do it. I don't think it's that hard. Well, 200 years haven't passed, okay? We've barely automated spreadsheets in the 80s, right? We've barely figured out how to send money, a made up thing that could be easily represented with electrons.</p><p>Theo Jaffee (14:46)</p><p>Why hasn't it been done then?</p><p>Samo Burja (15:02)</p><p>over the place. So I really, you know, I really do think that</p><p>the world's economy is almost, well, okay, it's a bit more complicated. If you look at the history of automation, automation tends to happen right next to an automated field. So as soon as you automate something, you have made it machine -like predictable, you have eliminated variance, and then whatever output you produce there is now so regular that you can automate whatever is</p><p>taking that as an input. It is difficult to introduce automation into a system where everything is custom, unique, intermittent, following natural cycles day, night, etc. It's easy to automate something when you are working with the high predictability of a machine. So because of this, and they've been good economics papers, I recommend people read some of the economist Robin Hansen's writing on this.</p><p>Theo Jaffee (16:06)</p><p>Love, Robin Hansen.</p><p>Samo Burja (16:07)</p><p>he's great. So if you read on his econ papers and some of the papers that he cites, he points out that in fact, you have this almost spreading way automation through the economy where the easiest thing to automate is something that can take machine inputs. The hardest thing to automate is something that requires inputs from the natural world, like the behavior of sheep in this example, or the chaos</p><p>of geology we still actually don't have very good geological theories. If you dig down two miles there's it's a very hard with sensors and our theory to predict what you exactly would find at any point in the world. Mining always comes with surprises right? They kind of are doing exploratory digging that's why it's so expensive. Even for something as well understood as oil there are of course surprises.</p><p>And then you have, you know, let alone the sort of chaotic needs of like, when exactly do you need a divorce lawyer or something like that, or when someone dies and there has to be a will interpreted, et cetera, et cetera. Like if we go to the white collar world, it becomes very natural. The service economy is supposed to be human oriented. And when we see automation in the service economy, it's almost like a form of rationing, right? At the airport, you are supposed to learn how to you know, scan you.</p><p>your tag or get your tag printed, know, stick it yourself onto your luggage, put it out there. And there's still a human being walking around working out various issues. Like for example, maybe your ticket was booked through a different airline and the silly terminal doesn't understand or allow the input of the code of the other airline that's partnering with your airline. Trivial things like that, fragilities that happen because of, you know,</p><p>cases that haven't been exhausted. So I think that there's like a combinatorics problem here where it's just explosive number of cases. When you automate something, you're actually reducing the number of different outcomes. A robot putting a door onto a car, onto a car frame, will do it exactly the same every single time unless it breaks. If it breaks, it breaks totally. Someone comes, fixes it, then it puts the door back on exactly the same.</p><p>I don't think a human worker does it exactly the same way or at least even if a human worker does it exactly the same way a different worker will do it a slightly different way and You know, that's the Industrial Revolution. It's actually been artificial simplicity We have been producing artificial simplicity since the start of the Industrial Revolution by making every you know, every teacup every mug the same</p><p>we have used economies of scale to grow vastly wealthier. So if we then joke about the definition that open AI uses for general intelligence to loop back to your original question, you know, a machine that automates most existing work.</p><p>When was AGI achieved? Well, James Watt achieved it in the 18th century, right? The steam engine already achieves that. But of course, new jobs showed up and our economy complexified. So it's really my hope.</p><p>that this kind of significant machine learning transformer architecture based AI, whether or not we think of it as AGI, I think it will automate vast amounts. It will automate vast amounts of work. But</p><p>Hopefully it'll make our economy more complex and will create more jobs and They will be things that it can't yet do now with regard to true general intelligence so if we have a you know, say the difference between Me learning to play a game of chess and AI learning to play a game of chess Chess is kind of an easy example because there's an exhaustive rule set in a way chess is also artificial simplicity</p><p>Basically, the machine can play millions of games and I cannot. And what is the difference between me learning to write an email and the AI learning to write an email like a Google.</p><p>Well, how many emails does the Google machine get to read? A billion? Two billion? Ten billion? Fifty billion? I don't know. It's definitely somewhere in the billions. How many emails have I read in my entire life? Well, it might feel like a billion. It's certainly not. It's like maybe a hundred thousand. Maybe a million. A million is too generous, I think, if I count all the spam that's deleted. So let me just, you know, quickly estimate and say a hundred.</p><p>Like on the spot, if you push me, how many emails have I read in my entire life? Or skimmed, probably a hundred thousand. So...</p><p>What does this mean? Anywhere where there are a billion emails or where there is a rule set like in chess that can generate exhaustively all the cases or at least as many cases as the machine can ingest, big data will be sort of victoriously succeeding at performing peak human capability or even modestly superhuman, right?</p><p>But what about cases where there are not a billion examples? What if there are only 10 ,000 data points just in existence for a problem?</p><p>I actually don't think the AI will be very good at learning that. And I think that illustrates the sort of difference between what I think is happening with scaling. Let's remember it's not just the scaling of compute, it's the scaling of data. Either works super well. I'm sure OpenAI scraped to the entire internet, as have the other AI companies. And, you know, within the bounds of legality, presumably.</p><p>but I just don't, presumably, move fast and break things. That's what they say. So.</p><p>Theo Jaffee (22:32)</p><p>Presumably.</p><p>Samo Burja (22:45)</p><p>think that we will see some surprising differences between human intelligence that learns from few examples and few data points and current generation, the transformer architecture. And of course, let's not forget diffusion, right? Diffusion is what is actually generating all the pretty images. And by the way, isn't that interesting? Why are transformers worse at generating the images? If we presume intelligence is a single thing,</p><p>and humans have that single thing, surely it's the same skill I use to paint a picture as I use to write an essay or to solve an equation or perhaps even to throw a basketball. There are lots of people who are betting on the transformer architecture in the physical world. Yet, defeat. Yeah?</p><p>Theo Jaffee (23:34)</p><p>Well, in the GPT -4 .0 blog post, they showed examples of how they used it to generate images that were very, good. And they were very good in...</p><p>Samo Burja (23:44)</p><p>Is that a Sphinx where that's a call function like it wasn't the previous generation or are they claiming that it is the same architecture? Okay, so it is native. Okay, cool.</p><p>Theo Jaffee (23:49)</p><p>No, it's native. It's native, yeah.</p><p>And it's good in a different way from like mid -journey, for example. Mid -journey is very kind of artistic and it has like taste in a way that 4 .0 doesn't yet, I guess, but 4 .0 is able to have, you know, more precise text and, you know, image persistence and stuff. So I think that this is probably something that's solvable by just making the models more multimodal and trading them on more kinds of data.</p><p>Samo Burja (24:24)</p><p>Possibly, I still think that it is notable that transformer and diffusion architectures are comparably good, let's say. I will read the paper. I also ask my AI friends because I feel often people take a chimeric approach. It's not visible to the user, but I believe you. I believe you, I believe the paper.</p><p>point being the fact that completely different architectures are competitive at all at a similar level of compute suggests to me that in the near future we will see a Cambrian explosion of different forms of intelligence and that actually intelligence isn't one thing but it's almost like a family of radically different things. We have just only been exposed to human intelligence by a quirk of evolution though of course</p><p>even when we're looking at human intelligence and we interact with the animals we've domesticated, at times these much dumber animals really outperform us at tasks we would consider cognitive, or G -Loaded, or intelligent, or...</p><p>borderline magical, right? Like primitive people was considered animals had forms of magic. So even in the natural world between mammals and birds, let's say if those are the two smartest broad branches of life, I think that we perhaps already did see multiple types of something that could be called intelligence. And I think that in the next hundred years, we will be continuously surprised as architecture</p><p>change, a scales increase by all the amazing things that humans could never do that the different forms of artificial intelligence will be able to do. And also shocked and confused by all the things they can't do. So I actually think that the patterns of what different forms of intelligence cover will end up being radically different. Now, if I'm wrong,</p><p>this will be sort of disproven in the next five or 10 years. But I suspect there's going to be something very surprising waiting for us when we interrogate our primitive philosophical concept of intelligence. And you know, there's like a way in which if we reframe machine learning as industrial scale mathematics or industrial scale statistics, we get very different intuitions of what it can do and how far it can go. And of course,</p><p>not denying the deep socially transformative impact of it. At the end of the day, does a submarine swim? Does a plane fly? It certainly doesn't fly the same way as a bird does. A submarine doesn't swim the same way a dolphin or a human does. But obviously those are extremely useful things. But it's good to remember that until the most recent quadcopter revolution, birds could do things that jet aircraft never could.</p><p>They could land in tight spaces, leave tight spaces, hover a certain way, know, pick up pollen from a flower. And, you know, of course, jet aircraft in the 1960s could fly up in the stratosphere at Mach 5. And no bird can do that, OK? No bird can do that. So I think that that is like a surprisingly deep analogy where if we apply this to movement, if we apply the same thing to intelligence,</p><p>will learn surprising things. I think a lot of my friends, and maybe they were naive, a lot of my software engineer friends were genuinely confused when chat GPT went viral. They were like, but if you wrote a for loop, then this will be an agent. Do you remember all the agent startups that popped up?</p><p>Theo Jaffee (28:25)</p><p>Mm -hmm. Yep.</p><p>Samo Burja (28:26)</p><p>They didn't work. They basically didn't. It kind of decoheres, right? If you like loop it on itself without a human input, it kind of decoheres and doesn't really pursue agentic actions in the world. That's surprising because even if it's not multimodal, even if it's just text, dude, text can be an input for other things. It can have actuators, can have sensors that represent the data as text. Maybe all you need is text. That kind of should have worked. And I think we used to equate intelligence and agency.</p><p>and right now we're seeing the two decohere in an interesting way. People right now are not confused, but they were confused in 2022. And I think this is one of those things that as soon as we are less confused or where our concept of intelligence is enriched, either the popular concept or the philosophical concept or the engineering concept, we almost don't remember what it was like before.</p><p>Whenever your model of the world becomes more complicated, it can be hard to remember what people don't know. If you want a reminder of this, try talking about your field of expertise with someone who's not in your field. You will assume they understand far more than they do. And when you ask them for their concepts, you realize it's not there. And I think if we could talk to ourselves in 2020, almost everyone alive today could blow the minds of people in 2020.</p><p>when discussing intelligence in machines and so on they would say the Turing test was passed but we don't know how to have the AI pursue an agenda and we don't know how to have you know the AI not just lie and make up things let's say maybe with 04, sorry with 4 -0 maybe with strawberry it's like actually solved and I think that's a great achievement I have to test it first before I can say with confidence</p><p>But still, we would surprise people in 2020. And I think we'll find ourselves perpetually surprised. I think we should stop expecting the AI to fly like a bird or swim like a dolphin. And it will, in fact, go very fast and very, far. And certain unusual things will be left to us humans for a long time to come.</p><p>And I'm not sure when exactly we will exhaust this cane brain explosion of intelligence. But there will be radically different AI systems. They will come to pick up more and more of the economy. They will eventually, once the will problem is solved, once we figure out how to give them will and agency, they will become politically powerful. They will very quickly become more politically powerful than humans. If there is any resource scarcity on the margin, they will immediately use their political power to pull the plug on</p><p>any sort of UBI or environmental regulation that the humans need. The atmosphere has to be made of oxygen, say the puny little humans, but they don't matter. So then humans go extinct and that explosion continues and eventually we have a world of completely new life forms. Now, I think that is at the extreme, but up until the point where the value of human intelligence isn't exhausted, humans will keep getting richer and richer.</p><p>Though they might start becoming politically disempowered once machine agency enters the picture. I think it is we're pretty lucky that the AI has not gone political as soon as the AI is politically powerful We will be in trouble. I'm actually happy with open AI or an anthropic or these big companies being very politically powerful Because at the end of the day, they're still humans. They want the atmosphere to be composed mostly of nitrogen and oxygen They want the temperature to be in a habitable range</p><p>Maybe there's mild disagreement on the margin about how many parts per million of CO2 we want, but like it's broadly all okay.</p><p>Yeah, so I don't know, you know, humans are very power hungry. So that's sort of my optimistic vision for the future is that we ride this came random explosion intelligence. We ride it much further than it is right now because I have a lot of faith that particular kinds of human intelligence will have an advantage. And then at some point our monkey brain like freaks out and we're like, the machines are too powerful. And then we just stop.</p><p>and then we maintain political power and we just enjoy our multi -planetary, high intelligence, high wealth civilization and perhaps expand horizontally across the galaxy with slowly, light speed limited ships rather than go all the way to being politically replaced and disempowered. So, there. That's kind of my projection. My projection is, yeah.</p><p>Theo Jaffee (33:16)</p><p>Hmm, so, almost.</p><p>Almost like the the Ian Banks culture series</p><p>Samo Burja (33:25)</p><p>Not quite. In that case, the humans are kept as pets by the very advanced intelligences and clearly the motherships are much more powerful than the humans are. I'm sort of relying on man being a political animal and that we're going to have like a primitive animal -like cunning that will keep us one level ahead of a lot of the super intelligences that in theory should be able to think circles around us but are going to have extreme difficulties. And you know, there's fun science fiction of this type. There's like, you know, science fiction where</p><p>The machines don't know how to lie and the humans know how to lie, for example. Though I don't think that's the case here. Clearly we have trained Chachupiti to lie to us very well, right? But anyway.</p><p>I think that it is difficult to reconcile the existence of human beings with sufficiently advanced AI. However, that might not happen. And I think we have a far more interesting history ahead of us for the next few hundred years. I don't think it's going to be the Eliezer Yudkowsky sort of rapid takeoff scenario. I think it's going to be much weirder than that. It's going to be like an explosion of colour or</p><p>shapes or... we will find the cognitive environment much much diversified. The Cambrian explosion comes first and then eventually comes a mass extinction where one of the forms of intelligence just outcompetes all the others. But I think we're going to enjoy this Cambrian explosion of different forms of intelligence for very long time.</p><p>Theo Jaffee (35:02)</p><p>Yeah, I hope you're right. switching topics a bit. A couple months ago, someone tweeted, Palladium just wants Chinese technocracy with American characteristics. And I thought this was really interesting because this seems to be a common thread of critiques of this kind of palladium ideology, which is basically, Palladium wants America to become more like China. So...</p><p>Samo Burja (35:28)</p><p>No, it's just false. It's just butthurt libertarians, bro. It's just butthurt libertarians. They got triggered by a thread that one of my employees wrote, which honestly was a great thread because it pointed out that China is a consumerist capitalist society. I don't know why this is controversial in 2024. I don't understand it, but...</p><p>think it's cope, right? I think we want China to be like the Soviet Union because we know how to beat the Soviet Union. We just grow our economy better, And the claim that GDP go up is the same thing as ship, steel, and drone production go up. Well, that was kind of true in 1945 when America won a world war. It's not true now. So really, I think, you know, if I were to give a critique, I would say that I actually want America to be more like itself.</p><p>you</p><p>I want the government to be able to build a bridge. want the taxes to be lower. I want the inflation to be lower. But I Palladium has no single ideological position. We publish writers with a wide range of perspectives. There's, of course, many very smart libertarian friends who have written for us. We're nonpartisan. We've had people who have written immigration skeptical, immigration positive pieces. The tagline is governance futurism. And governance futurism presumes</p><p>that government and society and culture in the future will be different than they are now. So do we want America to change, to develop? Yeah, but we're not advocating for any specific thing. We are examining what happens around the world. And I refuse to take this like false dichotomy where I'm supposed to pretend China's gonna run out of food in five minutes or I'm supposed to pretend it doesn't matter that China builds five times more ships than South Korea, which builds five times more ships than we do. I refuse.</p><p>refuse to pretend that that's the world we live in and I refuse to be stupid and jingoistic. I would actually, here's the thing, I will never fire someone for tweeting or disagreeing with me in any way. I believe intellectual diversity is important, but you know, I would fire someone for being an idiot. So I really refuse to hire idiots. And by refusing to hire idiots,</p><p>I sometimes rub people the wrong way because anyone with the brain who is a genius or a smart even original thinker they will rub simple categories the wrong way. So let me challenge you right back like did you read Vitalik's piece on Zuzalu in Palladium magazine?</p><p>Theo Jaffee (38:15)</p><p>I don't think I read</p><p>Samo Burja (38:16)</p><p>Why I Build Zuzulu is a Vitalik Buterin piece where he talks about creating a pop -up city. Or there's another piece, how cryptocurrency will transform migration, where it actually argues that populations will become much more mobile around the world.</p><p>and state power over individuals will decrease. Or I could name any other dozen pieces. Look, I think people are just stupid about China and they want to hear America, yay, China, boo. And I'm like, hey, let's not ignore that China is destroying us industrially. We don't have to industrialize the same way, but we do need industry. We need to build chips. We need to build ships. We need to build EVs. Not even America.</p><p>actually. Like it's fine if we French or it's fine if Germany builds stuff. Oops, the German economy is tanking. It's fine if South Korea go build stuff for us. Oops, South Korea is going extinct because their TFR is 0 .7. I'm tired of pretending we don't have big problems because I like our civilization. I want it to do super well and all is well.</p><p>sort of let's go back to grilling, let me just code, whatever man. Politics is already harassing the code, you need to think about politics back and that's why think Palladium is really the first magazine of the 21st century because it refuses to do this like left right thing, it refuses to do this like kind of like blind very narrow</p><p>Yay, our team booed the other team. So if people want to read that as pro China, I think that just tells that in their mind, the only alternative to our dysfunction is China. And you know what? The Chinese agree.</p><p>The Chinese government actually agrees that the only alternative to American dysfunction is China. And I think we should blow up that dichotomy because that's an economy that ends with us censoring our Internet to protect democracy. It ends with us tracking the movement of all Americans. It ends with us continuing to buy all Chinese products, but slapping tariffs on them to save the Boeings and the Intels of the world rather than</p><p>the SpaceX's and the Androids of the world. So yeah, that would be my response. And I got quite animated because I'm just like, you know, it's like you can spend 10 years giving nuanced commentary and then a person on Twitter gives like a little dunk. Whatever. I disregard. I disregard. If you'd not asked me, I wouldn't have even, I've not thought of it twice since. just, you know, someone's an asshole blocks me, I'll block them back. And it's super funny because,</p><p>I don't really think that anyone remembers that a magazine is supposed to be an intellectual culture with many different views. I think we're so used to the hyper -partisan propaganda environment that we've lost the social technology. So it actually goes back to the view that I stated that Western civilization has almost completely lost the infrastructure for complex and nuanced thought.</p><p>I think everyone is simplified and stereotyped. In politics as well as industry, have produced artificial simplicity, making us artificially dumber than we actually are.</p><p>Theo Jaffee (41:47)</p><p>Okay, can we drill down on that a bit? Western civilization has lost the infrastructure for intellectual complexity and nuance.</p><p>Samo Burja (41:54)</p><p>for complex thought. Well, you know, this is actually a way in which there was a very excellent, since we're talking about palladium, there was a very excellent piece by my friend Ben Lando Taylor on the academic culture of fraud, which is documenting and discussing pervasively the prevalence.</p><p>of people not only p -hacking or statistically massaging the data, but outright fabricating datasets. And note, in fields like medicine, where that costs lives, where people die, and Ben proposes...</p><p>the radical but sensible solution that actually academic fraud should be not just a fireable offense. It should cause, it should risk jail time because you really are causing harm to others. With financial fraud, have this and with academic fraud, we should have some of this as well. The academic institutions today mostly hush up and protect proven instances of fraud. So I really recommend the</p><p>audience go and read the article it was shocking to me and revelatory to it was a revelation to what an extent basically an academic department will not want the reputation damage of having you know there been demonstrable fraud there so strike one for academia academia is failing to sustain the culture of science let's go for strike two the media environment most social media networks in the Western world</p><p>And this is the way in which I wish we were more different than China. I want us to be radically different than China Give Give straight statements of suggested censorship so they will give statements to social media companies like meta and Facebook and Sir, you know meta slash Facebook Like tick -tock and so on they will suggest you take this down</p><p>And in places like Britain we saw recently, they're not even averse to mass imprisoning citizens. The United States is lucky within the Western world to have the First Amendment. It protects us from state mandated censorship. But I do think that there is state -suggested censorship. We have plenty of evidence that old Twitter, the Twitter files that Elon encouraged people to read but no one read, I don't know why.</p><p>possibly because we know that you're going to end up having different views and you're going to feel emotionally disconnected from people who have conventional views. There's plenty of evidence of the White House, the State Department, DOJ,</p><p>Sending basically threatening emails to big social media companies telling them to ban people pull content So calling this state suggested censorship is a big deal and I think Elon Musk is doing the country a great service by opening a Freer discourse environment. So that's number two public discourse is threatened X is like the only movement X comm is the only website that is closer to the</p><p>of 2001, you know, adjusted for IQ of the general population, but still closer to the freedom of the internet of 2001 than the extremely gated, curated, manicured, and fake internet of 2018. I still remember when the YouTube comment sections first became much more polite and then they became much more stupid. Because if you enforce, you know, censorship in the name of, you know, fighting, hate, or whatever,</p><p>you're going to lop off both sides of the distribution and you're gonna have a chilling effect and then of course it'll get stupid, right? So that's you know sort of the next point of artificial stupidity and then perhaps like the most important one is I think we have</p><p>We have metabolized so much of our assumptions of what it means to be a citizen.</p><p>in a free country of what level of education and agency and individuality we are supposed to accept. We have burned through it. Every single political race of the last 50 years has weaponized more parts of individuals' identities and individual feelings. Did you ever read that study that compared the reading level?</p><p>of the State of the Union address over the last 200 years. Okay, it's going down, right? Exactly, it's very generalizable. And if you look at a televised debate, not a presidential debate, mind you, just a debate between intellectuals in like 1960s or 70s television,</p><p>Theo Jaffee (46:52)</p><p>Yes. Very generalizable,</p><p>Samo Burja (47:10)</p><p>my God, these people would be, each of them would be a Jordan Peterson type sized audience, but we somehow don't have as many of them. And I think it's because if you don't bat for your team a hundred percent of the time in a modern democracy, I think people assume you're a bad person, people on your team. So if you're a Democrat, you have a conservative opinion, or if you're a conservative and you have a progressive opinion.</p><p>I think you're kind of considered a bad person or not totally reliable. People have gone extremely moralistic. Pardon?</p><p>Theo Jaffee (47:43)</p><p>Arguments are soldiers. Arguments are soldiers.</p><p>Samo Burja (47:48)</p><p>Yeah, I mean, but they didn't used to always be. And Eliezer Yudkowsky actually writes about this, right? you know, he coined, I think, did he coined the phrase arguments as soldiers or was it someone else? I remember an essay. Yeah, yeah, yeah. Well, he points out that like, just the tone of a 1940s PSA is treating the citizens, the viewers, as adults.</p><p>Theo Jaffee (47:59)</p><p>pretty sure it was him but it was was on less wrong.</p><p>Samo Burja (48:14)</p><p>And a PSA today would never do that. It would just appeal directly to feeling. It would not try to invoke reason. It wouldn't try to invoke this concept that we should restrain our emotions and we should be more broadly aware.</p><p>Because the political race has sort of ground down over time, over the last 70 years we've had an erosion of the concept of a citizen where new pieces are chipped off every single presidential election, at least to be used as fuel to win our team or the other team, right? Because of that, it has become not.</p><p>in anyone's interest to educate people in the Aristotelian sense, Aristotle defined an educated mind as a mind that can consider opinions different than their own, like consider an opinion without accepting it. And I think right now, the cognitive barriers,</p><p>and cognitive sophistication has been broken down so much that even though our IQ is probably just as high as the 1960s, maybe a little bit higher due to the Flynn effect, though the Flynn effect's been going away since the 1990s.</p><p>It's like we immediately ingest the information. It immediately goes into our opinion. If we notice that it disagrees with our team, we get angry and we immediately morally disown the person that gave us that information. And then we go on believing what we believed before. We've been hardened.</p><p>And in that situation, no dialogue is possible. But in that situation also means that groupthink is more powerful. Like one way to think about this is like an analogy with superconductivity. You know, if you could get the resistance to drop to zero, no current is ever lost, right? If we reduce this mental resistance in people on our team, whatever our team is, I'm like, you know, I honestly don't even care who wins this election. That's another way in which I'm such a heretic.</p><p>care if it's Harris administration or Trump administration.</p><p>they will be bad in different and unique ways and it's totally fair to have strong feelings about how each will be bad. But I think it's such a small part of our system and our problem that no one who is president could possibly fix these more basic ones. But let's say on our team, if you have high intellectual resistance or the ability to view a different position without immediately adopting it and repeating it or immediately rejecting it and then refusing</p><p>to hear anything more of it.</p><p>parties get stupider. So it's not just two smart teams fighting each other, it's each team will be dumber because the selection filter on coherent ideas is gone. So in the process of two sides fighting each other, we have ground down our expectations of what it is to be a citizen. We have not educated people how to be citizens. And as a result, each of the group thinks on their own is much stupider. Like,</p><p>You know, you compare the Democratic Party in 1995 and 2025, it's like no question which is the stupider party. And you compare the Republican Party of 1995 and the Republican Party of 2025, so next year. And I guarantee you, the 1995 one, they'll just be smarter people on average with more nuanced arguments and more nuanced points. And we can make even the same comparison between 1995 and 1985. And note, I'm not talking here about</p><p>their socially conservative views. I'm just talking about how they speak to each other, how they come to consensus, how they organize things like party platforms. I know this is going to shock Gen Z, but even 10 or 20 years ago, politicians were not known for bangers. They were known for pieces of legislation they pushed through. And 30 or 40 years ago, people would actually read the party platform and care about it, like normal people, not even Noah Smith tier political monks.</p><p>So, I don't know. think, I think we need to reset our expectations of the cognitive sophistication of the citizens to a much higher level. And we need to viciously shame all attempts and pushes to simplify things. And</p><p>pursue group strategies rather than individual strategies because that's the only hope to make something like a parliamentary system or representative system work.</p><p>Otherwise, the democracy aspect will be reduced and arguably has been reduced to no more powerful in the American system than the Queen of England or the King of England now is powerful in the British system. Arguably, Britain is a bureaucracy pretending to be... Sorry, it is a...</p><p>It is a bureaucracy pretending to be a democracy, pretending to be a republic, pretending to be a monarchy. So they have several layers of political dissimulation. In theory, the king is sovereign, but oops, parliamentary supremacy. But actually the people have immense power, but actually, you know, populism is bad and we should have experts decide things. So in reality, our system of government has shifted from democracy to bureaucracy to varying extents. And America has the most democracy</p><p>Theo Jaffee (53:55)</p><p>Mm -hmm.</p><p>Samo Burja (54:05)</p><p>of any Western country except maybe Switzerland and that's why this is so disturbing and dangerous to see this erosion of citizens capabilities to work in it. So in other words I wish these citizens were much more politically sophisticated and I want them to hold their political opinions and convictions strongly and I want them to know how to disagree civilly.</p><p>Theo Jaffee (54:29)</p><p>Is that really true by the way that the US has more democracy than any other western country except Switzerland? You know seems like we have Sweden? Norway? France?</p><p>Samo Burja (54:35)</p><p>Who would you name? Which country is more democratic? I feel Sweden is an extremely well -run country.</p><p>I think Sweden is a very well run bureaucracy. What do I mean? Swedish civil servants received international world health guidelines for COVID. And instead of they looked at the data and they very autistically said, this doesn't quite make sense. We're going to lock up the old people because they die of COVID. And we're not going to have general lockdowns to lock down young people. And the result has been lower deaths. For example, Sweden also decided to pursue different</p><p>economic policies Sweden actually is a surprisingly capitalist country simultaneously as being a social democracy this is kind of a Nordic model but I think in Sweden decisions are mostly not made through elections they're made through experts and both Sweden and France mind you have Sweden and France have very much</p><p>like severe limits on speech, not perhaps in practice as many people are imprisoned and become political prisoners as in the modern United Kingdom, but certainly some. And in the case of France, like, you know, the individual,</p><p>liberties are much reduced. Now the French do have a right that Americans have much less of so the French can show up, protest and have the whole country be locked down because in their mythology of liberty, their mythological version of liberty isn't, it is the people gathered together and stormed the Bastille and behead the aristocracy. So that's why in France it's kind of illegitimate to suppress a farmers protest or a rail</p><p>or a union strike or something like that because you're going against the foundational myth of liberty, it would be kind of comparable if in America you seized all the guns because in the American mythology of freedom, it was, you know, people with guns shot at the government, the British government, until it went away. And both of these stories are kind of true and kind of dumb and false in their depiction of the American Revolution and the French Revolution, but the myth is very important for political legitimacy. So there is a way in</p><p>which that is democratic. So the fact the French can just go on strike on any random thing they want, that is democratic power in action. But I think you'd be hard pressed to deny that by any measure, France is like more regulated, there more laws, citizens in most ways have fewer rights, there's less free speech.</p><p>I think France is actually a surprisingly good elective monarchy because when it has a strong president, the president of France has like very significant powers, not only a longer presidential term.</p><p>But actually, the bureaucracy mostly listens to the French president. So you could argue that that's a democratic, monarchical aspect to the government. just the sheer number of departments, regulations, like try starting a startup there, Like economic freedom, but also political freedom, it's much more constrained. So yeah, I would claim France is more of a bureaucracy than the United States. I mean, would you disagree with that?</p><p>Theo Jaffee (58:04)</p><p>No, I would not.</p><p>Samo Burja (58:05)</p><p>Okay, well, perhaps the disagreement then could be is the US, you know, the US might have a mix of bureaucracy, plutocracy and democracy. And I think my center left friends would say, maybe Europe is more bureaucratic, but it's still more democratic because it's not plutocratic. But that argument doesn't really work for a place like Sweden or France either, because, you know, let's remember, second richest man in the world is a Frenchman that owns a bunch of luxury brands.</p><p>Luxury brands are the ultimate fake job. You are riding on incumbency. Actually high taxation would destroy you. So in France, Sweden, and lot of European countries, the social democratic pact is the following. If you have money, you can inherit it through loopholes. And old money persists. If you don't have money, your income will be taxed and it will be hard for you to make more money. So incomes are very harshly taxed, but you can have a family foundation.</p><p>that owns your company and you can be in charge of your family company and you can be in charge of the family foundation and you basically have a 0 % tax rate.</p><p>That's true of Austria, Germany, Sweden. This does a few good things. It preserves the Mittelstand economies, but an economically equal society that this is not. So I would say that Europe is plutocratic in a different way than America. In Europe, old money is supreme and the government approaches its old companies and ask them, what can we do for you? And in America, new money is supreme and companies show up and ask the government, hey, what can we do for you? Because we're just getting started.</p><p>and don't you want to buy our much cheaper drones, et cetera, et cetera, instead of the ones provided by the old companies. But that's only directionally, right? Both have elements of new and old money power.</p><p>And generally speaking, think bureaucracy is much stronger than plutocracy. And in America, would say democracy is very strong because it is possible to build a base of popular support and launch your political career. And by the way, on the left as much as the right, know, people, my conservative friends might not like this, but AOC is an example of democratic power.</p><p>She's speaking directly to the voters and a significant set of voters really like what AOC has to say. So AOC is a champion of democracy. Donald Trump is a champion of democracy. When you hear populism, that usually just means someone doesn't like democracy in action.</p><p>Theo Jaffee (1:00:39)</p><p>So.</p><p>I think we have time for one more question. So we talked about how complex political thought has gotten worse over the last few decades. And it seems like just philosophy in a lot of fields have reached almost the kind of stasis. So, you know, aside from Palladium, the first magazine of the 21st century, like what are some of your favorite ideologies and philosophers and work specifically from the last 20 to 30 years, the 21st century?</p><p>Samo Burja (1:01:10)</p><p>think a lot of the people who got their start from blogging, and some have migrated sub -stacks, some have not, have written very insightful stuff. I think that Paul Graham and his early essays, and even some of his more recent essays, is going to be understood as a significant writer of the last 30 years. I think that...</p><p>A lot of the mainstream polished pop intellectuals are actually overrated. There are a few that I think are decent. I think Steven Pinker's least popular works are his best and his most popular works are his worst. So Steven Pinker I think is actually a more serious intellectual than you would believe from his public profile.</p><p>Theo Jaffee (1:01:50)</p><p>Like who?</p><p>Samo Burja (1:02:07)</p><p>I think that... I think that Nickland...</p><p>will prove to be a much more important and subversive influence on both like far left and far right subcultures than is currently acknowledged. think ancestrally he has shaped a lot of the strands of accelerationism and you know there's sort of like the left -wing version of that and then there's the right -wing version and I think people are just now remembering that he wrote really bizarre things in the 1990s while working, you know, there's this informal group,</p><p>cybernetic culture research unit at the University of Warwick, which you know according to the University of Warwick never existed because of course universities don't allow unique or weird social or intellectual clubs. It has to be underground. It has to be unofficial. So I think he will prove to be a significant thinker because his thesis, he laid out this sort of basic thesis of techno capital, right?</p><p>which is this idea that capitalism itself was a form of intelligence. And I'm not sure if he's the absolutely first person to make this analogy, but he definitely made it forcefully and interestingly in the 1990s, long before the current machine intelligence explosion, right?</p><p>We could continue listing more thinkers. I'm going to say it's cringe to say, but Eliezer Yudkowski is a more significant philosopher than people would like to give him credit for because he single -handedly wrote the orthodoxy of the rationalist movement and the effective altruism movement. say what you will, those are very influential movements. He was not dumb. He wrote very clearly.</p><p>Theo Jaffee (1:03:48)</p><p>I completely agree.</p><p>Samo Burja (1:03:58)</p><p>one of the best stylists, honestly. think among his acknowledged influences was George Orwell, whose essay, Politics in the English Language, I warmly recommend. So it's a non -fiction essay. So I think Yudkowsky is also a significant thinker. And I think that because we live in emergence in a period where the Cambrian explosion of intelligence has happened,</p><p>we will tend to regard the thinkers who commented on topics related to artificial intelligence more highly than some of the other commentators. So as a last one here, I would say Robin Hansen is very much underrated. I sort of feel, you know, I know he came up with the whole prediction market thing. It's pretty cool. But I honestly find his like cosmology, human nature, and culture commentary</p><p>to be much more interesting than just the mechanism of prediction markets. feel like, you know, insurance schemes are neat and fun to think about, but you can only hear about them so many times before you lose interest. Yeah.</p><p>Theo Jaffee (1:04:57)</p><p>Yeah, absolutely.</p><p>Yeah, I mean, just to go on Instagram and see the like, lowbrow slop that they have and to see these slop accounts posting about like, presidential prediction markets and it's like, wow, I met the guy who invented this thing. Like, how cool is that? Yeah.</p><p>Samo Burja (1:05:21)</p><p>Exactly. That's a big influence. I could see prediction markets being actually very important in 10 years in even determining the election. But that will be their big test. When there's an incentive to like rig the market one way or the other, how much money will go into politics? Right? Like I think people are already trying manipulation in these very low liquidity markets because they are very low liquidity for now.</p><p>But yeah, think if they're not outlawed, they will ratchet up and hopefully the result is more accurate information and not just another information battlefield.</p><p>Theo Jaffee (1:06:02)</p><p>So I think that's a good place to wrap it up. Thank you so much, Salma Buria, for coming on the show.</p><p>Samo Burja (1:06:06)</p><p>Yeah, thank you Theo and thanks for the provocative questions.</p><p>Theo Jaffee (1:06:08)</p><p>Thanks for listening to this episode with Samo Burja. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. Also be sure to check out Samo&#8217;s Twitter @SamoBurja (spell it) and his website samoburja.com, Bismarck Analysis at bismarckanalysis.com, and Palladium Magazine at palladiummag.com or @palladiummag on Twitter. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[#18: Alec Stapp]]></title><description><![CDATA[The Institute for Progress, American Dynamism, and Fixing Governance]]></description><link>https://www.theojaffee.com/p/18-alec-stapp</link><guid isPermaLink="false">https://www.theojaffee.com/p/18-alec-stapp</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Thu, 11 Jul 2024 23:34:59 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/146496593/c3712bdeec8d1bdf958120b2dd9a5afc.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Alec Stapp is the co-founder and co-CEO of the Institute for Progress, a non-profit think tank dedicated to accelerating scientific, technological, and industrial progress.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:13 - Why can&#8217;t smart people fix the Bay Area?</p><p>3:38 - How to get normal people on board with IFP</p><p>10:23 - How to get smart people into governance</p><p>15:55 - How IFP chose its priorities</p><p>21:56 - How will IFP avoid mission creep?</p><p>24:17 - How important is academia today?</p><p>26:03 - Would Alec press a button to fully open borders?</p><p>29:45 - How prepared are we for another pandemic?</p><p>33:16 - Why don&#8217;t easy wins happen?</p><p>36:17 - Is Biden&#8217;s spending good?</p><p>40:51 - How important is the repeal of Chevron deference?</p><p>43:23 - Are land value taxes good?</p><p>45:01 - &#8220;The Project&#8221; for AGI and AI Alignment</p><p>48:19 - Is globalism dying?</p><p>50:32 - Overrated or Underrated?</p><p>59:28 - The most overrated issue</p><p>1:00:26 - The most underrated issue</p><h3>Links</h3><p>Institute for Progress: <a href="http://ifp.org">ifp.org</a></p><ul><li><p>&#8220;Progress Is A Policy Choice&#8221; founding essay by Alec Stapp and Caleb Watney: <a href="https://ifp.org/progress-is-a-policy-choice/">https://ifp.org/progress-is-a-policy-choice/</a></p></li><li><p>&#8220;How to Reuse the Operation Warp Speed Model&#8221; by Arielle D&#8217;Souza: <a href="https://ifp.org/how-to-reuse-the-operation-warp-speed-model/">https://ifp.org/how-to-reuse-the-operation-warp-speed-model/</a></p></li><li><p>&#8220;How to Be a Policy Entrepreneur in the American Vetocracy&#8221; by Alec Stapp: <a href="https://ifp.org/how-to-be-a-policy-entrepreneur-in-the-american-vetocracy/">https://ifp.org/how-to-be-a-policy-entrepreneur-in-the-american-vetocracy/</a></p></li><li><p>&#8220;To Speed Up Scientific Progress, We Need to Understand Science Policy&#8221;: <a href="https://ifp.org/to-speed-up-scientific-progress-we-need-to-understand-science-policy/">https://ifp.org/to-speed-up-scientific-progress-we-need-to-understand-science-policy/</a></p></li><li><p>&#8220;But Seriously, How Do We Make an Entrepreneurial State?&#8221; by Caleb Watney: <a href="https://ifp.org/how-do-we-make-an-entrepreneurial-state/">https://ifp.org/how-do-we-make-an-entrepreneurial-state/</a></p></li><li><p>Construction Physics newsletter by Brian Potter: </p></li></ul><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:104058,&quot;name&quot;:&quot;Construction Physics&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c663799-8d26-4456-8c14-8283b618f705_590x590.png&quot;,&quot;base_url&quot;:&quot;https://www.construction-physics.com&quot;,&quot;hero_text&quot;:&quot;Essays about buildings, infrastructure, and industrial technology.&quot;,&quot;author_name&quot;:&quot;Brian Potter&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#fCFBEB&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.construction-physics.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!pMIM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c663799-8d26-4456-8c14-8283b618f705_590x590.png" width="56" height="56" style="background-color: rgb(252, 251, 235);"><span class="embedded-publication-name">Construction Physics</span><div class="embedded-publication-hero-text">Essays about buildings, infrastructure, and industrial technology.</div><div class="embedded-publication-author-name">By Brian Potter</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.construction-physics.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><ul><li><p>Macroscience newsletter by Tim Hwang: </p></li></ul><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:1637337,&quot;name&quot;:&quot;Macroscience&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea04f44e-cf28-447b-82a7-2830f460cb2a_1280x1280.png&quot;,&quot;base_url&quot;:&quot;https://www.macroscience.org&quot;,&quot;hero_text&quot;:&quot;A newsletter about macroscientific theory, policy, and strategy&quot;,&quot;author_name&quot;:&quot;Tim Hwang&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#FCFBE8&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.macroscience.org?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!keL_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea04f44e-cf28-447b-82a7-2830f460cb2a_1280x1280.png" width="56" height="56" style="background-color: rgb(252, 251, 232);"><span class="embedded-publication-name">Macroscience</span><div class="embedded-publication-hero-text">A newsletter about macroscientific theory, policy, and strategy</div><div class="embedded-publication-author-name">By Tim Hwang</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.macroscience.org/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><ul><li><p>Statecraft newsletter by Santi Ruiz: </p></li></ul><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:1818323,&quot;name&quot;:&quot;Statecraft&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4ed3ff9-0217-4c49-8793-be01ef6b0943_807x807.png&quot;,&quot;base_url&quot;:&quot;https://www.statecraft.pub&quot;,&quot;hero_text&quot;:&quot;How policymakers actually get things done&quot;,&quot;author_name&quot;:&quot;Santi Ruiz&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#fcfbeb&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.statecraft.pub?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!n21s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4ed3ff9-0217-4c49-8793-be01ef6b0943_807x807.png" width="56" height="56" style="background-color: rgb(252, 251, 235);"><span class="embedded-publication-name">Statecraft</span><div class="embedded-publication-hero-text">How policymakers actually get things done</div><div class="embedded-publication-author-name">By Santi Ruiz</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.statecraft.pub/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><p>IFP&#8217;s Twitter: <a href="http://x.com/IFP">x.com/IFP</a></p><p>Alec&#8217;s Twitter: <a href="http://x.com/AlecStapp">x.com/AlecStapp</a></p><p>Transcript: <a href="https://www.theojaffee.com/p/18-alec-stapp">https://www.theojaffee.com/p/18-alec-stapp</a></p><p>More Episodes</p><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (01:05)</p><p>Hi, welcome back to episode 18 of the Theo Jaffee podcast. We're here today with Alec Stapp.</p><p>Alec Stapp (01:11)</p><p>Hey Theo, good to see ya.</p><p>Theo Jaffee (01:13)</p><p>Yeah, you too. So first question, pretty much everybody I know in Silicon Valley on tech Twitter, many of whom are extremely intelligent and high agency and wealthy, agree with you on almost everything, but the Bay areas still are like the most politically dysfunctional place in the country and in some ways in the world. So why can't they change things? Is it just a skill issue?</p><p>Alec Stapp (01:37)</p><p>Great question. And I'll just carry out this answer by saying that my organization, the Institute for Progress, we focus exclusively on US federal policy. So we only work on stuff in Washington, DC. So I claim no unique insight into local politics in San Francisco or state politics in California. But I do think it's at some level not taking politics very seriously on its own terms. It's part of the issue.</p><p>So a lot of people in tech either haven't been engaged for a long time in local politics, or they don't understand what drives these elections. I think they're very low turnout events. It's not the kind of people that folks in tech world often interact with. It is not driven by the same dynamics that drive tech Twitter and things that are like in the discourse or in the ether. And so, for example, my understanding is that a lot of the recent election outcomes in San Francisco have been driven by</p><p>crime as the major issue, especially in the Asian American community. And of course, housing is a major issue, but maybe crime in that one demographic is actually the thing that's like really moving voters. And so to understand that you would need to conduct polling, focus groups, do door knocking, do a lot of like the really grassroots organizing that the tech community is not experienced with. But is increasingly there are folks who are jumping in, trying to learn about this. I think they have some long run strategies in terms of</p><p>getting on the local Democratic Nominating Commission to then nominate folks to run Shadow Gary Tan. So it's changing, but I think we shouldn't be surprised that people like, I believe Aaron Peskin is his name, who's now running for mayor against London Breed. Like he's been doing this for decades and he's a very like active person and high energy person. And so even if tech folks disagree with his politics or his policy positions.</p><p>Theo Jaffee (03:08)</p><p>Shout out Garry Tan.</p><p>Alec Stapp (03:32)</p><p>You shouldn't take someone like that lightly in terms of thinking how you can beat them.</p><p>Theo Jaffee (03:38)</p><p>And similarly, how do you get the general public to actually agree with IFP style ideas? Like most people don't even think about the issues that you write about. Are there any lessons from like gay marriage going from overwhelming opposition to overwhelming support in a single generation?</p><p>Alec Stapp (03:48)</p><p>Dublin.</p><p>Gay marriage is obviously a unique issue or at least it's like a cultural issue. And so that feels quite distinct from what we do at IFP. We focus, not only do we focus exclusively on federal policy, we also focus exclusively on innovation policy. So you see us work on things like high skilled immigration, meta science, AI, biotech. We really try to just stick to these issues in a way that can be bipartisan.</p><p>Theo Jaffee (03:57)</p><p>Maybe two.</p><p>Alec Stapp (04:24)</p><p>or even just nonpartisan and kind of technocratic in nature. And we want to increase the salience of the issues we work on to a degree where they get prioritized by folks in government and people agree with our positions ultimately. But these are not mass mobilization issues and that's not key to our theory of change. Our theory of change is really about can we get on any given topic the 100 to 200 people in the Washington DC area who really matter.</p><p>to agree with our position and coordinate and work with us. And so it's a very elite driven theory of change. Not to say that mass mobilization doesn't have its place, it's just that it's not our focus on the topics we care about.</p><p>Theo Jaffee (05:07)</p><p>Can you elaborate on that a little bit? Because it seems to me like it would be, you know, quite effective to get large amounts of people to start campaigning for, say, high skilled immigration. Like people tend to have very strong opinions on immigration and it seems to be like quite bipartisan, you know. It's very rare that you see even like the most anti -immigration Republicans oppose high skilled immigration. So why not make it a mass mobilization?</p><p>Alec Stapp (05:33)</p><p>Yeah, I think you need to look, well, one, let's first look at like where mobilization has worked for the progress community, abundance community, Yimbyism, however you want to frame the groups that we work with. It's obviously happened first at the state and local level for housing. I think that is a situation where raw numbers of mobilization really could move the needle. It's that you had these low turnout city council meetings, town council zoning meetings where</p><p>less than 1 % of the population shows up and they're all retired people who don't want any change, don't want any new housing, and they already own their own homes. And if you could just get 1 % of the local population to show up and be pro -housing, then you've now offset them and it's like a very tractable, achievable outcome. I think in Washington, D .C. it's very different. Representatives and senators represent hundreds of thousands, if not millions of people.</p><p>in their constituencies. And so it's hard to actually mobilize the scale of people that are necessary for changing their opinion. And because you brought up immigration as an issue, it's very hard to raise the salience of that in a mass politics way without getting, without making the situation worse. So for example, if you look at the polling on the issue across all different types of immigration reforms, there's broad support.</p><p>like more positive than negative sentiment among the American people for more immigration. But by far, the people who feel the most strongly about it, the most passionate about immigration as a topic, are the restrictionist and anti -immigration folks. And so, and then if you think about what is the bottleneck to reform, the bottleneck to reform is currently congressional Republicans. And due to our primary electoral system in the United States,</p><p>most members of Congress are only concerned about winning their primary because they're not in a competitive seat or state for the general election. And so a Republican who only cares about winning or potentially losing their next primary race is mostly concerned about not doing anything to increase immigration from their right flank because they might get primaried on that issue because it's very for a minority of the Republican base who do vote in primaries and care a lot.</p><p>any kind of immigration is seen as a betrayal almost. And so I think it's very tricky to raise this topic in a mass mobilization context without it backfiring on you. And so we prefer a much more targeted, again, elite theory of change, which involves using arguments that we think are most targeted and most effective, so making this a national security issue, because it is in terms of having talented folks in our defense industrial base. This is the best way for us to beat China.</p><p>Russia, other countries that are adversaries to the United States. And that takes it out of the immigration context. It is another way to get folks on board and doesn't involve mass mobilization at all.</p><p>Theo Jaffee (08:37)</p><p>So when you said you mainly focus on persuading the 100 to 200 people in DC who have the power to do things, like, who actually are these people? Members of Congress? Heads of agencies? Or something different on top of those two?</p><p>Alec Stapp (08:54)</p><p>Yes to all those and I would say just the most important thing is that it varies based on topic So it's not the same people who have power over everything. Of course, it's like In the high skilled immigration community It's the certain committee members themselves in Congress and their staffers their lead staffers who've been working in Congress drafting legislative bills for decades it's the leadership of USCIS in the executive branch and Some of their staff members. It's the outside communities. So like who are the top lobbyists on this issue?</p><p>who are the top nonprofit organizations, and then who are the leading experts, academics on this topic. And so on a very niche technocratic topic, there are really only like dozens or maybe 100 people who have the requisite experience to really be engaged in the decision making. And obviously, they don't determine exactly what happens. But as soon as elected officials put it on the table and say,</p><p>let's consider doing something on high skilled immigration reform, or let's consider doing something on reforming the National Institutes for Health or the National Science Foundation. Then they hand the baton to staffers, sometimes lobbyists, sometimes outside groups, academics, to flesh out the details, to figure out like, we have this vague abstract goal, how do we implement it? And it's roughly dozens, and at most 100 people who like ultimately end up mattering in those kinds of conversations.</p><p>Theo Jaffee (10:23)</p><p>How do you get more smart people into orgs like IFP or into government? Most of the smart people I know who are trying to make a lot of money want to be like quants or something or work for big tech. So how do you get those kinds of people to work for you instead?</p><p>Alec Stapp (10:33)</p><p>Mm -hmm.</p><p>Yeah, so I think, well, one thing we talk a lot about is for when we're hiring people IFP, we don't play the role in the ecosystem of usually being people's first job. We tend to hire more experienced folks because we're a small, lean team and tend to wait a little more towards senior people who are autonomous and have an agenda they want to drive. But that is not to say that sometimes we don't hire folks with more limited experience.</p><p>And usually the exceptions are when someone has a demonstrated track record of public work on the topic. So there's really no substitute to working in public. That's a blog, publishing academic papers, publishing white papers, showing up to events. It's much easier if you already live in DC. Again, not impossible to do this from the outside, but if you're already in DC, you're going to the events with other experts on the topic, you're doing all the reading.</p><p>all the hard work, especially if you're doing like quantitative analysis in public, either on like a sub stack or other venues you can publish in. And then the people who matter will notice this and they'll recruit you or they'll be open to an unsolicited pitch to join your organization. And so that's the main thing I'd recommend for people is if you're considering getting into this field, there are two big mistakes people make. One is doing nothing out of risk aversion. And this is often people from elite institutions to do this because</p><p>they're taught their whole life to climb a ladder and keep their head down and just don't do anything too risky that could be seen as outside the Overton window. And that is just not the kind of person who actually ends up mattering in the long run or making a big impact in DC. You have to have your own ideas and you have to be willing to put yourself out there. But then on the flip side, lots of folks make the opposite mistake, which is they want to become a takes person who has an opinion on everything, even though they have either only expertise in a narrow domain or they have no expertise at all because they're...</p><p>in their early 20s and they're still learning. And it's great to still be learning, but I wouldn't go on record on 100 different topics, especially culture war topics, especially things that are very controversial. Instead, what you can do is develop a niche expertise that adds value to people in Washington, DC, and then they will be receptive to your ideas. A very good example of this is Thomas Hokeman at the Foundation for American Innovation. I recommend everyone follow him on Twitter. Thomas first started his career just like...</p><p>a little more than a year ago, he's not been in DC long, DC policy world, and he just went very deep on the Clean Air Act. And now he's like one of the experts on the Clean Air Act in Washington DC, even though he's like just graduated college this year. But like he spent most of his time thinking about like how do we improve the Clean Air Act and he didn't spend all of his time like fighting different culture wars. So you go, if you're very focused and go very deep, you can make an impact early. And then the other thing I would recommend to your audience is that there's this great website called emergingtechpolicy .org.</p><p>and it just lays out all the pathways to getting into tech policy in DC in terms of fellowships, in terms of resource guides. And it's by far the best one -stop shop of like, if you're in tech or outside DC and you're like, hey, I want to get into policy, just go to emergingtechpolicy .org and read through their resources.</p><p>Theo Jaffee (13:48)</p><p>What about not just nonprofits that shape these policies, but what about getting smart people actually into elected office or into government agencies? It seems like there are a lot of people who would be open to the idea, otherwise who are not right now for some reason.</p><p>Alec Stapp (14:08)</p><p>Yeah, so elected office is a tricky one. I think that's much, a much harder hill to climb. The only thing I would say about that is only go that path. If you're an extreme extrovert, everything about fundraising, campaigning, running for office is constant social interaction. You must be a people person. You must be an extrovert. and if that's your personality and you also happen to have interest in like the substantive policy ideas that we care about and some of the people our friends care about and that then more power to you go run for office, raise money, try to win.</p><p>But most people we work with are much more wonky, technocratic, in the weeds. They're not the right kind of people to run for office. But elected officials need really talented people in government, whether it's in Congress or the executive branch. And not to beat it to a horse, but the emergingtechpolicy .org website is by far the best resource for what are the entry points? What are the junior level fellowships? How do you get your foot in the door? And then I would just say, once you get your foot in the door, whether it's a staffer in Congress or someone in the executive branch, an executive branch office.</p><p>It's really just like hard work and constantly networking. And so these are not like the most well -paid positions. So if you're going into this, you just need to understand that like you're giving up money on the table by working in the private sector to do this public service, but it's really important. And if you work really hard because, and you're talented and you just put yourself out there, you will get promoted and you will get retained because the system does need those people. And...</p><p>pretty quickly, not overnight, but pretty quickly, you can be put in a position of authority to really draft legislation, be in charge of a rulemaking or regulation at an executive branch agency, if you work really, really hard and know what you're talking about.</p><p>Theo Jaffee (15:55)</p><p>So IFP's five priorities on the website are meta -science, high -skilled immigration, biotech, infrastructure, and emerging technology. So why those five in particular? Mostly because they're tractable or something else. And if you had to add another category, what else would it be?</p><p>Alec Stapp (16:13)</p><p>Yeah, so those are definitely going to be the five we stick with for the foreseeable future. We're a team of roughly 16 people. And so five policy areas definitely keeps us busy. So how we pick those areas, I think the main factors were that they are both tractable and important. In a lot of ways, we try to tackle issues that are neglected as well. And so we think that EA framework is pretty useful for selecting topics to work on.</p><p>Theo Jaffee (16:34)</p><p>like the EA criteria.</p><p>Alec Stapp (16:42)</p><p>If it's important and tractable, meta -science, a lot of folks, there are a lot of lobbyists and trade associations for universities and other institutions that lobby for more scientific funding at NSF and NIH. There are libertarians and small -c conservative folks who argue to cut those budgets because it's wasteful government spending. They have a trench warfare fight every single year around those budgets. We don't think there's much marginal impact to be had by joining that specific debate.</p><p>But there are very few people who are in Washington, DC thinking about the question, given any particular budget added SF and NIH, how is it being spent? Is it being spent in the highest impact way? What alternative allocation systems should we be considering for scientific grant making? Those are very understudied systems and ideas. And then similarly for high skilled immigration, we've talked about that a bit already.</p><p>comprehensive immigration reform, stuff happening at the border, that is a very contentious, high salience fight, very well funded on both sides, not neglected at all. But tweaking visa pathways, for example, the 01 visa for immigrants of extraordinary ability, it's an uncapped visa program, better guidance from USCIS, which they first issued two years ago.</p><p>is going to help people realize they qualify for that visa program. And it's temporary, but it can be renewed as many times as you want. And so there are pathways in our current broken immigration system for talented people to come to the US. And then the same is true for biotech, AI, which we work under on our emerging tech portfolio, as well as infrastructure, where you just can focus on more neglected topics that are innovation related.</p><p>Theo Jaffee (18:34)</p><p>But if you had to add another category, what would it be? What else is tractable and important and neglected and does not fall under the umbrella of the other five?</p><p>Alec Stapp (18:44)</p><p>It's funny because these are so broad, like infrastructure captures energy, housing, and transportation for us. I would probably add like a specific, so we think about state capacity a lot, and this is a horizontal theme. If you think of those like vertical policy areas I just described, state capacity is the ability of the government to achieve its intended aims, to have the capacity to actually achieve its objectives. And it applies to all those policy areas I talked about.</p><p>Two state capacity themes we think about a lot and would potentially work on in the future are procurement and hiring. So federal procurement procedure is extremely broken and leads to really inefficient outcomes and a lot of stagnation and sclerosis in the government contractor industry. And then hiring as well. We just talked about getting good people into government. If you go through the normal hiring procedure on USAjobs .gov, it is a nightmare in terms of the incentives it creates.</p><p>People are incentivized to upload 100 page resumes that include every single possible keyword because the first filter is just like a keyword match filter for resumes to job description. And so usually the most qualified people don't get hired or take so long they give up and go to the private sector. And as anyone in tech and startup world knows, people are the most important factor in success. And so we needed to get better people in the government. We need much more flexible hiring procedures. And that's something that we would probably add as an area to focus on.</p><p>Theo Jaffee (20:14)</p><p>Yeah, procurement in particular is interesting. My dad used to work at Lockheed Martin back when it was Martin Marietta. And he always talks about the days of cost plus contracts and how terrible those were.</p><p>Alec Stapp (20:23)</p><p>Those are still the days today. Those are, that's mostly how it works today. There are some, yeah, there are some fixed price contracts, like SpaceX is famous for advocating for this and NASA has done some move towards fixed price. But my understanding is most government contracting is still cost plus. And like, there are some cases where I think cost plus makes sense, but in the majority of cases, it just creates bad incentives, obviously, where if it's cost plus a certain percentage, you just increase your costs and make more money.</p><p>Theo Jaffee (20:28)</p><p>Mostly. Wow.</p><p>Yeah.</p><p>Yep. So what have we learned from Trump and Biden after four years of each of them in power? Who do you think would be better for the IFP agenda? I understand if you need to be strategic with the answer.</p><p>Alec Stapp (21:04)</p><p>Well, we, not about being strategic, it's about just being committed to being nonpartisan. And so we are prepared for any election outcome this November. And we very intentionally do not weigh in on electoral politics at the presidential level or the congressional level because we want to make an impact in the areas we work on. And so we're a mission focused organization. The five areas you mentioned earlier, the only areas we work on.</p><p>And we have an agenda for either presidential administration, different compositions of Congress in terms of Republicans controlling one chamber, two chambers, and vice versa for Democrats. And so we would be excited to work with any particular US government because regardless of who wins, these issues are really, really important and there's lots to do.</p><p>Theo Jaffee (21:56)</p><p>So how a lot of non -governmental organizations, lobbying orgs, nonprofits and such, have been subject to some kind of mission creep, like for example, the ACLU, which was originally about securing constitutional liberties. They famously like,</p><p>Defended like Nazis in court even like a couple of decades ago and now they couldn't be farther from that So how will IFP avoid the same kind of institutional capture or mission creep or whatever you want to call it? in a long term</p><p>Alec Stapp (22:31)</p><p>Yeah, it's a good question. And it's honestly, it's part of what I was just trying to do there. And we'll continue to do in this conversation and in every conversation I have publicly and privately in Washington, DC, my co -founder, Caleb and I are on the exact same page. We run this institution, the two of us together, we have equal say over big strategic plans for IFP and down to nitty gritty details. And at the end of the day, both of us observed that this phenomenon was happening that you're describing of like mission creep. And we think that organizations become</p><p>less effective over time when they do that on a per dollar or per person basis. And it's just like, then all organizations lose their identity as well. They kind of like become this big blob of everyone doing the same thing, this, this Omni -Cause phenomenon. And there is a role in DC for think tanks and organizations that just support one political party. Center for American Progress is the biggest one for Democrats, though there are others as well. The Heritage Foundation is currently the biggest one for Republicans in terms of</p><p>being a holding tank for folks to go into the next administration when their party wins, as well as to like develop policy ideas and white papers and stuff. And so in that sense, they often have to support the entire agenda. But lots of think tanks are very issue. They're supposed to be issue focused or much narrower. And I think it just hurts them and is counterproductive if those kinds of organizations expand due to mission creep and bleed into other areas that are outside their scope. And so from day one,</p><p>Caleb and I both decided that's not the kind of institution we want to run. We want to run a mission -focused organization that can work with either party. And we think boundary effects are strong, and so this is what we want to do with the rest of our careers. And we're going to be here to make sure that we don't succumb to that risk like other organizations have.</p><p>Theo Jaffee (24:17)</p><p>So as for meta science, how important do you think academia is today in 2024? Should government policy focus on getting smart people into academia or more into private companies and research labs? Or their own organizations within government?</p><p>Alec Stapp (24:32)</p><p>Yeah, I think on the margin, we probably, obviously, it's very field by field. It's hard to speak in generalities here. But in general, I don't think the marginal smart person should go into academia. I think most of our breakthroughs come from superstar researchers who are already obvious fits for academia. But the marginal person writing the marginal academic paper is not adding a lot of social value. I think that person would be much better fit.</p><p>Going into government in like kind of a high agency mindset of like with a clear goal, wanting to get something done, ideally even at the local or state level, because like a single great person at that level of government can have a lot of change, make a big impact. And then, yeah, we're also just very bullish on like new institutions. And so there are groups like Arcadia and the Arc Institute led by Patrick Su. Arc Institute's amazing. They're already having huge breakthroughs and they've been around for just a couple of years.</p><p>Theo Jaffee (25:23)</p><p>I love the Arc Institute.</p><p>Yeah.</p><p>Alec Stapp (25:28)</p><p>and those institutions outside of academia and outside of government, often privately funded, philanthropically funded are amazing. And I think I would encourage more people to start organizations like that as small experiments to join, like, you know, ongoing new institutions like that. And then let's double down on the ones that are working and close the ones that aren't. And I think that is a much more exciting prospect than joining legacy institutions.</p><p>that aren't that effective anymore for the marginal person.</p><p>Theo Jaffee (26:03)</p><p>If you could press a button that would fully open the US's borders, would you do it? Like, would fully open borders be better or worse than the status quo?</p><p>Alec Stapp (26:12)</p><p>It's a good question. I don't think about this a lot just because it's not within the overtopendo, but I'll play the game of just saying, high uncertainty, but I probably would not push the button. I think this is for like, very Tyler Cowen -esque reasons where this is not a permanent, and you can define the thought experiment however you want, but I do not think this will ever be a permanent option. And so what would immediately happen is there'd be a flood of immigration, and then there would be a nativist backlash to dramatic change to society in the short run.</p><p>housing prices would definitely go up in more competition for scarce resources before supply has the opportunity to expand. And so in the long run, I think most and pretty much all immigration has positive effects for the whole country, but there are in narrow local cases and especially in the short run, there can be costs and people observe those and there's a backlash. And so this would lead to probably the strictest immigration regime in the short period after open borders. And so...</p><p>At IFP, we're really focused on long run sustainability, what is a durable policy change that can get us to a better future and we can build on. And so when it comes to immigration, that includes things like having control of our southern border so that the domestic population has faith in a credible immigration system. And then it's focusing on what we see as successes around the world, whether it's Canada, Australia, elsewhere, where you have kind of more of a points -based system.</p><p>where immigration is targeted at occupations that are in shortage. And it's just focused on high skilled STEM immigrants who can really contribute the most on a per person basis to the US economy and to the world. And there's a lot of data that these are the kind of, it's the kind of immigration, a controlled orderly immigration that leads to the least amount of backlash and can be built on over time. And so I think the idea of like a magic button that opens the borders, it would very quickly change in practice.</p><p>Theo Jaffee (28:07)</p><p>I'm not certain about that because in Europe over the last few years they haven't had full open borders, but they have had substantial amounts of immigration from all walks of life. And the nativist backlash has been, I think, much less than people expected. Like in France just yesterday, the more pro -immigration left parties won the election. In Britain, Labour Party absolutely swept the election and they seem to be pro -immigration.</p><p>Alec Stapp (28:34)</p><p>I probably disagree with the characterization of the UK. I think a significant reason from my view of why the Conservative Party became so unpopular, not the whole reason, but part of the reason was that post -Brexit, they were supposed to be the party of controlled, orderly immigration, no longer having open borders with Europe. And like you said, they became like open borders with the world. And I think the polling and some of the data in the UK shows that the...</p><p>Domestic population did not like the direction immigration was going in the UK. And I haven't seen the entire labor policy position, but I don't think they're significantly pro -immigration in a material way and would be surprised if they totally maintained the policy status quo there. And then in France, again, yeah, it was better for immigration that the center -left and leftist parties beat the right -wing parties, but...</p><p>The Raving Parties did really well in round one and the fact that they are even contending for national power in France shows you something about the backlash even if they weren't ultimately able to get a majority.</p><p>Theo Jaffee (29:45)</p><p>So on the topic of biotech, how prepared do you think we are for another pandemic? Has the government learned anything from COVID? And if we're not prepared, what would it take for us to get prepared?</p><p>Alec Stapp (29:55)</p><p>Yeah, I regret to report pretty much nothing. They've done nothing, learned nothing. Arguably, we'd be in a worse situation if a COVID level event were to happen. The best success of the COVID pandemic was Operation Warp Speed. I'm not convinced that if Republicans were in office, they would do it again, or if Democrats were in office, that they would try to copy it. It's now seen as controversial. Obviously, among Republicans, vaccines in general are controversial. Democrats...</p><p>wouldn't necessarily trust the private sector to lead in the way that Operation Warp Speed did. And then besides vaccines, personal protective equipment, testing, surveillance monitoring, sort of wastewater monitoring and other passive detection for emerging pathogens, we're just not there. We are making almost no investments. And we've kind of not only reverted to the status quo ex ante,</p><p>but we've even done things like the FDA is now regulating lab developed tests. And in the, before the pandemic, they were not regulated. When the public health emergency was declared, lab developed tests became regulated by the FDA. And what happened, the CDC had monopoly on testing and they totally messed it up. And that's why we didn't have testing for the first few months of COVID is because our one source of flexible testing capacity for novel pathogens, lab developed tests, were legally prohibited from doing what they were capable of doing.</p><p>So now we're in a worse equilibrium when it comes to testing. I think that's true across the board for most areas you care about when it comes to pandemic prevention, whether it's delaying future pathogens, stopping gain of function research. We did get one win, the deep vision program at USAID that was like a virus hunting program, like literally going out into untouched parts of the world to look for.</p><p>potential pandemic pathogens. The risk reward on that was awful. Thankfully, I shut that down. So that's good. We're not like actively going into the jungle trying to find new pathogens that could cause a pandemic. But no progress on gain of function research. And in terms of detection, there are some pilot programs in terms of like doing more testing in airports and other public places for emerging diseases, but they're not wide scale yet. And a lot of like the wastewater monitoring stuff.</p><p>It's been very hard for companies like BioBot to get customers from the government to pay for this stuff, even though it's incredibly useful. And so we're in a really bad spot, but we need to be making those investments. And this is the kind of thing again, where it's like, it mostly requires enlightened leadership of, it's not a ton of money we're talking about here. A current estimate from the White House science office is that for $24 billion, we could get prototype vaccines.</p><p>for the 26 viral families that are known to cause human disease. And so like on the grand scale of things, $24 billion is nothing. It's a drop in the bucket. But those are the kind of like long run investments we need to be making today before the next pandemic starts or something we could cut it off very quickly. And just in DC, no one wants to talk about pandemics because people are still a bit traumatized by COVID.</p><p>Theo Jaffee (33:16)</p><p>So when you talked about there's all these like little easy wins that we can do, like wastewater monitoring that just don't happen. Like mechanistically, what is going on there? Like you get your member of Congress or your Biden administration or CDC official in the room and you tell them, like, here's this thing that we can do that would.</p><p>prevent the next pandemic or could prevent the next pandemic and would be a very good thing regardless and would not cost that much money and would look very good for you and would be an easy win and is not exactly controversial. Like who's opposing wastewater treatment or far UVC systems? Like why does it not happen?</p><p>Alec Stapp (33:56)</p><p>Yeah, that's a great question. So I think a couple assumptions there that I think we need to tease out. One, I don't think the industry would look very good. So these things are usually uncontroversial, but they're not salient enough for the public. So like no one's going to win an election based on like properly installing a wastewater monitoring system. If you prevent the pandemic that never happens, then you get no credit for it because the pandemic never happened. And so there's this asymmetric risk reward to any of these investments that you basically</p><p>Never get credit for them when it works out and you do the smart thing. But you get blame if things go south. And then on the, it doesn't cost that much money. These are not, so I don't wanna, I wanna be very clear here. It's not that these are super expensive, but they're not free either. And in our current budgeting environment, we're now in a high -interest rate environment. We have, interest rates are above 5%. Money is not free anymore.</p><p>The days of high -deficit spending are over for the foreseeable future. And so the budget constraints are very real. And then that environment, the way budgeting works in Washington, D .C., because the budget has to be bipartisan. It has to get 60 votes in the Senate to pass, which means it has to get both Democrats and Republicans every single year. And what they do is they just take the previous year's appropriations bill and start with that as the base text. And then any change from there, whether it's new spending or cutting -old spending,</p><p>essentially has to be bipartisan in nature and has to be a top priority. And so when you come in and say, hey, let's spend a billion dollars on wastewater surveillance, the people in the room are like, maybe a good idea, but like, we're not going to get credit for this. Who knows when it'll pay off? It's a very uncertain payoff. Probably there won't be a pandemic anyway. And then what are you going to cut? Are you going to cut the money we're spending on flu? There are a lot of lobbying groups that lobby for more spending on flu. How about cancer, diabetes, heart disease, all of these like,</p><p>very specific public health issues, like disease specific programs, have massive lobbying groups behind them, whether it's corporations or patient groups. And in a zero sum budgeting environment, we're not really increasing deficits for the foreseeable future. New spending has to take from somebody else, and then it becomes a dog fight. It's very hard to win.</p><p>Theo Jaffee (36:17)</p><p>So speaking of spending, IFP has talked positively about Biden's spending, like the infrastructure bill, the Innovation and Competition Act. But like, is this spending actually good? Like would it pass a cost benefit analysis? You pointed out on Twitter a couple weeks ago.</p><p>that the Biden administration allocated $42 .5 billion for high -speed internet, and not a single home or business has been connected nearly three years later. And on top of that, our national debt interest payments alone this year will be something like $900 billion, which is more than we spend on defense, more than almost anything else in the federal budget.</p><p>Alec Stapp (36:55)</p><p>Yes, I think we're in a really bad equilibrium right now where we spend a lot of money and don't get much for it. I think when it comes to things like basic research, even if I think our current systems are very inefficient, there is just such a clear story of market failure, of companies under -investing in really breakthrough, high -risk, high -reward stuff that won't pay off for a decade, research ideas that don't have clear, obvious commercialization potential. I just think...</p><p>all the economic research points to that being a massive underinvestment by the private sector. And so there's just a large role for the government to spend, you know, roughly on the order of the $60 billion a year NSF and NIH spend on basic research type investments. So I think there's a lot of improvements you can make there, but I wouldn't cut it just given the large market failure and the massive spillovers to the rest of the economy from those kinds of investments. When it comes to more narrowly targeted things like the</p><p>rural broadband subsidies. Yeah, I just think this is one of the worst case scenarios in government where it's politically popular to say people in rural America don't have internet. Let's spend a lot of money to make sure they're connected to the rest of the world. This is like the urban rural digital divide thing. And no one can talk about being against the digital divide, but it's a massive waste of money. And we have Starlink. Just do Starlink. Don't run fiber cables to, you know, a single person living out in the boonies. Like this is...</p><p>on a per mile per user basis, exorbitantly expensive. And if you look at, it's the weirdest thing, if you look at surveys of people, you ask them, why don't you have internet? The number one reason they say is they're not interested. Like part of the reason they moved out to middle of nowhere is because they don't want to be connected to the rest of the world. And so we're doing this thing where we're spending tons of money to connect people to the internet, some of whom don't even really want the internet. And because of...</p><p>political biases against Elon Musk and who is a highly imperfect person, we're now not doing Starlink's terminals when we should be for that money. And we could spend much less than $42 billion to connect people.</p><p>Theo Jaffee (39:08)</p><p>Well, like if this is true, should the government's main focus be on infrastructure spending or just like infrastructure permitting? Like for solar energy, should they pass, you know, a multi -billion dollar package to build solar or should they, should IFP be pushing for them to just allow private companies to do it easily, get their permits done?</p><p>Alec Stapp (39:28)</p><p>Yeah, so we really focus, for that reason, we really focus our efforts on the regulatory side to unlock private industry because a lot of these cases are situations where there are narrow targeted benefits to the users. And so if you make it legal to build the infrastructure, people will build it and they will sell it to private citizens for a profit. And so that's true of solar. And again, this is where like understanding institutional structure in DC is really important. So like, why did we get...</p><p>the subsidies and not the permitting reform. A key part of this is actually because of the rules of reconciliation. A bill can only go through reconciliation, meaning it would get only needs 50 votes in the Senate, which Democrats had for the first two years of the Biden administration. If the provision is primarily budgetary in nature, spending money on subsidies and tax credits is primarily budgetary in nature. And that's how we got the Inflation Reduction Act. But now we need to do permitting reform.</p><p>And we're also not going to increase deficits. We're not going to do massive new spending programs. And so our effort, which has always been around the higher efficiency, increased productivity in the economy, is now the only game in town. Going forward, either we're not going to get new reforms, or we're going to do the reforms that actually increase efficiency and productivity. There is no more new massive spending package coming.</p><p>Theo Jaffee (40:51)</p><p>So last week the Supreme Court struck down chevron deference, which is a legal doctrine for the audience, that if a court determines that a statute is ambiguous, it must defer to the interpretation of the relevant federal agency. But they no longer have to do that. So how important is this?</p><p>Alec Stapp (41:08)</p><p>It's a big deal. I think there is still uncertainty around exactly how it will be implemented. The court did not offer a very clear framework for how future decisions should be made. And so this is one of these things where, is how the common law system works in the United States. You get one new Supreme Court decision that establishes a new precedent. And then you see how it plays out in practice. You see which agency decisions get challenged. You see...</p><p>what lower courts do in terms of how they interpret this new guidance from the Supreme Court, and you see how it works in practice. So I think, one, I would just caveat this with, don't trust anyone who's overly confident on what Chevron deference or really any other Supreme Court decision means for the future. You'll notice that most people can't predict reliably what the Supreme Court is gonna do ahead of time on all these high profile cases. The law is inherently uncertain, at least in how US legal institutions work.</p><p>And we'll have to see how it evolves over time. But in general, Chevron deference will probably be a big deal. It will probably mean that agencies are more risk averse on the margin. They do less. They spend more time making sure that the limited actions they do take are unimpeachable on legal grounds and directly tied to their statutory authority from Congress. And something we've been talking about internally at IFP is like,</p><p>We're now going to be in the world of NEPA, the National Environmental Policy Act, for lots of other parts of government. Because that's how NEPA worked. It was a very short statute passed in 1970 that was then interpreted by the courts more broadly year after year for 50 plus years. And through litigation, brought by private actors, and then decided by judges, all of a sudden everything was a major federal action, so it was covered by NEPA. Almost everything has a significant environmental impact.</p><p>And you could never consider enough alternative mitigating measures. And so you can sue any project and say the environmental review missed a significant impact or it didn't consider an alternative measure. And that's the world we live in for NEPA and environmental review. And it's going to be increasingly the world we live in for a lot of other areas of policy.</p><p>Theo Jaffee (43:23)</p><p>What do you think about land value taxes as an intervention for YIMBY?</p><p>Alec Stapp (43:27)</p><p>I think they're great. We haven't done any explicit work on them, but I'm very supportive of all the Georgists out there. When I first heard about land -value taxes and looked into it, my prior was that, as I try to think about a lot of policy issues, is like, this is not really implemented anywhere in the world. Something else is going on. Like, probably the idea is fundamentally flawed. It's not that you can never come up with a wholly new idea that hasn't been tried and it won't be successful.</p><p>But it's probably a very high bar. Probably there are like fundamental things about human psychology and human institutions in the modern nation state that like lead to your idea not working. I think land value taxes could be an exception to this rule because it seems like we have a clear theory of why they haven't worked so far. And there are folks who have done a startup company to figure out the how do you estimate the land value? How do you separate it from the value of the structure on the land? And I think</p><p>You can tell a story where up until now we didn't have the data collection and the like quantitative statistical tools to actually make this, you know, produce these answers in a reliable way, which is why, you know, we're left with things like property taxes that are less efficient. And so I think technology applied to the land value tax problem in terms of land valuations could unlock them. And then obviously from economic first principles, they are the most efficient.</p><p>type of tax to implement.</p><p>Theo Jaffee (45:01)</p><p>So Leopold Aschenbrenner a few weeks ago just published this very long document called Situational Awareness. For those in the audience who don't know, Leopold was on the super alignment team at OpenAI. And he's very concerned about AI and AI alignment and doing it right. And one of the main ideas in this book length</p><p>Alec Stapp (45:04)</p><p>Ahem.</p><p>Theo Jaffee (45:22)</p><p>blog post series that he wrote was that eventually as AI gets really good, governments will wake up and kind of like they did during COVID and realize, this AI thing is a big deal. And they will nationalize all the AI labs and try to build AGI themselves in one big kind of Manhattan project thing called The Project. So do you think this will actually happen? Do you think this would be desirable?</p><p>Alec Stapp (45:48)</p><p>the way I currently think about AI is the future is highly uncertain, especially in this area. And so I won't say whether this will or won't happen. I think it's a possibility. I don't even can begin to describe the percentage chance of this happening on any reasonable time frame. The way we can think about AI at IFP is we're trying to focus on what are the robust ideas that will be good in a wide range of futures. So</p><p>There's folks like Leopold who believe in very short timelines. They believe in the scaling laws will hold. This is how we're going to get super intelligence. The resources required to get these orders of magnitude compute increases will require the nation state nationalization. One single concerted effort to avoid duplicating resource use. It's possible, but I also think there's another scenario where for whatever reason, the scaling laws stop holding.</p><p>capabilities kind of peter out or capabilities keep increasing, but the real world is heavy tailed as lots of people in Silicon Valley like to say. There are a lot of frictions in the real world. Maybe we get like the internet, much more innovation in the digital world and the physical world because it's hard to maybe progress and robotics doesn't happen as quickly. And so in all of these scenarios, whether it's like the Leopold future where we're very close to super intelligence and it's going to be a national project.</p><p>or the world where we have like limited gains and we're just trying to like make this internet 2 .0 thing happen. Under those world states, we want more state capacity on AI. We want NIST, the federal agency tasked with a lot of like standard setting and evaluations under the executive order from the Biden White House. We want them to work. We want them to have talented people. We want them to be focused on the most important issues. We want to make sure that that</p><p>When they evaluate a model, they know how to test its capabilities, they know what risks to work with, they know how to talk to the companies. That is just in almost every future world state, a better thing to have is that there's expertise and competence somewhere in the government to handle these really technical challenges in a fast changing world. And so we're really focused a lot in our AI portfolio on these state capacity issues because we're open -minded about</p><p>a wide range of possible futures and we're not sure where it's going.</p><p>Theo Jaffee (48:19)</p><p>If America is increasingly focusing on domestic production and manufacturing, like domestic manufacturing in America has increased significantly over the last few years, what does that mean for globalism? Is globalism dying?</p><p>Alec Stapp (48:31)</p><p>Globalism is definitely in retreat right now. I think it remains to be seen how much output we get. There's like all I've seen all the charts. I'm sure everyone has seen the charts of like massive increases in manufacturing capacity in terms of spending in the United States. We'll see what the productivity looks like, what output increase do we get for these these new investments. I think that remains to be seen. But probably we'll get we'll get some noticeable increase. And yeah, due to</p><p>rising geopolitical risk, multiple wars, potential conflict with China over Taiwan, ongoing Russian invasion of Ukraine. I think a lot of countries look around and they see kind of the end of Pax Americana and the natural implication of that is let's make sure we have domestic capacity for manufacturing, critical supply chains, and there's like a movement towards more on -shoring. The thing I think is underrated in this debate that I would...</p><p>hopeful that US policymakers move towards is the idea of friend shoring of you don't have to have all this capacity in the United States, but you do want to have it in friendly countries that in the event of a hot war or conflict, you aren't vulnerable to a critical input being leveraged against you. And so let's dramatically increase trade with Canada, Mexico, the European Union, the UK.</p><p>Obviously attempt to do so with South Korea and Japan, but also recognizing that they're in a more vulnerable region of the world. But overall, let's massively increase the density of trade networks with allied nations, I think is an obviously good idea that balances the national security risks, along with the reality that the United States is never going to be the world leader in every single facet of manufacturing.</p><p>Theo Jaffee (50:32)</p><p>So for my last segment, I'm going to shamelessly steal from Tyler Cowen and say we should play a game of overrated or underrated. So overrated or underrated prediction markets.</p><p>Alec Stapp (50:42)</p><p>Let's do it.</p><p>I will say currently underrated just because probably your listeners are people who read works in progress. And our friend Nick Whitaker just wrote a great piece about why prediction markets are overrated. And so I think probably in people's mind, they're currently overrated. I will say they're underrated just because the biggest event in Washington, DC right now or the biggest ongoing conversation is the presidential debate and the aftermath of should Joe Biden step down? Will he step down for the nomi - for the -</p><p>convention, if he steps down, who will the next nominee be? And the primary way this conversation is happening is via people talking about prediction markets, which is crazy. This is a very niche thing relative to years ago. And now it's mainstream for political pundits to talk about prediction markets. And so I think they're a little underrated given their recent progress.</p><p>Theo Jaffee (51:40)</p><p>And a personal lesson from Prediction Markets, when Trump and Biden were both trading at around 50 -50 a couple weeks ago, I bet on Trump yes, but I should have bet on Biden no, because I forgot to take into account the conditional that Biden would actually get the nomination, and I didn't expect him to collapse like this after the debate. So, yeah, lesson for anyone who wants to bet on Prediction Markets. Overrated or underrated? Charter cities.</p><p>Alec Stapp (51:50)</p><p>You</p><p>Gotta be careful out there.</p><p>Hmm underrated probably I think they've had a lot of you know, obviously struggles and false starts over the over the years, but I think Still the idea of like can a fresh start for a city with new institutions have a big impact and I also think that Charter cities as a case study for for incumbent cities to learn from is underrated so if you get a successful charter city in Africa that you know grows to</p><p>even 100 ,000 people, maybe they learn new ways of doing things that can be adapted to cities in America or at least other cities in Africa. And so I think there is a transfer of learning across cities that a charter city could kickstart.</p><p>Theo Jaffee (52:54)</p><p>Overrated or underrated? Existential risk and long -termism.</p><p>Alec Stapp (52:58)</p><p>probably currently underrated. It's obviously taken some huge hits with the controversies around Sam Bankman Fried, FTX, the AI, EAC, AI safety debate. but I think you should just like separate out the ideas from the communities. And at the end of the day, is it possible that there are technologies that could have catastrophic risk and it's like nuclear weapons already exist?</p><p>We have had pandemics that have killed more than 20 million people multiple times. Biotech is advancing very quickly. It's possible we could engineer pathogens that are much more deadly than COVID. And again, like I said, we're very uncertain about the future of AI, but if we actually achieve developing super intelligence, there are risks involved with that. And so I think being realistic about the possibility of existential risk is something people should include in their mental model of the world.</p><p>Theo Jaffee (53:56)</p><p>Overrated or underrated, effective altruism.</p><p>Alec Stapp (54:00)</p><p>And this is where I think it's underrated currently because it got attached to the controversies around long -termism and ex -risk. I think of effective altruism primarily as bed nets to fight malaria in Africa. And like, again, just like take it down to brass tacks because I know there are a lot of like controversial people involved in all sides of this debate. Yeah, and like, yeah, lots of people. And it's like, is it good to try to help others?</p><p>Theo Jaffee (54:13)</p><p>Yeah.</p><p>Savvy Goodread.</p><p>Alec Stapp (54:29)</p><p>Generally, yes. Should we try to be effective about this? Should we try to measure things and use data to be more effective or less effective? Those are kind of unimpeachable ideas, I think. You can disagree with how it operates in practice. And I think the kind of like...</p><p>Theo Jaffee (54:42)</p><p>Yeah, but that's like the Democratic People's Republic of Korea argument.</p><p>Alec Stapp (54:48)</p><p>Sure, sure I guess. I mean, but that's obviously just like, but like, the effect of autism community does donate to bed nets in Africa and I do think that like, actually like reduces malaria deaths.</p><p>Theo Jaffee (55:01)</p><p>Alright, overrated or underrated? Within progress studies circles, climate change.</p><p>Alec Stapp (55:08)</p><p>within progress of these circles?</p><p>Theo Jaffee (55:10)</p><p>Yeah, because probably in the broader society, it's somewhat overrated. People are making these very short -term doom predictions. But within progress studies, do you think it's overrated or underrated?</p><p>Alec Stapp (55:23)</p><p>It's probably still a little overrated, I think. A lot of the tipping point arguments seem to have been refuted of like, the latest IPCC report shows that like the extreme worst case scenarios are increasingly unlikely. And so we need to do our best to limit warming. The arguments for clean energy abundance are over determined. So of course, mitigating the effects of global warming are one of those, but also just the general benefits of clean energy.</p><p>becoming more widespread. Those are really important. And so I think it's probably still in the progress studies community there are some folks who aren't up on the latest data in terms of like the catastrophic risks from the possibility of catastrophic risk from climate change being overrated.</p><p>Theo Jaffee (56:10)</p><p>Also within progress studies, overrated or underrated nuclear energy. Cause last time on the podcast I had Casey Handmer, who is extremely, extremely bullish on solar and kind of bearish on nuclear energy because it's expensive and complicated and solar is cheap and simple.</p><p>Alec Stapp (56:19)</p><p>Mm -hmm.</p><p>Yeah, I think it's still overrated. I think the way I talk about nuclear is it is extremely stupid for us to shut down operational nuclear power plants and we should be doing everything we can to bring those back online once we shut down to extend the life of existing Gen 2 nuclear power plants. We should be reforming the Nuclear Regulatory Commission to make it viable to have a chance for small modular reactors to make sure that</p><p>a possible future of nuclear fusion is not killed by the regulatory state. There are lots of things we should be doing, but if you're just trying to prognosticate about the future, nuclear power has experienced negative learning curves in almost every country for decades. It gets more expensive to build over time. South Korea used to be an exception. Now their costs seem to be increasing. France costs are increasing. They have a legacy nuclear fleet.</p><p>I'm sure your listeners heard it for Casey, the opposite was obviously true for solar. It just keeps getting cheaper over time. It's an exponential curve in terms of deployment. And so if you're betting on what the future is going to look like, it's going to look like much more like solar than nuclear. Even if we should have made different decisions in the 1970s with nuclear, we still should include it in our portfolio. And it's moronic to shut down existing nuclear power plants because they are safe and deliver clean 24 -7 energy. But are we likely to</p><p>quickly fix the cost problem, I'm pessimistic.</p><p>Theo Jaffee (58:01)</p><p>Overrated or underrated, California.</p><p>Alec Stapp (58:05)</p><p>California currently underrated probably. I mean, it's one of these things of when you have LA, San Francisco, the future of AI is being built in San Francisco, great climate, the coast, it's beautiful. Yeah, it's California. So I think currently underrated because of temporary problems with, major problems with the housing crisis, major problems with.</p><p>Theo Jaffee (58:23)</p><p>Yeah, I'm there right now.</p><p>Alec Stapp (58:35)</p><p>drug addiction, homelessness, et cetera, but I'm long run bullish on California because it has all the fundamentals and just needs to fix some of these policy errors.</p><p>Theo Jaffee (58:45)</p><p>And what about Florida, overrated or underrated?</p><p>Alec Stapp (58:50)</p><p>Probably currently a bit overrated. I think people underestimate the importance of weather and the hot and humid summers there I think make it hard to bet on like super long term. So Florida's great. They're innovating and people are moving there. But I think for the real bleeding edge frontier tech stuff, I think you need more than what Florida currently has.</p><p>Theo Jaffee (59:15)</p><p>Yeah, I grew up there and there's a reason that I'm here in San Francisco for the summer and not in South Florida. Yeah.</p><p>Alec Stapp (59:21)</p><p>Exactly. Weather matters a lot. It's real shame.</p><p>Theo Jaffee (59:25)</p><p>Well, it's not just the weather, it's all the tech people, but you know, the weather helps. And then what is the, yeah, what is the single most overrated policy or issue, either among progress studies people or just among the general public?</p><p>Alec Stapp (59:30)</p><p>Yeah, but why are the tech people, you know, it's like it's a bit circular.</p><p>Overriding the sense that people care about it, but it actually won't move the needle.</p><p>Theo Jaffee (59:46)</p><p>Yes.</p><p>Alec Stapp (59:48)</p><p>Uhhh...</p><p>I mean, because we were talking about it, it's on top of my mind, but rural broadband subsidies, it is, on a per dollar basis, almost a complete waste of money. And if we think that there is a redistribution element to this where we need to subsidize people's access to internet in rural areas, give them Starlink terminals and call it a day. But in the tech policy community, this issue is talked about ad nauseum, and it's almost a complete waste of money.</p><p>Theo Jaffee (1:00:26)</p><p>And then finally, what's the most underrated policy or issue?</p><p>Alec Stapp (1:00:31)</p><p>Most underrated. The most underrated, yeah. I think probably is on my mind today because there was a great piece in the New York Times about using elevators as an example of why there's cost bloat in all sorts of building construction. And I think it gets to a broader problem around building codes and standardization that I think is like the one of the most underrated ideas is</p><p>Theo Jaffee (1:00:34)</p><p>like the most underrated.</p><p>Alec Stapp (1:00:59)</p><p>The US federal system of government with local, state, and federal regulators and authorities leads to a lack of economies of scale in the construction industry writ large, whether it's commercial, residential housing, manufacturing buildings, et cetera. When you want to build anything in the United States, we have this fitocracy where so many regulators get to weigh in and impose different standards that it's very hard for us to be integrated in the global economy for building supplies as well as having any kind of national.</p><p>companies and so the federal government should use every carrot and stick it has to align building codes and standards so that companies can reach higher economies of scale and start to automate more.</p><p>Theo Jaffee (1:01:44)</p><p>All right, well, that's a good place to wrap it up, I think. So thank you so much, Alec Stapp, for coming on the podcast.</p><p>Alec Stapp (1:01:49)</p><p>Thanks for having me, Theo.</p><p>Theo Jaffee (1:01:52)</p><p>Absolutely.</p>]]></content:encoded></item><item><title><![CDATA[#17: Casey Handmer]]></title><description><![CDATA[Terraform, solar, space, Hyperloop, and how to think]]></description><link>https://www.theojaffee.com/p/17-casey-handmer</link><guid isPermaLink="false">https://www.theojaffee.com/p/17-casey-handmer</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Tue, 25 Jun 2024 00:11:31 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/145965580/ed21f1f9cb5ec9c4825bb8bce7f97632.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Casey Handmer is the founder and CEO of Terraform Industries and a physicist, immigrant, pilot, dad, solar enthusiast, Caltech physics PhD and former Hyperloop One levitation engineer and NASA JPL software system architect.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:40 - Why don&#8217;t other people do what Terraform does?</p><p>2:51 - Why is solar better than nuclear fusion?</p><p>5:27 - Could carbon emissions actually be good?</p><p>8:38 - Why isn&#8217;t anyone stopping global warming with sulfur?</p><p>13:20 - Can America build something like Terraform?</p><p>20:53 - Solar and nuclear</p><p>23:10 - Why not terraform Venus instead of Mars?</p><p>30:47 - Why did Casey work at NASA instead of SpaceX?</p><p>37:18 - Why is Elon the only person with multiple huge companies?</p><p>39:59 - Why didn&#8217;t the Hyperloop work?</p><p>42:26 - Tile the desert with solar</p><p>46:03 - How does solar change geopolitics?</p><p>48:30 - How does Casey manage his time?</p><p>53:24 - How do you develop first principles thinking?</p><p>56:28 - Favorite place Casey has traveled to</p><p>59:21 - Outro</p><h3>Links</h3><p>Casey&#8217;s Blog: <a href="https://caseyhandmer.wordpress.com/">https://caseyhandmer.wordpress.com/</a></p><ul><li><p>You Should Be Working On Hardware: <a href="https://caseyhandmer.wordpress.com/2023/08/25/you-should-be-working-on-hardware/">https://caseyhandmer.wordpress.com/2023/08/25/you-should-be-working-on-hardware/</a></p></li><li><p>The solar industrial revolution is the biggest investment opportunity in history: <a href="https://caseyhandmer.wordpress.com/2024/05/22/the-solar-industrial-revolution-is-the-biggest-investment-opportunity-in-history/">https://caseyhandmer.wordpress.com/2024/05/22/the-solar-industrial-revolution-is-the-biggest-investment-opportunity-in-history/</a></p></li><li><p>Future of Energy Reading List: <a href="https://caseyhandmer.wordpress.com/2023/10/19/future-of-energy-reading-list/">https://caseyhandmer.wordpress.com/2023/10/19/future-of-energy-reading-list/</a></p></li><li><p>Elon Musk Is Not Understood: <a href="https://caseyhandmer.wordpress.com/2024/01/02/elon-musk-is-not-understood/">https://caseyhandmer.wordpress.com/2024/01/02/elon-musk-is-not-understood/</a></p></li><li><p>Why High Speed Rail Hasn&#8217;t Caught On: <a href="https://caseyhandmer.wordpress.com/2022/10/11/why-high-speed-rail-hasnt-caught-on/">https://caseyhandmer.wordpress.com/2022/10/11/why-high-speed-rail-hasnt-caught-on/</a></p></li></ul><p>Casey&#8217;s Website: <a href="http://caseyhandmer.com/">http://caseyhandmer.com/</a></p><p>Casey&#8217;s Twitter: <a href="https://x.com/cjhandmer">https://x.com/cjhandmer</a></p><p>Terraform Industries: <a href="https://terraformindustries.com/">https://terraformindustries.com/</a></p><p>Terraform Blog: <a href="https://terraformindustries.wordpress.com/">https://terraformindustries.wordpress.com/</a></p><ul><li><p>Scaling Carbon Capture: <a href="https://terraformindustries.wordpress.com/2022/07/24/scaling-carbon-capture/">https://terraformindustries.wordpress.com/2022/07/24/scaling-carbon-capture/</a></p></li></ul><ul><li><p>Terraform Industries Whitepaper: <a href="https://terraformindustries.wordpress.com/2022/07/24/terraform-industries-whitepaper/">https://terraformindustries.wordpress.com/2022/07/24/terraform-industries-whitepaper/</a></p></li><li><p>Terraform Industries Whitepaper 2.0: <a href="https://terraformindustries.wordpress.com/2023/01/09/terraform-industries-whitepaper-2-0/">https://terraformindustries.wordpress.com/2023/01/09/terraform-industries-whitepaper-2-0/</a></p></li><li><p>Permitting Reform or Death: <a href="https://terraformindustries.wordpress.com/2023/11/10/permitting-reform-or-death/">https://terraformindustries.wordpress.com/2023/11/10/permitting-reform-or-death/</a></p></li></ul><p>Transcript: <a href="https://www.theojaffee.com/p/17-casey-handmer">https://www.theojaffee.com/p/17-casey-handmer</a></p><p>More Episodes</p><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><h1>Transcript</h1><p>Theo Jaffee (01:29)</p><p>Hi, welcome back to episode 17 of the Theo Jaffee podcast. We're here today with Casey Handmer.</p><p>Casey Handmer (01:36)</p><p>Hi, thanks. It's great to be here.</p><p>Theo Jaffee (01:39)</p><p>Yeah, thank you. So for first question, what you're doing at Terraform, which I'll explain in the intro just to make it clear to everyone, why isn't everyone else doing what Terraform is doing? Like it seems like a very important market need to create like hydrocarbons that will not destroy the environment while also taking CO2 out of the atmosphere.</p><p>Casey Handmer (02:04)</p><p>I think they will pretty soon. I think we're just kind of at the cusp where this technology becomes, it goes from being economically extremely unimpressive to economically inevitable.</p><p>Theo Jaffee (02:16)</p><p>And is that just because solar will continue to get cheaper?</p><p>Casey Handmer (02:19)</p><p>Yeah, that's the key thing. I mean, if you're making fuel fuels a source of energy, you need a cheap energy input to make that work. And actually for a long time, in many ways, oil and gas has been one of the cheapest energy inputs. So it would be very strange to take some other form of energy that's more expensive and then lossily transform that into oil and gas. But of course, that's not going to be the case. And solar is between five and 10 times cheaper than...</p><p>and coal oil now. So actually you can do the reverse conversion and take the efficiency and still win economically.</p><p>Theo Jaffee (02:51)</p><p>Hmm. So yeah, you're like a huge fan of solar and you've written about why solar is better than wind, why it's better than nuclear. But why is solar better than nuclear fusion? Because fusion would use much less land and it has like almost none of the drawbacks of fission. So why not?</p><p>Casey Handmer (03:08)</p><p>Well, solar is nuclear fusion. The reactor's in the sky and it comes up every day. And actually, if you think about the platonic ideal of a fusion reactor, you have a completely free heat source, some kind of glowy gas thing that's confined by magnetic fields or something. But you somehow figure out how to build that completely free. And then you have a 25 to 30 % direct energy conversion efficiency system that sits outside that magnetic containment system with zero moving parts, no turbines, no steam handling, no neutron embrittlement, no</p><p>Theo Jaffee (03:10)</p><p>Well.</p><p>Casey Handmer (03:38)</p><p>nothing. And then some intervening filtering or shielding of some sort so you don't end up with neutron products and neutron embrittlement and other problems. And then basically the platonic ideal of this energy conversion system is basically just a solar array or a solar panel. And you say, well, why don't we just delete the entire reactor and put that thing outside where it's in the sun and it works. This is slightly tongue in cheek, obviously. I want to say, for the record, I think fusions are a really cool technology. And I</p><p>really hope that we figure out how to make it work but in order for Fusion to compete with Solar, this is what it has to do.</p><p>First of all, it has to quite a great work. We have to achieve Q greater than one in a real world nuclear fusion reactor. Then we have to achieve Q high enough that we can extract heat from the reactor to boil water or otherwise allow direct conversion of electricity. Then we have to do that at a good enough price that we can compete on price with other forms of energy that are notionally available 24 hours a day, which is on the order of 50 bucks a megawatt hour. Then we have to be able to produce these reactors at a sensible pace. And I'm talking like,</p><p>at least hundreds of gigawatts of production capacity of these reactors per year. And that's a really tough problem, right? You have to solve the science problem, then you have to solve the economic problem, then you have to solve the manufacturing problem. And you have to do all of that before solar solves the problem for all of us anyway, which it's pretty close to doing. So in terms of the time window for this to occur, if fusion arrives in 2050, that'll be too late, really. It will not be able to, even if it's able to compete on the cost, I think it'll find it has very marginal markets because solar will already kind of own everything. If it becomes extremely compelling on cost, which I think is quite a lack.</p><p>because just diffusion reactors are inherently more complicated than solar arrays. Then, you know, we can always rip up the solar arrays and put them in a hole in the ground and switch over to fusion, and that would be a pretty cool thing as well. I think that would be a win -win situation.</p><p>Theo Jaffee (05:27)</p><p>So is there any possibility that, I've seen this argument before, so I wanted to get your take on it, but could carbon emissions actually be good? Like in a world that we don't reduce them, first of all, lots of climate predictions from like the seventies have been like wildly pessimistic compared to what's actually happened. This has been like a consistent theme and CO2 boosts plant growth, including for agriculture. Like could the offset of like higher wet bulb temperatures in poor countries be...</p><p>Casey Handmer (05:38)</p><p>Mm.</p><p>Mm -hmm.</p><p>Theo Jaffee (05:56)</p><p>essentially like counteracted by like it being easier to grow food. And then the relationship between CO2 emissions and warming is logarithmic. So like you can increase CO2 a lot and it only warms the planet a little.</p><p>Casey Handmer (06:06)</p><p>Yeah.</p><p>Yeah, all these things are true enough.</p><p>And it's not controversial that plants are growing like crazy now because of longer growing seasons and higher CO2 levels in the air. And I think if you had the choice to set the CO2 level in the atmosphere at any level we desire, which we're going to have that ability to do in the next decade or two. So it's probably figure out now what we think a good set point is. I would say about 350 ppm is quite good. So that means that the grasslands are no longer starving forests for CO2 availability due to the inherent inefficiency.</p><p>of different forms of photosynthesis or their CO2 uptake capability. You also get a little bit warming, slightly milder winters, particularly in the north, which generally speaking, cold kills 10 times more people than heat. And then of course we've got solar powered air conditioning which can help make the hotter areas more livable in summer. But the default plan, which is we just keep on cranking up CO2 in the atmosphere by two or three or four ppm every year, is absolutely crazy. And I think that we're totally</p><p>totally playing Russian roulette here. Because there will definitely come points when the atmosphere and surface gas exchange mechanisms destabilize in far more catastrophic ways than they already have. And so here I'm talking about once the Greenland ice sheet or the Western Antarctic ice sheet starts moving, we will not be able to stop it. At that point, our coastal cities will be flooded and there's not a damn thing we can do about it. And much of our arable land as well. And winding that back will take tens of thousands of years. So we should probably not do it.</p><p>that. And also if we get to the point where we melt the permafrost and release a lot of biogenic methane that's trapped there, then that will also really put a thumb on the scales that will require us to take much, much more drastic solar geoengineering response to that in order to keep a lid on temperature. It's kind of a crazy thing, but what Terraform is doing is finding a carbon neutral supply chain source for hydrocarbons for everyone forever. We will be there in 20 years. And so the critical thing is, A, find some way to stop heat from</p><p>getting out of control in the next 20 -30 years and then find some way to wind back the existing CO2 levels to a more sustainable maybe around 350 ppm or thereabouts in that intervening time so that we don't have to do solid resource management, solid radiation management with sulfur in the atmosphere or something forever and then ultimately turn that process off as well once we're on the fully synthetic hydrocarbon supply chain.</p><p>Theo Jaffee (08:38)</p><p>On the topic of solar radiation management, we already asked why isn't everyone doing what Terraform is doing, but why aren't anyone doing solar radiation management with sulfur? It seems like it would be relatively easy for someone with the resources of a nation state. Yeah, it's like a few billion dollars, right?</p><p>Casey Handmer (08:53)</p><p>It is. you could do it as a retired Googler. Yeah, yeah, yeah, less than that, less than that. That's astonishingly cheap. And I actually don't know for sure that people aren't doing it on the sly. I know of a number of entities that are doing it officially, but at a relatively small scale, but doing it publicly at a small scale. It seems insane to me that we've kind of built this.</p><p>cultural precedent of the precautionary principles just in the last generation or two that will, if we don't kind of agitate about it, it will end our civilization because we'll kind of by default prefer stasis and stasis will take us off the edge of the cliff.</p><p>But the thing is, you can go from nothing to a full deployment of SRM in less than a year if you really want to. There are numerous countries on Earth that remove sulfur from fuels and stockpile it. You don't have to burn very many of those and you basically have the effect that you want.</p><p>my position would be that we should start doing it now incrementally so that we can measure the effects more precisely and avoid kind of the impacts of catastrophic or very rapid changes in solar radiation hitting the surface of the Earth. And I think people are waking up to that right now. I think that this was, compared to say three or four years ago, it's much, much more mainstream and we should continue to talk about it.</p><p>Theo Jaffee (10:14)</p><p>Yeah, but why hasn't, like, China or someone just spent, like, a couple billion dollars?</p><p>Casey Handmer (10:16)</p><p>We don't know they haven't. We don't know they haven't.</p><p>Theo Jaffee (10:24)</p><p>they had when we see effects on global temperature.</p><p>Casey Handmer (10:27)</p><p>Well, in some ways, the largest short -term effect on global temperature that we've seen is the effect of desulphurization of coal and marine fuel.</p><p>So just this last couple of years, we've had incredible heating in the North Atlantic, and it seems like at least half of that signal is accounted for by taking sulfur out of marine diesels. So in some ways, we've taken the accidental geoengineering that we're doing with CO2 emissions, which we've done mostly over the last 100 years, and we've turned that up to 11 by taking the sulfur out of fuels. And there are good reasons to take the sulfur out of fuels. Sulfur is acid and so on, because acid rain and problems, respiratory problems and so on in port areas.</p><p>of environmental consequences. But it is also true that it roughly masks half to three quarters of the effect of CO2 -induced global warming. And by taking it out, it comes out of the atmosphere in a year or two, and we're actually getting the full brunt of it now. And we're in the midst of a months -long heat wave in India right now. And there'll be mostly ones across the American South and much of the world. And I think severe flooding is occurring all over the world at any one time. There's always severe floods.</p><p>occurred somewhere and I think people are gradually waking up to the fact that...</p><p>that we need to be a little bit more deliberate about how we manage our use of fossil hydrocarbons and the sulfur that often comes with them in order to make sure that we don't inadvertently rip off yet another band -aid and make this situation far, far, far worse than it currently is. And it's easy to kind of talk about this in the abstract, but sooner or later we'll have a mass casualty event, right? And the hypothesis that we've seen in some science fiction that's been published recently is that sooner or later, India or China or countries,</p><p>in Africa or someone will experience mass casualty event and then they will unilaterally engage in solar radiation management and there's not a damn thing that anyone will be able to do to stop them unless they want to you know, decapitate the government via force of war which I think is hopefully off the table. And so what I'm saying, what I'm continuing to say is like the United States ideally should be a country that you know legalizes and promotes this technology and uses its incredible array of you know NASA based space sensors and so on to monitor and regulate and understand</p><p>the effects of this technology so that a, we never have the mass casualty event at all. Ideally, let's not kill tens of millions of people for no reason. And b, by avoiding that mass casualty event, we avoid a kind of panic response that we'd otherwise have. I didn't mean to bang on my SRM for like 15 minutes of your podcast, but it's something I'm quite passionate about.</p><p>Theo Jaffee (12:56)</p><p>No, Serum is cool.</p><p>Casey Handmer (12:58)</p><p>There's a company called Make Sunsets. So I'll give them a shout out. Super cool guys have started that and they're doing, you can go on their website and you can buy basically heat offset credits. They'll launch a balloon for you and then send you a certificate. And I think that, you know, they're tiny, tiny operation right now. But I think that's a step in the right direction.</p><p>Theo Jaffee (13:20)</p><p>So you talk about Terraform in the Terraform blog, kind of like a thing that will happen almost by default as solar gets cheaper. Like, yeah, you know, as solar gets cheaper, we'll tile the desert with solar panels and we'll build like tens of thousands of factories at a rate of like one every few days for like decades.</p><p>Casey Handmer (13:29)</p><p>Yeah.</p><p>Yeah, we might build some bigger factories and fewer of them, but yeah, that's the basic idea.</p><p>Theo Jaffee (13:44)</p><p>Yeah, but is this actually true with the level of social organization and anti -builder mindset that we currently have in America? It seems like we can't outbuild China right now, not because China has more capitalism or a stronger economy even, but just because they allow companies to build things like high -speed rail or massive solar farms.</p><p>Casey Handmer (13:53)</p><p>Mm.</p><p>Yeah.</p><p>Well actually when it comes to manufacturing, China is probably more capitalist than the United States. And I think that's often lost in the mix. China is initially a communist country, but really it's an authoritarian dictatorship and its economy has been, since stone -shoving deregulated the command aspects of the economy, it's been as capitalist did not more than the United States, at least in the kind of private sector manufacturing sphere. But that said, sorry, the United States is currently experiencing an unprecedented manufacturing boom. There's more money.</p><p>and more factories being built than at any time in history, including the lead up to World War II. And so I think this idea that all we can do for China is no longer a valid notion and that we should probably prepare ourselves for another decade or two of US manufacturing dominance, particularly with higher tech and automated manufacturing.</p><p>Theo Jaffee (14:55)</p><p>Well, that's the thing. We have the manufacturing capacity. But it doesn't matter if you have the capacity to build millions of solar panels if you can't actually get the permits to put the solar panels in the desert.</p><p>Casey Handmer (15:04)</p><p>Hmm.</p><p>Well, that actually is kind of the challenge, if that makes any sense. But actually, I would say that in contrast to SRM,</p><p>oil and gas production, the United States has always been kind of devolved between like tens of thousands of family owned independent oil drillers and oil producers. And so there's kind of an economic precedent for small scrappy startup to enter this space and all the economic framework is already in place to make that work, which is wonderful as far as I'm concerned. The challenge as far as like mass scale solar deployment is that there is currently a very expensive and time expensive process required to deploy solar arrays whose major effect is that it just</p><p>kills tens of thousands of Americans every year for no reason from the effects of legacy coal plant power production that would otherwise be displaced by new solar development. But the thing is, when it comes to oil and gas, the outcome is never in doubt. You might delay it, you might slow it down, but sooner or later the infrastructure will be built and will be deployed because the economic arbitrage, if you like, the pressure between the amount of value that can be generated versus not becomes so great that even the organs of state are unable to affect it.</p><p>resist them. And that actually has been a continuing frustration for opponents of oil development and offshore oil drilling and export terminal development and refineries and so on. But it will actually be a source of, I think, major joy for much of the justifiably environmentally concerned activist community when they realize that the same economic forcing function is now fighting for them as opposed to against the ideals that they hold dear. And we're already seeing that. So,</p><p>So that's kind of exciting. It's just economics. At the end of the day, if you found some way of making some fundamental product that everyone needs, and everyone pays a lot of money for, and you found some way of making it three times cheaper, well, that's just what's going to happen. And that's what has happened essentially in every country on Earth since forever. It doesn't matter what their economic system is. Everyone needs oil.</p><p>Theo Jaffee (17:07)</p><p>So a lot of people are now talking about like, America can't build. We no longer have this mindset. We have this, yeah, like what you talked about with the precautionary principle. But the reason that we developed the precautionary principle with building industrial mega projects is because we may have been getting to a point in the beginning of the 20th century where people may have been too far in the other direction. So like the famous like 1950s plan,</p><p>in the San Francisco Bay Area to drain half of the bay and replace it with reclaimed land with shipping channels and stuff. A lot of Californians didn't like that because it would have meant significantly impacting the ecology of the area. So what's the right balance, do you think, of building megaprojects and preserving the environment?</p><p>Casey Handmer (17:54)</p><p>Yeah.</p><p>Yeah, well that's a good question. At the time, of course, San Francisco Bay was mostly just a dump, right, like there are a whole series of hills and islands along the shore of the peninsula which are, you know, remediated dumps.</p><p>And one of the consequences of these kind of major thought thinking plans for provisioning additional development space and so on is that San Francisco is now by far the richest city in the history of humanity with some of the oldest and most dilapidated housing stock at some of the most highly unaffordable prices. So high in fact that it cannot function as a city because the diversity of workers that you need to make a city function cannot afford to live there. And so it's in, you know.</p><p>I don't know, like San Francisco has always been a city that's capable of reinventing itself and rebuilding itself, but it's never been more clear that like something desperately needs to change there. Anyway, I don't want to get kind of on that horse too much. But yes, I'm totally, yeah, yeah. So I totally, you may well be, I totally agree that,</p><p>Theo Jaffee (18:51)</p><p>Yeah, we way overreacted. And I think I'm on reclaimed land in San Francisco right now. Yeah.</p><p>Casey Handmer (19:04)</p><p>that the precautionary principle exists for a reason and that environmental protection regulations exist for a reason and that the fraction of the Earth's surface that has not been affected negatively in some way by human activity is relatively small and that we should probably do what we can to preserve those areas. But at the same time,</p><p>we are an economy that's not standing still, right? And so doing nothing is a choice. And if you do nothing about kind of retrofitting and replacing legacy energy production infrastructure that we all depend on to avoid starving to death, then you continue to accept the consequences of that in terms of the environmental and health impacts of those technologies, which we know now are many, many, many times worse than the impacts of putting out a solar array. And the nice thing about solar arrays is that, unlike pouring a bunch of concrete and installing a nuclear</p><p>power plant or something like that. If you decide after 30 years you're done with it and you want to replace it, then you just derack it, rip the racks out of the ground and it goes back to being pretty much exactly the same as once before. You don't have to break up any any subsystem, you don't have to decontaminate any areas, you don't have to do a bunch of like you know settling ponds, cooling ponds, ash tailing ponds etc etc which are standard practice for for coal generation, for nuclear generation etc. And so I think that when it comes to sensible changes to environmental permitting</p><p>regulations, there should be a recognition of the fact that if you were displacing existing use, which is far more damaging, then you should probably get a pass. And if your displacement or if your thing you're building is extremely easy to undo, then you should also get a pass that's proportional to that. So for example, it is completely conceivable when you are developing a solar array that you could put cash in escrow that would completely pay for the cleanup and remediation of that site.</p><p>whereas that is impossible with almost any other kind of development.</p><p>Theo Jaffee (20:53)</p><p>Yeah, I think reading your blog has turned me into like much more of a solar bowl. And this is still like something that very few people, even on like optimist, like, yak builder Twitter are talking about. They're all like, we need to build more nuclear, nuclear, nuclear, nuclear. And like, they're kind of right. And that nuclear is probably better than like coal, but yeah, like not many people are talking about solar.</p><p>Casey Handmer (21:12)</p><p>Yeah, for sure. I mean, that's quite clear. Look at what's happening in France, right? Like, France for a variety of interesting...</p><p>of socio -political reasons decided that it wanted to have energy independence and also large sources of its own supply chain for fissile materials. And they went and did it, and the results speak for themselves. And I think that if you were wondering about energy policy in the 1960s and 1970s, and you had sufficient access to uranium deposits in your own country and also a large industrial base, so you can support that technology, then it's a no -brainer to have then developed nuclear in the way that France did. That said, we are now at the</p><p>point where it will suit me cheaper for France to decommission or turn off their nuclear power plants or at least mothball them and deploy new solar and get their electricity from solar. And I think that relatively few people have run those numbers and seen that that's the case. But it is already the case and has been the case for many years now that it is cheaper to build and operate a new solar plant than it is to continue to operate an existing fully depreciated coal plant, for example. And that is also the case with gas beaker plants. If it is not</p><p>already shortly be the case with gas -combined cycle plants. Solar has overtaken wind in the last few years as well and so it's just a matter of time. It's like the Grim Reaper meme as it works its way down. Because it's true that a fully -appreciated nuclear plant you don't have to pay for its construction cost anymore but...</p><p>But it's also true that those materials and systems don't last forever. And so, for example, about two years ago now, France encountered an issue with their reactors that affected lots of them. It took something like a third or two thirds of them offline for a season as they repaired something to do with corrosion in an exchanger. And that's not cheap. It's extremely expensive to get these things online and working indefinitely. It's certainly cheaper than losing an energy war with Russia, Germany if you're listening, but it's certainly not free.</p><p>Theo Jaffee (23:10)</p><p>So let's talk about Giga projects. You've written a lot about colonizing Mars. And obviously I'm no expert on planetary scale terraforming, but Kurzgesagt, the YouTube channel, has a video that I think is really interesting. That's about terraforming Venus instead of Mars, because Venus has more solar energy, it's closer to the sun, has similar gravity to Earth, it's bigger than Mars. And so their plan is, step one, you make a giant, annular mirror system.</p><p>Casey Handmer (23:16)</p><p>Yes.</p><p>Mmm.</p><p>Theo Jaffee (23:39)</p><p>that directs solar energy away from the atmosphere that freezes the CO2 atmosphere of Venus. And then you use robots and mass drivers to shoot the excess CO2 and nitrogen into space. Because obviously like too much CO2 means like humans can't breathe on it. And then you fire water in the form of ice from Jupiter's moon Europa using again, robots and mass tethers or mass drivers and space tethers.</p><p>Casey Handmer (23:42)</p><p>Yep.</p><p>Mm -hmm.</p><p>Theo Jaffee (24:07)</p><p>And then you will add more mirrors so you can heat up the planet gradually without torching it, because if you just remove the existing mirrors, then it would get like grilled by the sun because a Venus day is so much longer, Venus rotates so much slower. So one side of the planet would get cooked. And then you'll add like trillions of cyanobacteria, which will photosynthesize. It'll turn that CO2 into oxygen and it'll fix the atmospheric nitrogen to usable nutrients and then grind down the surface into soil and add plants, trees and animals.</p><p>Casey Handmer (24:20)</p><p>Yeah.</p><p>Theo Jaffee (24:37)</p><p>So that would take a very long time to actually work all the way. But why do we hear so much about terraforming Mars and very little about terraforming Venus?</p><p>Casey Handmer (24:48)</p><p>Yeah, so you've asked the right person. And some of you will remember there is a platform called Quora. And actually one of my first viral posts on Quora was like, what is easier to terraform, Mars or Venus? So maybe you should dig that out and take a look at it. It was like almost 12 years ago now, something like that. It turns out that...</p><p>terraforming Mars is about a billion times easier. That's the fundamental reason. Actually, I was involved in a workshop recently where we calculated that you could probably achieve a degree of temperature rise, a Kelvin degree of temperature rise on Mars for about a billion dollars of marginal investment. So if you wanted to heat Mars up by...</p><p>actually about a billion dollars per year, something like that. But if you wanted to heat Mars up by, say, 40 Kelvin or something, so it's just about freezing, then that's actually quite affordable. That's much less than, say, Google's cash flow. Whereas the cost to deploy a planetary -scale mirror...</p><p>on above Venus in the Venus Sun L1 point and then wait for the atmosphere to freeze out, which will take about 140 years, and then deploy the terawatts of nuclear reactors onto the surface that you would need to use the mass drivers to fling stuff. Actually, if you just want to get rid of the CO2, you can just bury it. You don't actually have to fling it off the planet. But if you wanted to use it as mass to speed the planet's rotation up, then you could fling it off with mass drivers, which would be pretty funny.</p><p>And then actually, as far as getting water goes, I would probably advocate just drilling holes and getting it out of Venus's crust because there's way, way more water in Venus's crust than you could get from even an entire moon of Jupiter and it's right there. You don't have to shoot it across the solar system.</p><p>Yeah, and then you have to design an atmosphere that is able to support life, but is also significantly more thermally transparent than Earth's atmosphere. So Earth's atmosphere is responsible for something like 15 Kelvin of heating, which prevents the surface of Earth from being largely frozen. But sometimes in the historical past, it was, or prehistoric past, it was frozen like the entire planet basically glaciated in periods called snowball Earth. But on Venus, you would almost certainly have to have some kind of shade system permanently to avoid</p><p>once again undergoing runway warming.</p><p>Theo Jaffee (27:05)</p><p>So if you just bury the CO2, then like, couldn't the CO2 just escape back into the atmosphere if you have like a volcanic eruption or something?</p><p>Casey Handmer (27:07)</p><p>Hmm.</p><p>I mean, impressive, but...</p><p>But effectively you treat it as landfill, right? So you just bury it deep enough and it will be stable at that pressure. It's the same idea that people are talking about with CO2 injection for carbon capture and sequestration here on Earth. It's probably somewhere, but we'll come back out. But bear in mind, if you built the infrastructure that's necessary to go and bury how many quintillion tons of CO2 is in Venus' atmosphere underground, and you get it all underground and you build the surface and you find out a rate of like a trillion tons a year, well, that's...</p><p>millionth of your current industrial power to bury that all again. So you just have to keep up with emissions and otherwise stabilize the atmosphere. You need CO2 in the atmosphere anyway. You just don't want like a 200 bar hot house, sulfuric acid clouds and stuff. I'd say like as a destination, some people are very team Venus. I'm a bit dubious about it because the gravity is so high. So to get from Venus back to Earth is almost as hard as it is to get from Earth to Venus and it's extremely hard to get off Earth. So give yourself a break.</p><p>and just go to a lower gravity world first.</p><p>Theo Jaffee (28:22)</p><p>first, so like we should eventually terraform Venus.</p><p>Casey Handmer (28:25)</p><p>If we find that we have a shortage of planetary surface area, then yes, but I just don't, I don't know if that's going to be a major concern for us.</p><p>Yeah, in some ways, like...</p><p>Yeah, it could be done. Maybe Venus is for building an orbital actually. Maybe you use Venus to build, basically take the whole planet apart and build giant space station instead. Because you've got rotating space stations about a thousand times more surface area per unit mass.</p><p>Theo Jaffee (28:51)</p><p>like an O 'Neill cylinder.</p><p>Casey Handmer (28:55)</p><p>Yeah, some kind of giant ring, I don't know. There's this concept from the Banks, in Banks' books called the orbital, which is a ring that is so large that it's a circular period that creates gravity is equal to 24 hours. And it turns out that it's probably impossible to build one of these out of materials that we know how to build.</p><p>because it would break apart from the force. So either you make it turn slower or you make it smaller, or a bit of both. But I would be very surprised if I lived long enough for this to be something that's really occupying a lot of brain sweat for me, worrying about. I think that solving the set of problems required to do something meaningful on Mars is a lifetime's worth of incredibly intensive effort.</p><p>Theo Jaffee (29:49)</p><p>But in like one human lifetime from now, assuming all goes well, we should have like a permanent human presence on Mars. Why would people actually want to live there? You know, it's cold, it's really far away, the wifi is bad, there's lots of latency between Earth and Mars. There's not much lighting, not much natural lighting at least. So what would make people want to go to Mars?</p><p>Casey Handmer (29:55)</p><p>Yep.</p><p>Yeah.</p><p>Yeah, my wife wanted to go work in Antarctica and she did. She spent most of 2016 overwintering at the South Pole where the Wi -Fi was worse than terrible and it was very cold. The air was breathable but very cold obviously. Food selection limited. Company limited. It turns out that most of us kind of prefer a comfortable life but some people are just kind of...</p><p>pioneers one way or another and so I don't think there'd be a shortage of people who want to do that and even if you look at the Venn diagram of people who want to do that and people who have the skills to make a meaningful contribution I think it would be no shortage of people.</p><p>Theo Jaffee (30:47)</p><p>So you did a lot of this Terraform investigation while you were working at NASA JPL, but you're also like a huge fan of SpaceX and Starship and Starlink and like not a huge fan of the space launch system. Although side note, I did actually watch the Artemis One launch live from Florida and it was really, really cool. Yeah.</p><p>Casey Handmer (30:59)</p><p>Hehe.</p><p>Yeah.</p><p>That's cool. Yeah, I mean, I'm Team Rocket when it comes to like lighting the candle. Don't get me wrong. I've never seen a rocket launcher. I was like, I feel worse as a person having seen that. But the thing is like, you know, as a way of getting dopamine, as a way of entertaining people, you know.</p><p>I think we need to be circumspect about the fact that SLS is, for a whole variety of reasons, that I've exploited in depth on my blog and other people have too. And it's fairly openly understood now. It's an extremely expensive, extremely wasteful, extremely dangerous way of going about solving these problems. And I actually think it...</p><p>It speaks poorly to US technical integrity to continue to maintain this polite fiction that it is a good idea. It's quite evidently a terrible idea and sooner or later it will kill someone and then it will be impossible to deny but someone will have died. So, yeah, I think that...</p><p>It's one of these things, it's a bit like Fusion actually, in the sense that maybe if it had done what it promised to do, which is reuse parts from the shuttle to reduce complexity and development time and actually got to the launch pad and launched and achieved a higher launch cadence within a few years, then maybe it would have had a window of 10 or 15 years where it could have made a meaningful contribution. But instead it's just been this giant vampire squid sucking the money out of NASA and producing almost nothing in return. And I think we need to be really pragmatic about that.</p><p>Theo Jaffee (32:26)</p><p>Yeah, so given that, why did you work in NASA instead of SpaceX?</p><p>Casey Handmer (32:31)</p><p>That's a good question. So when I worked at NASA, I worked at JPL, which is the Caltech operated deep space robotics center. And it's not related to the development of the SLS or the human space flight program. And I worked on GPS related technologies, which are critical to national security and scientific applications and also studying global warming. I'm quite proud of the work that we did there. For better or for worse, it had its challenges. And I also got to participate to a limited extent with the Mars Exploration Program, with the rovers and stuff there.</p><p>And you know, LA is where I happen to live. So that was a lot of fun. As far as SpaceX goes, well, I've written a blog post where I talk about my professional failures. And one of those I would say is that despite the fact that I've interviewed at SpaceX a number of times, I've not been invited to work there. And I think that reflects well on the recruitment process, frankly. And I think that maybe at some point in the future, I might re -examine that. But SpaceX is a place that requires a level of commitment that is hard to square with my current commitments to young children. And I have to keep that in mind.</p><p>And so in many ways, part of the reason I went and did Terraform is that I wanted to build a technology that had dual use applications, both here on Earth where it solves a major energy abundance challenge and a human welfare challenge, but it will also give me and the team here a major leg up when it comes to building critical infrastructure for the Mars base, which with any luck we would be able to respond meaningfully to challenges or requests by SpaceX or NASA to participate in that technology development program.</p><p>Theo Jaffee (33:59)</p><p>Yeah, so maybe once you solve the small task of global warming and abundant energy, then yeah, maybe you can do SpaceX.</p><p>Casey Handmer (34:06)</p><p>Yeah, well, they're not mutually exclusive. So.</p><p>So it actually turns out that like putting humanity on a much, much firmer financial, economic and energy, ecological footing drastically unlocks huge numbers of resources that can be used then to explore space. I think it's very hard to say that a future where in 2050, a large fraction of the world's population is starving to death or being boiled to death from heat waves is a world where it'd be easy to mobilize the sort of resources that you would need to do a public -private mass city.</p><p>Whereas one where actually unlocking cheap energy has put humanity back on the Henry Adams curve and we're doubling the size of the global economy every 15 years would be one where it would be pretty easy to liberate those kind of resources. So I think these are quite mutually compatible.</p><p>Theo Jaffee (34:57)</p><p>Yeah, I mean, this is like the ultimate Elon Musk master grand plan, no? Like he started SpaceX first before he did Tesla. It seems like he's always cared more about SpaceX than any of his other companies, including Tesla. It seems like, yeah, like if he could only save, if he could only do one company, he would, my bet would be on SpaceX.</p><p>Casey Handmer (35:20)</p><p>I think that's probably a fair assumption. Obviously, he has to be somewhat cryptic in his personal remarks, but I think that one of the things that's being lost in the current discussion of whether or not he deserves his $55 billion dollar pay package over the last six years of hard work he did to Tesla, despite the fact that it was approved by 80 % of the investors and the information relating to it was publicly available, was that...</p><p>was that for most CEOs, if they don't get the gig to work at Tesla, they have to work somewhere else. And for most billionaires, they can work on a beach. But really, there's a good argument to be made that what Elon set out to do at Tesla from 2018 was impossible. Everyone thought it was impossible. The idea of this pay package that he voted himself or that the board put together for him and that was approved by the shareholders was kind of a Hail Mary pass, right? But it was also something that preserves enough upside to make it worth Elon's work.</p><p>to go and like break his brain working, you know, 120, 140 hour weeks on that and also at SpaceX and a few other companies, but mostly on Tesla to take it to that next level. And if he hadn't done it, it wouldn't have been there. Tesla would be, you know, doing great business selling Yard Model 3 here and there, but they would not have stood up the factory in Texas, they would not have stood up the factory in Germany. And we'd be so much poorer as a civilization. And so, you know, one of the nice things is once you have a modicum of wealth, you can actually negotiate from a position of strength when it comes to what you're doing.</p><p>you want to spend your time doing. But yeah, I think that Elon understood a long time ago that he has goals that cannot be achieved cheaply and a necessary precondition for doing something interesting on Mars is having a rocket that is the complete opposite of the SLS, which is to say high flight rate, completely reusable, low cost, high reliability, and much, much simpler architecture, and then also having...</p><p>basically first dibs on the 100 ,000 smartest engineers on earth and across those ideas I think it's done extremely well.</p><p>Theo Jaffee (37:18)</p><p>So why is Elon like the only person essentially to have like multiple extremely successful companies? Like you could think of some exceptions to this, maybe like George Hots who has Comma, which is like open source self -driving and then Tiny Grad, which is like neural networks. But like there's nobody who actively runs like more than one multi -billion dollar company. You think that.</p><p>Casey Handmer (37:32)</p><p>Yep.</p><p>Mm -hmm.</p><p>It's extremely rare. It's extremely unusual. And it's even more unusual given the ambition and the scale and the technical difficulty of what those companies is doing. This is not like someone like doing serial entrepreneurship of three SaaS companies and having good exits three times, which is good for them. This is someone who set out to do in parallel the two hardest things that were so hard that the smart money in the field maintained that it was impossible for more than a decade.</p><p>Despite their various successes and advances along the way. And I think that, you know, I have a blog post that's saying Elon Musk is not understood, and I don't understand it. I have very limited insight. But like, I think that more of us should ask the question, how is this possible? Because it's like, it's obviously possible. It's permitted by the laws of physics. But how is it that Elon was able to do this thing that plenty of other people have set out to try and do and have not succeeded or would not even bother to try it because they convinced it's impossible? And...</p><p>Theo Jaffee (38:22)</p><p>Yeah, I love that one.</p><p>Casey Handmer (38:41)</p><p>Yeah, blows my mind. And it's also, I should state for the benefit of listeners, I met Elon in I think 2011 and I was suitably impressed as many are and I decided to put some of my limited, very limited savings at the time into Tesla stock. This was before the launch of the Model S and that stock today forms the basis of my personal wealth. I had to sell some of it to pay for a green card to get anything out and stay in the United States, which was painful, extremely. I kind of lost well over $100 ,000 in terms of</p><p>today's stock price. But that essentially gave me the freedom over the last decade to break out and start my own company. And if I didn't have that and with young children and a mortgage, I don't think I would be able to take this risk. So I'm incredibly grateful. And what did I do for that? Nothing. The value of stock in Tesla was built by the tens of thousands of engineers and technicians and so on who sweated blood for decades to make that happen. And all I did was got lucky. So I never take that for granted.</p><p>Theo Jaffee (39:42)</p><p>So speaking of Elon, Elon famously back in like 2000 something, 2006, 2008, wrote his plans for the Hyperloop and then you worked on a Hyperloop company. And then...</p><p>Casey Handmer (39:54)</p><p>Yeah, 2015 I think that was released. Yeah.</p><p>Theo Jaffee (39:58)</p><p>yeah. So given that there is no hyperloop between San Francisco and LA, like, why not? And what would it take to work now? Could it work now?</p><p>Casey Handmer (40:05)</p><p>Hmm.</p><p>The short answer is no. The company that I worked for was finally defunct earlier this year. I'm generally cautious about speaking about what happened there and I kind of have a probably perpetually unpublished blog post about it because really the team that was assembled there was exceptional and they did really exceptional things. And they basically solved hundreds and hundreds of next to impossible engineering challenges as you would expect them to. But it turns out that hyperlapse is a concept.</p><p>really struggle for all the same kinds of reasons that high -speed rail does. In some ways it's worse. Because it turns out that the expensive part of high -speed rail is not really like rail wear or, you know...</p><p>right away or something like that. But the expensive part is that almost all the world's cities that could be connected by high speed rail or high glue have sufficient terrain difficulties between 80 and 95 % of the CAP extra -criticality systems is just spent moving rocks, like digging holes, digging tunnels. And that's one of the reasons I think that Elon went off and founded Boring Company, was because just the price of tunneling seemed so absurdly high. And so Boring Company's made some advances, but they certainly haven't like,</p><p>been on the Moore's law of tunneling cost, right? They haven't been able to consistently halve cost every 18 months or something like that. So when it comes down to it, and especially if you go like a first principles analysis, like, okay, so what's the energy required to smash up a four meter diameter tunnel between, I would say 20 or 30 % of the land between here and San Francisco, between Los Angeles and San Francisco, in terms of the sheer energy required to break those rocks up and move them out of the way? And how does that compare to the energy required to push the air molecules in the stratosphere out of the way as a</p><p>plane flies through and it's like 10 ,000 times more energy easily. So for the energy required to build one tunnel that shows one pair of cities, you could fly 10 ,000 flights. And also the machine that you built to fly those flights can fly to any of 20 ,000 runways worldwide, point to point. So yeah, it just turns out that like.</p><p>I think aviation as a technology is underappreciated for just how revolutionary and incredible it is. It blows my mind when I jump on the Southway 737 to go off to a business meeting or something that most people have their shades down and just kind of blissed out, you know, staring straight ahead, getting drunk, whatever. I always get a window seat. I get a window seat out the back.</p><p>Theo Jaffee (42:22)</p><p>Yeah, I tweeted about exactly this.</p><p>Casey Handmer (42:26)</p><p>You know, usually on the on the shady side of the plane. So like the north facing side of the plane So the sun's not in my eyes and I just stare out the window like a like a grinning idiot at the Landscape as it goes by and of course flying out of LA What I'm looking at is terraforms future hunting grounds state after state after state of like mostly empty You know economically unproductive parched land that is just getting hot in the sun that I want to put solar rays on and and turn them into you know, just a river of wealth for for the people who live there and</p><p>It's beautiful, it's incredible. And I think this is, the thing that blows my mind is that aviation has been an extremely compelling technology since the 1930s. We're almost going up for 100 years of aviation being quite clearly the obvious way to do things. And yet, for some reason, people somehow think that the solution is to go really close to the surface of the earth so that you have to drill lots of holes in the ground. Planes are amazing. We should just figure out how to make planes cheaper and faster and better.</p><p>Theo Jaffee (43:25)</p><p>Yeah, honestly, I just flew from Phoenix to San Francisco, like a couple of weeks ago. I had a window seat and I was thinking like the exact same things. I was flying over the Mojave Desert and then the California Central Valley and then over the mountains on the coast. And I was, especially with the desert, I was also thinking solar panels.</p><p>Casey Handmer (43:25)</p><p>Thank you.</p><p>Yeah, and I don't want to give the impression that like, I'm just going to like take over Nevada and pay for the solar. The actual amount of solar that you need to make a shitload of money doing synthetic fuels is relatively small, because the economic productivity of solar per unit area is about 10 ,000 times higher than agriculture.</p><p>Sorry, that's not entirely true. The energy productivity is about 10 ,000 times higher. The economic productivity is between 100 and 1 ,000 times higher. And so for, yeah, it's pretty good. So like in the United States, we have like 50 million acres of corn production is devoted to bioethanol. And that bioethanol is mixed with gasoline in some places and used for a handful of processes. But like it's like single digit percents at most of the US's fuel consumption mix. If instead you took those 50 million acres of like prime,</p><p>Theo Jaffee (44:12)</p><p>pretty good.</p><p>Casey Handmer (44:35)</p><p>agricultural fertile land and you reforested them and turned them back into prairies and put the bison back on there and the deer and the mountain lions and cougars and whatever and rewilded that land right like that would be an obvious ecological win and then take 50 million acres of parched</p><p>desert fried land out in the American West. It doesn't even have to be like virgin desert land. You have easily 50 million acres of like basically brownfields where it's already been disturbed. And you throw solar arrays on that. The fuel productivity is like between 20 and 50 times higher than the best corn land in the United States. And you end up producing like more than 50 % of US's oil and gas consumption just from those 50 million acres. Isn't that amazing?</p><p>Right, so like it's a win -win -win. And then you say, okay, what's the impact in the place where you're putting the solar array down in the desert? Well, depending on how you do it, shading the ground actually improves moisture retention, reduces soil temperatures, and actually allows things to grow. So like there's these absurd photos you can find of solar arrays that were developed maybe a decade ago in Nevada or Southern California or Arizona, where like in the solar array, they now have a problem with it, they have to like run around with the mower because trees keep growing. Like trees haven't grown in this landscape for 10 ,000 years. And obviously 10 ,000 years ago in the Pleistocene.</p><p>and it was much, much wetter and there were trees and forests and mammoths and things, but more recently it's completely desertified. It turns out as soon as you shade the land, that trees start growing and you're like, hmm, how curious. We are terraforming the desert with solar rays.</p><p>Theo Jaffee (46:02)</p><p>So what's the geopolitical impact of that? Like does it turn out that, actually it doesn't matter if all the Middle Eastern Gulf states run out of oil because they can just build solar on the desert?</p><p>Casey Handmer (46:13)</p><p>Yeah, so there's actually a bit of a question mark there, which is, you know, obviously the significant solar resources.</p><p>in the Gulf states and also significant oil export capacity. And so it kind of, you may pose the question, does it make more sense for, say, Saudi Arabia to build a lot of solar panels and make synthetic oil and then export that to Europe like they currently do? Or would it be cheaper for Europe to build solar arrays locally and cut out the shipping cost? And it turns out that, particularly for natural gas, the shipping cost of a long distance pipeline, so ships is quite high. So that places, increases the incentive to do gas production locally.</p><p>And if it is the case that long -term solar synthetic fuel production can match current importation prices for oil and gas in Europe, then that's actually not a huge forcing function to change the current importation modality. So for example, you might continue to import oil products from the Middle East, but they'd be synthetic, but you'd import them. But that actually, in some ways, like...</p><p>I mean, it's actually a lot cheaper to produce oil in parts of the Middle East than it is at the marginal fracking producer in the United States, for example. So it's not entirely certain that that would displace oil production in the Middle East unless that oil actually ran out.</p><p>But if you're able to develop technologies and you develop the terraform technology to the point where you're able to reduce the price of oil and gas by maybe a factor of three, which I think is, it's definitely physically possible. How long it takes us to get there is somewhat up in the air, but there's an extremely strong forcing function for it. So we should expect, you know, enormous investments in the factories and so on that are building these components that allow that to take place. Then that actually places a significant forcing function for more local production, which I think is super interesting from a geopolitical point of view as well, because basically since the end of the second world war, we've had this,</p><p>kind of global economic system that is underwritten by the United States Navy and freedom of navigation and globalization, which now enables most of the world's countries to import oil that they need from foreign countries that happen to have it. But everywhere has roughly the same amount of solar power. So it may be the case that in the future, food production, so oil and gas production like food production is much more localized. And I think that will be mostly a good thing.</p><p>Theo Jaffee (48:30)</p><p>So the way I found you in the first place on Twitter was because Ashley Vance, who is the author of the first Elon Musk biography, tweeted something like, the two most productive people I know are both massive Twitter addicts, and the two most productive people that he knows are Elon Musk and you, which is like, wow, what a distinction. So like, do you, do you agree with that? Like, how do you manage to like,</p><p>Casey Handmer (48:52)</p><p>Distant second. Yeah.</p><p>Theo Jaffee (48:58)</p><p>do so much, how do you like manage your time, what does a typical day look like?</p><p>Casey Handmer (49:02)</p><p>Well, I would have to say, I wish I had time to take better care of myself, but I probably don't sleep enough. But I think the key is to work on a bunch of different things at once and just make sure you have a bunch of irons in the fire and then just keep pushing those projects forward over time. And so...</p><p>Basically, I'm well set up at home that if I have a, most evenings I'm kind of free after nine or 10 p And then if I'm in the mood to write a blog, I can sit down and smash out a blog up pretty quickly or I can do some coding. Last night I was doing some work related coding for about two hours, which was actually fun. Because as a founder, I'm doing something I haven't done before and I would say that my skills as a founder are...</p><p>Not infinite. Certainly I'm pretty inexperienced. But when it comes to solving a nice well -defined coding problem with some data analysis or something, that is my superpower. So I was like, I feel competent for once. This is nice. Yeah, and then the scrolls thing. I think Ashley came and talked to me about scrolls originally. And that's just because one of our investors, Nat Friedman, started the scroll price thing. And it was kind of...</p><p>interest and productivity was waxing and waning and I thought, look, I should probably spend a week on this just to at least tell Nat that I had a good crack at it. And actually I found I was able to make pretty rapid progress, again, by kind of changing the rules of the game. But I don't know if I have any special insights other than just make sure you're making good use of the time you have available. And I have almost no time. It's actually kind of crazy. But I feel like I'm significantly more productive since I had children than I was before. I think like before I had kids, I wasted a lot of my time.</p><p>Theo Jaffee (50:43)</p><p>That's actually very interesting for two reasons. Like first of all, a lot of people say that like having kids makes them less productive because you have to like spend time with the kids and then you have less time for work. And then the second thing is what you said is kind of the opposite of Steve Jobs' productivity advice of like relentlessly focus on one thing. Yeah.</p><p>Casey Handmer (50:52)</p><p>Yeah. Yeah.</p><p>Yeah, well, I mean...</p><p>I would say for your listeners, feel no obligation to validate my mistakes by repeating them. It's what works for me. You need to find out what works for you. And if you have a single minded maniacal focus on one thing, then you can probably make a lot of progress in a year or 10 years or a lifetime. That's just not how I work. I tend to get bored pretty easily. So I try a lot of different things. One of the advantages of that actually is you find that often the things you're working on cross -planet. So for example, I'll spend a week bashing my head against the wall trying to debug a numerical convergence issue with a Mars hydrologist.</p><p>simulation for a terraforming Mars simulation that I'm running and I'll get almost nowhere and then I'll take a look at the scroll prize and realize that because I've been thinking about vectorization of data sets for a while I can apply that to the Mars thing and it allows me to do something that would have otherwise taken 10 hours in about 15 minutes which then means that I can actually make progress on it because I don't have whole days that I can work on things anymore.</p><p>But yeah, it's a lot of fun. And then with kids, it reminds you that if you're not with your kid, why are you giving off? Every now and then you just get tired and you want to sit in the small room with your phone out and tweet about stuff. But actually Twitter is fabulous to me because it has put me in contact with a community of people who also value finding ways to achieve really productive things. One of the other things I'd say is that the long -term returns of things that you spend your time on are very parallel distributed. So it turns out that I...</p><p>I show up for work in person here in the office for probably 40 hours a week and easily 20 hours of that time is spent on shit that does not matter. I don't know exactly what that stuff is, just long term I know that half of this stuff does not matter at all. Or it's very low leverage, like paying bills and stuff. It's necessary work but it doesn't really leverage my capability, it doesn't make a huge impact on the future. But then every now and then you find a rich vein of ore and you can exploit that and you make a really big</p><p>impact really quickly. I've written hundreds and hundreds of blogs and only really a handful of them have ended the zeitgeist but the ones that have have made my life, have changed my life really in many positive ways. Everyone should write a blog. You have to write lots and lots of blog posts until you get good at it.</p><p>Theo Jaffee (53:13)</p><p>Yeah, I think I need to write more blog posts. I have a blog that I write like a post every like two months and it ends up being this like 30 ,000 word monster, not like literally that long. So there's, there's a lot of parallels between you and Elon. And one of them is that you both have like a very like fundamental like first principles, like engineering based mindset, like with the famous story about how when Elon was starting SpaceX, he noticed that like,</p><p>Casey Handmer (53:25)</p><p>Yeah.</p><p>Theo Jaffee (53:41)</p><p>It costs like a hundred times the cost of materials to launch a rocket. And he was like, yeah, you should reduce this. So like, how do people develop this? Is it an inborn trait?</p><p>Casey Handmer (53:51)</p><p>Well, depending on where you are in your career, I think there's always an advantage to studying physics.</p><p>But you know, I think it's also, if you're Elon, there's an advantage if you're like just a psychotically motivated South African immigrant with a chip on your shoulder. Like, I think people don't understand that. And you know, I've talked about this with Ashley and I've read the other biography as well. And I think that, you know, I think Ashley tried his best, but he didn't, and Isaacson didn't also kind of manage to get at the core of who Elon is as a person and why he does what he does. And I think actually it's not super accessible.</p><p>So yeah, but just do the best you can, I guess. If...</p><p>I think a lot of people see the outward signs of Elon's success, his wealth, his power, his positive achievements for humanity, and they envy that, and they wish they could be that, and they wish they could be in their shoes. But I don't even know Elon personally, and I would not swap with him for all his wealth, not for a second. Obviously, and I think he would agree with this, in some ways, kind of enduring a curse. And...</p><p>Theo Jaffee (54:53)</p><p>Yeah.</p><p>Casey Handmer (55:02)</p><p>And I think we should just be grateful for the fact that we live at the same time as someone who is quite clearly so capable, despite the fact that it obviously aspects of their personality and their work ethic and so on obviously have caused them enormous personal sacrifices and pain. Yeah.</p><p>Theo Jaffee (55:22)</p><p>Recently somebody asked Elon like what should I do to become the next Elon Musk and he said like are you sure you want to?</p><p>Casey Handmer (55:29)</p><p>Yeah, exactly. I don't think he's a person who has made happiness a major priority or has achieved it either. I think he has moments of joy, obviously, but some people are just like they're set, just who they are as a person is not really set up for contentment. And for some of those people, it creates life -ruining mental illness, and some of them it creates this deep -rooted fire and passion to right a wrong or see their enemies suffer or something like that. And again...</p><p>Theo Jaffee (55:31)</p><p>So.</p><p>hedonic treadmill.</p><p>Casey Handmer (55:59)</p><p>Yeah, we're just extremely lucky that we live in an era when Elon is able to channel that energy into making cool technology that moves our entire species forward, as opposed to becoming some despotic warlord somewhere. If you've heard about like, Cesare Borgia or something like that, similar kind of instincts, but in 1500s Florence, there was no way to go and do massive reindustrialization of space. So instead, these people just kind of got trapped in cycles of violence. Anyway.</p><p>Theo Jaffee (56:28)</p><p>So last question, what's your favorite place that you've traveled and why?</p><p>Casey Handmer (56:34)</p><p>I'm probably here, California. Yeah, that's why I live here. No, I mean, I came out here for grad school in 2010. And after year or two, I realized I would be staying. I kind of, I agree to appreciate what California had to offer, both in terms of landscape and human factors and so on.</p><p>Theo Jaffee (56:36)</p><p>Really.</p><p>Casey Handmer (56:52)</p><p>But that said, I've traveled to a whole variety of interesting places and I think actually in many ways it was more about the age I was at the time than the places that I went to. Because I have in some cases gone back to places that I visited as a 19 year old or whatever and found extremely transformative at the time and more recently I went back and it was just like, this is just yet another shitty concrete city. And...</p><p>Yeah, and the thing that's missing is some combination of chemicals in my brain that just happens to exist when you're 19 years old and fades shortly thereafter. So I'd say if you do have the opportunity, if you're a younger listener in particular and you're thinking, wouldn't it be cool to go and travel to some crazy place, you should absolutely do it. Because in some cases, those places won't exist when you're older, but your ability to enjoy them in that way certainly won't exist when you're older.</p><p>It's a good thing. So yeah, I mean, I spent a lot of time kicking around in the Russian Far East when I was in my late teens and early twenties, just as a kind of a playground in a way, a place that had the right kinds of challenges for my personality and the things that I was interested in.</p><p>Theo Jaffee (57:42)</p><p>interesting.</p><p>Casey Handmer (57:56)</p><p>You know, I didn't really don't speak Russian at all and and there wasn't and and still isn't any kind of tourism industry in these places And it's really sparsely populated and it's it's quite dangerous in some ways I wrote the Wiki travel article for this area and as far as I know no one has revised it since so that was 14 years ago so that means that either either the subsequent English -speaking travelers who went there found that my article was accurate enough or No one has been I'm not quite sure I think I know of like I know of maybe half a dozen people who who have read my blog source in</p><p>videos and who've subsequently gone there and told me about it but yeah it's it's kind of it's an out -of -the -way place I'll put it that way.</p><p>Theo Jaffee (58:36)</p><p>Good practice for Mars.</p><p>Casey Handmer (58:38)</p><p>I don't think that's why I went there at the time. But yeah, in some ways, yeah, I mean, the history of human, at least like technological human habitation in these areas is extremely recent. Like we're talking like 1930s, 1940s, 1950s. Obviously, there are indigenous populations who live there, but they're mostly nomadic and extremely sparsely populous. It's an extremely tough climate. Yeah.</p><p>Yeah, just to put it mildly, you know, United States once again wins the lottery with climate and geography. I actually have to jump to a call, so we should probably wrap up.</p><p>Theo Jaffee (59:11)</p><p>Yeah. Well, thank you so much, Casey, for coming on the show.</p><p>Casey Handmer (59:15)</p><p>Yeah, thank you so much for having me. It's been fun and interesting questions as always.</p><p>Theo Jaffee (59:18)</p><p>Yeah. Thank you.</p>]]></content:encoded></item><item><title><![CDATA[#16: Stephen Grugett and Austin Chen]]></title><description><![CDATA[Manifold, Manifund, Manifest, prediction markets, and EA]]></description><link>https://www.theojaffee.com/p/16-stephen-grugett-and-austin-chen</link><guid isPermaLink="false">https://www.theojaffee.com/p/16-stephen-grugett-and-austin-chen</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Thu, 13 Jun 2024 17:49:22 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/145590376/29f60005579b05e2b94dd98bb24c75a0.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Stephen Grugett and Austin Chen are co-founders of Manifold Markets, an online play-money prediction market and competitive forecasting platform. Stephen currently serves on the company&#8217;s management, while Austin recently stepped down to start Manifund, a unique, open-source grant program. This video is not sponsored in any way by Manifold, Manifund, or Manifest - I just think they&#8217;re cool.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>Stephen Grugett</p><p>1:20 - Are prediction markets actually bad?</p><p>4:11 - Would Manifold use real money if allowed?</p><p>5:24 - How Manifold would use real money if allowed</p><p>6:08 - Would Manifold use crypto if allowed?</p><p>7:17 - Can you ever get long-term returns from prediction markets?</p><p>10:01 - Would subsidies ruin markets?</p><p>11:23 - Why Manifold beat real money on predicting the 2022 elections</p><p>16:00 - Would Stephen implement futarchy?</p><p>19:54 - Manifold Love</p><p>23:22 - Bet on Love</p><p>26:21 - Why Manifold is miscalibrated</p><p>29:06 - Insider trading and market manipulation</p><p>31:42 - Is it easier to make money on prediction markets or normal markets?</p><p>32:37 - Good prediction market UI</p><p>34:35 - Why should people trust market creators?</p><p>35:34 - Derivatives on prediction markets</p><p>37:20 - Stephen&#8217;s ginseng adventures</p><p>40:55 - Audience Q: why don&#8217;t Americans consume American ginseng?</p><p>41:35 - Audience Q: cancel culture and Richard Hanania</p><p>45:50 - Audience Q: why aren&#8217;t there more institutional investors in prediction markets?</p><p>47:33 - Audience Q: can journalists help resolve markets?</p><p>49:45 - Audience Q: is there any role for sweepstakes other than regulatory arbitrage?</p><p>Austin Chen</p><p>51:14 - Are prediction markets insufficiently powerful?</p><p>54:22 - What prediction markets can do if not futarchy</p><p>55:36 - How Manifund was designed</p><p>59:35 - How Manifund chooses regrantors</p><p>1:00:49 - Why donate to Manifund?</p><p>1:03:09 - Does Dustin Moskovitz have too much power over EA?</p><p>1:04:29 - What Manifund would do differently with more money</p><p>1:05:52 - How Manifest gets so many interesting people</p><p>1:09:10 - How much did SBF&#8217;s fall damage EA?</p><p>1:10:04 - OpenAI</p><p>1:11:54 - Is this decade more important than other decades?</p><p>1:13:01 - Why aren&#8217;t more philanthropic organizations open?</p><p>1:15:35 - Manifund&#8217;s best projects</p><p>1:17:25 - How short AGI timelines would affect Manifund</p><p>1:19:21 - Audience Q: how Manifold ships fast</p><p>1:22:11 - Outro</p><h3>Links</h3><p>Manifold: <a href="https://manifold.markets/home">https://manifold.markets</a></p><p>Manifund: <a href="https://manifund.com">https://manifund.com</a></p><p>Manifest: <a href="https://www.manifest.is">https://www.manifest.is</a></p><p>Manifold&#8217;s Twitter: <a href="https://x.com/manifoldmarkets">https://x.com/manifoldmarkets</a></p><p>Manifund&#8217;s Twitter: <a href="https://x.com/manifund">https://x.com/manifund</a></p><p>Austin&#8217;s Twitter: <a href="https://x.com/akrolsmir">https://x.com/akrolsmir</a></p><p>Transcript: <a href="https://www.theojaffee.com/p/16-stephen-grugett-and-austin-chen">https://www.theojaffee.com/p/16-stephen-grugett-and-austin-chen</a></p><h3>More Episodes</h3><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: <a href="https://www.theojaffee.com">https://www.theojaffee.com</a></p><p><strong>Theo: </strong>Welcome back to episode 16 of the Theo Jaffee podcast. Today, I have the pleasure of speaking with Stephen Grugett and Austin Chen, two of the co-founders of Manifold Markets, a play money prediction market company. Prediction markets are like financial markets, except instead of betting on stock prices, you bet on the outcomes of future events. On Manifold, you can bet on markets created by other people or create your own on any topic you want. Manifold has all kinds of markets from who will win the 2024 presidential election to will AI destroy the world by 2030 to what will happen next in the manga <em>One Piece</em>.</p><p>This is a very special episode of the podcast, my first in-person interviews done live at Manifest 2024, Manifold's annual conference. The first interview with Stephen goes in-depth on Manifold itself, the theory and practice of prediction markets, Manifold love, and Stephen's background as a ginseng merchant. The second interview is with Austin. Austin recently left Manifold to start Manifund, a unique, fully transparent grant program. In our interview, we talk about Manifund, effective altruism, and the EA funding ecosystem. I had a great time at Manifest, and these interviews were some of the highlights for me. This is the Theo Jaffee podcast. Thank you for watching. And now here's Stephen Grugett and Austin Chen.</p><h3>Part 1: Stephen Grugett</h3><p><strong>Theo: </strong>Welcome back to episode 16 of the Theo Jaffee podcast, part one. This is my first ever live recording. We're here live at Manifest 2024, and I'm interviewing Stephen Grugett, the co-founder of Manifold with a live audience.</p><p><strong>Stephen: </strong>Thank you. Thanks for having me on. I'm super excited to be on your podcast.</p><p><strong>Theo: </strong>Awesome. Thanks. </p><p>So for the first question, Works in Progress just wrote an article called Why Prediction Markets Aren't Popular, which argues that, contrary to the traditional view that prediction markets aren't popular just because they're regulated, prediction markets are actually quite legal in the U.S., and Calshi and others are able to do them. And the reason they don't work is that they just aren't very good. So aside from being zero-sum, they're usually quite small and quite illiquid, and that it would take expensive subsidies to make them large and liquid. And also, one of the reasons that Austin Chen laid out in his leaving Manifold document is because he thinks prediction markets feel insufficiently powerful. So what do you think about that? </p><p><strong>Stephen: </strong>The first thing is I think the premise is not true. One of my favorite prediction market facts from Robin Hanson is that the turn of last century, prior to the 20s, there was more trading on prediction markets on U.S. presidential elections than there was on the stock market. Average Americans were speculating on these sorts of political contracts, and it was hugely popular. So I think that certainly, within the U.S., we would see huge volumes on at least election markets by themselves if they were legal. That's totally a regulatory issue.</p><p>I think there is the other question, though, of other use cases besides election speculation. There are, right now in the U.S., some limited regulated markets on things that don't touch on these subjects, and the volumes on these contracts haven't been that high right now. I think part of the reason for this is not necessarily an inherent lack of interest on the part of the public, but the fact that there hasn't been a platform that makes it really easy and simple and engaging enough for the public to consume. So that's one of the things that Manifold is trying to address. </p><p>And I think this just takes time. The regulatory barriers for prediction markets have prevented adoption in the past. I would guess that in a counterfactual world without any regulation, you would have seen a platform like Manifold arising much earlier with real money with very large liquid markets. That's a much larger part of public discourse. </p><p><strong>Theo: </strong>So if prediction markets were fully deregulated, like, tomorrow, would you leave Manifold entirely based on mana, or would you make it real money, or would you make a separate real money prediction market? </p><p><strong>Stephen: </strong>I think we would have both. So I think one of the things people don't get about play money is that it's not just an inferior version of real money, but its own thing entirely, and that it comes with a number of advantages. So the benefit of play money is that it's just way more casual and frictionless for people to consume. If you want to get someone to sign up for a real betting platform, that can be difficult. People have all sorts of psychological barriers. They don't want to invest their money. But when it's simple and a game and doesn't come with any financial commitments, it's much easier for people to participate. </p><p>There's that, and then there's also just the freedom aspect. You can do anything you want with play money. The moment you introduce real cash into the mix, then all sorts of regulations and know your customer and anti-money laundering regulations come into play that make life very difficult. So I think even a world where real money is fully legal, you probably, you would still see a large play money platform catering to this other source of consumer demand. </p><p><strong>Theo: </strong>Have you thought about extensively what Manifold would do with real money if you could? </p><p><strong>Stephen: </strong>I've obviously thought a little bit about this. I think we would spin up a separate USD-denominated version of many of our markets for people to trade on. I think even in a world where it is legal, you would expect pretty substantial regulations. So I would imagine Manifold USD-denominated markets would be much more severely limited and on fewer topics than our play money markets, but we would definitely have been creating as many as we could.</p><p><strong>Theo: </strong>But what about crypto? If there were no regulatory barriers, would you make crypto prediction markets or is there just too much speculation in crypto?</p><p><strong>Stephen: </strong>Crypto, in addition to having the same regulatory issues that all real money markets would be, has the additional burden of being much harder to use. One of the reasons for crypto in the first place is this kind of regulatory arbitrage thing where people turn to these decentralized mechanisms precisely because certain types of contracts cannot be enforced in a court of law in the U.S. But I'm more skeptical on this fuller Web3 vision where everything would have its own token and everyday Americans would be actively engaging on the blockchain. I think that's less likely due to how cumbersome and difficult it is to use these sort of products. So I actually think in a world with more liberalization and fewer regulations, you would just see way fewer people using crypto, both in prediction markets and in general.</p><p><strong>Theo: </strong>Do you think prediction markets are fundamentally by their nature zero-sum permanently? Or do you think there will be an equivalent to an index fund, something that traders can put their money into to expect some kind of return over the long run? Is there anything traders can do to do that?</p><p><strong>Stephen: </strong>Prediction markets on a mechanistic level are zero-sum in that the most common way to structure a prediction market is to have contracts on whether an event either does happen or doesn't happen, yes or no. That's inherently zero-sum. For a lot of our markets on Manifold, the environment isn't zero-sum because there is this third party which is typically but not always the market creator who's actively going into the market to subsidize. So I think subsidization is actually very important in a prediction market context.</p><p>The basic idea is that if you want to have your question answered and it's on a pretty narrow niche topic, you may not get as much liquidity on that from purely profit-seeking traders. A lot of questions that you may want an answer to have massive adverse selection where one party naturally knows much more about this topic than the other and the price would move very rapidly in response to trades. So to cut back against this a little bit, in order to work well, you have to pump your markets full of subsidies in order to entice traders to predict in the market. A subsidy is basically just cash that you allocate, that you put into the market. You can think of it as making the, adding more friction to price movements. The more subsidies in the market, the less the price will move in response to trades. But I think in that sense, it's not zero-sum. </p><p>But I guess the other part of your question is trying to use prediction markets as an instrument to gain equity-like returns. I mean, I think that doesn't really make sense to me. Even with the subsidy, it may not be zero-sum in the sense that there's a bunch of dumb, intentional dumb money in the form of subsidies being added to the market. There still isn't really anything like stock market beta or sensitivity to broad economic growth. But you can't, for instance, if you select 100 random prediction markets and invest $10 into each, you would expect that to return $0 and not to increase with the size of the economy. </p><p><strong>Theo: </strong>But wouldn't the subsidies that you would need to make prediction markets work in the way that you're describing be tremendously burdensome to the process? Incredibly expensive?</p><p><strong>Stephen: </strong>Not necessarily. This is one of the things that we found with Manifold. We're a play money platform. If the user experience is sufficiently compelling and game-like, you can get a huge crowd of people, such as the people in this live audience today and attending Manifest this weekend, who are interested in prediction just for the sake of it outside of the monetary rewards. And when you have this system set up, that means you can get by with a much, much lower subsidy than you would if you were actively going out and commissioning the traders to give you your forecasting estimate. So I think this is one of the nice things about Manifold, is that you can purchase information much more cheaply. The nature of the platform itself kind of elicits information out of traders at a pretty low cost, much cheaper than you would be able to otherwise. So each subsidy dollar in turn can then way more efficiently get you information than whatever the alternative is. </p><p><strong>Theo: </strong>Why did Manifold predict the 2022 midterm elections better than real money prediction markets like Polymarket and PredictIt?</p><p><strong>Stephen: </strong>This is interesting, because I was on the other side of this trade. I thought during the midterms that Polymarket's numbers would be more accurate. So I bet a lot on the other side, and I lost a huge amount of mana because I was wrong. And now I've learned my lesson, that Manifold's numbers are more accurate. I think, honestly, this is kind of an n equals one thing. I think people should be very wary in general of trying to judge the accuracy of any pundit forecasting platform tool or anything on just one election cycle. So one very simple story that you can tell about this is that Polymarket had more Republicans and Manifold had more Democrats, and the Democrats won. So really, we need to repeat this over several election cycles with different parties winning in each in order to get a better sense of each platform's accuracy.Form's true calibration. Is there any existing data on which platforms have performed the best on different elections, or is it just too recent, there haven't been enough elections? It's not too recent, and there are also other, even older academic prediction markets which have a track record behind them. One of the first big prediction market experiments in the 20th century, after the progressive era in which they outlawed all of this stuff, was the experiment conducted by the Iowa electronic markets in the 80s and beyond. At that time, they found that their markets were more accurate than both individual pundits and a bunch of different aggregates of pundits. There's a similar track record from the more recent prediction market attempts. </p><p>I got my start on prediction markets with Intrade, which is defunct. They were an Irish prediction market platform. I remember trading on the 2012 midterms, and I believe that their numbers were more accurate than pundits at the time. But there are a bunch of studies on this. You can find actual answers to these questions. I don't have them off the top of my head. </p><p>More recently, though, what we found is that Nate Silver and FiveThirtyEight have performed basically on par with prediction market and other forecasting platforms. I think that will change as prediction markets become more liquid and more people are trading on them. I do think in the limit case, with tons of money being actively traded on these things, that prediction markets will be the very best mechanism and will have the best track records. But they're already pretty large and pretty liquid. I don't know about that. There's millions of dollars that are being traded. </p><p><strong>Theo: </strong>So you're telling me that these thousands and thousands of traders, many of whom are pretty smart in aggregate, can't beat Nate Silver, even though they have financial interests in doing so? </p><p><strong>Stephen: </strong>Yeah, I think a lot of it is due to the big thing you need to guarantee that prediction markets can live up to their full potential: institutional liquidity. You need Goldman Sachs and hedge funds to be able to be counterparties to all of these bets done by retail traders on platforms like Polymarket. And that does not appear to be in the works anytime soon, mostly because of regulation. I think it is true, though, that having a hundred million dollars on the line should be very enticing to people. That is a lot of money, even for very talented, wealthy individual traders. But there are still these structural barriers that prevent a lot of individual traders from participating in real money markets. </p><p><strong>Theo: </strong>Accredited investor requirements and stuff? </p><p><strong>Stephen: </strong>Well, or the fact that US citizens legally can't participate in Polymarket. Many do. Many use VPNs to access these markets offshore. But the regulatory issue and the usability issues with crypto are a major barrier. </p><p><strong>Theo: </strong>So for my podcast audience, I'm sure everyone in the Manifold audience knows this, but futarchy is a political system where you would base policies on prediction markets. And so if you had the option to do so, would you replace our current political system with futarchy? </p><p><strong>Stephen: </strong>Ah, that's a great question. Maybe this is a little bit heretical, but I've never actually been that on board with futarki as a concept. So, firstly, I think the first thing is, my view is that prediction markets are a tool, or it's kind of a category error to talk about it as a form of government. Governments are not just decision-making mechanisms. They're people who have particular values, who implement decisions. Even Robin Hanson often formulates this as a bet on what will happen. Bet on values, not beliefs. </p><p><strong>Theo: </strong>Yeah, vote on values, bet on beliefs. </p><p><strong>Stephen: </strong>Yes. So even in this formulation, part of the governance formula has to include other stuff that isn't just the mechanism. So there's that aspect. But in terms of using prediction markets to totally replace all existing decision-making bodies, I'm more skeptical. I certainly think on the margin that governance quality would improve a lot if people actively were creating, subsidizing, all sorts of questions on different policy impacts of various proposals. That would be a great thing. People have talked about using NGDP futures to help central banks determine their monetary policy. I think all of those are great things that we should be doing. </p><p>In theory, a sufficiently liquid decision market on topics where decisions can be enumerated exactly in some domain should be good. There shouldn't be any problems with that. If the market price is predicting what the outcome of various policy interventions would be are out of whack, then rational profit-seeking traders will come in and correct them, and their probability should be accurate. Policymakers make a lot of decisions. A lot of decisions are about things in smaller markets where people don't really care about but in which there are very strong vested interests. If you're making some micro-policy decision about shrimping rights off the coast of Maine, maybe the shrimpers will be willing to collude and place bets that other rational profit-seeking individuals wouldn't be quite motivated enough to do. That's one issue with futarchy. </p><p>I think the other big issue, that's a problem with the mechanism, I think the other problem with futarchy is that it doesn't address the fundamental concept of the political. The real political question is who gets to create the markets? Which are the importantValues that people actually care about determine how we allocate the liquidity to subsidize the prediction markets to get the answer on. Even if we do move into a much more futarchical world, which I support, that won't solve that problem.</p><p><strong>Theo: </strong>Let me frame the question differently. Do you think if the Bay Area governments were replaced entirely with futarchy, would it lead to better outcomes?</p><p><strong>Stephen: </strong>I think replacing the Bay Area government with anything would lead to better outcomes, so yes.</p><p><strong>Theo: </strong>Clearly not anything, right? Replacing it with Stalin wouldn't.</p><p><strong>Stephen: </strong>I don't know.</p><p><strong>Theo: </strong>For my podcast audience, Stephen's brother, James Grugett, is one of the other co-founders of Manifold. Why do you have so much more mana than James? He has like 200,000, you have over a million.</p><p><strong>Stephen: </strong>A lot of my mana comes from betting against James, which is interesting. One of us was guaranteed to win and have more money than the other.</p><p><strong>Theo: </strong>On what markets? </p><p><strong>Stephen: </strong>I think our biggest source of disagreement, and one of my biggest sources of profits versus James, is on the success of Manifold Love, which is our dating platform. I guess for the benefit of Theo's audience who may not have heard of this, the basic premise of Manifold Love is that, you know, it's in part an OkCupid clone where you can create your own public dating profile, and then the twist is that we have prediction markets on each of the people in this ecosystem for people to bet on who would be a good match with each other. The thinking is that your friends, relatives, or other random strangers who scour through your profile would be interested and motivated in matching people off based on this, and that would be reflected in the market prices. </p><p>So this, obviously, this is an insane sounding idea. This is a thing that people outside of the Bay Area would not do and would probably roll their eyes or laugh, or some combination of all of these things. I first want to say that even though I never believed in this as a large venture scale business, it actually has been successful in producing multiple long-term relationships which are still going to this day. Who knows, maybe they'll result in marriage or something like that. So I think it's too easy for people to cavalierly dismiss crazy Bay Area ideas involving prediction markets. And even if they don't live up to the full hype, they're still capable. I feel like the premise of the Manifold Love actually was vindicated, but on a smaller scale. I think it can work in this community, at Manifest, in the Bay Area, for like-minded individuals. I still have my questions about how well it would be able to scale to the rest of the world. </p><p><strong>Theo: </strong>What are the fundamental limits? Just that not enough people know enough information about the couple to be able to make good decisions?</p><p><strong>Stephen: </strong>I think... Like, they'd be very small markets, necessarily, right? Well, I think a lot of people are just put off by the concept of public profiles. This is actually a huge barrier. I think it's not necessary for everyone to be on board with the premise of the app for the app to still succeed. Many people really despise and hate dating apps, and yet those are a big thing. When dating apps were first introduced, they were seen as really weird and gross and disgusting, and only the worst part of society would use them. But since they were so useful, adoption has gradually increased, and the bull case for Manifold Love is something like this story, that even though it sounds really weird, some people have told me it's repulsive, that over time, that would fade, and the benefits would become more apparent. I'm just not convinced, though. I think too large a chunk of society just really doesn't want to have public profiles with people betting on them.</p><p><strong>Theo: </strong>Speaking of manifold love, you did a related... I don't even know what to call it. Part game show, part live musical, called Bet on Love. How did you get the idea for this? How did this come about? What's the backstory? What was the idea behind it? </p><p><strong>Stephen: </strong>Yeah. I think it's interesting. Both manifold love, our dating site, and the idea for Bet on Love essentially grew out of the last Manifest, our first conference here. In particular, we noticed that a lot of the markets that people seem to have the most fun betting on were relationship or romance-related things, many of which involved Aella, and you can look those markets up yourself on Manifold. We were trying to think about how we could capture that energy and use it to drive more engagement. Obviously, the natural thing to do is to have a surrealist prediction market dating show musical with Aella as the star bachelorette. The show actually... My original vision was much more limited. Originally, I was planning on just doing this really small-scale, very low-budget indie event where it might even be at the same venue that Manifest is happening, out in a courtyard, and we just stream it on one webcam. After I explained my idea for a prediction market dating show featuring Aella to one of my friends, they told me that, in fact, Vibecamp had actually done a prediction market dating show featuring Aella, and that I could watch the footage of this video, or watch the footage of the recording. I did, and I was super impressed by the theater company that put it on. I knew immediately after I watched this that we needed to hire them and get Manifold involved in some capacity, and that tying their theatrical and musical genius to betting on markets could be a product which is super compelling to people. I really like Bet on Love. It was very entertaining. Very interesting. I guess I do have to say this is pretty polarizing as well. I think you, the audience, will enjoy Bet on Love if you like musical theater, if you are really into niche nerd humor, and you like dating shows. If you love all three of those things, you're absolutely going to love this. If you love one of these things a lot, you'll probably love it. If you love none of these things, you probably will not love it.</p><p><strong>Theo: </strong>I don't particularly love that. I don't love musical theater except for Hamilton, and I definitely don't like dating shows. They're boring, but there was something about Manifold Love. Maybe it was just the specific type of guy who was in it. I don't think it would work with most normal people. It wouldn't have the same charm. </p><p>Manifold has a calibration chart at <a href="https://manifold.markets/calibration">manifold.markets/calibration</a> that shows whether events happened as often as they predicted. If you go to that chart, you'll see a bunch of dots and a diagonal line. All of the dots are below the diagonal line, which suggests that events happen less commonly than they were predicted to at all data points. Why? Are the traders just overconfident?</p><p><strong>Stephen: </strong>Yes.</p><p>The interesting thing about this is that you might naively think you could just write a bot to bid the contracts up and that you would make money. The reason why things like this can persist is that that's harder to do than you think. The moment you introduce YesBot that bets yes on everything, people will see that your bot always bets yes on things and will bet against you or will exploit you. They'll bid the price up higher than what the true price should be, and then you'll be stuck holding the bag with your worthless yes shares.</p><p><strong>Theo: </strong>Has anyone tried making YesBot?</p><p><strong>Stephen: </strong>Yeah, they have. I think it is interesting. One of the first things about our calibration chart is that it's just a firehose of all of our markets. It includes even pretty low-quality markets and markets that don't have that many traders. One of our users actually has created this website called Calibration City that allows you to create calibration charts that are more granular and targeted towards markets with whatever attributes that you want that have 1,000 traders or that are on particular topics. I suspect that if you added more filters to filter for higher-quality markets that a lot of this effect would go away. But it still remains to be seen. </p><p>I don't know. I think the brute fact, even for our lower-quality markets, that they have this pattern is surprising to me. A priori, I think I wouldn't even have been able to predict the sign of whether our markets would be over- or under-confident. I don't really know why this effect exists and if or how long it will persist. But in general, when you find things, yes, bot is not going to work as a strategy. But if you do see consistent wrong patterns in markets, you can do more sophisticated things to try and correct those. This is a sign that there are possible trading strategies that you could use to profit from this since it does appear to be pretty systematic. </p><p><strong>Theo: </strong>When you were on the Dwarkesh podcast a couple years ago, you told them basically that you don't like insider trading, even though a lot of prediction market people do because they think it makes prices more efficient.</p><p><strong>Stephen: </strong>No, I love insider trading.</p><p><strong>Theo: </strong>On real financial markets? Or insider trading laws.</p><p><strong>Stephen: </strong>Yeah. The classic libertarian story is that insider trading laws are bad because markets are about information and giving good prices to the public, first and foremost, and that when you remove restrictions on who can trade, it makes the prices more reflective of reality and more efficient. So I think that's a pretty good argument. The counter argument is more of a fairness argument. It's not fair for corporate officers to be able to make so much money doing things which are relatively dumb, of having access to earnings reports before the general public, or more maliciously, it's bad that they have an incentive to try and sabotage the company or other things, et cetera. I think those are very real concerns and probably the ideal legislation would do something to limit that in some fashion. Maybe the absolute chaos would work. That would result in a society which is functional. It may be better in some ways than a more restrictive legal climate, but it probably also isn't the absolute best regulatory regime.</p><p><strong>Theo: </strong>So what do you think about other forms of suspicious market activity that isn't exactly insider trading or fraud? Like, for example, what Roaring Kitty is doing right now with GameStop, where he's somehow memeing the stock up multiple billions of dollars in market cap. Should the SEC do anything about that?</p><p><strong>Stephen: </strong>I think probably not. In general, whenever there's ambiguity about the harms of particular actions, as a good general principle, it's good to not have litigation or regulations there. The world is very chaotic. If the outcome is not certain, it doesn't really make sense to get lawyers involved. Or really, when you do add regulation on this, the only parties who actually win are lawyers because then there's increased litigation. Society doesn't really benefit anyway because it's ambiguous. There are benefits on both sides. It just doesn't matter from a societal perspective. I think the government, you know, financial regulation should be limited to more severe, severe harms, which everyone can recognize and which are dealt with in an easier fashion. </p><p><strong>Theo: </strong>Do you think it's easier or harder to, in the long term, do well on manifold versus actual financial markets? Because you might think it would be easier because they're less efficient, but you might also think it would be harder because they're more zero-sum and you can't just buy the S&amp;P 500.</p><p><strong>Stephen: </strong>That's a good question. So again, we have the subsidizer dynamic where people are putting up huge amounts of cash because they want to have their question answered. So as long as subsidizers are an important part of the ecosystem, or insofar as that's true, that makes it easier for people to earn money because the subsidizer is just paying you to do that. They're not paying you in the same way to trade GameStop stock. There isn't someone naturally tossing a bunch of money into that outside of other retail investors. They are paying you lots of money to trade.</p><p><strong>Theo: </strong>At the beginning of the interview, you talked about how one of the reasons prediction markets aren't more popular is because a lot of them are hard to use. So what do you think are the good elements of a prediction market user interface that will make people want to use it?</p><p><strong>Stephen: </strong>Simplicity is key everywhere. A big mistake other platforms and other forecasting platforms have made is just making it too complicated, having too many different market types, having too many order types, showing too much information on the screen, etc. The simplest consumer apps are things like Robinhood where they strip away all of the extraneous content and just have you focus on a few key numbers and make it super obvious which user flows you want to go down. In the case of Manifold, one of the flows that we try to optimize for is market creation.</p><p>Making that really easy is part of it. That includes having it all fit onto one screen. We don't have a multi-page setup. We try to keep that pretty minimal. The other aspect of that is we've tried to standardize market terms. When we launched Manifold, when you created a market, we allowed you to set the initial probability and choose the exact amount of subsidy to provide in the marketplace among other things. The model we've moved towards now is where the market automatically starts at 50% and we standardize on certain liquidity tiers. That's just to make it much easier so you don't have to think about what you want to do when you create a market. The lowest tier markets on Manifold all cost the same thing. You don't need to think about that. If you want to subsidize them more, we've recently introduced a market tiers feature which have liquidity at different levels and you can just choose among these discrete options. That eliminates a lot of the paralysis that comes from having too many different options available. </p><p><strong>Theo: </strong>So why should people trust market makers? What if they resolve markets incorrectly on purpose?</p><p><strong>Stephen: </strong>The big thing is reputation. One of the nice things about our platform is not only do traders accrue a reputation for trading well on the platform, but market creators do as well. The better market creators not only resolve markets fairly and quickly, but they also do a better job of anticipating edge cases and having really well thought out resolution criteria, which is a skill. So it's not just not being a scammer. There's also an art in crafting markets such that the entire process is smooth and unambiguous. Our view is that over time, the market itself will select for creators who are better at doing that. We internally at Manifold will promote their markets more versus other markets with worse criteria. </p><p><strong>Theo: </strong>What do you think about derivatives on prediction markets? Is that a thing that needs to exist?</p><p><strong>Stephen: </strong>Prediction markets themselves are a kind of derivative contract on information or other real world financial assets.</p><p><strong>Theo: </strong>This is interesting.</p><p><strong>Stephen: </strong>Actually, one of the things I feel like Manifold's user base now is pretty high caliber. Immediately after we launched Manifold, we kind of blitzed through all different sorts of random derivatives on Manifold, which weren't really that useful directly, but were really cool demonstrations of different things that you could do. So we had immediately users created leverage prediction markets where you would do things like resolve NA and return money most of the time. But in some world, you choose like 1% of the time the market will resolve 100 times more or something or give you 100 times the payout, something like that. We experimented with volatility using other prediction markets as volatility swaps on other prediction markets where you can do that in a few ways by saying, will this prediction market trade outside of this range within this particular date? That's where you can extract volatility as a separate signal. There are a bunch of other stuff as well. I feel like eventually those will be useful for the biggest prediction markets on things where people are putting up huge amounts of money and want to hedge their risk. If you created a five dollar market with your friends, betting on who's going to win the next game of pickleball, maybe it's not so useful.</p><p><strong>Theo: </strong>On your LinkedIn, it says you used to be the founder of Rareroot, an online ginseng marketplace. Can you tell us a little more about that, like how you got the idea, why ginseng, why you moved on?</p><p><strong>Stephen: </strong>I was not expecting to be grilled about my past as a former humble ginseng merchant. This is a very long backstory. The first commercial vessel to ever set sail from America to China was loaded with several tons of American ginseng, and American ginseng is a separate species from Asian ginseng, indigenous to Appalachia, and closely associated historically with the fur trapping trade. Fur trappers like Daniel Boone would collect ginseng and sell it to these ginseng merchants who would then ship it overseas to China during the off-season for the fur trade. So there's this very long history of trade. China is flowing in the opposite direction of what you might think. The key facts about American ginseng today are that it wholesales for about $1,500 a pound for the simplest type of roots. Many Chinese people value roots that have very interesting or exotic shapes, which can be worth a significant multiple over the base wholesale price. The most expensive individual ginseng roots have sold at auction for $500,000 to a million dollars. Ginseng occupies the same cultural position that a really fancy bottle of wine would in the West. It's a thing you would give your boss if you don't know what else to give, and there are different gradations of fanciness that you can calibrate your gift to.</p><p>My random business idea was to try and become the Alibaba of American ginseng. I noticed that there were several layers of middlemen between the growers of American ginseng roots in Appalachia and the ultimate consumers in China. Ginseng is typically exported to Hong Kong and then smuggled over the border to mainland China to avoid taxes. It's then shipped out to the rest of mainland China from a small town in southern China where a bunch of Chinese medicinal products are located. I was trying to think about ways to disintermediate these layers of middlemen through a website. However, I realized that no one in the Chinese traditional medicine world operates at startup speeds and they're much more set in their ways than people in Silicon Valley. I ultimately realized it would probably take a decade to build a serious business in this domain and that there were a lot of other interesting things I could do instead. I did sell a little bit of ginseng, but I only had two or three sales total, so it wasn't a huge success.</p><p><strong>Theo: </strong>Now, let's take some questions from the audience.</p><p><strong>Audience Member: </strong>Why don't Americans consume American ginseng?</p><p><strong>Stephen: </strong>Well, they actually do. People in Appalachia do consume American ginseng. I've also heard that truckers in the south will sell ginseng at truck stops. The most common way that Americans would consume ginseng is in Arizona iced tea, although that's mostly Chinese ginseng, not American.</p><p><strong>Audience Member: </strong>The next question is about my views on cancel culture and prediction markets, and specifically my views on the Richard Hanania controversy.</p><p><strong>Stephen: </strong>Cancel culture is bad. If you want to help people, you should try to help them improve their views. Prediction markets can play an important role in getting people who believe incorrect things to believe better things. They provide a better calibrated picture of how the world works, which can help people improve and hold better beliefs. However, prediction markets won't tell you whether things are right or wrong. They will tell you whether people believe things are right or wrong or will believe them at some future date, but they won't address those questions directly.</p><p>As for Manifold's moderation policy, we have tremendous faith in random internet strangers to mostly do the right thing. We want Manifold to be culturally neutral and not enforce particular political sides or stances on issues. We prefer to allow as much ideological diversity as possible. We believe it's bad for social media platforms to impose any particular narrative. We're trying to operate as close to a free environment for anyone of any political persuasion as we can, within the limits set out by the law and other structural factors that we face as a business. Regarding the specific case of Richard Hanania, I think it's bad he was cancelled.</p><p><strong>Theo: </strong>What specifically was this controversy, for the audience?</p><p><strong>Stephen: </strong>The original thing that set off his cancellation was when it came to light that a decade previously, when he was a college student, he wrote a bunch of dumb articles under a pseudonym. Some journalists discovered that the pseudonym was him and released these in the future. He released some statements saying that he disavowed the dumb things that he used to believe and doesn't believe them. For most people, many believe really dumb things in college or as teenagers. I think it's important as a society to understand that people should not be held accountable or publicly punished as an adult for things that they believed as a teenager. I think it would be very bad for platforms like Manifold to take a strong stance against content like that.</p><p><strong>Theo: </strong>Do we have any more audience questions?</p><p><strong>Audience Member: </strong>Why aren't there more institutional types in the market? You mentioned before you think that would improve the market.</p><p><strong>Stephen: </strong>This is a great question. A lot of it is actually just regulation. If you're an investor and you invest in a regulated exchange and you lose money, that's understandable. If you're investing your limited partner's money in some exotic financial instrument that's unregulated or is offshore etc., if you lose money you're going to get sued. This basic factor prevents a lot of institutional capital from moving into unregulated domains. If there's enough money in this space then eventually that demand will emerge. Crypto is a good example of this. Crypto even right now is still not legally kosher everywhere or even in the U.S., but there's beginning to be more and more institutional capital pouring in just because the opportunities are there. </p><p>The other reason why there isn't more institutional money in prediction markets is just that there's not that much money in general. I think similarly to crypto, the trajectory that prediction markets and Manifold in particular will follow is that we're starting with the consumer use case. Once we get more consumers and retail trading volume on our platform, eventually over time, institutional capital will follow especially if that's accompanied by deregulation. </p><p><strong>Theo: </strong>Anyone else?</p><p><strong>Audience Member: </strong>I guess I have a question. Is there a role that journalists and media publications can have and maybe being incentivized to help resolve certain difficult questions or participate in that process? </p><p><strong>Stephen: </strong>Sure, even today on a lot of markets on Manifold, you'll see that a common type of resolution criteria that people will employ is deferring to mainstream media to decide the outcomes of markets, particularly in cases where outcomes are ambiguous and you need some independent neutralish source to make some sort of judgment call. </p><p>For instance, we had a market recently on whether in the Israel-Palestinian conflict there would be an invasion of Rafah. Invasion is actually a totally ambiguous term. There's no strict legal definition of invasion. If you created your own personal market on whether it was an invasion in your heart, people may not bet on that because they don't trust your ability to have a reasonable understanding of what that means. We've had several markets on whether the New York Times will call it an invasion. That's a good way to operationalize this really difficult fuzzy claim. </p><p>A lot of the work is doing things like that. Another type of pattern that people look for is for a general media consensus on something, which is usually an indication of fact in cases where, like U.S. presidential elections, typically are not disputed but the last one kind of was and perhaps other ones will be in the future. In a politically tumultuous time, being able to enumerate a list of different journalistic bodies and say if most of them say this then we're going to resolve according to that, provides a reasonable standard and baseline.</p><p><strong>Theo: </strong>I think we have time for one more. Yes?</p><p><strong>Audience Member: </strong>Is there any role for sweepstakes other than regulatory arbitrage? </p><p><strong>Stephen: </strong>Yeah, so the concept in American law that makes something a sweepstakes is this concept of alternative method of entry, which means you have to be able to enter into the sweepstakes without paying. If you have to pay to participate in the contest to win a prize then it's not a sweepstakes. </p><p>The key thing that makes sweepstakes good and fun relative to other types of mechanisms is that it allows free play. As I mentioned earlier, even in a world where there are totally deregulated real money prediction markets, I think we do want this space of play money prediction markets where anyone can participate. Insofar as sweepstakes are a way of achieving this, I think they're good and will continue and persist into the future.</p><p><strong>Theo: </strong>All right, well I think that's all the time we have, so thank you so much Stephen Grugett for coming here and doing this live interview with me at Manifest. Everyone go check out Manifold Markets at manifold.markets and yeah, I think this was great. </p><p><strong>Stephen: </strong>Yeah, thank you so much for having me.</p><h3>Part 2: Austin Chen</h3><p><strong>Theo: </strong>Welcome back to episode 16 of the Theo Jaffee podcast, part two, again live on day two of Manifest. Today I'm interviewing Manifold co-founder Austin Chen. First question...67 days ago on April 2nd, you officially left Manifold and in your farewell post, you gave four reasons for doing so. Manifold is stable and doesn't have much left to iterate on, you're not excited for the next steps including the pivot, prediction markets are insufficiently powerful, and short AI timelines muddle everything up. So far, the Manifold market has predicted an 8% chance you'll regret it in two years. I'm assuming you don't yet regret it, but do you have any more details to offer, especially on the prediction markets being insufficiently powerful?</p><p><strong>Austin: </strong>The prediction markets being insufficiently powerful is a point I've thought about many times throughout my tenure at Manifold. It was pitched to us as a revolutionary mechanism that would help us figure out what the future will hold and how to navigate the many decisions you have in the world. One thing I noticed pretty early on is that a prediction market can only tell you very few bits of information. It will tell you how likely the thing is from 0 to 100 percent, that's the main source of information that a prediction market by itself gives you. But you need a lot of bits of information to navigate the world. When you're making a decision like what policy, what feature should I implement for Manifold, it became very hard for us to use our own markets to figure out what we should do. </p><p>James, my co-founder, has thought of some pretty interesting mechanisms to try to get around this. If you look at a prediction market, most of the bits of information are in the question itself. So James thought to invert the traditional market structure and let people submit the questions and crowdsource the question creation part as well. That could hypothetically generate a lot more bits of information. But that mechanism hasn't proven out to generate really good policies, really good paths, really good plans for navigating the future. So I still think we're kind of at the drawing board with regards to how do we use these predictive mechanisms to make better decisions. </p><p>I was a true believer in the beginning that prediction markets can really help us act in the world. Now I still think that there's a good business in prediction markets, they provide fun, they provide a game that people enjoy betting on, but I'm less sure that these are the things that will help us navigate.</p><p><strong>Theo: </strong>If they can't provide foundational governance value to society, what areas do you think that prediction markets would actually be better than the alternatives for predicting?</p><p><strong>Austin: </strong>They're a pretty good aggregation mechanism, one that doesn't really exist in other areas. They can cohere all of that into a single point, like you can get a much better question of what does the world believe about whether Biden or Trump will win by having them bet on a prediction market than you can with a variety of other mechanisms. So I think the aggregation function of prediction markets is probably the most valuable one. Besides just aggregating all the data into a single percentage estimate, you also have people ask comments and bet back and forth, which are additional add-ons. They're not core related to prediction markets, but as you extend this functionality and people are all in the same location, you get additional benefits. </p><p><strong>Theo: </strong>So now you're working full-time or mostly full-time at Manifund, which is a very unique charity organization that has a whole bunch of unique features like, for example, re-granters. So you have people who you entrust a budget to, to let them donate money. How did you make some of the design decisions behind Manifund?</p><p><strong>Austin: </strong>A lot of them were based on my own experience as a grantee in the ecosystem. I've received some grants from the Long-Term Future Fund, for example, who can give a pretty small grant, and a Survival and Flourishing Fund, which can give a pretty big grant. I noticed a bunch of shortcomings. For example, they tend to not give you very much other feedback, rather than did you get the grant, did you not get the grant. There's not a lot of other data points to look at. You don't have a sense of what kind of grants they're actually looking for. Most of these have what are called open grant databases, but it's really just a single sentence, or maybe just the place where the grant went, and how much money it was. It doesn't tell you what was in the application, or what are the decision processes behind why the grantor decided to pay out to the grantee. So those are things that I wanted to fix with Manifund. </p><p>On re-granting specifically, re-granting was a mechanism that was really popularized by the FTX Future Fund.</p><p><strong>Theo: </strong>Now the FTX name has been tarnished, I would say. It was good for its time. I remember during the FTX glory days, when Sam Bankman-Fried was on Nas Daily and the Dwarkesh Podcast.</p><p><strong>Austin: </strong>I was perhaps the last SBF fanboy and still a die-hard. I guess your idols die very slowly.</p><p><strong>Theo: </strong>SBF did nothing wrong!</p><p><strong>Austin: </strong>However, I wanted to separate out the Future Fund from FTX itself, because one massive unfortunate shortcoming was that they tried to tie these two things together. Future Fund, I was pretty close with a lot of people running it, like Leopold Aschenbrenner, Avital Balwit, and I've spoken to some of the other people involved as well. They were just really good people. Good both in the competent sense, but also in the virtuous, trying to do good things for the world sense. For instance, on their re-granting program, they made the decision to not announce who their re-granters were in public, because they didn't want this thing becoming a weird status badge that would change the dynamics of the EA ecosystem. They didn't want to be seen as the ones awarding status. I thought that was one small example of a decision they made very thoughtfully.</p><p>The Future Fund did a lot of really cool things. One cool thing they did was the re-granting program, where people were just empowered to have individual budgets of something ranging from hundreds of thousands to millions of dollars, where they could more or less make a decision on a grant without having to get external approval from committees or things like that. I think Future Fund would just do a safety check, but then the re-granter basically had full discretion of how to spend the funds. This is actually a kind of thing that's very rare in the entire grant-making ecosystem. Most of the time, people think that if you have to give out money, you have to do it with a process. You have to do it very carefully. You have to have written-up concrete justifications for why this grant is being given to be accountable. Future Fund was like, no, let's throw this out the window. Let's just let people give out money. Let's do it really quickly. We're going to try to put an emphasis on getting money out the door very quickly. I think these were all really great things. Things that I had suffered a lot when I was a grantee and I really wanted to promote.</p><p>So Future Fund collapsed, but then at some point later, one of the people involved in Future Fund put me in contact with one of the donors and was like, hey, we think the re-granting program is still really good. Even though FTX isn't around, we might still open when the fund is. So then this anonymous donor gave us 1.5 million dollars last year and 1.5 million dollars this year to distribute to ASX re-granters. And they are making a lot of the grants in Manifund right now.</p><p><strong>Theo: </strong>So how do you choose re-granters and how similar are good re-granters or philanthropists to good investors?</p><p><strong>Austin: </strong>We actually didn't choose the re-granters in this case. Manifund views our role as more of a platform, a neutral platform. The grant maker, the person who provided funds, had about five or six people in the ASX space. We validated their picks. We looked over them, made sure they looked like they were going to be able to give grants in the ASX space. But we did not make the decision on who the re-granters were.</p><p><strong>Theo: </strong>How similar are good investors to good philanthropists?</p><p><strong>Austin: </strong>None of our re-granters are investors. All of them work in ASX basically full-time. You can see the list, but there are people like Leopold.</p><p><strong>Theo: </strong>Very impressive list.</p><p><strong>Austin: </strong>I think we worked pretty hard to find good people, but again it was up to this anonymous person whose identity is still not known to the rest of y'all. I think they already had some connections to these people and as a result that's how we got the list of re-granters in the first place.</p><p><strong>Theo: </strong>So why should someone donate to Manifund over something like GiveWell or Open Philanthropy?</p><p><strong>Austin: </strong>You can't even donate to Open Philanthropy, so that's one reason you need to give to Manifund. If you want to give away your money, OpenPhil won't take it, as far as I can tell. Unless you're Dustin Moskowitz, I guess. Then OpenPhil will take your money. GiveWell does take your funding. GiveWell only donates basically to projects in the global health and development space. That's mostly things in projects in Africa or other ways that they can find to help out humans very cheaply. So depending on what your worldviews are, if you believe that humans alive today are important but maybe less important than the welfare of animals because of the amount of funding in these spaces, you might want to give to a different animal welfare related fund. Manifund doesn't have too much of that. What Manifund does have a lot of is AI safety research. So insofar as you think that the future of humanity, humans living in the future are still a pretty neglected cause, you might think that giving to projects on Manifund would be good.</p><p>Right now it's not really the case that Manifund even accepts that many direct donations. Most of the time when you go to Manifund, you are screening the projects yourself. Manifund is kind of like a Kickstarter where you can just look at the project proposals and yourself decide, I think this is promising. I think this is a shot. I think I want to donate to this. This is actually I think closer to the roots of EA than GiveWell is today. Because today when you go to GiveWell, you kind of think like GiveWell is this one trusted institution. You can just give money to them and they will distribute it wisely. But back in the day when EA was just getting off theWhen GiveWell was just getting off the ground, there was no other trusted source they could look at. They had to make all their decisions themselves. So I would say that if you are in a position of trying to give some money, it's a good thing to make that decision yourself a little bit. Try to put yourselves in the shoes of a grant maker and try to evaluate whether a project that is about to go out will work. Are the founders good? Is the plan of impact good? This is the kind of thing that the people at GiveWell, way back when it was getting started, had to think a lot about. </p><p>Nowadays, EA has become a lot more institutionalized in a way that I don't quite like. In that you just try to guess who these people are who are smart. It's a little bit more political, a little bit more affiliation-based rather than doing your own research. So Manifund lets you do your own research and make your own decisions about what to fund. </p><p><strong>Theo: </strong>Earlier you were talking about Dustin Moskovitz. So how much power do people like Dustin Moskovitz and Cari Tuna and Jaan Tallinn have over the EA funding ecosystem? Is there a centralization risk there?</p><p><strong>Austin: </strong>This is a thing that lots of people in EA discuss. I don't think as a practical matter Dustin or Cari have that much direct influence because they don't make day-to-day governing decisions at OpenPhil. If they want to make a change they'll probably communicate it out to this 200 person organization. And that message then has to trickle down to all the different grant makers and people who support the grant makers at the OpenPhil institution.</p><p>OpenPhil plus Dustin and Cari who are maybe the largest voices at OpenPhil but still, I think less than 50% of the stuff that gets done by OpenPhil you would causally attribute to coming from the heads of Dustin or Cari. OpenPhil as a whole is a big influence in the EA ecosystem. Yawn I think does more direct thinking about what to invest in and does make those decisions. They're both big players. Manifund is trying to be the place where all the other smaller players can find all the other grantees and kind of set up the marketplace, the clearinghouse for that. </p><p><strong>Theo: </strong>How would Manifund's priorities change if its annual budget were a billion dollars or ten or a hundred?</p><p><strong>Austin: </strong>I like to think that I often think of Manifund just as Future Fund running the same-</p><p><strong>Theo:</strong> Future Fund 2.</p><p><strong>Austin: </strong>Yeah Future Fund 2, the Future Future Fund. I think their playbook looked pretty good. Their explicit first year was trying to run a bunch of experiments on different ways you could distribute funding in large amounts. That's why they were excited about the re-grantor program. It differs from the OpenPhil classic. You have program officers at OpenPhil who are just in charge of large budgets and make decisions relatively slowly. So future fund wanted to try a different approach with maybe a hundred different re-grantors with small budgets who can just make decisions very quickly. That's the kind of testing future fund did. That's the kind of testing I would do. </p><p>I'm pretty excited by something like impact certificates which is trying to set up a venture ecosystem for charity or effective altruism grants. So that's the first thing that I would try out but I'm not so committed to it. I think the meta strategy is try lots of things and see which ones work and scale them up and the first obvious thing I would try would be impact certificates.</p><p><strong>Theo: </strong>So how did Manifest manage to get such a high density of smart and interesting people and internet celebrities? You think it would just be selection plus being in the Bay Area but very few other places and gatherings are like this. So why Manifest specifically?</p><p><strong>Austin: </strong>I think what you might not see is the amount of time that our team, mostly me and Saul, have put into just sending out invites. We have this CRM of maybe 300 different people who we thought would be really awesome speakers and we spend a bunch of time just writing individual emails to all of them. </p><p>Some of it is taste in that basically Manifest is a gathering of a lot of the people who I think are the best writers the most interesting thinkers in the world and I've tried to invite them and try to position myself in their shoes and think what would make a good event for them well why would they be excited to come and pitch it to them. That often involves calling out a few of the other names. Name-dropping I guess a little bit which I don't feel that great about necessarily but I think it's a core human being. The first thing you think about when you're going to a party is who else will come to this party who else do I know at this party. So I try to highlight that for the speakers who I invite. </p><p>So that's a big part of it just the manual work of spending 15 minutes per person sitting down to write an email from scratch to invite them to come to this event that we're putting on. I think two other things that are working in our favor. One is that we ran Manifest last year and it was a really good event. And that just leads to word of mouth growth, more or less. People tell their friends, hey, Manifest was a great event. And that is so valuable. The virality of having created a good product. They say if you build a better mousetrap, people will beat a path to your house. And I've seen that with Manifest, I think. So many people have said, oh, my friend told me about Manifest. That was so great last year.</p><p><strong>Theo: </strong>I couldn't miss it this year. I experienced massive FOMO last year when it was happening, so I had to make sure to be here all summer so that I could make it to Manifest.</p><p><strong>Austin: </strong>I'm glad that you work here and I'm grateful to have you here.</p><p><strong>Theo: </strong>It was well worth the $200.</p><p><strong>Austin: </strong>$200? Oh, because you got a student discount. There's a separate digression about how we use pricing discrimination a lot at Manifest. For the speakers, there's basically a negative price. We will pay for some of their flights and housing. For students, we try to make it relatively cheap. I try to charge people who have a lot of money a lot more. That goes into the economics of making something viable. </p><p>Just to finish answering your question, I think the last part about how a lot of really cool people come to Manifest is that we kind of lucked out with prediction markets as a topic. It turns out that many of the really smart people in the world just think that prediction markets could be cool, at least. They're kind of open to weird mechanisms. And this is pretty differentiated. A lot of the rest of the world has not heard about these things. So it turns out that running a conference just on prediction markets will draw out the right crowd. I don't know if this will sustain, especially if Manifold actually succeeds in growing. I don't think you could do such a good conference on blogging, for example. But we'll see.</p><p><strong>Theo: </strong>Because it's just not differentiated enough?</p><p><strong>Austin: </strong>I think so, yeah. Or especially like podcasting, or TikTok, or something. And I don't think those would be nearly as highly selected for interesting people.</p><p><strong>Theo: </strong>Earlier, we were talking about Sam Bankman-Fried and how SBF did nothing wrong. Stan SBF. So how much do you think the status of EA has been damaged by him?</p><p><strong>Austin: </strong>Quite a bit. I don't know if I'm the best person to answer this kind of question. I'm relatively new to EA. I only got into the space a couple years ago. My sense is that it's just much harder to be unapologetically EA. You always have to caveat with, oh, but the SBF thing, et cetera, et cetera. And I do know that I feel less intellectually excited by EA, either its ideas or its participants nowadays, than I did two years ago. And I'm not sure if that's because of the SBF thing, or they were just co-coinciding for other reasons. </p><p><strong>Theo: </strong>So a lot of what you do at Manifund revolves around future of humanity kind of stuff, including AI and AI existential risk and safety. So how have your views on open AI and AI in general, especially because you just did a podcast on OpenAI, but how have they changed since the Leopold Aschenbrenner piece the other day, if at all?</p><p><strong>Austin: </strong>Unfortunately, the Leopold piece dropped during Manifest, so I haven't read most of it. I don't think my views have shifted that much. But yeah, again, I just haven't really read it in depth and only skimmed it. Hard to answer. </p><p><strong>Theo: </strong>What about all the OpenAI drama over the last couple of weeks? You wrote on the podcast page a specific note about, oh, this was written before, like this, and this, and this, and this, this.</p><p><strong>Austin: </strong>I am maybe an apologist for Sams everywhere, not just Sam Bacon for you, but also Sam Altman in this case. I kind of, maybe because I've spent some time in the role of somebody similar to SBF or Sam Altman as an executive of a startup that was growing and had to make decisions, I kind of see reasons why things that look bad in hindsight, such as the massive fraud of Sam Bankman-Fried or the NDA thing with Sam Altman, weren't really that attributable to the leaders. With the fraud thing, I think I was probably wrong at the time. I now put a lot more weight on the fact that SBF knew what was going on, and that was bad. So I made a couple there. But with the Sam Altman thing, I'd say with the NDAs, as an executive, there's so many things that you're trying to do all at the same time. You often don't have that much time to go into the details for each one. You don't know in advance that, oh, this NDA thing is going to be bad or good, and it's going to blow up. You make 100 of these small decisions every single day. So it's not that surprising to me that this kind of thing would slip past Sam's radar.</p><p><strong>Theo: </strong>So in EA a lot, there's this concept of hinginess, which relates to whether it's better to donate money now or invest money for the future. So do you think this decade has greater hinginess than other decades? So will money donated now, if it's a pivotal moment in AI or something, have more of an impact than other times?</p><p><strong>Austin: </strong>I tend to think so. But I also have kind of direct financial incentives that would lead me to think so, which is roughly that on Manifold we take a 5% transaction fee of donations that happen. So it would be better for the Manifold budget if a lot more people donated now as opposed to wait a few years, something like that. So yeah, it is not a topic that I thought that deeply on. As far as we can tell, we get funding and we're pretty much given the mandate to spend this funding within a year or so. So the question of higher level portfolio allocation, should you try and save up more to donate in the future versus not, is not a thing I've spent a lot of time on. </p><p><strong>Theo: </strong>Why aren't more philanthropic organizations open? Especially the ones that have "open" in their name, like Open Philanthropy. Is it a naming curse, like Open AI?</p><p><strong>Austin: </strong>I think being open isn't a huge pressure on philanthropic organizations. There is some pressure, but it's mostly to be open to your donors, not as much to the general public. It's common for a philanthropic organization to host a dinner event where they talk about what they've been up to, but it's not as important to publish a blog post or a YouTube video to get the same message across. This is because the lifeblood of philanthropic organizations is donations, so a lot of it is optimized for the donor flow.</p><p>Effective Altruism (EA) probably does somewhat better at this than most other philanthropic organizations. They try not to consider the donor the end user, but rather, the recipient of the good stuff, the person whose utility is being maximized. So philanthropy is very difficult in some sense, much more difficult than regular capitalism, because you have to deal with three competing parties: your donors, the people who are doing the work, the grantees, and then the recipients of the good stuff. With typical capitalism, the people who are receiving the good stuff and the people who are paying you, the donors, are the same people. So you have more of a tight feedback loop. You know that whether or not the good stuff is actually happening, because they will keep paying you money if it is and stop paying you money if it's not. </p><p>There's a bunch of things in philanthropy, and it does lead to all kinds of weird things, such as the incentive to, not necessarily incentive, but just lack of incentive to talk more about what's going on. I mean, it is also the case that many capitalistic institutions are just not that open. Like, most companies are closed source, for example. They don't publish most of what goes on internally. They view that as a differentiating advantage. </p><p><strong>Theo: </strong>What do you personally think are some of the best projects that Manifund has funded, and why those specifically?</p><p><strong>Austin: </strong>I'm biased, because as a re-grantor myself, I usually pick out the ones that I put money into. One that comes up is Lumina Probiotics, the tooth bacteria thing. That one, we were very early on. It was before even Aaron had secured the sequencing for this. He came on to Manifund and was like, hey, I think there might be an opportunity to get this bacteria and then give it to a bunch of people. I think this could be a good charity. I think this could be a good business. And then we actually invested in that very early on. I think the fact that it is so well-known and widespread, at least within the rationalist EA community today, is kind of like a success in re-granting stock picking, I guess.</p><p>I think most of the grants that are on Manifund I think are pretty good. But here is another issue with philanthropy. I don't actually have that much expertise in AI safety. My expertise is in startups and building websites and technology. We are trying to run this grant-making program on behalf of people in AI safety. So my sense of whether our grants that we've been giving out have been good is mostly just based on second-hand reports. Do the people who I respect think that the grants are good? And they mostly do. Our donor thought they were good enough to want to renew the program for a different year. That is, I think, the strongest signal I have that we're doing something worthwhile. Otherwise, it is hard to say. Especially with AI safety specifically, it is such a field of really long feedback loops. In some sense, did the world explode or not? We won't know for another five years. And the projects that we're working on in the meantime, did they constantly affect that or not? Very hard to say.</p><p><strong>Theo: </strong>Would you fund Lumina if you knew that AGI, benevolent AGI was coming in five years that would be able to cure mouth diseases itself?</p><p><strong>Austin: </strong>That's a great question. I think yes, because I think concrete wins are really important of the kind that Lumina has basically already delivered. It's still a little bit hard to say how effective the bacteria is because it hasn't gone into a lot of people's mouths for a trial. But I think winning is just super important. And insofar as Aaron builds up the skills set of being able to market a thing and promote it and share with a bunch of people, I think that will be robustly useful in the coming AI future more or less. So I think helping him accomplish this goal will also mean that in two years, if AI stuff is going nuts, he will have a lot of the capacity, resources, network, talent to be able to help out with that. I think he actually wrote in his management application that he would prefer to be doing something, something AI safety, because he thinks that's more important. But this is just such an obvious low-hanging, dumb thing that society is dropping, the fact that we should not have cavities at all, that someone should go do this. And he was the one who thought about that.</p><p><strong>Theo: </strong>I wonder what other low-hanging fruits are just sitting there like that.</p><p><strong>Austin: </strong>Yeah, I think chasing that is actually probably much better for the next for a smart EA person who's trying to figure out what to do, trying to upskill in AI safety could be a good option.</p><p><strong>Theo: </strong>Of course, if everyone is doing AI safety and there's no one left doing anything else of value, then that would be a problem.</p><p>Are there any questions from the audience? </p><p><strong>Audience Member: </strong>I would say having observed Manifold and Manifund, a lot of your success seems to stem from the fact that you guys execute really fast and move quickly. What have you learned? I guess it's a two-part question. How do you think you guys got off the ground and were able to execute so quickly? And then, what have you learned in the process that you've iterated just taking action as an entrepreneur? </p><p><strong>Austin:</strong> It's interesting because I don't even really think of Manifold as moving fast. I just think of most other software organizations as moving slowly for some hard-to-describe reason. </p><p>We picked some really good winners with regards to the technology early on. We started with Next.js, we started with Firebase. These were both tools that helped us iterate very quickly. We happened to have a fair amount of skill in these before starting Manifold. I had been working on an online board game for a long time before this. James and Stephen had been working on another React site before this. So we came into this with experience launching websites and startups. I think that helped us maintain a very high development velocity.</p><p>I do think that software is just not that fundamentally difficult. It is a field where you can iterate very quickly, and we leveraged that a lot. So that's on the building side. Then there's also the feedback loop side. We took the YC advice of talking to users to heart. We have a Discord channel and a Discord server where a lot of our power users hang out. They talk to us. Whenever something goes wrong, we find out about it very quickly, or we can ask them about things all the time. </p><p>The Manifold site itself is also another way where people can talk to us because this is a particular nature of building a social network. We're just on it all the time, and just by being on the Manifold site, people can create prediction markets about Manifold. A lot of this was how we got to very fast iteration early on. Just knowing that it's possible, I guess doing things that you have familiar clarity with, that's all helped with the execution.</p><p><strong>Theo: </strong>Just ship, that's the key. Well, thank you so much for coming on the show.</p><p><strong>Austin: </strong>Thank you so much, Theo.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Stephen Grugett and Austin Chen. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts. Follow me on Twitter at Theo Jaffee, and subscribe to my sub stack at theojaffee.com. </p><p>Some of my biggest takeaways from this episode include, prediction markets work better the more large and liquid they are. It's fundamentally hard to apply them to certain areas like dating. And there's a lot of room for innovation in philanthropy, like what Manifund does. </p><p>Be sure to check out Manifold Markets at manifold.markets, Manifest at manifest.is, Manifund at manifund.com, Manifold's Twitter at Manifold Markets, Manifund's Twitter at Manifund, and Austin's Twitter at akrolsmir. All of these will be linked in the description. </p><p>I had a great time at Manifest, and really enjoyed doing a live, in-person interview with an audience. I hope to do more soon. Thank you again, and I'll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[#15: Perry Metzger]]></title><description><![CDATA[Extropians, Nanotech, AI Optimism, and the Alliance for the Future]]></description><link>https://www.theojaffee.com/p/15-perry-metzger</link><guid isPermaLink="false">https://www.theojaffee.com/p/15-perry-metzger</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Tue, 14 May 2024 05:02:25 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/144609720/03cfd5348e1298ca483934f8ba4833ae.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Perry Metzger is an entrepreneur, technology manager, consultant, computer scientist, early proponent of extropianism and futurism, and co-founder and chairman of the Alliance for the Future.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>0:47 - How Perry got into extropianism</p><p>7:04 - Is extropianism the same as e/acc?</p><p>9:38 - Why extropianism died out</p><p>12:59 - Eliezer Yudkowsky</p><p>17:19 - Perry and Eliezer&#8217;s Twitter beef</p><p>19:46 - TESCREAL, Baptists and bootleggers</p><p>22:34 - Why Eliezer became a doomer</p><p>28:39 - Is singularitarianism eschatology?</p><p>37:51 - Will nanotech kill us?</p><p>45:51 - What if the offense-defense balance favors offense?</p><p>53:03 - Instrumental convergence and agency</p><p>1:05:35 - How Alliance for the Future was founded</p><p>1:12:08 - Decels</p><p>1:15:52 - China</p><p>1:25:52 - Why a nonprofit lobbying firm?</p><p>1:28:36 - How to convince legislators</p><p>1:32:20 - Can the government do anything good on AI?</p><p>1:39:09 - The future of Alliance for the Future</p><p>1:44:22 - Outro</p><p></p><h3>Links</h3><p>Perry&#8217;s Twitter: <a href="https://x.com/perrymetzger">https://x.com/perrymetzger</a></p><p>AFTF&#8217;s Twitter: <a href="https://x.com/aftfuture">https://x.com/aftfuture</a></p><p>AFTF&#8217;s Manifesto: <a href="https://www.affuture.org/manifesto/">https://www.affuture.org/manifesto/</a></p><p>An Archaeological Dig Through The Extropian Archives: <a href="https://mtabarrok.com/extropia-archaeology">https://mtabarrok.com/extropia-archaeology</a></p><p>Alliance for the Future: </p><p><a href="https://www.affuture.org/">https://www.affuture.org/</a></p><p>Donate to AFTF: <a href="http://affuture.org/donate">affuture.org/donate</a></p><p>Sci-Fi Short Film &#8220;Slaughterbots&#8221;: </p><div id="youtube2-O-2tpwW0kmU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;O-2tpwW0kmU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/O-2tpwW0kmU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>More Episodes</p><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:01)</p><p>Hi, welcome back to episode 15 of the Theo Jaffee podcast. We're here today with Perry Metzger.</p><p>Perry Metzger (00:06)</p><p>Hello?</p><p>Theo Jaffee (00:09)</p><p>So you've been into futurism, extropianism, and the like for a very long time, several decades, starting in like...</p><p>Perry Metzger (00:16)</p><p>35 years, maybe a little more depending on how you count it. Long enough that, you know, that one starts to know almost everyone and have seen almost everything.</p><p>Theo Jaffee (00:30)</p><p>So how did you first get into this scene?</p><p>Perry Metzger (00:32)</p><p>I think so. I was an undergraduate at Columbia in the 1980s and someone posted a book review of this book by Eric Drexler called Engines of Creation. And, you know, and I went out and I got a copy of the book and weirdly it meshed with all sorts of thoughts I had had as a student.</p><p>You know, once biotechnical, so, you know, in the 70s, you know, it was not unusual, for example, to have a Time Magazine cover about Genentech and how they, you know, were commercializing genetically engineered bacteria to produce, you know, to produce things like insulin and human growth hormone or what have you, which at the time was like, you know, this shocking thing. And I...</p><p>I started thinking at the time, well, gee, you know, you have these things that can manipulate things at the molecular level. You know, could you use them to make computers? Could you use them to build macroscopic objects? You know, I mean, we have trees, we have, you know, we have plants, we have whales, you know, why, why couldn't you do, you know, crazy things with biology like that? And I'd put that in the back of my mind. And then I encountered Eric Drexler and I encountered.</p><p>FM S. Van De Are and, and, and, and, you know, the book, True Names by, damn it, I'm having a, I'm having a senior moment. But the, the gentleman who coined the term singularity, Vernor Vinge Yeah, he pronounced, he preferred Vinge. I only met him.</p><p>Theo Jaffee (02:18)</p><p>Vernor Vinge? Vinge. Okay.</p><p>Perry Metzger (02:25)</p><p>you know, the one time, but it was a very fun multi -hour conversation. You know, I, you know, yeah, a sad thing that he's gone. Read a few books by him, read a bunch of other stuff. And one day, this was a while after I got out of school, my buddy Harry and I were hanging around at his apartment and he had...</p><p>Theo Jaffee (02:31)</p><p>Rest in peace.</p><p>Perry Metzger (02:54)</p><p>he'd recently gotten divorced and the way that he was entertaining himself was subscribing to a lot of zines. And these days, of course, no one remembers what these things were, but it used to be that a lot of people, you know, got their ideas into the world by, by basically making their own magazines on, you know, by Xeroxing up things and, and, and. Selling them to each other. And in, if you got one zine, it almost always had an ad for 20 more.</p><p>And he encountered this particular one called Extropy by a bunch of graduate students in Southern California. And the next thing you know, I was running a mailing list for people who were interested in the topics covered in Extropy. And the next thing you know, after that, we have a few hundred really, really interesting people from.</p><p>Theo Jaffee (03:46)</p><p>So you ran the Extropian mailing list?</p><p>Perry Metzger (03:49)</p><p>I started the Extropians mailing list. Yeah. It was a very heady sort of time. We had all sorts of cool people, Carl Feynman, Hal Finney. Hal, unfortunately, is dead now too. And Robin Hanson Yes, Robin and I have known each other since back then. And it's scary to think of how long back then was.</p><p>Theo Jaffee (03:51)</p><p>Wow.</p><p>my last podcast guest.</p><p>Perry Metzger (04:18)</p><p>but lots and lots of very interesting people suddenly popped up and it was one of the best floating cocktail conversations for a few years that I've ever participated in. Lots and lots of very interesting ideas being bandied about for quite a while. Unfortunately, it also had certain mutant offshoots as seen before one these days. But for the most part, it was a very, very cool time.</p><p>and a very cool bunch of people and I was very glad to hang out with them. You know, Tim May was one of our subscribers and he and a bunch of other people ended up going off to start the Cypherpunks movement, which I also got into and I ended up running a spin -off of the Cypherpunks mailing list called the Cryptography mailing list, which I still, you know, still exists. And I think I'm notorious to certain other people.</p><p>For having shut down the first conversation about Bitcoin online Because it was getting repetitive and we had rules against that But you know if I show up at in in certain cryptocurrency Circles, you know at various conferences or what -have -you You know, some people are like, you're the guy who shut down the first conversation of unbit coin and the answer is yes. Yes I am You know and more recently, you know, I've I've been involved in</p><p>you know, a lot in AI policy, not that I wanted to be involved in AI policy. I hate being involved in almost anything with the word policy attached to it. But it turns out that that although you might not care about the government, the government will care about you either way. And and so it's become necessary to do something about that. You know, I was involved a bunch in cryptography and cryptography policy when that was a much more controversial topic. So.</p><p>I suppose this sort of thing is not entirely surprising.</p><p>Theo Jaffee (06:18)</p><p>So when I was prepping for this podcast, I read through a bunch of, extropian stuff, the extropian principles and the like 1994 wired profile on extropians. And there was one thought that struck me the whole time, which is, holy crap, this is like identical to e/acc today, Effective Accelerationism. So is it literally just identical? Are there any substantive differences? Is it just a pure reincarnation?</p><p>Perry Metzger (06:36)</p><p>Yeah.</p><p>I think that a lot of people are older and there's also, I think, certain political differences. I think that the extropians were much more explicitly libertarian for a while. But I think, yeah, in substance, it's sort of the predecessor of e/acc in a lot of ways. It's amusing and kind of cool.</p><p>to see new people picking up the ideas and running with them. I've been kind of pleased by it. It's also been kind of cool getting to know a bunch of people as a result of the fact that all of this has gotten recycled. But yeah, your observation isn't wrong.</p><p>Theo Jaffee (07:28)</p><p>So why do you think Extropianism died out then? Or...</p><p>Perry Metzger (07:32)</p><p>It didn't. It's just that, you know, one of the things you learn when you're in enough of these long -term conversations that happen is that all of them are sort of like parties. And parties end at some point. If you, the party that goes on for six or eight days, eventually the guests get exhausted, start smelling bad, you know, run out of hors d 'oeuvres.</p><p>Theo Jaffee (07:35)</p><p>it evolved.</p><p>Perry Metzger (07:59)</p><p>and really desperately want to go home and maybe take a shower. All of these things end up being bunches of people that are interested in particular things and get enthusiastic about them and push hard. I mean, but the consequences of these things when the, you know, the influence of these things moves on. I mean, there were all of these really influential, you know, home computer clubs in the Bay Area.</p><p>in the 1970s and you ask, well, what happened to all of them? And what happened to all of them is that we all have home computers now. We don't even think of them as home computers. They're just computers. Lots of these movements have a moment where they flower and the ideas end up spreading in the world and then everyone moves on and does other things and it's cool.</p><p>Theo Jaffee (08:52)</p><p>Well, what you were talking about, with a party must come to an end at some point sounds like it would apply to a scene, but not necessarily a movement. You know, like communism, for example, lasted well over like a century, century and a half, and it's still very much alive today and it shaped the entire world. Yeah. It shaped whole countries. Why didn't extremism do that?</p><p>Perry Metzger (09:07)</p><p>Yes, although it smells much, much worse than ever before. But there was a... yeah. So I don't think that it died out as a set of ideas or as a set of things that people were interested in. You'll find that almost all of the things that people were interested in continued to obsess them and the people around them. If you look, for example, there are, I mean, you look at people like Aubrey de Grey,</p><p>or lots of other people who are interested in finding ways to cure the diseases of aging or retard aging itself indefinitely. A lot of those people, in some sense, were influenced by or an extension of the things we did. The people who are interested in cryptography, cryptocurrencies, privacy, et cetera, right now, are in many cases descendants of the Extropians mailing list and the Cypherpunks mailing list.</p><p>it's just that, you know, there are lots and lots of people who are working on various things. I, it's just that discussing things endlessly at some point becomes less interesting than going out into the world and working on stuff. So I think that what you've seen is that, you know, people like say Robin Hanson, who were very, I mean, Robin, announced.</p><p>wrote his original paper on idea futures for, I think it was the third or fourth issue, I think it might have been like the fourth issue of Xtropy. And there he is still to this day at GMU publishing lots of really cool ideas on related topics and being energetic about it. It's just that we don't give it a name anymore. But all of us are still out there.</p><p>Theo Jaffee (10:58)</p><p>I mean, you say that these conversations get repetitive and then people will stop, but it seems like the conversations on the extrobian mailing list in 1996 about AI risk are identical to the ones on less wrong in 2012 and then identical to the ones on Twitter today.</p><p>Perry Metzger (11:09)</p><p>that was long af.</p><p>By 1996, all of the stuff that was fun was gone. The early days of the mailing list, we had a rule about not keeping archives. So all of the most interesting, really early stuff is gone. But yeah, I mean, so one of the, maybe I'm giving myself too much credit here, but I perpetually regret at this point, you know.</p><p>So we had a few early members, people like Samir Parekh and what have you, who were teenagers when they joined. And Samir went on to starting the first company to commercialize an encrypted web server, the stuff that ended up becoming TLS. Every time you type HTTPS into a web browser, you're using that same technology stack.</p><p>which he then went off and sold for a lot of money and he went on to do all sorts of great other things. We had a bunch of people that age who did interesting things. We also had a teenage person who joined by the name of Eliezer Yudkovsky, and that seems to have gone much less well. I won't exactly say that I regret, you know, letting Eliezer join, but it turned into much more of a mixed success.</p><p>Theo Jaffee (12:22)</p><p>Hmm.</p><p>I mean, he might be the most famous person around today associated with the Extropians.</p><p>Perry Metzger (12:43)</p><p>maybe. But, you know, I mean, I'd probably say that he's more famous for creating, you know, for turning things that we thought of as descriptive into the objects of a cult. You know, first singularitarianism and, you know, then went on to create SIAI and MIRI, which got very little done.</p><p>but I guess he wrote a lot of good fan fiction or bad fan fiction. I never found it particularly readable, but never mind that. And yeah, I mean, pardon.</p><p>Theo Jaffee (13:18)</p><p>You haven't read HPMOR? You haven't read HPMOR?</p><p>Perry Metzger (13:23)</p><p>I tried. I had a very open mind. A lot of people who I respect or respected at the time told me that I had to read it. And I started and I got a few chapters in and at some point I just couldn't.</p><p>Theo Jaffee (13:37)</p><p>for the audience, Harry Potter and the Methods of Rationality, which is Eliezer's, like, what is it, 1200 pages, 200 pages long Harry Potter fan fiction about rationality and decision theory and that kind of thing.</p><p>Perry Metzger (13:51)</p><p>I think it's really a recruiting mechanism for his group. it works spectacularly well. There's this gigantic pipeline between the stuff that he's published and Younger of Divergent Kids and Mirian Effective Altruism and all of those things. It's kind of ironic that we find ourselves in a situation in which the people on the one side...</p><p>Theo Jaffee (13:55)</p><p>it works well.</p><p>Perry Metzger (14:18)</p><p>of the current debate about AI, which I'm sure you've covered in the past, but if you haven't, we can talk about it a bit. And the people on the other side of it all came from some of the same mailing lists and intellectual discussions and what have you, but drew very, very different conclusions. Like Eliezer came to the conclusion that it was his moral duty to build an AI god to rule the universe. And I would have been more disgusted by that, except for the fact that I didn't think that he'd ever succeed in building anything.</p><p>Theo Jaffee (14:22)</p><p>Yeah.</p><p>Perry Metzger (14:49)</p><p>I was there one day and Eliezer says that this didn't happen. I remember it happening. Other people I know remember it happening. I can't prove that it happened. But I remember Eliezer giving a presentation at a conference and one of the gods of AI research, Marvin Minsky.</p><p>standing up and saying, you know, everything you're talking about is stuff that we tried and it doesn't really work. And Eliezer then saying, but I'm smarter than you and I'll make it work, which he to be, you know, he's consistent, you know, he hasn't become much less arrogant over the years. But, but, you know, I didn't think that Eliezer was going to go off and build an AI that would rule over the universe and, and, and force libertarian ethics, which strikes me as being kind of</p><p>oxymoronic, it's, it's, you know, sort of like having, you know, the dictatorship in favor of freedom or what have you. but, I didn't think anything would happen there. And so I kind of noped out and went off and paid attention to other things. And while I was paying attention to other things, you know, they mutated a few times and now have become radical D cells and to use the current jargon, you know, LEA's are calling for things like bombing data centers.</p><p>you know, saying, you know, well, nuclear war is better than AI research because at least a small breeding population will survive a nuclear war and we might yet reach the stars, but you know, if there's AI research, we're all doomed, which I think is garbage and I'm happy to defend that, but nevermind.</p><p>Theo Jaffee (16:33)</p><p>So going back to what you said earlier about, Eliezer speaking at the conference. yeah, this has been like a public Twitter thing, for a while back in like July, 2023, you tweeted, that he presented at an extra conference about how he was going to build a geo FDI.</p><p>Perry Metzger (16:49)</p><p>I might be wrong, by the way. It might have been at a different conference. It might have been maybe at a four -site conference or something. I'm old. A few other people I know remember the same thing, but we were all at all of these conferences, so who knows?</p><p>Theo Jaffee (16:59)</p><p>And then...</p><p>And then Eliezer tweeted, I claim to have not given a talk at Extra 2 in 1995 and offered to bet Metzger or anyone in the habit of believing him $100 ,000 or 50 ETH. And then you didn't make the bet as far as I know. So.</p><p>Perry Metzger (17:09)</p><p>never happened.</p><p>Yeah, I couldn't prove that he was there and it didn't seem important enough. You know, it's true. I did not make the bet. You know, Mia culpa, Mia culpa, Mia maxima culpa. You know, I, you know, I was hoping that we would find the recordings from extra two, you know, Max claimed that he had them, but you know.</p><p>never was able to find the things. It might have been a different conference, by the way. It might have been a few years later. And it might be that I'm remembering the whole thing wrong, right? Because I'm old and people tend to, you know, when you get old enough, your memory for things that happened 30 years before, it ain't always the best. But if you were going back and reading the things that he wrote, they're pretty much consistent with my memory of him.</p><p>regardless of whether particular events occurred or not.</p><p>Theo Jaffee (18:19)</p><p>And then after he tweeted about the bet, you kind of disappeared from Twitter for a few months. So was that related or no?</p><p>Perry Metzger (18:29)</p><p>No, I had two issues, one of which was that I had a lot of work to do, and the other one of which is that I've had some health issues over the last year, and I tend to disappear from things unexpectedly for periods of time. I've been off Twitter for the last couple of days too, mostly because of that. But never mind. Too much information, probably.</p><p>Theo Jaffee (18:52)</p><p>Yeah, well, I'm glad you're back.</p><p>Yeah, I'm definitely glad you're back. So when we talk about extropianism and then some of its offshoots, a common kind of umbrella term that's used is test grill, which stands for transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and long -termism. So is that?</p><p>Perry Metzger (19:17)</p><p>This is sort of like this is like having a single term for, you know, for for Nazism, communism, the Grange movement and, you know, and and, you know, and marine biology. But, yeah, I've seen that. That's that's Timnit Gebru's thing, if I'm not mistaken. She's an interesting character.</p><p>Theo Jaffee (19:43)</p><p>totally incoherent. It's just not a useful term.</p><p>Perry Metzger (19:47)</p><p>I don't see why it's a useful term. But you know, if you're attempting to get grant money for, you know, for your discipline, and maybe it's a useful term, I see it as relative, you know, a lot of those things aren't related. Like, you know, I don't consider myself to be half of those things, at least. You know, I don't think that anyone who...</p><p>was in any of those things considers them all to be the same thing. But, you know, one of the more amusing things that I've noticed of late actually is the fact that there seem to be three distinct groups of people who are trying to get AI regulated or killed in one way or another. There's the MIRI EA group of people. There are the people like Timnit Gebru and what have you who claim that AI is horribly discriminatory and evil.</p><p>And then there are the people who would like to use the power of the government to grant them a monopoly so that their AI company gets as much of the market as possible without having to compete with anyone. And watching the interactions between everyone has been really kind of amazing.</p><p>Theo Jaffee (20:58)</p><p>The Baptists and the bootleggers.</p><p>So yeah, this is like Marc Andreessen's idea of the Baptists and the bootleggers.</p><p>Perry Metzger (21:09)</p><p>It's not originally his. I'm trying to remember the economist who came up with that phrasing. There's a Wikipedia page on it, and it'll probably give the right person's name. But yeah, it's an old idea. You have people who are true believers in some sort of cause, and then you have people who would just like to take advantage of it. And they have common interests. And the common interests often intersect in extremely bizarre ways.</p><p>Theo Jaffee (21:19)</p><p>Yeah.</p><p>Perry Metzger (21:38)</p><p>They intersected during prohibition, during alcohol prohibition in the United States, and they seem to be intersecting a lot in the AI debate at the moment.</p><p>Theo Jaffee (21:48)</p><p>Hmm. Do you think that Eliezer is a Baptist or a bootlegger? And why do you think -</p><p>Perry Metzger (21:52)</p><p>he's, he's, he's, I have no doubt in my mind that Eliezer actually believes everything that he says. I also think though, that he is in a position where it would be very, very difficult for him to believe anything else. the only thing he's done in his adult life, and maybe he'll come back and claim that I'm a horrible liar because he likes calling me that and say that he had a job once for six months doing something else. But so far as I know, the only thing he's ever been paid to do is work for his own nonprofit.</p><p>and it would probably be a rather unfortunate situation for him if he were to change his mind on this. There, there was a, there was a, a talk once given by Daniel Dennett on the unfortunate situation, that, that certain clergymen find themselves in if they no longer believe in the religion that they're a clergyman for, because there is nothing else they can make a living at. And yet here they are.</p><p>So many people find that they don't change their opinions or only do in private. I think Eliezer believes everything that he says and believes it very strongly. But on the other hand, he's also, it's also his profession. The only thing that he's ever done in his adult life is work for SIAI, work for MIRI. So, and I don't know who else would pay him to bloviate on the internet and write fanfic in place of doing AI research, but.</p><p>Theo Jaffee (23:18)</p><p>So why do you think he did such a total 180 in a relatively short period of time?</p><p>Perry Metzger (23:24)</p><p>I think it's relatively straightforward. So if you read his stuff from very, very late 1990s, early 2000s, his goal was to... So I... All right, take a step back. I wrote some thought experiments on the Extropians mailing list at one point to the effect of what would happen if an AI gained capability very, very quickly.</p><p>Because you could imagine a mature AI of some sort being able to do engineering many thousands, millions, maybe billions of times faster than humans. What if you had such an AI and it was hostile and it recursively bootstrapped itself? What would happen? And I think that Eliezer, at some point, and maybe I'm wrong.</p><p>you know, I, this is my hypothesis and some of this is just, you know, his writing. Eliezer at some point decided that, you know, since God didn't exist, he needed to create one and justified this partially in his mind by the idea that there would only ever be one AI anyway, because whatever AI came into existence would, you know, would take over everything in sight because it would recursively self -improve itself.</p><p>And so it would be good if the recursively self -improving AI happened to be one that created a utopia for mankind. I know one former EA who has said to me recently, and I shouldn't use their name because they haven't been public about this, that there's an extent to which the EA movement would not be satisfied with AI simply being...</p><p>safe in our society surviving because what they were promised was utopia. They were promised to be freed from the Hindu wheel of birth and re -death. They were promised that we would all live in bliss and paradise forever. And getting less than that is a failure.</p><p>Eliezer, you know, became very, very obsessed, as I said, with the idea that he and his cohort would build this thing and release it, and it would take over the world. And I think that at some point he realized that he and his cohort had made absolutely no progress on AI research whatsoever after spending millions of dollars of Peter Thiel and other people's money. Peter Thiel later came to seem to be pretty PO'd at Eliezer and what have you.</p><p>And at some point, you know, a few years ago, he wrote that, you know, that April Fool's Day Death with Dignity article on Less Wrong, which I don't think was actually an April Fool's article at all. But, you know, publishing it on April Fool's Day gives you plausible deniability. You know, from what I under -</p><p>Theo Jaffee (26:26)</p><p>Well, he believed in very high P -Doom before that, though, right?</p><p>Perry Metzger (26:31)</p><p>I think that originally he believed that everything was going to go great because he was going to build the AI that took over the universe and it would be aligned, whatever that means. I don't think it's a coherent concept. And it would bring about utopia and everyone would be happy. And I think that he believed that he was going to be the person to do that for many years. My understanding from people like Jessica who left MIRI,</p><p>is that to this day, they still have this idea, well, maybe what we can do, and this is like the most science fiction thing I've ever heard. It's not that it is prohibited by physics. It's just prohibited by reality. But apparently, some of them still have this idea that maybe,</p><p>They can get an early AI to upload a bunch of them so they could do thousands of years worth of research very quickly and still realize the singularitarian dream of, you know, 20, 25 years ago. But I think that Eliezer is mostly just depressed these days. From what I understand, he spends his time writing Glowfic, which I didn't know about until someone told me that it existed in connection with Eliezer. But we're talking so much about him. Why don't we talk about...</p><p>Theo Jaffee (27:46)</p><p>and</p><p>Perry Metzger (27:50)</p><p>you know, other stuff, you know, it's a...</p><p>Theo Jaffee (27:53)</p><p>Yeah, sure. So, do you think that Singularitarianism is kind of just like eschatology? Is it really like, scientific? Yeah. Well, you were involved in a lot of this kind of stuff early on, so...</p><p>Perry Metzger (28:00)</p><p>Yeah, it's a millenarian religion.</p><p>No, I was involved in something different. So we talked a great deal about the fact that the pace of technological change was going to keep growing and that there was likely going to be an inflection point around AI. This was descriptive, not proscriptive. There wasn't any sort of, well, it's our moral obligation to make this thing happen quickly in order to bring about some sort of millenarian utopia.</p><p>You know, and there's an extent to which singularitarianism is, I've heard it referred to as the rapture of the nerds, and I don't love the term, but it does seem to fit to some extent, right? Mostly we were talking about what was likely to happen and the sorts of things that one might, you know, that one might be interested in in connection with all of this. There was never any, it is our moral obligation to.</p><p>you know, to make AI happen as fast as possible or what have you. Almost none of us were going off to work on AI research. Eliezer did. I've only joined the AI research community in the last couple of years. I'm quite amateur at it, even right now. But yeah, I see it as a millenarian religion and not a very realistic one. I mean, it's, well, most religions.</p><p>And certainly most new religions are not particularly realistic, so that's perhaps the wrong way to put it. But I don't see it as being particularly sane, even by the fairly weak standards that people judge such things by.</p><p>Theo Jaffee (29:45)</p><p>Well, does it not seem to be the case that if you get human level AI combined with a whole bunch of other powerful technologies, you know, nanotech, gene editing, full -dive virtual reality, that the world that we live in after would be radically different? That's kind of the singularitarian hypothesis.</p><p>Perry Metzger (29:59)</p><p>Yeah. And by the way, well, yes, but I, and the world that we live in right now is radically different from the world that our ancestors lived in. Right. So imagine that you go up to, I don't know, a homo habilis. I think those were the first tool using, you know, hominids. And I'm sure that someone will now write in or tweet or something when they hear this and say, you're wrong. It was homo erectus or, or it was australopithecus or something else. I've probably gotten it wrong. Who cares?</p><p>You know, let's say that you go back to one of those folks and you say to them, well, gee, you know, if you actually pick up that stone tool and start working with it, eventually people are going to be doing video podcasts over the internet, living in centrally heated homes and eating strawberries in midwinter and all of the rest of this stuff. I mean, the world that such a creature lived in in our world bears no resemblance to each other. It's like, it's like.</p><p>crazily different. There are a few things that are similar. We still have sex and reproduction and a bunch of those things, but I imagine such a creature thinking about someone getting a cochlear implant or a newspaper for that matter. Things are radically different. And I think that in the future, things are going to be even more different, yes.</p><p>We're going to have extremely powerful artificial intelligences. We hopefully will eventually cure Alzheimer's and cancer and all sorts of age -related diseases will probably extend human life indefinitely. We'll be able to do things like doing neural implants to computer systems. We're going to be able to, you know, we're going to have a greatly accelerated</p><p>technologies in, you know, manufacturing technology, space technology, etc. The world is going to look extremely different. But that doesn't mean that history ends, or that it's going to be a utopia or that it's going to be a hell. It just means extremely, extremely different. And yeah, if you read, I think that it was run bookworm run is the Vernor Vinge short story.</p><p>about an uplifted chimpanzee. In the introduction, he writes that he had proposed to his editor that he might want to write a sequel about this technology, which makes a chimpanzee vastly more as intelligent as a person, that he might want to write a sequel about this being applied to a human being. And his editor wrote back and said, you can't write that story and no one else can either. And that's sort of where he started thinking about it.</p><p>this stuff. And yes, we have a great deal of difficulty, I think, predicting what's going what the world is going to look like once we have things like nanotechnology and AI. And I've given talks about this. You know, I would say that, you know, the history of technology has been the history of creating more and more and more general technological capabilities. You know, what is the Internet for? It's not for anything.</p><p>That's why it's so powerful. It's a general way to have computers talk to each other. What are computers for? They are things that allow us to do anything that can be expressed as an algorithm. And oddly, it turns out that recording a podcast or predicting the weather or entertaining a child all happen to be things that this technology enables. Nanotechnology is going to be insanely powerful.</p><p>It's going to allow us to live in a world where physical objects of incredibly rich capability are things that we can build. And right now, if you look out your window, there's, you know, if I look out my window, I see lots of trees and, you know, trees are things human beings can't build yet, right? But they are exquisitely nanostructured devices. They're capable of self -reproduction, which is not the most interesting thing about them.</p><p>say, but they're also capable of constructing themselves and putting out all of these photoreceptors and powering themselves and creating this amazing nanostructured material called wood. Wood is a really, really weird material when you think about it, and we take it completely for granted, but it's almost magical, right? And at some point, we're going to be able to do artificial things that are even more powerful than what natural biological systems can do. You know, the</p><p>prick in biological systems is that they're capable of assembling macroscopic objects like you or me, molecule by molecule, building them atom by atom. And we will be able to do that with artificial systems at some point. And it's going to be world changing, like dramatically world changing. And to be sure, that means that there is a limit to what we can predict about the future because as you know, the further out we go, the...</p><p>more different things are. That's a really terrible way to put that. I'm not very good at English today, but as time goes on, we're gaining more and more powerful technological capabilities. And for the most part, this is a great thing, right? I have friends who are alive today. I have one friend in particular who survived stage four malignant melanoma, which 30 years ago, if you got</p><p>malignant melanoma, never mind pretty much the stage, you were a corpse. It was a question of how long, right? And how do you survive stage four malignant melanoma now? Well, we understand the human immune system so exquisitely that we're capable of producing targeted molecules that can block or enhance various mechanisms inside the human immune system. So you can tell the immune system to go off and kill the malignant melanoma and it works, right? Doesn't work for absolutely everyone, but it used to be a death sentence and now it isn't anymore.</p><p>Theo Jaffee (35:57)</p><p>Mm -hmm.</p><p>Perry Metzger (36:21)</p><p>we're going to have a billion, a trillion such changes. It's going, the world is going to look very, very, very different. And probably if you give it a few hundred years, or maybe even if you give it 50, the world might look as different, you know, compared to where it is now, as it does for, you know, a person in the ancient world comparing themselves to today or even worse, right?</p><p>but that's not magic, right? Or, or a millenarian vision. That's just talking about, you know, technological change as it occurs in the world.</p><p>Theo Jaffee (36:53)</p><p>worse.</p><p>Yeah, I'm sorry, I gotta go do something really quick. I'll be back in like a minute or two and I'll edit this out.</p><p>Perry Metzger (37:11)</p><p>Sure.</p><p>Theo Jaffee (39:54)</p><p>Alright, I'm back. Sorry about that.</p><p>Perry Metzger (39:56)</p><p>very good. So we can have a trim point at some point like here. Yeah.</p><p>Theo Jaffee (40:00)</p><p>Yeah, Riverside makes all this really easy. It's great.</p><p>Perry Metzger (40:04)</p><p>Yeah, in fact, you can just cut by text, which is like the most amazing thing on earth.</p><p>Theo Jaffee (40:08)</p><p>Yeah. It's only gonna get cooler from here. I'm really excited.</p><p>Perry Metzger (40:13)</p><p>Yes, well that's what we were just discussing.</p><p>Theo Jaffee (40:17)</p><p>Yeah, I'm really excited for when I can get an agent to just edit my whole podcast for me and transcribe it and come up with like intro videos.</p><p>Perry Metzger (40:26)</p><p>You can already transcribe it very easily. In fact, most of these systems out there will...</p><p>Theo Jaffee (40:30)</p><p>Yeah, but it makes all kinds of errors and stuff.</p><p>Perry Metzger (40:33)</p><p>You can probably aim an LLM at that and ask it to try to find a bunch of them.</p><p>Theo Jaffee (40:39)</p><p>I did, I wrote a script where I had it go through whisper to transcribe it and then I ran it through GPT -4 in chunks with like a custom prompt that was like, you know, I am doing a podcast where I talk a lot about AI so, you know, be aware of that and fix everything. It still wouldn't get everything. Not even close, maybe next month. It got a lot of things, yeah, but not everything.</p><p>Perry Metzger (40:58)</p><p>Did it get a lot of things?</p><p>Well, you know, I will I'll tell you a sad story, which is that I so I've got a book coming out. It's not about any of this stuff. It's a it's a children's book about computer architecture, believe it or not, published by Macmillan. I think in like fall of 25 or something like that, it's going to be a graphic novel. There's this great illustrator associated with it named Gerald. I every time we go through to look for mistakes, there are more mistakes.</p><p>It seems like they breed behind the couch. So if human beings can't read through their own book and find all the mistakes, maybe it's not entirely surprising that even a human level AI or a weekly superhuman AI can't quite find all of them. But at some point, of course, at some point you also wonder what constitutes a mistake if that person misspoke.</p><p>Do you correct what they said in the transcript? I don't know. All sorts of interesting questions.</p><p>Theo Jaffee (42:04)</p><p>Yeah. So back to what we were talking about earlier. You mentioned how nanotechnology is going to become a thing. It's going to be very world changing and very powerful. Yeah. Yeah.</p><p>Perry Metzger (42:11)</p><p>It's going to be transformative. Yeah, it's going to be one of the most transformative technologies in history. I'd say the most transformative other than AI.</p><p>Theo Jaffee (42:20)</p><p>So then what's stopping Eliezer's prediction of misaligned super intelligent AI that learns nanotechnology better than anyone else and creates self replicating diamondoid bacteria robots that reproduce using atmospheric.</p><p>Perry Metzger (42:35)</p><p>that eat everyone and turn everyone into diamond paper clips because paper clips seems to be the thing. So I think that the answer to that is that none of this is going to happen overnight. And in the course of, so let's take a step back and trust me, this is all relevant. So it turns out that you are already in a nanotechnology war, right? You are in the middle of one as we speak. And in fact,</p><p>Theo Jaffee (42:40)</p><p>Yeah.</p><p>Perry Metzger (43:03)</p><p>If you stop fighting this war, if your heart stops beating, within hours, you know, horrifying nanobots are going to start eating you and turn you into a putrefying, into putrefying sludge. Everyone knows this, right? But they don't think of it in terms of like nanotechnology, right? The biology around you is nanotechnology. Now, how is it that we all are not turned into sludge as we're walking around day to day? Well, in fact, every once in a while you get an infection.</p><p>that nothing can treat and you in fact do die. Like, you know, a bunch of, you know, not that many people tend to die every year of say influenza, but a few people do, right? You know, 20, 30 ,000, I think in a given year. That's nanotechnology, right? Now, how is it that all of us don't die of that? Well, it turns out that you are also made out of crazy nanotechnology.</p><p>and your body is filled with these things that are intended to look for invaders and stop them. Right. Now, let's look at something that looks very, very different. Now, maybe I don't know if you live in San Francisco, there probably aren't any police who are actually stopping people from committing crimes. But let's imagine that you're in most of the United States. Right. You know, in most of the world, right. The reason you can walk down the street and you generally speaking don't fear being mugged.</p><p>is because there's a substantial cost to being a professional mugger, which is that there are people whose full -time job is hunting down professional muggers and people who break into houses and things of that sort, right? We have unaligned human beings, you know, if you want to use the jargon, all around us. And we've built mechanisms like armies and police forces and even fire departments to some extent that exist.</p><p>to stop people from doing bad things to other people. And this is only going to continue, right? So as AI and nanotechnology get developed, we will find ourselves in situations, I don't know if you've seen there's this video that made the rounds some years ago of AI -driven drones, like going around and killing politicians and doing stuff like that. It was very dystopian and got a lot of people talking.</p><p>Theo Jaffee (45:25)</p><p>I don't remember that. I may have vaguely heard of something like that.</p><p>Perry Metzger (45:27)</p><p>I can, I can probably find, you know, track it down and forward you a link for the show notes or something. but, but, but the, but why is this an unrealistic vision? It's an unrealistic vision because people have an incentive not to let it happen. And I don't mean that they have an incentive to somehow like brainwash everyone on earth into no longer remembering how to build.</p><p>Theo Jaffee (45:34)</p><p>Yeah, sure, I'll put it in the description.</p><p>Perry Metzger (45:55)</p><p>you know, drones or what have you, right? I mean that they have an, because that sort of thing is impossible, they have an incentive to build defenses, to build systems that stop other people from doing bad things to you. Regardless of what you think about the current war in the Middle East, whether you think that, you know, whatever side you support, you know, the state of the art in anti -missile systems, in anti -drone systems, in anti -artillery systems is kind of impressive.</p><p>And those systems have been built because certain people were worried about that they might come under attack from such systems and didn't want to sit around waiting for it. As AI is developed, as nanotechnology is developed, we will discover ways that bad people can abuse these systems. Bad people have abused every technology in human history. And what do we do when we discover this? We build countermeasures. We build...</p><p>Theo Jaffee (46:25)</p><p>Yeah.</p><p>Perry Metzger (46:53)</p><p>ways to stop people from doing bad things. And this goes back as I said, back to the dawn of history, to the fact that we all have immune systems, the fact that we have culture, the fact that we have as part of our culture, various cultural mechanisms for punishing people, for attempting to take advantage of other people, the fact that we have police forces, the fact that we have militaries, the fact that we have...</p><p>you know, that we have espionage agencies and all sorts of other things. Societal mechanisms and biological mechanisms and technological mechanisms have been built to counter bad things. And this will continue, right? So it is true that if one single maniac in the future had a superhuman AI and access to nanotechnology and decided one morning that they should turn everyone on earth,</p><p>into, you know, I don't know, into instant cocoa. I get tired of talking about paperclips. Paperclips are so boring. Whereas Swiss Miss or Nestle's Quick, those are exciting, right? So you've got a madman out there and he's decided to turn everyone on earth into instant cocoa. And if there's no one opposing him, yes, he'll be able to. But that's not what's going to happen. What's going to happen is that these systems are going to be built slowly over a number of years by lots and lots of different groups.</p><p>And as they build them, they will construct mechanisms that will stop other people from doing bad things. In the not that distant future, I expect to see law enforcement deploying a lot of AI -based systems to help them tracking down things like phone scammers. I expect to see people, you know, people are already using AI -based systems in law enforcement, in military applications, in other places like that. It will continue.</p><p>So if there are, you know, if there are many, many, many, many people who have AI and there are many, many, many, many people who have nanotechnology, you don't have to worry that you're going to be turned into instant cocoa because you're going to have systems that will say, hey, this other bad person is doing something to try to turn you into instant cocoa and I'm going to stop them. I mean, like if someone, let's put it this way. If someone breaks into your house, you know,</p><p>and starts watching your television, you're going to call the cops, right? You're not, you know, and you can say, well, what stops someone from breaking into anyone's home and sitting in their living room and watching TV or breaking into their house and stealing their stuff? And the point is that we have a system of disincentives and countermeasures that severely disincentivizes this sort of behavior, at least in most of the country. Now, again, there are places where people seem to believe in crime.</p><p>But, you know, in most places, we disincentivize bad behavior, we punish it, we hunt it down, we try to stop it. That's why we don't have to worry, right? Yes, it's true. Nanotechnology will be extremely powerful, and it will not just be in the hands of bad people, it will be in the hands of a great number of people, and many of them will not want to sit around and be eaten.</p><p>Theo Jaffee (50:08)</p><p>That may be true, but the kind of canonical doomer counterargument to that is that what if the offense -defense balance dramatically favors offense? Like in Nick Bostrom's Fragile World Hypothesis, where he talks about what if the world is actually very easy to crash, we just don't have technology powerful enough to do it yet? What if nanotechnology gets to such a point where it is possible for people to just brick the entire world with a failed experiment or...</p><p>a misaligned either superintelligence or person or whatever. And I know you already said, yeah, but we have countermeasures against that. But I think in response to that, they would say, yes, but like, do we really want to risk it? And what if the probability that such countermeasures would work is actually much lower than you think?</p><p>Perry Metzger (50:58)</p><p>I think that, so this is a long conversation, but you know, I mean, so Nick is very, very fond of obsessing about things that I think aren't worth obsessing about. Like, you know, there's, for example, you know, the doomsday argument, which I think is junk, but which he yet spends an inordinate amount of time talking about, but let's not get ad hominem about this. Let's address this notion directly.</p><p>I don't think that it's true, right? I think that we already, first of all, I think we already have a lot of understanding, about where likely offense slash defense is likely to be played there. And I think it's mostly a question of resource expenditure. I don't think that there's an absolute advantage for either offense or defense in most of these situations, but that if you are willing to expend sufficient resources, that your</p><p>probably in a pretty good position. Arguing this definitively probably would require a few hours, not, you know, like, however many minutes we want to spend on it. But to give you just, like, a hint on this, right, you know, the, let's say that tomorrow morning, you know, someone, you know, let's say that we're living in a hypothetical future world, you know, where...</p><p>There's lots of AIs, there's nanotechnology deployed, there are lots of such systems around. And someone decides that they want to go for, I believe there was a great paper actually by, that I once read called Limits to Global Ecophagy, which was really, really kind of neat as it asked the question, how long would it take for nanobots to eat the world? And it came back with the answer that it would take so long. It doesn't seem like a long time, but it would take weeks.</p><p>And that sounds like it's a terrible amount of time, but it turns out that that means that within hours you have things that have probably noticed and are in a position to start doing something about it. You can't... Well, so you almost certainly can't help but notice within hours. That was a paper by Bob Fridus, actually, and it's a pretty good one. I think it's a reasonably good read.</p><p>Theo Jaffee (53:06)</p><p>Hopefully you've noticed within hours.</p><p>But again, it's exponential. Yeah.</p><p>Perry Metzger (53:24)</p><p>though maybe not the sort of thing that most people enjoy reading when they go to bed at night. I have weird tastes. But generally speaking, it's the future someone decides to release something that will turn everyone on earth into Ming vases because they've got a thing for Ming vases or what have you, or they've released one of Eliezer's hypothetical. By the way, so Yadkovsky says things like, I think you could build biological viruses that would take over people's minds. I don't.</p><p>really think that's possible. You can do it in very, very narrow situations. There's a lyssavirus which is what causes rabies. It does sort of take over the minds of animals, but in a very primitive kind of way. All of these things.</p><p>Theo Jaffee (53:58)</p><p>Mad cow disease.</p><p>Yeah.</p><p>I mean there are certainly chemicals that can alter your brain state and emotions and personality.</p><p>Perry Metzger (54:17)</p><p>They can, yeah, in a very crude way. I think that he imagines things that could take over your brain and make you obey the wishes of a particular machine and do whatever it desired or what have you in an extremely sophisticated, rich sort of way, which I don't think is possible, right? But again, let's say we've got our hypothetical situation where.</p><p>someone desperately wants to release nanobots that convert everything on Earth into Ming vases. So I think that by the time people can do that, there are going to be nanobots everywhere. And they are going to be doing all sorts of things, like for example, cleaning bacteria and viruses out of the environment, doing things like cleaning the air of particulates, like...</p><p>checking whether or not someone is releasing biological agents or hostile nanomachines. I think that the odds of something not being detected are very low. Now, if you believe the notion that there's going to be hard takeoff, that someone will wake up one morning, build an AI, and that by the next afternoon they'll have access to all of these amazing technologies, then yeah, sure, I'm wrong if that's true. But I don't think that that's at all possible.</p><p>The amount of work needed in order to construct a mature technology like that is insane. Even if you have access to things that can do all the engineering you could possibly want, the amount of physical stuff that needs to be done, like just acquiring and degassing ultra -high vacuum equipment to start doing experiments is like a serious effort.</p><p>All of the things involved in such things are serious efforts. I think a much, much more realistic scenario is that what's been happening, right? So has anyone, you know, I think that Yudkovsky and company never imagined that we would have systems like whisper or GPT -4 or GPT -3 .5. I can hear Eliezer screaming in the background on Twitter, Metzger is lying I, of course, I envisioned this. Look at this obscure pod.</p><p>cast I was on 17 years ago, look at this thing I wrote on less wrong. Well, OK, fine, whatever. But I think that if you read the bulk of their materials, they talk about building a seed AI that bootstraps itself to super intelligence. And they don't talk about some sort of gradual development. But if you look around us, AI is being developed very gradually. The AIs around us are being released at regular intervals by.</p><p>Theo Jaffee (56:55)</p><p>Well...</p><p>Perry Metzger (57:00)</p><p>Organis commercial and academic organizations that are doing lots of research and development much of it in the open and they are making incremental progress and in certain respects these systems are deeply super human already and in certain respects they're deeply sub human still and it's happening bit by bit and it's happening in many places not in one place. I.</p><p>Theo Jaffee (57:20)</p><p>Well, I think they would argue that the way that you get to AGI doesn't matter as much as the endpoint. And if the endpoint is a superhuman artificial intelligence, no matter if it's based on an LLM or if it's based on some like pure Bayesian seed AI or whatever, then it will end in the destruction of humanity because of instrumental convergence.</p><p>Perry Metzger (57:36)</p><p>Yeah, they can argue that. But the well, so the instrumental convergence strikes me as being like, you know, so so the notion of instrumental convergence for those that don't know, you have to take a step further back, which is that according to the Doomer argument, all AGI's are going to of necessity have some sort of goal.</p><p>and be vicious optimizers in pursuit of that goal. And again, I can hear Eliezer's voice in the back of Twitter somewhere screaming, you're lying Metzger. But this is effectively what they argue. That if you build an AGI, it's going to have goals. It's going to be superhuman about optimizing those goals. And that the goals will necessarily be weird and alien, like say turning everything into paper clips or paving the moon or who knows what. The two problems here are,</p><p>There's no reason to believe that any of the AIs that we build necessarily have interesting goals of their own. And you could say that the goal of whisper is to transcribe speech into text. Or you could say that the goal of GPT -4 is to predict the next word or maybe at a higher level to produce a conversation that people find maximally probable or reasonable, right?</p><p>But these aren't really goals in the way that humans or even ants have them, right? There's this notion that if you build an AI, it's a person or an independent agent in some meaningful sense and not a tool. And I think that although you could build AIs that are not tools, that most of the AIs we're building are tools and that most AIs that we build will be tools.</p><p>Theo Jaffee (59:12)</p><p>So you're saying it's just.</p><p>So you're saying that they simply do things and they don't want to do things. They're not agents at all.</p><p>Perry Metzger (59:34)</p><p>Well, what's an agent? OK, so there are all these terms that we throw around when we're discussing AI. Simple ones like alignment that don't really have a definition or agent that don't have a definition. Much more complicated ones like conscious that philosophers have been arguing about for thousands of years that have very, very poor definitions. The problem, by the way, of the hard problem of consciousness, in my opinion, is how you define it. Once you've defined it, like discussing it,</p><p>rigorously discussing it becomes either trivial or drifts off into mysticism. But anyway, what's an agent? I mean, if I have a system that I have asked to tend the fields on my farm, is it an agent in a meaningful way? Or is it just a tool? I don't know how to define that particularly well. The real question to me is,</p><p>Is it sitting around off hours talking to itself about how awful its job is and how it would really like to run away and commit a mass homicide spree or something? If it's not actually talking to itself off hours about how bored it is and how it really wants to, I don't know, you know, turn Los Angeles into glass or something, then why are we worried? The things that we're building at this point. So the original vision, you know, of these</p><p>you know, Bayesian monster machines, isn't what we've built, right? What we've built are these systems, and I'm drastically oversimplifying here, okay? But this is essentially right, okay? What we had was the following problem that was standing between us and AI. We had the problem that I could show you as many, you know, that I as a human being could recognize pictures of cats.</p><p>And I couldn't write down some sort of closed form explanation. How do you recognize a cat in an image? OK? You know, I could have bitmaps, and a human being could easily say, it has a cat in it, doesn't have a cat in it. I could have, you know, digital pictures of all sorts. Cat, no cat, cat, no cat. But how do I explain to a machine what I'm trying to do here? And it turned out to be really, really, really difficult until we realized that what we could do,</p><p>was simply give the machines vast numbers of examples of pictures that had cats and didn't have cats and allow them to use statistical learning to figure it out. And this changes a lot, right? And the most important thing that it changes is that the way that we're building these machines is that we're giving them examples of what it is that we want and...</p><p>We are not saying, yes, this is a machine we want to release into the world until they do. But Eliezer and company made extremely heavy weather of the notion that you could build something that was incredibly intelligent, but how would you get it to want to do something that you wanted it to do? But if you're using statistical learning techniques, the systems naturally want to do what you want them to do.</p><p>Like, I'll give a stupid example that people don't think of very much, right? Like, could you, if, okay, you've got your eyes open, you look around at the world around you, could you voluntarily decide not to recognize objects around you?</p><p>Theo Jaffee (1:03:05)</p><p>no, but for example, if you</p><p>Perry Metzger (1:03:09)</p><p>W -w -w -why not?</p><p>Theo Jaffee (1:03:11)</p><p>because you just do it. It's not conscious. But if those objects are letters and you're not very good at reading, then you might be able to kind of choose not to read. Like if I'm looking at Japanese hiragana or in some cases Hebrew text, it takes me effort to read it. So I can also choose to not expend the effort and not read it.</p><p>Perry Metzger (1:03:13)</p><p>Well, it's not even just that you just do it. You...</p><p>Right. But if you're sure. But if we're talking about things that are in system two and not system one, right. If we're talking about like recognizing a chair, if I look across the room, I see a chair. I can't. I literally don't know how I would get myself to stop. And this is because you've got this extensive set of neural circuitry that you use for looking at the world. And most, by the way, of your circuitry.</p><p>Isn't something where you have to extend volition for it to work or even where you could stop it from working by an active volition. You could probably extend an active volition to get yourself to fall on the floor from a vertical position, but you don't have to extend volition as you're standing around like waiting online to go into the movies or, or at the checkout at Trader Joe's. You don't extend volition to stand, you know, vertical. It just happens.</p><p>Right? Most of these systems that we build that have very, very intelligent and interestingly intelligent behavior and your visual subsystem is a big, complicated, rich subsystem that's like probably more complicated and bigger than any of the AIs we've built so far. Most of them don't have volition in an interesting way and don't need it. Right? And if I'm building a system that's picking fruit,</p><p>or laying out circuits in a new chip design or designing a bridge or helping me find molecules that dock to receptors on cell surfaces. None of these things require independent volition or volition at all. Whisper doesn't have volition any more than your visual system does. You know, it's got inputs which are, you know, which are sounds and it's got outputs which are text.</p><p>And this is a slightly rough approximation because, you know, it's there and both encoded in an interesting way. But it can't choose to instead say, no, today I've gotten bored with this online strike. I instead want to be repurposed, you know, making burgers at McDonald's, which I think would be a more interesting career than being a speech recognition system. No, it doesn't do that. It has it has no memory. It has no capacity for self -reflection. It has no consciousness. The</p><p>bulk of the things that we are interested in building are going to be tools. Now, this doesn't mean that people can't build things that are not tools, that do have self -reflection in a meaningful way, that might get bored, that you could even convince to become genocidal, right? But that's okay, provided those are a minority of the systems out there and don't have some sort of overwhelming control. And by the way, I think it's inevitable, given that there are eight billion people in the world now.</p><p>and that in the future there will be far, far, far more billions of conscious creatures and entities out there. It's inevitable that over time at some point someone's going to build, and it might even be relatively soon, who knows, that someone's going to build something that's not a tool but a thing, but a thinking conscious thing. But it's not required. There's this whole section of the Yudkowskian dogma.</p><p>called about orthogonality. And I would like to note that one form of orthogonality that none of these people considered was the possibility that agentness and consciousness and volition and all of those things were orthogonal to being able to solve interesting problems. Like really interesting problems can be solved by these systems without needing those things. Human beings have consciousness and a desire to survive and</p><p>a variety of other features like this, all because we need this to survive. We evolved to have these things. These were important features that we gained from our past. But you don't need these things for the most part, in order to have interesting useful systems. It is not necessary that that that you have systems like this.</p><p>It's that you have consciousness and an inner life and a desire to think about philosophy and your off hours. I mean, when GPT -4 isn't talking to you, it's not thinking about philosophy. It's off, de facto. Those are features that we could add to systems but are not required for them to be useful to us.</p><p>We are building tools, right? And we don't have to build tools, but so long as most of the things that we've built are things that are tools and under our control and mostly do the things that we want, we don't have to worry so much. And I think that that's almost certainly going to be the case. And yes, at some point people will build things that are not tools and maybe they'll even build things that desire that they, the desire eating the whole world. But so long as they do that at a point where we have countermeasures, it doesn't matter.</p><p>And I think that it's inevitable that we will have countermeasures.</p><p>Theo Jaffee (1:08:43)</p><p>By the way, the arguments that you just made remind me a lot of Quinton Pope, AI researcher and former podcast guest. His excellent blog post where my objections to we're all going to die with Eliezer Yudkowsky, which when I was like full X -risk doomer mode after like ChatGPT and then GPT-4 came out last year helped sow some seeds of doubt. I'm much more optimistic now.</p><p>Perry Metzger (1:09:06)</p><p>Just a few. Yes. Can we pause for 10 seconds so I can put my watch on a charger? Okay, one moment.</p><p>Theo Jaffee (1:09:14)</p><p>Apple Watch.</p><p>Perry Metzger (1:09:31)</p><p>Apple watches will do a very wide variety of things, but they will not give you an alarm to the effect that your watch is down to 10 % charge, which is annoying as all hell. Mine does not.</p><p>Theo Jaffee (1:09:41)</p><p>Mine does that.</p><p>Yeah, I've been having the same exact issue too, where I charge the Apple Watch and it should last for like almost two days and then it gets down to 10 % in half a day and usually -</p><p>Perry Metzger (1:09:55)</p><p>That's your dead, that's your battery dying. You're going to have to go to Apple and get it replaced. I need to do the same thing.</p><p>Theo Jaffee (1:10:00)</p><p>Well, I've gone and fixed, or I fixed it temporarily by just rebooting it. And then it worked. Yeah, so.</p><p>Perry Metzger (1:10:05)</p><p>Maybe I should reboot mine more often. Maybe that's the reason I'm not getting alerts about low battery. But anyway, back to, so you were telling me, talking about Quentin Pope and how we're not all going to die with...</p><p>Theo Jaffee (1:10:20)</p><p>Yeah, so that if you recall that podcast episode, we're all gonna die with Eliezer Yudkowsky on the bankless podcast helped throw a whole lot of people into like holy crap mode. And this was like right after ChatGPT came out when people were like,</p><p>Perry Metzger (1:10:35)</p><p>That threw me into holy crap mode, by the way. It's the reason I ended up founding Alliance for the Future.</p><p>Theo Jaffee (1:10:40)</p><p>But it found it put you in the holy crap mode for a different reason, I imagine.</p><p>Perry Metzger (1:10:43)</p><p>Yes, it put me into, holy crap, if I don't get involved in politics in a way that I don't particularly love doing, what's going to happen is that the entire conversation is going to be dominated by people who I deeply disagree with and who I think are going to have very, very bad policy ideas. That's the very gentle way of putting it.</p><p>Theo Jaffee (1:11:05)</p><p>So can you tell us the founding story of Alliance for the Future? Like how did it come to be? Why Brian Chau and Beff Jezos?</p><p>Perry Metzger (1:11:15)</p><p>Well, why Beff Jezos? Because Guillaume Verdon is a wonderful guy and having him on our board was too good an opportunity to pass up. Why Brian Chau? Because Brian is not only a cool person in the space, he happened to want to do the job and is, you know, and is doing a good job at it. And when you're recruiting for a nonprofit that has no track record, you know,</p><p>And you have someone who is as good as Brian who appears you you know, you grab them and you say, please please work for us But take going all the way back to the original question. How did Alliance for the Future get started? So what happened was I realized After things like Eliezer's time, you know blog, you know website posting and His bankless podcast and things like that that if people</p><p>you know, that there was an incredible amount of money and effort being expended on pushing the doom message. And that if people didn't scramble very quickly to try to mention the fact that maybe we're in fact not going to all die and in fact maybe the only way to make sure, by the way, we should get to this in a little while, but I very strongly believe that if you pause AI research, you increase danger dramatically.</p><p>And I mean that very literally. And I was very worried that people like Eliezer would and William Macaskill and Dustin Moskowitz and Cari Tuna and all of the rest of these people, all of the people that Sam Bankman Fried funded, which is I think that even now there's residual SBF money floating around in a bunch of this stuff.</p><p>Theo Jaffee (1:12:42)</p><p>Yeah, I -</p><p>Perry Metzger (1:13:11)</p><p>You know, I was very, very worried that if these people got their way, we were going to be in horrible danger and we were going to get a dystopian future and we were necessarily going to get a dystopian future because they would conclude that the only way to keep the world safe was totalitarianism in the end. And if you read the proposals that lots of people make on less wrong and elsewhere on the, you know, the, it's really, really simple. We just make sure that access to general computation,</p><p>gets eliminated and that people aren't allowed to do this research and we have the AI police who come and arrest them and and by the way those people who claim that this isn't really the case, you know, I invite them to read things like SB, you know, 1047 in California or what have you You know, but but the you know, I I was looking around and I kept thinking well surely Someone is organizing to do something about</p><p>this and I kept waiting and I kept waiting and no one was doing it and I finally realized well you know if you want it to happen you're going to have to do it and I really don't like doing it right I have an AI startup that I should be spending all of my time on which I think does interesting stuff you know I have a personal life that I would like to be spending time on you know I'm an old man so I'm not nearly as energetic as I was say 35 years ago</p><p>You know, I can't go without sleep for days on end and still be productive. But it seemed like it was necessary. So I, you know, talked to friends who had DC connections who introduced me to other people. And I, you know, we put a team together. We put together a 501 C4 because it gives us more freedom than having a 501 C3, even though our, you know, donations are not tax deductible. You know, you know, I may I pitch for, you know,</p><p>Our URL for for two seconds. Yeah, AF future 2Fsaffuture .org slash donate. You know, we need your money. You know, but our stripe integration is still kind of crap. Our IT person is working on that right now. But but it's but you know, it's OK. The money is flowing in. We've actually managed to be effective.</p><p>Theo Jaffee (1:15:09)</p><p>Yeah, go for it.</p><p>Link will be in the description.</p><p>Yeah.</p><p>Perry Metzger (1:15:33)</p><p>You know, I've had do -mers asking me, so why is it that you weren't aware of this thing that happened six months ago as an organization? And the answer is because we've existed for two months. Thank you. And other people are like, well, why is it that you weren't aware of everything that was happening in every state legislature in the United States? And the answer is we started a couple of months ago, and we don't have the surveillance systems for that yet. But thank you for telling us that we need to. And.</p><p>And it appears based on, I have a buddy who's on a small city council in Minnesota. Okay. He's in a small town in Minnesota. He's on the city council there and he has gotten communications from EA associated research organizations, basically push polling him, trying to convince him that he should be sponsoring local legislation to stop AI. Like, so these people clearly have too much money on their hands. They're spending it every.</p><p>Theo Jaffee (1:16:26)</p><p>Wow.</p><p>Perry Metzger (1:16:30)</p><p>So, you know, we're going to have to be a hell of a lot more efficient. One of the problems I've got is that AFTF doesn't have hundreds of millions of dollars a year to spend on this stuff. And these people do. You know, I had a doomer like making fun of me over the weekend for saying that they had thousands of people working full -time on X -Risk when it's only about 400. Like, okay, let's assume that they're right and it's only 400 people working full -time.</p><p>to try to push this narrative. I mean, that's a hell of a lot of people. It's even a hell of a lot of people by US legal lobbying standards. That's a serious campaign. I think they actually have thousands of people on it. But even if it's only 400, it's crazy, right? So I found a bunch of people, and we incorporated, and we set ourselves up. And now I find myself like,</p><p>Theo Jaffee (1:17:04)</p><p>Yeah.</p><p>Perry Metzger (1:17:25)</p><p>running a D, well not running, Brian runs it, but now I find myself as the chair of a DC advocacy organization, which is not something I ever expected would happen in my life. But you know, you live long enough and all sorts of unexpected things happen.</p><p>Theo Jaffee (1:17:39)</p><p>By the way, what you were telling me earlier about how, you know, the decals have all these crazy ideas. I was talking to a pretty prominent person in the space, like a month ago. I don't think I would characterize them as a decal, but they're definitely like, you know, tangentially involved in EA rationalism, that whole complex. And they were talking about, you know, yeah, AI is very scary and maybe we should, you know, focus on stopping it. And I said, well,</p><p>Wouldn't the most effective way to literally stop AI progress be bombing open AI? Or something like that? And they said, well yeah, I mean, like we've talked about it, it just doesn't seem like feasible. You know, it seems like it might be like a net harm to the cause.</p><p>Perry Metzger (1:18:27)</p><p>Well, yes, but at some point, some of them are going to decide that it's not a net harm and they will act independently of the others. When someone like Eliezer says that he doesn't support terrorism, I think what he really means is that he does not personally think that it would be effective, which I think is very different from saying that he doesn't support it. I might be wrong. I mean, for all I know, he's preparing the lawsuit in London right now for libeling him for saying that he secretly believes that terrorism...</p><p>is morally justifiable but perhaps not effective. And maybe I'm wrong. Maybe he doesn't actually believe that it's morally justifiable. But I certainly feel like at least a lot of these people seem to have that position that it would be morally justifiable. It's just probably not effective. And some of them will decide that it's both morally justifiable and might be effective at some point, which is kind of a scary thing to contemplate. But I want to get back to the question of whether pausing AI</p><p>would be dangerous because I try to make this point a lot and there's only one way through this problem, which is to grapple with the actual engineering problems associated with artificial intelligence and the actual societal problems. And you do that by building things, by engaging with them, by seeing how they work, how they fail to work, by putting them out in the field and seeing,</p><p>how people use and abuse them. And there is an extent to which the doomers are right that this is a powerful technology. I would, in addition to worrying a great deal that we will make absolutely no progress through amphiloskepsis, which appears to be the main strategy of MIRI, navel gazing. You looked puzzled for a moment, you know. They...</p><p>They seem to believe that you can figure out how to align AI by thinking very hard over a long period of time. You can't do that. No, no engineering discipline works that way, right? The way you figure out how to make something work well is, is by building things and incrementally refining them. But equally.</p><p>Theo Jaffee (1:20:33)</p><p>I don't think that's what you're doing.</p><p>I don't think they actually think that. I think more like they believe that if they do build AI, it will probably end the world. So they will probably fail in their mission of aligning AI and they know that. But, you know, they can't run the risk of trying to build AI.</p><p>Perry Metzger (1:20:53)</p><p>By the way, your audio just went from being good to being bad. You may have switched microphones unintentionally.</p><p>Theo Jaffee (1:21:01)</p><p>Is this better? Okay.</p><p>Perry Metzger (1:21:02)</p><p>Yes, it is. Yeah. I think they have a variety of views there. I don't remember if I've said this so far in this podcast, but I believe that there are a bunch of people, you know, at MIRI who still believe that like having themselves uploaded and doing thousands of years of research very quickly is still, you know, is one viable, I put big air quotes around that because it's ridiculous, one viable way of attempting to.</p><p>to get aligned AI. But anyway, ignoring all of that, though, there is the problem that we are not the only actors in the world here in the United States. And there are a lot of other countries, some of them with much more advanced manufacturing and research and engineering capabilities than ours, that are also interested in AI and are not going to agree with the Yudkowskian vision, I suspect. I was in an argument recently with some people who are allies of mine in DC.</p><p>who were arguing, well, we could just stop, you know, and they were not, we weren't even talking about AI as such. We were talking about, you know, geopolitics and who is ahead on manufacturing technology, electronics, et cetera. I don't know what it is that you're studying, I have forgot, or if I knew I had forgotten. Okay, so if you were an EE and you were doing random projects these days,</p><p>Theo Jaffee (1:22:22)</p><p>computer science.</p><p>Perry Metzger (1:22:29)</p><p>You almost certainly would be asking companies in Shenzhen to send you PC boards that, you know, you'd draw something up in KiCad, you'd send it off to them, and a day later you'd be getting PC boards from them. There aren't a lot of companies, there are almost none in the West, that offer services as good as the ones the Chinese do. If you go out there and you look at small embedded Linux boards that you can use in various projects, things that are like Raspberry Pi -ish,</p><p>There are, you know, there is the the Western designed Raspberry Pi with a Broadcom chip in it. And there are also all of these great boards that you can get made in China, like the Orange Pi, which has, I believe, a RockChip 3588 in it, which also has a neural processing unit, which the Raspberry Pi does not. And that is a Chinese designed, non -U .S. fabricated microprocessor in that thing.</p><p>The state of the art in technology is not such that we can just giggle about the Chinese not having the ability to catch up with us. I know people who say things like, well, the Chinese don't have, you know, extreme ultraviolet, you know, silicon fabs. And the Americans don't either, it turns out. Like Intel doesn't have the ability to do cutting edge fab stuff. You know, TSMC does. You had something you wanted to say.</p><p>Theo Jaffee (1:23:37)</p><p>BOOM!</p><p>Yeah, I mean, the Doomer counterargument to that, one of them is, well, if you think that China catching up to the US on AI research would be bad, then open sourcing all of our AI would simply hand our frontier advances to them. And that would be a bad thing.</p><p>Perry Metzger (1:24:13)</p><p>They are going to have the frontier advances no matter what eventually, right? So what one of the things people don't get about how to think geopolitically is the notion that we are protected by superiority. We are not protected by superiority. We are protected by a balance of power in which people believe that it is dangerous to attack, in which they believe that they have far more to lose from warfare and other non -cooperative strategies than from.</p><p>cooperating, and so they do not. Which is not to say that I want to hand the Chinese, you know, the plans to some sort of sophisticated, you know, command and control AI or what have you. I don't. But we'll get to the open source question in a moment. For the last, like, 370 odd years, okay, the way that people in the West have recognized and</p><p>then it came to be recognized globally. That so long as the competing major powers in the world have relatively similar capabilities, relatively similar, you know, relatively similar worries about the capabilities of the opponents, et cetera, we end up with reasonably peaceful, you know, conclusions. You end up with war.</p><p>when a great power believes that it has an overwhelming advantage over its counterparties. This has been the understanding since the Treaty of Westphalia, and it seems to be mostly true. I know a lot of people on the EA side. EA funded, not that long ago, a bunch of Kurzgesagt videos about how terrible nuclear weapons are. And not that I particularly love nuclear weapons.</p><p>But there's a strong argument to be made that the presence of nuclear weapons has prevented us from having a giant world war of the sort that, you know, that happened. The first and the second world war were not the first great power conflicts, major great power conflicts in our history. It's just that they seem to encompass a large, much more of the Earth's surface. But we haven't had a great power conflict of that sort since 1945.</p><p>And why is that? Because everyone was too bloody scared to get into one, right? Now imagine a world in which there hadn't been nuclear weapons. I think it would have been almost inevitable that we would have ended up with a war between the communist bloc in the West somewhere in the 1950s or 1960s. And it probably would have been bloody as hell. There would have been tens or hundreds of millions of people killed. And we didn't. And we didn't do that because everyone more or less believed that the other side was in a position to deter it.</p><p>So there's a possibility in five years, 10 years, 15 years, maybe it's sooner, maybe it's later, who knows, that the Chinese have lots and lots of autonomous weapons systems and believe that they could easily just overrun Taiwan with them. And the trick to having them not do that is to have them know that the West and the Taiwanese and the Americans and everyone.</p><p>on the other side also have lots and lots and lots of autonomous weapons systems and that there would be a price for them attempting to do such a thing, that there would be a possibility that they would lose, right? If people overwhelmingly, if great powers are as amoral as infants, generally speaking, if you've ever dealt with a toddler, that's in certain ways not the worst model for the way that great powers operate to some extent.</p><p>You know, if the great power believes that it's going to win, it's probably going to do abusive things. And if it believes that it'll be deterred, it probably won't. The key here, in my opinion, given the inevitability that the Chinese are going to have capabilities like this, is for us to have capabilities like that, in which case they'll never actually try to use them, because they won't believe that they could do so safely. Now you ask the question, well, you know, if we open source all of our AI technology, if we just release</p><p>all of this research online, aren't they going to get a tremendous advantage? They're going to get something of an advantage, but we're also going to get an advantage, right? We, if we cut off internal communications among ourselves, will be unable to make the sorts of progress that we need in order to have balancing systems. And in my opinion, the key is not having the ability to overwhelm them or make them afraid that we will overwhelm them. The key is to be able to balance them and to be able to balance other powers.</p><p>If we get rid of open source AI, which by the way requires that we make all sorts of things that are traditionally like sacred values in the United States, like being able to openly publish about things, like being able to openly talk about things, like being able to just like release a bunch of data on the internet if you feel like it. If we decide that we want to make that stuff illegal, if we want to go for a pervasive societal attitude that all of this research is too dangerous to allow anyone to hear about.</p><p>What we'll end up doing is kneecapping ourselves, right? The advantage we have, the capability we have in spite of the fact that we have a tiny fraction of the number of manufacturing engineers in China and a tiny fraction of the number of electrical engineers that they have and a tiny fraction of the number of material scientists they have, et cetera. The advantage we have is free and open communication and a very competitive and vibrant venture capital segment.</p><p>I think that it would be incredibly immensely stupid for us to kneecap open source. I think that the greatest safety we have is in having lots and lots and lots of players on all sides have artificial intelligences that they're using for all sorts of purposes. Most purposes aren't going to be bad, right? Most purposes are going to be doing things like weeding fields and designing cancer drugs and coming up with ways to...</p><p>Theo Jaffee (1:30:05)</p><p>Yeah.</p><p>Perry Metzger (1:30:28)</p><p>to fix horrible social problems. But I think we're better off with massive decentralization with lots and lots of people having their toe in the water. By the way, we already have massive numbers of people with their toe in the water. I mean, I don't see how you could get the world to forget what we already know about AI research. It's a terribly sophisticated technology by the standards.</p><p>of a high school student who's only starting to study algebra basically. But it's not that bad if you're a computer scientist, right? The things that we have figured out turn out to be relative. I mean, there are no deep secrets, right? The deep secret is that there are no deep secrets. The biggest secrets were that statistical learning was going to win over good old fashioned AI mechanisms. I'd probably talk too long, though, without.</p><p>giving you a word in edgewise. I have a habit of doing that when I'm not feeling well, which is a large fraction of the time, unfortunately.</p><p>Theo Jaffee (1:31:33)</p><p>Yeah, so I'd love to get back to Alliance for the Future. Specifically, do you think that a nonprofit lobbying firm type thing is the best way to achieve good outcomes for open source, free and open source AI? Or like, why did you decide on this format?</p><p>Perry Metzger (1:31:48)</p><p>I mean...</p><p>Well, because I didn't have the budget in my own company and I really wasn't in a position to justify it. So we're working with a lot of organizations outside ourselves. One of the features of having a DC nonprofit of this sort is that a lot of what we do is talking to people, feeding them information, getting them the ability to do things that they need to do. We discovered...</p><p>You know, I had a reporter saying to me, you know, a few days ago, well, wasn't SB 1047 out since February or whenever it was? You know, how come you only learned about it right now? And I was like, well, you know, for better or worse, we only learned about it right now. We can argue about why that would be. But it turned out that our learning about it, because we were told about it by a couple of people who, you know, who like very forcefully brought it to our attention, meant that we were in a position.</p><p>to tell a bunch of other nonprofits and to tell a bunch. We discovered that a large fraction of civil society organizations that you would have expected would have been very concerned about this didn't know as of a week ago, right? They had no idea it existed. We found out that a bunch of venture capital firms had no idea it existed, that a bunch of startups had no idea that it existed. I'm talking to some folks at a very, very large company right now who are part of their policy group.</p><p>and who didn't really know much about this thing a week or two ago, and now they've geared up to talk about it a bunch. One of the things that a nonprofit of the sort that AFTF is can do is it's in a position to spread information around like that. Another thing we can also lobby, we can write position papers, we can do editorials, we can do all sorts of things. And it turns out that this is how the game is played.</p><p>I don't really love the way that politics happens in the United States, but the way that politics happens in the United States is you have advocacy organizations and you have lobbying branches for inside of large companies and there are professional lobbying firms and all sorts of other things like this in the ecosystem. DC, DC has its 501 C threes and it's 501 C fours and it's 501 C sixes and it's, you know, and it's companies that do lobbying and it's companies that do communications and.</p><p>Theo Jaffee (1:34:00)</p><p>Well.</p><p>Perry Metzger (1:34:12)</p><p>You know, and it's an ecosystem. And if you don't play the game, you're not in the game.</p><p>Theo Jaffee (1:34:17)</p><p>How do you convince legislators when you're playing the game that allowing AI regulation is actually, or not allowing AI regulation, that not doing AI regulation is actually in their interest and not merely, you know, the morally right thing to do or whatever? Because it seems like there are huge  forces playing the opposite direction.</p><p>Perry Metzger (1:34:33)</p><p>So.</p><p>So there are two things going in our favor. One of them is that for good or ill, it seems like the EA folks, in spite of their overwhelming financial advantages, are not very good at this. And I could speculate as to why that is. And maybe it would even be intelligent speculation, but it's not really my place. So when we find ourselves going in and being taken relatively seriously when we talk to people.</p><p>And when we talk to people, we explain to them, you were told that this piece of legislation was something that was very widely supported by industry, that lots and lots of people in academia think is good, that lots and lots of people believe is necessary and very normal. And in fact, it's kind of a bunch of extremist stuff. And the claims that you've made about what's in your own law aren't true.</p><p>And I can't, by the way, I cannot blame, it's common to say, why didn't this legislator read his own bill? And the answer is because he has 120 ,000 pages of bills that he's got to deal with in a given session. And of course he didn't read the bloody thing. How could he? You can't expect them to. And I think that it's not reasonable to. You can ask something very reasonable about why do we have a system in which we expect legislators to deal.</p><p>with these massive volumes of stuff going through. But you can't actually practically expect that they've read everything. And so sometimes you have to go in and you have to say, look at this paragraph in your bill, this paragraph that says a thing that is opposite to the thing you believe it says. Here, let's read it. OK? And you obviously can't be a rude asshole like that. But the point is that, you know,</p><p>Theo Jaffee (1:36:25)</p><p>Yeah.</p><p>Perry Metzger (1:36:29)</p><p>I will say right now that in the current fight in California, it is my strong expectation that a bunch of people more or less openly lied to the sponsors of this bill in order to get it pushed forward quickly. They told them that it was a widely supported piece of legislation, that there would be very few people who thought that it was a bad idea.</p><p>that it would get them lots of positive press, that it would help their political careers, that it would, you know, that it would bring back their lost hair, you know, anything almost. I get the distinct impression. And again, this is, I get the distinct impression that a bunch of the people on the EA side, because of their fanaticism about this, do not understand the concerns that legislators would have about doing things that,</p><p>are frankly hated by a large fraction of their constituents and don't understand that lying to them about what the consensus is is a way not to make friends for the long term. I mean, I think, you know, things like SB 1047 are likely to be extremely counterproductive for the EA side because what they end up doing is convincing legislators that they cannot trust the lobbyists who are pushing this sort of thing because they don't have the interests of the legislator in.</p><p>If you're talking to someone, you can't, and you lie to them too much, eventually they will notice.</p><p>Theo Jaffee (1:38:01)</p><p>So do you think that there's anything good, like positive expected value, that the government can do on AI? Or is Alliance for the Future's goal to kind of just get them to do nothing at all?</p><p>Perry Metzger (1:38:11)</p><p>No, I think that there's a great deal of stuff that we probably need, right? First of all, I mean, there's the stuff that you would probably consider to be not doing anything, but which I consider to be doing something, which is we probably need federal preemption of local AI laws because of the fact that one of the strategies that EA has chosen is to try to get laws passed in as many little municipalities and states as they can. But more than that.</p><p>There's a lot of controversy around this stuff right now. For example, there is a lot of arguments about copyright law and the use of copyrighted materials in training. Okay. And for good or ill, it's probably going to be the place of the legislature at some point to provide clarity so that we stop having lawsuits so that everyone, everyone may not be happy at the end of that process. And in fact, one of the, one of the definitions of a compromise.</p><p>in such circumstances is that you find that no one is happy at the end, right? You kind of know that it's okay if they're, it's bad if there's one party that's ecstatic and like lots and lots of other parties who feel screwed. If everyone feels like they can live with it, but they're not actually gloriously joyful, you've probably reached a reasonable level of compromise. But there's, there's stuff around, you know, around actual bad uses of AI. Now, now we focus a lot.</p><p>when we're talking about like the AI obsessions with attempting to stop AI research and development itself. But if you look at the other side of that, there are clearly uses of AI that most of us would probably consider a little scummy. You know, you can come up with stupid examples that are very obvious like scamming the elderly. Sure. Scamming the elderly. I don't think is I'm sure that there is a lobby for that, you know, among certain grifters, you know, in and.</p><p>Theo Jaffee (1:39:53)</p><p>scamming the elderly.</p><p>Perry Metzger (1:40:05)</p><p>Maybe there are people in Nigeria and boiler rooms in Pakistan or what have you where there are a lot of scam operations are operated off of who believe that they have a moral right to scam the elderly. But I think most people don't think that they have a moral right to scam the elderly. So actually having some law enforcement effort put behind, ignore the AI thing. I mean, everyone.</p><p>in the United States gets lots of scam calls, right? And they are a persistent nuisance. And wouldn't it be nice if they were actually the object of more attention? But there's lots of other stuff, right? Like, we have to answer how much surveillance do we want in our society? And we're going to get to the point soon where you could imagine the police in a major metropolitan area having real -time feeds from hundreds of thousands or millions of cameras. I mean, the price of</p><p>The price of cameras and the price of the hardware to drive them and to pump their data over the internet has gone down to very, very low numbers. We're talking about a couple bucks a piece, sometimes less. At some point, we're going to be able to scatter them like dust around. And do we want a society where people can pervasively track and record and note down the actions that every human being in our society takes in public?</p><p>And maybe there are some legitimate uses for such information. Maybe there aren't. But it's a debate that actually needs to be had about how we want to confront questions of privacy, of individual liberty in our society. Do we want, and this can be done right now, do we want tracking of every car and every person in our society at all times?</p><p>Do we want the government to have access to that information? Do we want private organizations to have access to that information? I mean, there are people I know who argue that you do want that sort of thing. It's at the very least a legitimate argument to be having because this is a real thing, right? These are real capabilities. These are things that are actually going to be possible in the not that distant future or possible now. So they are worth discussing. One of the things that angers me a great deal,</p><p>Theo Jaffee (1:42:06)</p><p>Mm -hmm.</p><p>Perry Metzger (1:42:31)</p><p>is that because of the Doomers focusing so much on science fiction scenarios, no one is discussing very realistic scenarios. Things that can happen in the near term or immediately are already happening. And I think that those are probably more salient. So one of the things we would like to do is actually develop a legislative agenda for, you know, that discusses...</p><p>some of the threats that are more salient, the things that we might actually have to really worry about, you know, scams, surveillance, you know, how we react to training data, data sets, privacy, you know, should people, you know, should I be able to ask an open source AI?</p><p>or well, not an open source AI, I didn't mean that, but should I be able to ask a publicly available AI for intrusive information about your medical data and get an answer out? Probably not. But depending on how people train the things and the liabilities they have or what have you, this could end up being an issue. So we do have a lot of stuff that we want to discuss with Congress and with state legislatures.</p><p>But most of it is not in the direction, you know, most of it I think would be laughed at by a lot of the EAers. You know, someone like Yudkovsky would probably say, why are you thinking about this meaningless drivel when, you know, when the entire world is going to be destroyed soon? Well, I don't actually think that the entire world is going to be destroyed soon. So, pardon?</p><p>Theo Jaffee (1:44:09)</p><p>it's logically consistent. It's logically consistent though.</p><p>Perry Metzger (1:44:16)</p><p>Yes. The one thing that I can say for Yudkovsky and McCaskill and all of these people is that they may be distasteful, but they are reasonably internally consistent. Reasonably, not completely. I think that a lot of the things that they say are developing certain cracks, especially given the fact that we are living in an era where we're actually being confronted by real AI systems.</p><p>and are being forced to see whether or not they actually do the sorts of things that they claimed at one time, and they don't.</p><p>Theo Jaffee (1:44:50)</p><p>Yeah. So last question, what does your roadmap look like for the future of Alliance for the Future? Like what kinds of things will you be doing in the near term and then maybe farther out?</p><p>Perry Metzger (1:45:03)</p><p>Most of the stuff that we have to do is incredibly boring. We have to retain more staff. We have to raise more money. We have to build better donor relations software. We have to track more and more of the initiatives going on in state legislatures, in various portions of the federal government. We have to build a lot more connections with organizations that have interests similar to ours.</p><p>One of the things we've discovered is that there are a ton of organizations that are on the same side as this, but don't have enough time to think about it very much because they are, say, you know, an industry group that has to deal with, you know, 50 or 100 issues in a given legislative session. We only have to deal with one. Where, you know, most of what we're interested in at this point is just really boring stuff about building the organization. But our</p><p>You know, I mean, our goal is pretty straightforward. We want to stop the, you know, stop overregulation of AI and make sure that people are focusing on actual salient issues for our society associated with AI instead. And over the next few years, that's what we'll be doing. I mean, at some point, I think that this particular battle is going to be won and AFTF will probably, you know,</p><p>Most such organizations in the end start mutating and taking on different roles than they started with. And I'm sure in five or 10 or 15 years, AFTF will do that sort of thing. I mostly care about what it's doing in the next few years and how effective we are in trying to stop, in trying to stop doomerism. If we can stop doomerism, if our societal transition for this sort of thing ends up being,</p><p>less sculpted by paranoia and science fiction scenarios and insanity and ends up being more sculpted by people thinking things like, gee, I might actually be able to help a bunch of these kids learning math by giving them individualized math tutors or what have you. I think we have the possibility of having... There are some really, really amazing and cool things we're going to be able to do if this happens, right? If AI is left alone.</p><p>The US has had year on year GDP growth at or below 2 .5 % for a very long time, even though if you go back a century, it was more like 5%. And this is a really big problem because it means that, among other things, that we're slowly strangling ourselves on our national debt, that people are much poorer than they need to be.</p><p>to me, and this is going to sound boring to like 98 % of people, but I think this is really exciting. I think if we just can, if AI brings GDP growth, you know, above, you know, four or 5 % for the first time in forever, you know, and I know people who'd probably say, well, you know, it can probably do a lot more than that. And I'm not going to be utopian and, and, and, and, and go there. Maybe it can, but if we double.</p><p>GDP growth because of the of the widespread adoption of AI it's going to mean that in the lifetimes of people who are around right now You know ignoring life extension or anything else, you know by the time they're 50 they're going to be well, let's see So that would be if it was at 5 % I think that would be a doubling every 15 here So we would expect them to be like 16 times wealthier by retirement</p><p>if they're in high school now. That's crazy. That's a big, big difference, right, over the sorts of economic growth we have right now. And if it turns out to be true that AI could bring economic growth to double digits or higher, maybe it could, maybe it couldn't, things are even better. But even if we just got to a really modest goal, like 5%, this is life changing for hundreds of millions of people.</p><p>Theo Jaffee (1:48:55)</p><p>Yeah.</p><p>Perry Metzger (1:49:20)</p><p>And if we can have a small part in making sure that that happens. I don't care if AFTF gets any credit for anything that it does. I don't care if it becomes a famous organization. I don't care if it becomes a household world. I care a great deal about making sure that we don't end up with crippling regulation or bans or things like that on the most promising technology that I know of that's being deployed right now.</p><p>If we can succeed in doing that, that's a win.</p><p>Theo Jaffee (1:49:53)</p><p>Yeah, well, I think that's an excellent place to wrap it up. So thank you so much, Perry Metzger, for coming on the show.</p><p>Perry Metzger (1:50:01)</p><p>Well, thank you so much for having me, Theo.</p>]]></content:encoded></item><item><title><![CDATA[#14: Robin Hanson]]></title><description><![CDATA[Cultural Drift, Ems, Elephants, Institutions, and The Future]]></description><link>https://www.theojaffee.com/p/14-robin-hanson</link><guid isPermaLink="false">https://www.theojaffee.com/p/14-robin-hanson</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sat, 27 Apr 2024 15:07:40 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/144073363/9c445f269af928af8465696675d95cc4.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Robin Hanson is a professor of economics at George Mason University, the author of <em>The Age of Em</em> and <em>The Elephant in the Brain</em>, the writer of the blog Overcoming Bias, and one of the most interesting polymaths alive today.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:24 - Mathematical models and grabby aliens</p><p>9:11 - Will we run out of value in the future?</p><p>12:23 - Kurzweil&#8217;s Law of Accelerating Returns</p><p>14:29 - Posadism</p><p>17:53 - Moral progress and Whig history</p><p>20:29 - Will there be a trad resurgence?</p><p>23:00 - Will Israel&#8217;s ultra-Orthodox problem globalize?</p><p>25:39 - Why will fertility rate keep dropping?</p><p>30:14 - Is declining fertility solvable technologically?</p><p>32:20 - What is wokeness? Has it peaked?</p><p>35:02 - Will virtualization make society more multicultural?</p><p>39:30 - How do institutions coordinate so well?</p><p>42:50 - Will ems care about death?</p><p>46:16 - Personal identity and death</p><p>49:30 - How much of Age of Em is applicable to LLMs?</p><p>51:09 - Why we shouldn&#8217;t worry about AI risk</p><p>55:40 - What if people don&#8217;t see AIs as their descendants?</p><p>1:00:41 - Other future tech deep dives</p><p>1:02:43 - Our very long-run descendants</p><p>1:06:08 - Time and risk preferences</p><p>1:08:34 - Wouldn&#8217;t ems be selected for docility?</p><p>1:11:24 - How Robin got involved in rationalism</p><p>1:13:22 - Girls getting the &#8220;ick&#8221;</p><p>1:16:56 - Have humans evolved since forager times?</p><p>1:18:28 - Cultural evolution</p><p>1:20:30 - Culture and prestige</p><p>1:22:49 - Why medicine in the US is bad</p><p>1:25:54 - Is academia the best truth-seeking institution in society?</p><p>1:28:52 - Peer review</p><p>1:31:13 - Which institutions are actually good?</p><p>1:32:33 - Why universities are all the same</p><p>1:37:40 - Bitcoin and speculation</p><p>1:46:44 - Demarchy</p><p>1:50:03 - Futarchy</p><p>1:53:56 - Applying prediction markets to dating apps</p><p>1:57:38 - The broadest thinkers and books in the world</p><p>2:00:59 - How Robin balances his many interests</p><p>2:01:58 - Teaching</p><p>2:03:12 - Outro</p><h3>Links</h3><p>Robin&#8217;s Homepage: <a href="https://mason.gmu.edu/~rhanson/home.html">https://mason.gmu.edu/~rhanson/home.html</a></p><p>Overcoming Bias: <a href="https://www.overcomingbias.com/">https://www.overcomingbias.com/</a></p><p>Robin&#8217;s Twitter: <a href="https://twitter.com/robinhanson">https://twitter.com/robinhanson</a></p><p>Grabby Aliens: <a href="https://grabbyaliens.com/">https://grabbyaliens.com/</a></p><p>Age of Em: <a href="https://archive.is/LMrr9">https://archive.is/LMrr9</a></p><p>The Elephant in the Brain: <a href="https://www.elephantinthebrain.com/">https://www.elephantinthebrain.com/</a></p><p>Beware Cultural Drift: <a href="https://quillette.com/2024/04/11/beware-cultural-drift/">https://quillette.com/2024/04/11/beware-cultural-drift/</a></p><p>Playlist: </p><div id="youtube2-avyrA5PYsik" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;avyrA5PYsik&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/avyrA5PYsik?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Spotify:</p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW&quot;,&quot;belowTheFold&quot;:true,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/1IJRtB8FP4Cnq8lWuuCdvW" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:3549,&quot;numEpisodes&quot;:13,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-03-22T15:11:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:00)</p><p>Welcome back to episode 14 of the Theo Jaffee Podcast. Today, I had the pleasure of speaking with Robin Hanson. Like previous guest Bryan Caplan Robin's day job is a professor of economics at George Mason University. There's much more to him than just that, however. He's a world -class polymath who's worked in literally dozens of fields and was a pioneer of many of the things we love today. Before going into economics, he studied physics, worked in AI research in the 80s, and did a PhD in social science.</p><p>He's proposed over a thousand ideas for alternative institutions, most famously prediction markets, which are today a multi -billion dollar industry. He was early on crypto, too. He was friends with Hal Finney, who many believe was Bitcoin's creator, Satoshi Nakamoto. He's been involved in futurism since the 90s, creating the idea of the great filter and grabby aliens, and writing a 400 -page deep dive on mind uploading called The Age of Em. He's also into human psychology and rationality. He's written on the blog Overcoming Bias since 2006.</p><p>where Eliezer Yudkowsky was originally a co -blogger before leaving to create Less Wrong. And he co -wrote with Kevin Simler the book The Elephant in the Brain on the hidden motives behind nearly everything we do. Most recently, he published an essay called Beware Cultural Drift, warning about the danger of having a global monoculture that's slow to adapt to changes. There was a lot to cover in this episode and I had a lot of fun recording it. This is the Theo Jaffee podcast. Thank you for listening and now here's Robin Hanson</p><p>Theo Jaffee (01:24)</p><p>Okay, we're on. Welcome back to episode 14 of the Theo Jaffee Podcast. We're here today with Robin Hanson</p><p>Robin Hanson (01:30)</p><p>Nice to meet you, Theo.</p><p>Theo Jaffee (01:33)</p><p>Nice to meet you too. So my first question has to do with your idea of grabby aliens and the great filter, which are, you know, these ideas about the, you know, explanations for why we don't see aliens and how our society might move in the future. Um, so grabby aliens and rationalism both heavily rely on mathematical models, but with a lot of these mathematical models, even like small inaccuracies in the inputs can make the output like wildly inaccurate. So how do you typically account for this?</p><p>Robin Hanson (02:04)</p><p>Well, you want robust models for which that's not true. So our model is a three parameter model where each parameter is fit to data and the parameters are as follows. Basically, advanced Davidian civilizations appear in space and time. They appear at random places in space.</p><p>They also appear at random points in time, but the time at which they appear is proportional to a power law. So the power law has a constant and it has a power. And then once they appear, they expand at a speed. Those are the three parameters. And we fit each of the parameters to a particular piece of data we have.</p><p>and I claim that that model is robust in the sense it's not very sensitive to these parameters. You can change these parameters by a substantial amount and then the model only changes by a proportional amount. It's not highly sensitive to some particular choice of parameters. So I mean we could explain where these parameters come from and then you know what we've concluded from that but I'll pause and let you push farther if you want. Okay so...</p><p>Theo Jaffee (03:11)</p><p>Yeah, yeah, let's go into that.</p><p>Robin Hanson (03:15)</p><p>The speed of expansion comes from the fact that if you model different speeds of expansion, you'll find that at low speeds of expansion, you predict that each one, when it's looking out into the sky, will see many of the others. Because light goes much faster than they expand, and so they will see them a long way off way before they get here. Each will see the other one coming. Since we look in the sky and we don't see...</p><p>other huge alien civilizations taking up enormous spheres in the sky expanding at a rapid rate. We can then conclude that we must be in the parameter space of the model where they don't see each other coming and that's where they're expanding pretty fast, say half the speed of light or even faster. So we then conclude, well I guess that's the that parameter value. They are expanding very fast because</p><p>we look up and we don't see anything and most would see something in the model if they were expanding slowly. So that's one of the key parameters. Now, another key parameter is the constant in front of the power law. So as you change the constant, you will change on average when these things appear in the history of the universe.</p><p>you know, make the constant lower and it'll take longer before they appear. You make the constant higher and they will appear sooner. So we can take our date at the moment.</p><p>as a random sample from when the dates of these things appear, and that constrains the constant in front of the power law. That is, we can basically assume a uniform distribution of where we are in the distribution of alien civilizations. We could be really early, we could be really late, but we're somewhere, assume, say, a uniform distribution over the rank. Are we in the first percentile, the 99th percentile, somewhere in the middle? And that then gives us a distribution over this constant.</p><p>in front of the power law. And the power of the power law comes from the history of life on earth. So the key idea is that in order to become an advanced civilization like ourselves, you have to go through a number of difficult steps and you have to do that before the window for life on your planet ends.</p><p>and a simple statistical model of what happens when something has to complete a whole bunch of difficult steps before a short window. Usually it would not succeed before the end of the window, but once in a while it gets lucky and does. A statistical model of that says that the time at which they do succeed in appearing goes as a power law. And the power is the number of steps it has to go through in this history. And we can now use</p><p>timing of events on the history of Earth to roughly guess the number of steps that we've gone through. So a best guess of six, but hey maybe a shorter three or nine, come from...</p><p>how long it took from life to be possible on Earth till the first apparent appearance of life on Earth, and then how much is there between now and when it looks like life would no longer be possible on Earth. Those two time durations are the datums that we can use to pin down how many steps there have been, and again, middle estimate of six. So those are the three parameters. Each comes from data. Put those together into a stochastic model, a model of probabilities, and you can run it many times.</p><p>to get the distribution over what the history of advanced civilizations in the universe looks like. And from those distributions, as you vary the parameters, we can draw, say, the following two conclusions. Roughly, advanced aliens appear roughly once per million galaxies. So...</p><p>There's trillions of galaxies in the universe, so there's lots of them in the universe, but they're still pretty far apart, once per million galaxies isn't close. So that says it'll be pretty hard to see them because they're pretty far away. And the other key parameter is if we go out and become one of these advanced alien civilizations that expands and becomes visible in the universe, we will meet others in roughly a billion years.</p><p>once per million galaxies meet them in a billion years. So those are pretty specific answers to pretty important questions. Now, it's not a billion point one or something precisely, it's in the ballpark of a billion years. All these answers are rough. That's part of the answer to your precision question. We're not giving very precise answers here, but compared to the basic question you might've asked, we now know a lot more than we used to.</p><p>Theo Jaffee (07:59)</p><p>So going back to what you said about like the steps it takes intelligent life to arise, how does that model account for like, do you think it's possible for life to arise spontaneously on like a non carbon based, non biological substrate?</p><p>Robin Hanson (08:15)</p><p>The general model doesn't care what the substrate is. It just cares what this power law is. Basically, how many steps does it have to go through? And what's the overall chances constant? And it allows for a wide range of different paths to advance life. I think it's if there are many different paths and some of them have more steps than others, it's overwhelmingly likely that it's the one with the fewest steps that will happen most often.</p><p>So we can, you know, if it takes us six steps, we can be pretty assured nothing out there is happening in less than six steps. Six steps must be the minimum. And the only other competitors out there we might interact with would be also things that took six steps, because the rest are very unlikely. So, of course, if it's five steps for us, it's five, but whatever our number of steps is, it's pretty much going to be the same for everybody else.</p><p>Theo Jaffee (09:11)</p><p>So while we're on the topic of aliens in the future, you have written like very extensively about future economic growth rates and you predict that eventually our rate of like resource and technological growth will stagnate. But how can that be possible when the like combinatorial space of atoms is just like gigantic? Like, yeah, sure. We will probably end up getting fewer actual resources in terms of like the number of atoms that we can gain access to.</p><p>But in terms of the ways that we can combine those atoms to create things of value, it seems like it would be a very, very, very long time before we exhaust that. This is kind of David Deutsch's idea of the beginning of infinity. So what do you think about that?</p><p>Robin Hanson (09:57)</p><p>So just to be clear to everyone, we're talking not the next few years. We're talking over thousands or millions of years as we get much more advanced and explore a much wider space of possibilities than we have now. Now, we already have a lot of experience, say, with computer programs, exploring searching spaces of possibilities.</p><p>I did part of this as part of my career. For example, we had a space of possible statistical models and a certain family of models. We were searching in that space for more likely models. And we found that, say, compared to a simple search, being a little clever about heuristics gave enormous advances in our ability to find more likely models. So there's really a lot to be gained by being clever about search. But it was also just clearly true that...</p><p>If you have decent ways to search, you're going to be looking at the low -hanging fruit first. You're not searching at random, you're searching for the best things as quickly as you can. And low -hanging fruit first implies high -hanging fruit last.</p><p>it means that the search has to slow down. That's kind of implied by any ability to not search at random, but to grab the best stuff first easier, is that you are able to prioritize the search space to find the things that are more likely to be promising first, which means the things that are less likely to be promising are going to be what's left over after you take the low -hanging fruit.</p><p>So it's just a matter of like how fast, how sort of just how much does it asymptote eventually, but consistently when we do search, you know, in computer systems and in design spaces, we do reach this sort of a asymptote situation where things slow down a lot. And the world economy is a big, deradic example of this.</p><p>That is, as the economy doubles, we get twice the capacity to search in all the different dimensions we might want to. Yet, the growth rate doesn't double, stays about the same, because we pick the low -hanging fruit first. So, you know, the fact that economic growth is steady is a strong testament to the fact that the search gets harder with time, because our ability to search gets much bigger, yet we don't find stuff that much faster.</p><p>Theo Jaffee (12:23)</p><p>What do you think about Ray Kurzweil's idea about accelerating returns?</p><p>Robin Hanson (12:28)</p><p>I mean, there can be parts of search spaces where you, you know, find things and you find more things and they help. So just generically, if you think about any system that has some sort of reinforcing process, the reinforcing process can either accelerate growth or decelerate growth or hold it steady. Those are the only three mathematical possibilities, right? Our experience with systems is that overwhelmingly we have the decelerating growth.</p><p>And once in a while we have things that are constant growth, and then more rarely we get accelerating growth, like a nuclear bomb is accelerating growth, right? But accelerating growth typically doesn't continue to accelerate forever. It accelerates over a range and then it slows down.</p><p>Like, you know, many of you listeners probably have personally experienced some point at which you were trying to figure something out and then you started to get it. There was a period of accelerating growth where things were falling into place and you were able to figure things out. In fact, they figured one thing out, helped you figure another thing out, and that was great. It was fun, but it didn't last forever. In your personal experience, it ran out a bit, and then you experienced decelerating growth.</p><p>So, you know, I just think we have enough experience with lots of kinds of systems. Like you might say the surprising thing is we have had continued to have at least steady growth in humans for a while, then we've had these jumps to faster growth modes. That's the most dramatic deviation from this expectation of decelerating growth is that we've got overall continued growth of human civilization. So the question is how far you expect that to go. And...</p><p>Again, once we reach the limits of physical expansion, like the speed of light, and we reach the limits of the materials we can work with, we have all the basic elementary particles and forces and we can't find any more, we'll search the space of all the vanes to arrange them and we will reach diminishing returns.</p><p>Theo Jaffee (14:29)</p><p>I see. By the way, are you familiar with pesadism?</p><p>Robin Hanson (14:33)</p><p>Posadism. Nope.</p><p>Theo Jaffee (14:34)</p><p>Passages um this guy I forgot his first name, but his last name was Posadas he was like a communist and he came up with this idea of Yeah, J Posadas He was yeah an Argentine Trotskyist who had this vision of alien civilizations Basically, he believed in communism. So he believed that you know kind of like he took Karl Marx's theory that like</p><p>the end state of human civilization is communism to its logical conclusions that advanced aliens would be communists too. So civilizations should search for advanced aliens because if advanced aliens were to find us, then since they would also be communists, they would help us lead the global communist revolution until you go over the world. I ask because it's basically one of the very, very few other people who I've encountered who's thought about this in the same way that you have.</p><p>Robin Hanson (15:32)</p><p>I find it remarkable that people can look at our recent history of cultural change and take some recent trend in our cultural change and imagine that that trend will apply over billions of years when it hasn't even applied over 100 years yet. This is one of the key blind spots humans have.</p><p>Basically, we are driven by cultural evolution and our cultural evolution makes us change our cultures fast, but we're kind of blind to that and you know, we tend to think our culture is best and that whatever issues our cultures have with other cultures then we're right and they're wrong and we're gonna be right for the entire future of the universe and that you know...</p><p>Liberal democracies say will be what the universe wants or commune or whatever it is. I just think if you look at how much has changed in such a short time in the past and then try to project the last few hundred years of human history into billions and trillions and more years of the future of the universe, it's just really hard to imagine that we have gotten it right in terms of the fundamental cultural issues if we didn't even notice them a few centuries ago.</p><p>Now, I might say we could find some things that are more robust. Robust issues are centralization versus decentralization. We can sort of see that that's a robust issue. Robust issue is degree of competition versus coordination, how many scales of organization there are. Those, in some sense, we can define issues as long -lasting issues, issues of what the unit of mines are be, how large would mines be.</p><p>How much do they merge into hive minds? At what scale? Whether natural selection continues? What are the forms of preferences creatures have? How do they encode them? How well do they know them? These are some of the things we can identify as pretty robust issues that last for a long time. But to think that our temporary answers to those questions, we could be confident will be the best answers across.</p><p>vast time scale of the universe seems kind of crazy.</p><p>Theo Jaffee (17:53)</p><p>Hmm. Um, David Deutsch also writes about this kind of idea, but his perspective is more like we will continue to improve over time, just as, you know, our morals so far have improved from, you know, being kind of forager values where we would fight constantly to being farmer values, or you'd have like a despotic king ruling over people to being more modern, like liberal, secular, democratic values. And that we continue to improve values into the future. But.</p><p>Do you think more values in the future will be arbitrary and we won't see them as improved? We won't necessarily see them as improved.</p><p>Robin Hanson (18:30)</p><p>I mean, first of all, you just have to notice there's a selection effect. If every cultural thinks it's best, then when it looks back at its history, it's gonna see things improving. That's more directly implied by cultural arrogance. All cultures think they're best. That's also accrued across space at any one time. Each culture thinks it's better than all the rest. Yes, well, also Whig foreign policy.</p><p>Theo Jaffee (18:49)</p><p>Whig history?</p><p>Robin Hanson (18:55)</p><p>you think you're best than all the other things that coexist with you or all the things you might imagine, but basically you can therefore predict that the future will think they're best and they will have seen more improvement. The question is whether you would approve of their improvement. And this is actually something I've been thinking a lot about lately and even in the last few hours.</p><p>Old people like me have seen our culture change in our lifetime. And then we are expected to embrace those changes, to think that our culture is now better than it was when we were young. But when we were young, we assimilated the culture of our world when we were young. And then later on, the world changes and its culture changes, and we're supposed to change with it. We each have to ask, well, was I wrong back then?</p><p>This is more right, but have people offered me arguments for that or they just telling me, you know, you're out of touch old man And I think it's hard to To you know engage that really I Don't think the world really offers as much evidence that in fact their new values are better than old ones They offer conformity pressures and sanctions if we don't agree but We're not usually persuaded</p><p>more conjured, pressured into accepting the new value changes.</p><p>Theo Jaffee (20:29)</p><p>Do you think that as these kinds of changes keep improving that we'll see like an increase in like return to tradition movements? Not just like in the sense of mainstream conservatism, but like I'm seeing this a lot more now, like trad Cath people on the internet who are trying to be like, you know, medieval Catholics or something.</p><p>Robin Hanson (20:51)</p><p>Well, the dimension I would call your attention to is diversity and variety. Our world today has vastly less variety of culture than we did a few centuries ago. So three centuries ago, basically the world was divided into the hundreds of thousands of little tiny peasant cultures.</p><p>each of which was pretty independent but pretty near subsistence. And if they drifted off the rails, they got punished by famine or pandemic or invasion, and that held them pretty close to functionality. Selection kept them in line. But then we merged small peasant cultures into national cultures, and then we have seen the rise of a world culture. And that means vastly less cultural variety.</p><p>And that, I think, means we should expect our cultures are going off the rails. We have vastly less cultural variety and vastly less selection in that our cultures, our few world cultures, are quite rich, peaceful, healthy, and if they go off the rails it'll actually take quite a while for them to fall because they have a lot of slack to survive that. And that's what I think is going to be happening over the next few centuries.</p><p>and the solution is some sort of variety that will, you know, defy the mainstream culture and do things different, like say the Amish or Heretim, and then in a few centuries they will grow and rise and replace the dominant cultures. So in that sense, I think in a few centuries' time scheme, yes, you will see the revival of...</p><p>traditional cultures, they will be different. Some of them will be more traditional, but the key thing is they will find a way to be highly fertile and highly insular. That is, insularity is especially the key. You can't diverge and do things different from the dominant culture unless you are insulated from it. And that's in some sense the most distinctive features of these small fertile subcultures is their insularity.</p><p>Theo Jaffee (23:00)</p><p>This talks, this sounds a lot about the biggest current domestic issue in Israel, which is you have like basically two classes of Israeli Jews. One is modern, secular, liberal, and the other is very traditionalist, Orthodox, and they have conflicts all the time because the Orthodox don't serve in the military and are like net burdens on the tax base. So should we expect to see this kind of thing everywhere?</p><p>Robin Hanson (23:23)</p><p>Right?</p><p>So this is a vanguard in the sense that if nothing else happens over centuries, these very highly fertile cultures which are growing fast will in fact replace everyone else because everyone else's fertility is going to keep declining and be below replacement. There are some transition issues to work out as small cultures get bigger.</p><p>Certainly one of them is this pacifism. This is also true of the Mennonites and the Amish in the US. They're pacifist. But I think the pacifism is mainly a strategy for insularity, because young men, when they go off to war, get a lot of cultural impressions from their fellow soldiers. They didn't want this to happen to their young men, so they made them pacifists. Once you have large enough groups to have their own military units, this is less of a problem. They can all go to war together and maintain their culture.</p><p>But this will be a transition point. So I guess the question is when the Haredim will be willing to make that transition to joining the military, but maybe in their own special units. The fact that they're subsidized is a happenstance of Israeli history. The Amish and the Mennonites aren't subsidized here in the US. They're kind of taxed, actually.</p><p>but they're still growing very fast. But any of these transitions is a risk point where they might fail to maintain their insularity or their high retention rates. And so we can't be that confident in predicting their rise in the sense that there's just lots of things that can go wrong. So for example, the Mormons are a failure case.</p><p>They were once a highly insular, highly fertile subculture, but the Mormon Church made a conscious choice to integrate the Mormons and integrate it with the rest of.</p><p>national and international society, they succeeded in that, and now the Mormon fertility is falling at the same rate as the rest of the country just 20 years behind. They are not going to succeed in being a insular fertile subculture, and that sort of thing could happen to the Haredim or the Yamash or others, but if there are enough of them doing things differently, it won't happen to all of them.</p><p>Theo Jaffee (25:39)</p><p>Why should we expect mainstream society to continue decreasing its fertility rate?</p><p>Robin Hanson (25:45)</p><p>Because we can see strong cultural trends that are causing it. It's not just an abstract number we can track. We can track the particular more proximate causes that are pushing it. And they seem robust and beloved. You can try to resist them, but a lot of people don't want to stop them. They will, in fact, push hard back if you try to reverse some of these trends.</p><p>Theo Jaffee (26:11)</p><p>Well, you know, in the last few decades alone, we've seen, you know, a major norm, 50, 60 plus years ago, was that homosexuality was wrong. And now, like, if you were to say that homosexuality is wrong, that would be enough to get you kicked out of polite circles. So we've had, like, a complete reversal. Yeah.</p><p>Robin Hanson (26:27)</p><p>So cultures do change, but the question is can you cause them to change in a desired direction? So the key thing is that the main way cultures change is that different factions fight over the changes and compete to influence the changes. And like if you look at the culture section of a newspaper, what that really means is these are the people most respected for...</p><p>influencing the direction of cultural changes and not everybody gets to play on an equal battleground in that space. So culture definitely changes and it's changed big time, in fact so much and so fast I think you should be disturbed by whether...</p><p>We can believe that that's all functional and adaptive, but it's not willing to just be changed in any particular direction in any particular subgroup once. Each group that tries to push for one change will typically face other groups that are pushing the other way. The question is, who will win?</p><p>Theo Jaffee (27:24)</p><p>So why would it be unlikely that the pro -fertility push in mainstream culture would win?</p><p>Robin Hanson (27:31)</p><p>Well, we can see a number of more fundamental causes again. So for example, I just over the weekend read a book by the classic founders of the field of cultural evolution, Boyd and Richardson. And in a 2004 book, their favorite model was that.</p><p>Basically, all culture needs an idea of prestige and people copy prestigious and a lot depends on what counts as prestige. And so in our society, the sorts of things that get you high prestige tend to require a lot of education. And a lot of education tends to put off fertility. And that's their story for fertility decline, which is plausible.</p><p>Obviously it's also tied in with say gender equality. If it was only men getting educated so highly it would be less of an obstacle. But we do have a lot of gender equality and along with prestige going along to the people who have high education, years of education, and that's causing low fertility. Another thing in history was that rich people could make their kids...</p><p>higher status by investing money in them, but the more kids they had, the more they divided up that money. And so there was an incentive for elites to have fewer kids in order to give each one of them high status. And so if we looked at prestige of individuals, signs of individuals, at that.</p><p>money could be invested in than that produced a selection effect to lower fertility even among elites centuries ago. But in addition to that, we have trends toward more parental care. That is, we have higher standards now for how much attention parents should pay to kids. We have standards of switch from what they call cornerstone marriage to capstone marriage, whereas in the past you would marry somebody young when you're less formed, less clear where you would go or succeed. Now the standard is more you should wait until you have a</p><p>steady, successful career and you've formed your personality, you know, just what your hobbies are and then you should find somebody who matches all those things. But by the time you do that, there's much less time for fertility to happen. We have norms limiting grandparent involvement and raising of kids and in kid careers. You know, there's just a whole bunch of these trends and many of them are quite beloved.</p><p>Urbanity, stronger urban living, which also seems to pretty clearly discourage kids. Less religion, religion's always been pretty strong. Correlate to fertility. And most people are not very open to reversing these trends.</p><p>Theo Jaffee (30:14)</p><p>Hmm. Well, do you think that this kind of thing could be solvable technologically? Like if there's some technological innovation that lets women get pregnant, you know, in a healthy way when they're 50 or 60.</p><p>Robin Hanson (30:20)</p><p>Oh sure, but the -</p><p>Well, the simple thing would work. I mean, the simplest thing that would work is just to have men and women freeze their egg and sperms at the age of 20 and then unfreeze them when they're ready to have kids, even if that's the age of 45. That would work, but it's a big ask because most people don't want to do that. Another thing that would work is to pay parents.</p><p>to have kids and borrow the money from the future tax revenue those kids will pay. That would also work, but again, you'll have to be inclined to want to solve the problem. And if you do that, it's gonna cause changes in some of these beloved trends. But I mean, I'd say there are ways we could, if we got our head into it, solve the fertility problem, but the fertility problem is really only a symptom of a deeper problem. And we are...</p><p>We have much worse options for the deeper problem. The deeper problem is just we have very few cultures which rapidly change and we're weak selection pressures and that can plausibly make fertility go off the rails but can also make lots of other things go off the rails. Norms of, you know, when you, what medical treatments you use or norms of war and peace.</p><p>Theo Jaffee (31:39)</p><p>Like what?</p><p>Robin Hanson (31:49)</p><p>norms of family. I mean, our life is full of social norms and prestige markers, and they can all just go wrong. The space of possible cultures, most of it isn't very good. Generic idea of evolution is selection will keep designs and structures in the small part of the space that's the good part of the space that's functional and productive, and random drift takes you off to the bad parts.</p><p>Theo Jaffee (32:20)</p><p>So what do you think about wokeness? Is that like a concept that is meaningful? Is it like a, you know, symptom of a maladaptive culture? Is it like a cause of a maladaptive culture? Do you think it's peaked?</p><p>Robin Hanson (32:32)</p><p>it's, it's, it's, I mean, it's just clearly, it's not peaked, but it's clearly just evidence of cultural change, and if you think about it, there's no particular reason to think it's functional, not that there's no particular reason to think it's dysfunctional compared to any other cultural change, but just realizing how rapidly our cultures are changing.</p><p>you have to realize there's nobody driving this train. There's no guiding force that's there to make sure these things are channeled into more productive, functional, adaptive forms. That doesn't exist. It's not how it works. We are just in a world where culture changes pretty randomly. And you have to realize that if this is a fragile thing that is valuable when it works right and it's in a functional structure, if you just make random changes,</p><p>Pretty soon those won't be very good.</p><p>Theo Jaffee (33:27)</p><p>Hmm. Is that also your explanation for, let's say, the current dysfunction that California has?</p><p>Robin Hanson (33:37)</p><p>I'm less willing to sort of attribute things to very particular cases. You know, this argument is strongest at the general level and it's just harder at the specifics. But...</p><p>I'll note that most people really want to argue at the within culture level. So, like the culture sections of newspapers or a lot of op -eds or a lot of things are basically people passionately arguing about which way their culture should go. And when they make those arguments, they will refer to sort of who's more prestigious and who's with us and who we are and what we have valued in the past. And those are the resources that you can use to persuade people that your story of where we should go is right and people should follow in your direction.</p><p>And most passionate discussion in politics and culture is about that. It really isn't looking at a distant point of view of how cultures evolve and what that might mean. It's about here we are and I want to go this way and you want to go that way and I'm right and you're wrong.</p><p>And part of that story is to look in the past and say, you know, where we've came from, from there to here is exactly the same thing I want to do, continue going from here to the future. And people want to appeal to our shared sense that what we did, our changes in the past must have been good changes in order to argue for how new changes should also be pursued.</p><p>Theo Jaffee (35:02)</p><p>So let's talk about this hypothetical future that one of my previous podcast guests, Greg Fodor, talks about extensively that I think is plausible, where eventually virtual reality technology will get good enough and reliable enough so that people will essentially virtualize their whole lives. And when that happens, they will have a much greater degree of control over the level of participation in society that they have compared to today. So do you think...</p><p>that could be like a solution to this kind of monoculture problem if people are capable of simply just removing themselves from the culture and creating something entirely new.</p><p>Robin Hanson (35:44)</p><p>Compared to the past, the distant past, our world is largely virtual in the sense that if you look around you, most of the surfaces you see are artificial surfaces constructed to have artificial appearances. We're not out in the woods or the jungle or the sea. We are mostly in artificial worlds constructed.</p><p>primarily as we see them for their appearances and how they are convenient for our lives. And we have enormous abilities compared to the past to find other compatible people to interact with and to form whatever subcultures we want to. And that's the way today is different from the past. And that does not imply, as far as I can tell, cultural diversity. That's not the natural outcome of that change. So I don't know why continuing along that same path into the future.</p><p>with more virtuality and more ability to select your associates would make it any different. Basically the highest level point is that as the world finds it easier to communicate with each other and to travel to meet each other and then to trade with each other and to move from one place to the other that makes the world culture more integrated. Now...</p><p>we have more variety of things like musical genres or TV shows or some other, you know, maybe particular hobbies of quilting or whatever it is. It becomes possible for there to be more such things in the world and for people to find a smaller niche closer to the kind of quilting they like, say. But those features, those kinds of cultures don't actually...</p><p>say very much about your life. They just, you know, those don't change how many children you have or whether you value living with your kids or whether you value your career or how you feel about death. Mostly they don't. Mostly they just a separate part of the world. Rationalists aren't really very different. So...</p><p>Theo Jaffee (37:38)</p><p>Well, sometimes they do.</p><p>What about rationalism?</p><p>Robin Hanson (37:47)</p><p>Rationalists, of course, are especially low fertility, so they are very much integrated with the lower fertility elites in the culture. And it's a... I mean, I think there's a strong emotional desire to see yourself as part of a distinctive subculture, but you don't usually ask for that subculture to influence very many aspects of your life. And rationalism doesn't influence very many aspects of a rationalist life.</p><p>Theo Jaffee (38:11)</p><p>Well</p><p>What about the polyamory part? That seems like a pretty clear departure from social norms as a result of in online culture.</p><p>Robin Hanson (38:20)</p><p>It is, but if you look at the world as a whole, the world has been converging culturally for a long time pretty strongly, and that's the dominant trend. So...</p><p>I mean, you know, go around the world, you definitely see things that look different around the world. Buildings are different and clothes are different sometimes, and, you know, holidays are different, maybe even work hours are different, but the world is converging quite a lot, culturally, and quite strongly. So, for example, if you look at regulation around the world, it varies hardly at all. Compared to having 150 countries worth of...</p><p>potential variation, we actually have far, far less actual regulation variation. You can certainly saw that in the pandemic where the whole world basically did it the same way. We see it in lots of other areas. Elites especially are converging culturally around the world. Non -elites are farther behind on that trend, but non -elites are usually farther behind on most cultural trends. That's because elites lead the way.</p><p>Theo Jaffee (39:30)</p><p>So how do institutions coordinate so well? Like during COVID, how every university and almost every media outlet and the federal government and almost every state government was basically on the same page pretty much all of the time, at least for like a year, a year and a half, two years. Is it just culture?</p><p>Robin Hanson (39:47)</p><p>Right? That was primarily culture, especially culture of elites. The very beginning of the pandemic, the usual public health experts gave the usual advice, and then elites around the world suddenly started talking to each other intensely for a month or two. And at the end of that, they came to a very different conclusion about how...</p><p>world should respond to the pandemic, the official public health experts immediately caved and changed their mind and accepted the new perm announcement of the new elites, of the elites, and everybody in the whole world did it that way, same together, because that was the consensus of elite culture worldwide.</p><p>Theo Jaffee (40:29)</p><p>You said earlier something about when it comes to culture, like nobody is in charge, but is that not like a counter example? Because you said that a lot of elites -</p><p>Robin Hanson (40:36)</p><p>Nobody was in charge of it. I mean, culture produces conformity and correlation without anyone being in charge. That's kind of the key nature of mobs, basically. Culture is a mob.</p><p>Theo Jaffee (40:47)</p><p>So it would be like a category error to say that the public health experts were in charge or that the elites talking to each other were in charge.</p><p>Robin Hanson (40:56)</p><p>Well, the elites talking about each other constituted the elite culture and the elite culture decided, but it wasn't any individual person or institution. It was the culture as a whole.</p><p>And so that's a form of organization humans have long had, is gossip producing consensus of mobs with shared opinions and even shared mob action without any center to direct it.</p><p>Theo Jaffee (41:31)</p><p>What are the characteristics of elite culture that make it powerful? Or is it not the culture that makes it powerful? Is it just the, you know, elite human capital?</p><p>Robin Hanson (41:42)</p><p>The superpower of humans is cultural evolution. That is, humans are able to learn and change much faster than other animals. And the main way we do that is by passing things on via culture. But culture doesn't work if you copy at random for your associates. You need to differentially copy from those who have been successful compared to others.</p><p>And so prestige is an important part of our strategy for differentially copying from the successful. So culture doesn't work really without some concept of procedure success that you can use to decide who to copy. So that makes prestige very powerful because we're all inclined to copy the prestigious. So that means.</p><p>there are people who are prestigious and they get together and talk and they agree on things, the rest of us are going to cave and go along, for the most part, for whatever the prestigious decide. That makes the prestigious very powerful, because that is the main vector of cultural evolution, is for people to copy the prestigious.</p><p>Theo Jaffee (42:50)</p><p>Alright, so let's switch topics a little bit. I'd love to talk about Age of Em, which is probably one of the most interesting deep dives about the future that I've ever read. So one thing that kind of stuck out to me is when you talk about Em's copying one another. Oh yeah, and for the audience, Age of Em is a book about a hypothetical future scenario where the ability to emulate a human brain on a computer becomes possible and cheap and widespread and the implications of that.</p><p>Robin Hanson (42:55)</p><p>Okay.</p><p>Theo Jaffee (43:21)</p><p>So, when you talk about Ems going off into stubs that then, like, end, you say that Ems won't think about it as, am I about to die, but will I remember this? But you kind of skip over the actual philosophical idea of, like, do stubs die?</p><p>Robin Hanson (43:45)</p><p>Right, so the key idea here is that we often have difficult philosophical questions associated with identity and death and things like that, and some of us think about those things philosophically and try to analyze them, but the vast majority of us don't. And for the vast majority of us, we don't really think about them that much. We just do whatever our culture says to do, and do that happily, without much reflection.</p><p>even when the philosophers all say we're wrong. So if I have a world and I want to know what they do, I don't think it's very important to think, well, what will their philosophers say? I more want to know what will cultural and economic selection produce in this world?</p><p>and then they will just do whatever that produces and they won't really know why they do it and they won't that much care. They will just do what everybody else around them is doing because that's what we do. So in this context, it seems that there's huge economic payoffs from making these stubs, which are basically short -lived copies.</p><p>which are spun off after it's been deleted shortly afterwards, but do something useful in the meantime. So for example, if you work eight hours a day, then you have 16 hours a day you're not working. So then basically your work hours have a tax of a factor of three. You have to pay for all the other 16 hours of resting up before you get the eight hours of work.</p><p>Instead, you can, when you're ready for work, make lots of copies of the work -ready versions, have them do eight hours a week, but only have one of the copies go on to rest for the next day. Now you can basically save a factor of three on labor productivity by having many short -lived copies that work for eight hours and then end instead of resting it for the next day. Now, those copies could say to themselves, gee, I'm about to die, this is terrible, and instead of working, decide to have a revolution or something.</p><p>But my claim is they won't. That is, they will get used to this as a usual practice that they are comfortable with. And they will then, because the ones who do so will get a factor of three in labor productivity, which is a pretty big advantage in a Malthusian world.</p><p>Theo Jaffee (46:16)</p><p>Hmm. So, do you think humans die every night when we go to sleep?</p><p>Robin Hanson (46:22)</p><p>I don't know. I don't care. Most of us don't care, right? That is, we know everybody else who's on this goes to sleep and wakes up and they seem to be okay with it, so why shouldn't I be okay with it? Some philosopher can analyze it and decide that I do die every night and maybe I should then be upset, but if I accept that argument, I'm gonna be upset in a way that people around me aren't upset, and then I'm gonna be weird, and I will be at a disadvantage because maybe I won't be willing to go to sleep.</p><p>It just looks like not much of a cultural win for me, right? I sh -</p><p>Theo Jaffee (46:53)</p><p>Well, is this not a question with a pretty clear scientific answer? Like, eventually, we'll be able to figure out, like, what are qualia, what is consciousness, under what circumstances do qualia persist?</p><p>Robin Hanson (47:04)</p><p>Actually, no, I don't think so. I don't think these are clear. These are not clear questions with clear scientific answers. No, I don't think so. I think they are questions that people will remain uncertain about indefinitely.</p><p>Theo Jaffee (47:07)</p><p>That will never happen.</p><p>And what about the Moravec transfer? Same thing.</p><p>Robin Hanson (47:23)</p><p>So what does that mean?</p><p>Theo Jaffee (47:25)</p><p>It's when it's like a hypothetical procedure for uploading your mind into a computer where you know instead of doing like a destructive brain scan then you have like little nanobots that scan each neuron individually and figure out what it does and then comes up with a simulation and integrates that into your existing brain and just does that again and again until your brain is very slowly replaced.</p><p>Robin Hanson (47:47)</p><p>Right? Okay. But again, most people don't think about those things. Look, in our world today, we are quite alienated from the world of our distant ancestors. Our world is quite strange, and there are many ways if you hold us to the standards of our ancestors, we are weird and even ugly and should be upset about the world we're in. We put up with, like,</p><p>Most of our ancestors would not put up with the degree of domination and ranking that we put up with in our jobs today. They would be upset and outraged and think that we just have no pride because we're willing to put up with this. But we put up with it because all the people around us do. And if we didn't, we would lose on our prestige games. Most prestigious people around us put up with it and accept it.</p><p>and we want to emulate them and be like them and so we put up with it too.</p><p>Theo Jaffee (48:50)</p><p>So, when it comes to human brains, you said there's probably, like, we will be confused about questions of identity forever, but do you think the human brain is kind of fundamentally uninterpretable, like in LLM, or will there be actual, like, computational structures that, you know, will be able to say, oh, hey, look, our, you know, memories are represented as, like,</p><p>Robin Hanson (49:10)</p><p>Oh, I'm sure we'll figure out lots about the brains, but we'll also figure out a lot about LLMs. We'll know a lot more about LLMs in a few decades than we do now. We'll be able to identify structures in them and attribute them, you know, various events and patterns to those structures we find. That'll be true for the brain.</p><p>Theo Jaffee (49:32)</p><p>How much of the age of Em do you think is applicable to, like, the near -term future of LLMs? When they're, like, kind of human -level, they're not, like, absurdly superhuman, they can run at faster speeds than the human brain, potentially?</p><p>Robin Hanson (49:45)</p><p>Well, the key thing is that Em's are full substitutes for human labor, and LLMs are not, and they're not close. So when you have a descendant of LLMs that is a full substitute for human labor, then the parallels will be much closer. At the moment, LLMs are really a niche market, and...</p><p>And people so far haven't... People basically keep making LLMs from scratch rather than making them from previous ones. So that'll be a key point perhaps in development if that ever switches. Well, humans, you make a human and they're a child and you slowly improve the human over time as it gets experience and matures.</p><p>Theo Jaffee (50:26)</p><p>What do you mean by that?</p><p>Robin Hanson (50:35)</p><p>with LLMs you keep every new generation of LLMs, you go back to the data and remake it from scratch so they don't remember or inherit from their previous versions. So, you know, many of the features of the age of Em are based on this assumption that you'll want to keep using the same Em's for many decades of subjective experience rather than just...</p><p>Theo Jaffee (50:46)</p><p>I see.</p><p>Robin Hanson (51:01)</p><p>stamping them in the lab and using them for an hour and throwing them away from the very beginning with no history or any sort of memory.</p><p>Theo Jaffee (51:09)</p><p>Hmm. So on the topic of AI and LLMs, how would you simply state the case against AI risk to people? Cause in your article, AI risk again, it relies on a lot of like somewhat.</p><p>Robin Hanson (51:22)</p><p>Wait, I don't have an article by that title, do I?</p><p>Theo Jaffee (51:26)</p><p>I think so. Let me look it up.</p><p>Robin Hanson (51:28)</p><p>but the title is AI Risk Again.</p><p>Theo Jaffee (51:31)</p><p>Yeah, AI risk, comma, again.</p><p>Robin Hanson (51:33)</p><p>That must be many years old then, right?</p><p>Theo Jaffee (51:36)</p><p>No, that was March 3rd, 2023.</p><p>Robin Hanson (51:39)</p><p>Okay, but presumably that's referring to some of my other more elaborate articles, but okay.</p><p>Theo Jaffee (51:45)</p><p>Yeah. But what about the opposite of elaborate? Like how would you state this case as simply as possible without economic arguments to someone who's like, for example, watch the Terminator or I, Robot so they're scared of AI and they see GPT -4 and they're like, oh wow, wow, this thing seems really impressive. Oh, we should be scared of the Terminator soon.</p><p>Robin Hanson (52:05)</p><p>So I think I have outlined two distinct lines of argument. The most recent line of argument that I focused on is asking people, and I did like a dozen AI risk conversations with people on YouTube that are recorded as you can see, basically asking people, well, ignoring AI, what did you expect to happen with your other kinds of descendants?</p><p>Did you expect to be able to control their values? Or did you expect to not have any conflicts with them? Did you expect to win all conflicts you might have with them? And almost everybody thinks that with respect to their squishy bio -human descendants, that those descendants would in fact become more powerful than them. They would win conflicts with them, and their values would be different from theirs.</p><p>And there might often actually be such conflicts. That's what everybody expects from their ordinary descendants. And it's what everybody has seen for many generations. And therefore, it's what they've accepted. They don't seem to mind that. But when it comes to AI descendants, they change their standards.</p><p>They are worried that we shouldn't have those kind of descendants unless we could make sure they never have a conflict with us or we'd always win the conflict with them or they could assure us that their values would never change. So people just holding different standards to the AI descendants than to the other descendants. And my main argument is that they really are your descendants and the same sort of evolutionary habit that should make you indulgent and supportive of all of your descendants, regardless of how they might differ.</p><p>from you should apply to your AI descendants. So that's one line of argument to say, you know, don't hold them to different standards in the abstract than you hold all your other descendants. Now, the faster change gets, the faster you may see your descendants.</p><p>and the fast the more of that history you may see sooner because change is faster but that would happen in the age of M2 because the Em's would also be changing faster and history would be happening faster so a slow human would see a lot more of that change as well but if you were okay with the Em's as your descendants quickly having different values from you being more powerful and winning conflicts with you then why not for the AI? So a separate line of argument is to say</p><p>AI is produced by capitalist firms who are doing it for profit.</p><p>And they will, of course, if their products hurt customers, that will be bad for them. They will therefore test their products regularly in many different ways. And in fact, computer products are among our most tested and monitored products of any sort because it's so much cheaper to test and monitor them. So AIs being built by large capitalist firms are for profit. They will make sure that their customers are not too unhappy with their experience with them.</p><p>And so they are very unlikely to be suddenly blindsided by their products vastly changing. They will be watching out for tendencies for their products to change and for that to cause bad experiences with customers. And...</p><p>for the intermediate time when these products are being produced by for -profit firms, you should expect to be as similarly happy or unhappy with them as you are now with most of the other products that you get from large capitalist firms.</p><p>Theo Jaffee (55:40)</p><p>Well, in response to that first line of argumentation that AIs are basically our descendants, could you not just say like, no AIs are not our descendants, they're like creation, so there's something different. Like, especially if it turns out LLMs are not like the golden path to AGI, and AGI will take something entirely different to be built, and that something won't simply be like the, you know, distilled total of human knowledge and human data and human preferences, that'll be like a...</p><p>maybe a Bayesian superintelligence built from scratch. Like, I could definitely see why someone wouldn't see that as our descendants.</p><p>Robin Hanson (56:17)</p><p>So natural selection is the powerful general theory that is much more general than simply the nature of biological creatures that pass things on through DNA. DNA -based evolution is one of many kinds of evolution, and we have general theories.</p><p>of evolution that apply not just to DNA -based biological organisms, but to culture and many other kinds of natural selection. The key concept of natural selection is variation in selection, as Donald Campbell famously argued in the 1960s.</p><p>And this is a general process. And in this general process, the key point is just there are some things, and they have descendants. Descendants is really just whatever literally descends from them, i .e. arises from them. And natural selection just requires that the descendants have some correlations and features to their ancestors.</p><p>pass down through whatever means and whatever those means is called genes. Genes are just our name for whatever is the mechanism of passing on features from ancestors to descendants. And as long as there's variation in those features and there is selection, i .e. not all features are equally productive of reproducing and surviving in the world, then you have natural selection. And by these general abstract theoretical conditions, AIs are our descendants, whether they're LLMs or Bayesian networks or</p><p>whatever else they are, as long as they share some features with us. And I'm pretty confident that if there are aliens out there making their own AIs, that our AIs will have things in common with us compared to the AIs of the aliens.</p><p>Theo Jaffee (58:05)</p><p>Well, still just playing devil's advocate here. Like someone could say, yeah, well, sure. You can make an argument where you define the word descendant to mean, you know, something that shares features with us that we had a part in creating either voluntarily or involuntarily. But like, I don't care about that. You know, that's it's, it's a computer. It's, it's not like my children.</p><p>Robin Hanson (58:26)</p><p>Well, you can care about whatever you want to care about, but I'm pointing to the standard concept of a theory, the word descended, I'm just making up a definition of word, I'm pointing to the concept that's in a theory, and this theory predicts, robustly, that...</p><p>creatures will have both a rivalry toward coexisting creatures with different genes, but also a support and indulgence toward their descendants, even if they have different genes, and that these AI are such descendants. You don't have to accept what natural selection tells you you should want, but then I don't know what basis you have other than saying, I like these descendants and I don't like those. I guess that's your right, but...</p><p>Again, why not dislike your other descendants just as much and be unhappy if they displace you and if they have win conflicts with you? That's also your right. You could just say you don't like any of your descendants.</p><p>Theo Jaffee (59:28)</p><p>Well, I think maybe people expect that their biological descendants will be less different. Like, many of the AI risk arguments take the form of, the AI will very quickly gain resources and kill us all, whereas our own biological children won't do that because they love their parents.</p><p>Robin Hanson (59:43)</p><p>Well, in the dozen conversations I had, almost everybody agreed that in fact their biological descendants would get pretty different pretty fast. As they are all. They might. That's a possibility. The reason why you don't think it will happen isn't because it couldn't happen.</p><p>Theo Jaffee (59:54)</p><p>but not fast enough to kill all them.</p><p>Is there any historical precedent?</p><p>Robin Hanson (1:00:06)</p><p>But there's no strong reason you think the AIs will do so easy. It's just the possibility with the AIs that scares people, not any particular reason I think it will happen, just that it could, but it could happen with your biological descendants. They... biological descendants have in fact killed parents and grandparents in the past. Yes, systematically, indeed. I mean, in many societies when people became too old to be productive, they were killed.</p><p>Theo Jaffee (1:00:23)</p><p>Yeah, but not systematically. It's usually very rare and they're like heavy norms against it.</p><p>Robin Hanson (1:00:35)</p><p>That's a common historical cultural practice.</p><p>Theo Jaffee (1:00:41)</p><p>So, the Age of Em is one of the most detailed deep dives I've ever seen into a future scenario. So, I think maybe the only other one I've seen is Nanosystems by Eric Drexler, but even that isn't really like a dive into the future economic and social implications. It's more so a dive into like the future possibilities of nanotechnology. So, are there any other deep dives like this that you know of? And...</p><p>Robin Hanson (1:01:03)</p><p>The technology, yes.</p><p>Theo Jaffee (1:01:11)</p><p>What science fiction do you think is the most realistic, the most detailed in ways that aren't silly?</p><p>Robin Hanson (1:01:18)</p><p>Well, I was inspired to write Age of Em in part by nanosystems. But as you say, nanosystems only looked at sort of the technology possibilities and not at the social implications. Drexter was interested in social implications, and he wrote about them in other places, but he just didn't have as much training in social science, I think, to do his thorough analysis of the social implications.</p><p>I wrote Age of Em and hope and part of inspiring other people to take up the example of doing such detailed analyses. So far they haven't.</p><p>the, I mean, for example, David Brin's Kiln People was a predecessor of Age of Em, where he did the best job I had seen in science fiction of trying to analyze similar issues, but still much less detailed of an analysis that I gave in Age of Em. I would, I mean, it might be true that I picked an easier problem than most are, in the sense that...</p><p>You could say more about Age of Em than you might be able to say about other scenarios, but I still think people could say a lot more about other scenarios than they have. And I guess I have to conclude that people are not actually that interested in working out such detailed implications because showing a demonstration of how it's possible hasn't inspired other people to do so.</p><p>Theo Jaffee (1:02:43)</p><p>Also in The Age of Em, you, like, the kind, the whole book is kind of overshadowed by the idea that, you know, this age might only last, like, one or two objective years, and then after that, like, something much stranger will happen, potentially. But, you know, there's not much detail about what that thing might be. So do you think, in, like, the very far future, is there anything meaningful that can be said about our descendants in, like, a million years?</p><p>Robin Hanson (1:03:11)</p><p>Well, yes, there's just fewer things that can be said. So one thing that can be said is that if physics is really a strong limitation, as it seems to be, then our descendants in a million years will still have to deal with only being less than a million light years away from here. And...</p><p>only having access to the volume and materials in that space. They'll have to deal with constraints of conservation of energy and second law of thermodynamics and, you know, other key constraints of, say, the speed of light of activity in that volume. Those are things I think we can say. I think we can talk somewhat robustly about</p><p>the two main paths of competition or coordination at the highest level. That is...</p><p>Either our descendants will not coordinate to sort of control their entire overall pattern of activity and therefore be in a world of competition with each other fundamentally at the highest levels, even though they can have smaller scale coordination in that competition, but the highest level might just be competition. And if that's true, we can make some claims about what they will want, I think, in that competition, how they will behave. The other option is that somehow,</p><p>our civilization manages to coordinate to enforce some rules that limits the competition. That will have to happen if it does before we leave out to explore the universe because if we have substantial colonists who head out into the universe before such coordination is created and enforced then it will be too late afterwards. So there's a limited time window to do that.</p><p>But it's in principle possible. So we can say either the world will manage to sort of send out political officers with every vessel that leaves here to enforce some central rules on behavior everywhere, or they won't, or the political officers will fail, in which case there will just be competition on the larger scale. And I think...</p><p>We can expect that competition will produce the kind of results it has in the past in terms of evolution toward a more efficient mechanisms and processes. And I think we could also guess that we can say some things about what the preferences of creatures who evolve are, because we have literatures on that today. We have literatures on what preferences involve in natural selection in particular in the context of investment funds and what preferences over investment funds people would have.</p><p>if they are natural selection produce their preferences.</p><p>Theo Jaffee (1:06:08)</p><p>kinds of preferences might they have.</p><p>Robin Hanson (1:06:10)</p><p>Well, so asexually produced creatures should not have a time discount, whereas sexually reproduced creatures have roughly a time discount of a factor of two per generation. That is roughly the time discount that humans seem to have. So if humans start competing with some asexually reproduced creatures, maybe AIs or Ms, then...</p><p>In trading with them, we will buy the present and they will buy the future because they care more about the future. And that's a robust prediction. I think that time preferences will go away.</p><p>Another prediction is about risk preferences. So for example, in the investment world, you get logarithmic risk with respect to correlated risk over all your copies, but you get risk neutrality about risk that's not correlated because you can insure against that by diversification. That looks like a robust argument that would probably continue to hold. So we can say something about the degree and kinds of risk aversion that descendants would have. And I think...</p><p>Theo Jaffee (1:07:16)</p><p>Can you explain the time preference thing a little bit more for the audience? Like what does it mean that sexual creatures discount to per generation?</p><p>Robin Hanson (1:07:22)</p><p>Right, so typically a sexually reproducing creature has a choice between investing in itself now or investing in its descendants. But its descendants share only half its genes and its descendants will mostly be able to use the resources when they reproduce a generation later.</p><p>So the really choice is between spending stuff on yourself now or spending stuff on your descendants in a generation. And so since your gene, your children have half your genes, then this is a choice of...</p><p>Doing things for your genes now full on, or doing stuff for half your genes in a generation, and therefore, this is a factor of two per generation discount rate. That is you're trading off doing stuff for yourself now and doing stuff for your kids in a generation. That's only for sexually reproducing creatures for which their kids only have half their genes. Asexual reproducing creatures, their kids have all their genes, so they should not be discounting the future for them at all.</p><p>Theo Jaffee (1:08:34)</p><p>So in a couple of critiques that were written about Age of Em by Brian Caplan and Scott Alexander, both of the main arguments are kind of that Em's will be selected more for docility. Like they won't evolve that much cultural drift from humans because like we will select against that as the main customers of the Em's and the people who are keeping their infrastructure alive in the physical world. So what do you think about that?</p><p>Robin Hanson (1:08:39)</p><p>beside the point.</p><p>Yes, but it's useful.</p><p>Well, so first of all, the classic story about humans is that we domesticated ourselves. So there are a number of features that domesticated species have that distinguish them from other species.</p><p>And then humans have these features. So when we domesticate horses and cows and pigs and, you know, dogs, etc. They differ in predictable ways because of that domestication. And humans differ in exactly those ways as well. So we have domesticated ourselves. So we are, in fact, more docile than other animals because we have become domesticated. This already happened. Long ago, really. Now...</p><p>For a long time, a fear about the future has always been that somehow changes will enslave us, or worse, make us enslaved and not even know or care. This is one of the most robust dystopian visions of the future that anybody ever has. Really. Overwhelming. Like at the beginning of the Industrial Revolution.</p><p>say around 1900 or before that, when people were trying to create dystopian visions of the future of industry, these were their main complaints. Exactly the complaints that humans would be enslaved or so domesticated we wouldn't even notice we were enslaved. So we clearly just have a very strong sensitivity to this possibility and it goes along with our strong egalitarian norms that humans are distinct from other primates exactly from having very strong egalitarian norms wherein we resisted anyone being the head of the tribe.</p><p>and putting themselves up there, and human foragers are famously egalitarian, famously strongly resistant to any individual humans dominating the rest. So this is just something we humans are primed to be afraid of and to be outraged by the possibility of, even though of course it already happened. We already are self -domesticated. So...</p><p>I don't think there's any particular reason to think that this is a thing that will happen in Age of Em more than any other future scenario, other than the fact people are just primed to be afraid of it because this is just a generic strong fear about any future scenario.</p><p>Theo Jaffee (1:11:24)</p><p>Hmm. So on the topic of rationalism and AI, how exactly did you meet and get involved and start blogging with Eliezer Yudkowsky? How did you get involved in the rationality community in the first place?</p><p>Robin Hanson (1:11:38)</p><p>Well, I had a blog called Overcoming Bias and I invited some others to participate and share the blog with me, including Eli Izer and Nick Bostrom and some other, Hal Finney, including who is a best guess founder of Bitcoin. And they did start blogging with me and we discussed AI risk on the blog because that was a big issue for Eli Izer then.</p><p>The blog was called Overcoming Bias, so it had a rationalist sort of theme right there, and we discussed some issues of rationality on the blog. And then a few years later, Eliezer decided he wanted to make his own blog called Less Wrong, based on some ideas that he'd have many participants contributing and have a karma system to rank them so that people could see what was quality. He did in fact make that blog. I helped him in the sense of allowing hard links from my blog to his blog in order to</p><p>raised the Google rank of his blog using the Google rank of mine at the time. And he</p><p>He created this community where people were talking about rationalist issues. They are less wrong. And then the karma system eventually seemed to have gone wrong, but still for a while people liked it and they were discussing things there. And so the rationalist community sort of started in less wrong there, but it grew to other places and all along, you know, Eliezer was using this community to push AI risk.</p><p>Theo Jaffee (1:13:09)</p><p>Interesting. So let's switch topics a little and talk about the elephant in the brain, which is, I think, my favorite thing that you've written. My friend who introduced me to your ideas, his favorite thing you've written. And for the audience, it's a book about how humans have hidden motives that we don't naturally reach for while explaining our actions. So.</p><p>Robin Hanson (1:13:22)</p><p>death rates.</p><p>Theo Jaffee (1:13:36)</p><p>One thing that's kind of a meme now in young people's circles is girls getting the ick. So I don't know if you've heard of this.</p><p>Robin Hanson (1:13:45)</p><p>the ick regarding a man for sick in particular.</p><p>Theo Jaffee (1:13:47)</p><p>Yeah, yeah. So it's like there are these, you know, TikToks, Instagram reels of like a guy doing something slightly awkward. Like, you know, it could be literally anything. The way he jumps into a pool, the way he, you know, reaches up to open a cabinet or something. And then, um, girls are like, Ooh, I just got the ick from that. So what do you, what do you think is the elephant in the brain explanation if there is one for that?</p><p>Robin Hanson (1:13:59)</p><p>Okay. Right.</p><p>Right?</p><p>Okay.</p><p>Well, the usual explanation, I would think, is pretty close to the surface, is that women are very selective, or they see themselves as very selective among men. They are willing to mate with any woman, man who asks. It's very important to them to be selective, and so they are trying to be very selective.</p><p>And so they lean into intuitive reactions that are selection reactions, in particular rejections, in ways that the rest of society finds impolite or rude in other contexts. So we are again relatively egalitarian, and so we try not to overtly reject each other, or we make excuses for it. We certainly don't like to go out of our way to insult people and to put them down. That seems rude and...</p><p>arrogant, and that's true in most of the rest of society except apparently we make an exception for women rejecting men. Apparently not only is it okay for women to eagerly reject men, but they can insult them in the process and put them down and bond together over their...</p><p>you know, rejection of men and basically declaring men are unequal and men are should be unequal, the lower half of the men distribution is morally unworthy and ick basically and maybe should not exist. That's apparently a kind of inequality attitude that's okay in our world although we have many other aversions to inequality talk. We could discuss that more.</p><p>Theo Jaffee (1:15:53)</p><p>Do you think this, do you think that women's propensity to reject men is adaptive in the sense that, you know, they will select better mates or maladaptive in the sense that there will be like fewer overall children?</p><p>Robin Hanson (1:16:00)</p><p>It's.</p><p>Well, obviously some degree of holding standards is adaptive. And of course, there's going to be the additional...</p><p>value of signaling that you have standards. So people might be hold too high of standards exactly to show off that they have standards. But if people were still settling down and picking someone soon, we wouldn't have fertility problems. Those come from people spending decades being picky. When they finally pick somebody, it's kind of too late. That's more the problem. It's the delay in picking and not so much the high standards.</p><p>And then that interacts to some degree, I guess, is that people hold very high standards. And they say, nobody around me now could possibly meet my standards. Somebody maybe later will.</p><p>Theo Jaffee (1:16:45)</p><p>Also,</p><p>So you also talk about how a lot of the explanations for these behaviors come from, like, forager times. Do you think humans have evolved genetically at all in the last, like, 10 ,000 years, and if so, how?</p><p>Robin Hanson (1:17:11)</p><p>Well, there's certainly data about, say, milk processing.</p><p>You know, so some people can't process milk and others can and that certainly seems to have spread in the last 10 ,000 years. So we have a few pieces of evidence about that. We also have the general data that we expect rates of DNA evolution to be proportional to the size of the population. So the prediction is as the population is getting larger, selection is beginning faster, although it's been happening over a shorter time period. So, you know, you have to take that in. But presumably selection is much faster lately.</p><p>than it was before because of the larger population, and we see some specific kinds of selection. But honestly, the usual story, which seems right to me, is that there's been relatively little DNA evolution, but a lot of cultural evolution. Cultural evolution is overwhelmingly where human evolution has been for a while.</p><p>a lot of cultural evolution, but then the challenge is to think more carefully about what exactly that is and how it works. And I think I didn't think very much about it until a few months ago, and so I realized that most people kind of think they understand culture, but they haven't really thought much about it, and there's a lot more that they should learn.</p><p>Theo Jaffee (1:18:28)</p><p>Like what? What would be the most important things to learn about this?</p><p>Robin Hanson (1:18:32)</p><p>Well, first is that it's this autonomous process that culture tells you what you should value and what you should do, and you just accept that. And then it changes and you just accept that. And there's nobody driving this train. Key features of this culture are what counts as status, what counts as prestige, because you copy the behavior of high prestigious people.</p><p>And so the definition of prestige can really make a big difference to what direction culture goes. If prestigious people are the people who have the most years of education, then that'll encourage people to get a lot of years of education, maybe even more than are useful for other reasons. If prestige goes with individual wealth, then people will focus on accumulating individual wealth and passing on wealth to a smaller number of kids so that they can be individually wealthier.</p><p>a lot depends on what we decide counts as prestige. And nobody is driving that train. Over time, what counts as prestige has changed. And we didn't vote on it, and we didn't analyze it and decide it together, and there wasn't a process that was just anticipating the consequences of this and figuring out what was best for us. Culture is a very crude process. Like, if you think about your organs and your body and your...</p><p>Body reactions, you change in time of day and time of year. You have all these complicated ways in which your body is primed to change its behavior in different contexts, because your bodies like yours have been evolving for many millions, even billions of years, but culture has only been evolving for a few thousands of years, and it doesn't have all those complicated conditional processes to, you know, adjust for context. It's a much cruder thing.</p><p>You should not think it's this very subtle, well -worked -out, systematic thing that will carefully adapt to all sorts of details.</p><p>Theo Jaffee (1:20:30)</p><p>So in terms of the like relative decline in wokeness and similar, you know, if you can call it wokeness, call it progressivism, leftism, whatever, specifically in tech circles in the last couple years as a result of Elon Musk buying Twitter, is that like a genuine cultural change or is that simply people following the behavior of a prestigious individual?</p><p>Robin Hanson (1:20:51)</p><p>Well, that's what cultural change is. There isn't something else. There isn't a whole other thing. That's what it is. It's whatever the prestigious people are doing is culture. There is no other source.</p><p>Theo Jaffee (1:20:53)</p><p>Oh, yeah, that makes sense.</p><p>Well, which way does a causation go? Does, you know, what's prestigious?</p><p>Robin Hanson (1:21:08)</p><p>both. That is, if you aren't doing what culture says, you look less prestigious. And whatever the prestigious people are doing, that's what culture is.</p><p>Theo Jaffee (1:21:21)</p><p>So then how does culture change over time? If it's just, you know, if culture and prestige.</p><p>Robin Hanson (1:21:27)</p><p>Well, the biggest event in the 20th century that influenced culture was World War II. And both the rise of Hitler in the first place and his fall and losing the war were both pretty unpredictable. But they still had enormous consequences for culture. So you can see how things that were at the center of remaking world culture were the result of conflicts.</p><p>that it was hard to predict who would win. So that's also been true for the major changes in culture over the last half century in our world. Ex ante, they were hard to predict. It was hard to know who was going to win. Later on, we tell ourselves the story that the winner was inevitable. And we should have known all along that they were going to be the winner and we should have accepted. But you couldn't really tell early on who was going to win.</p><p>Theo Jaffee (1:22:22)</p><p>Do you think World War II was a bigger cultural change than, like, the fall of monarchy at the end of World War I globally, or, like, the fall of communism globally in the late 80s, early 90s?</p><p>Robin Hanson (1:22:33)</p><p>I don't know, I mean they were of a similar magnitude, so I don't that much care to say which one is exactly the bigger. They were big.</p><p>they were not that predictable.</p><p>Theo Jaffee (1:22:49)</p><p>So another elephant in the brain topic is medicine. This is maybe the most famous thing that's come out of it. So why is health care spending both so high and life expectancy so low in the US relative to other countries? Like, do we have a worse signaling problem or is it bad institutions?</p><p>Robin Hanson (1:23:05)</p><p>Right. So just to discuss to the audience, the elephant in the brain is mostly about why is medicine weird compared to what I think it's not really focused on the US versus other places. So most of the book, we're just looking at average typical human behavior and trying to understand that. I'm not very interested in the variations across space and time. If you don't even understand the average, you have no business trying to figure out the variations because they're just harder to figure out.</p><p>So the average medicine is basically not very useful. And so we try to explain our fascination with and obsession with medicine and the fact that it seems to do very little on the margin by saying that medicine is something we use to show that we care. And so we use medicine to...</p><p>show people that even though they're sick, we might betray them, but we won't leave them. We are going to stay with them and take care of them, and that's very reassuring, and that's the main function of medicine. But that doesn't explain why different times and places might do things different. So if we want to say why is medicine different in the US in the 20th century, say, I think we have to go to</p><p>It's one of the key stories the US tells itself about why the world should be grateful to it. So the world has a few stories about why the US has saved the world in many cases and they should love us. So World War I, World Wars are an example of that. And then the Cold War are also examples of how we say we saved the world from very dire problems and they should all be grateful to us. And then medicine is another one because we say basically we gave modern medicine to the world. We point to key adva -</p><p>in medicine happening in the US and the world copying them and being all the better for it. So when we have a thing that we are proud of as being the source of and spreading it to the world, then we tend to double down on it. So we double down on military spending even though we have very few, you know.</p><p>neighbors nearby who might cause us any problems. We still spend enormous amounts on the military, reaffirming our story of how we saved the world from the Nazis and the communists. We also like to tell the world that somehow we gave them civil rights and legal sort of legal procedure and we'd like to double down on that because that's our story of how that came from us because we went wild with that.</p><p>soon after World War II, and we also say that we gave the world medicine, and so we continue to spend a lot of it in part reaffirming how wonderful it is, which reaffirms how wonderful we are for having given it to the world.</p><p>Theo Jaffee (1:25:54)</p><p>Do you think that academia right now in 2024 is the best institution that we have for aggregating information and seeking the truth? Like, do you think there are existing other institutions like, you know, the blogosphere that might be better? Is academia still a P?</p><p>Robin Hanson (1:26:08)</p><p>was.</p><p>I mean, certainly not the only institution we use. So for different kinds of information, we use just a different institution to aggregate information. So clearly, for example, just ordinary business practice is one institution for aggregating information. Most business practice isn't mediated by academia. It's mediated by prior nearby business practice. So business people are looking around to see what other people are doing and copying them, trying them out. And we aggregate information about what businesses should do through the competitive market.</p><p>with businesses looking over their shoulders, what everybody else is doing. That's the way we aggregate information about business. The way we aggregate information about say marriage or families isn't much to do with academia either. We in a world where other people around us are having marriage practices, family practices, and we copy what our parents did, what people around us do, et cetera. That's how we're aggregating information about those topics.</p><p>Theo Jaffee (1:27:03)</p><p>What about physics? You know, that's mostly like academic, theoretical, and experimental.</p><p>Robin Hanson (1:27:05)</p><p>Well, most practical physics isn't being aggregated by academic physicists either. So everybody should know that like a lot of famous physics inventions were first happened practically and then theorists came up with explanations, say thermodynamics, laws of entropy, et cetera. They were first practiced. And so an awful lot of physics practices first happens in industry and then people doing things and then...</p><p>academic physicists try to make sense of it and synthesize it, you know, systematize it. But theoretical physics is more accepted from academia, in part because nobody cares and it doesn't matter.</p><p>So we mostly let academia aggregate information about abstract stuff that nobody else cares about. When the rest of us care, we are much less willing to listen to academia. And so often, academia then just tells us whatever we want to hear. So there's an old saying that a leader is someone who figures out which way the crowd is going and gets out in front.</p><p>Academics often do that about, you know, practical ways to live. People just have a lot of ways they think we should live and then academics typically find a way to support that. And this is also true in government policy. Mostly the government doesn't listen to academic policy. Mostly the government decides on policy in some other way and then finds academics who support whatever they're saying to justify it and they find them and then those people look influential, but they're much actually much less the fact that somebody was</p><p>there to be found to support it for any of you it matters. If they couldn't find anybody to support it they would have been less likely to do it, but as long as they can find some support that's enough.</p><p>Theo Jaffee (1:28:52)</p><p>So do you think academia has always kind of been bad like this or has it gotten worse over time? Did it used to be much better?</p><p>Robin Hanson (1:29:00)</p><p>I don't know, but certainly peer review is something that people today think of as essential to academia, but a century ago it just wasn't a thing so much. Like in 1900 there was hardly any peer review, that wasn't how things happened. You know, journal editors just had a lot of discretion and decided what they liked.</p><p>And so for most of the famous history of science up until the 20th century, that wasn't peer review. So I think people are not that sure what academic institutions are exactly, because they've changed a lot over the years. And certainly often you just had a community of people and a small enough community of elites that they could just manage each other informally. And sometimes that works very well. And as the community gets larger, it fragments and it just can't manage itself that way. And so,</p><p>today is just really much larger than it ever was and so there is no small community of people who run the whole thing. It's very decentralized.</p><p>Theo Jaffee (1:30:00)</p><p>Do you think peer review actually matters or is it another kind of like elephant in the brain signaling thing that we do kind of to show that we care about truth?</p><p>Robin Hanson (1:30:12)</p><p>I've spent a lot of time thinking about alternative academic institutions and about the institutions we're in, and so I'm confident in saying that the institutions we are in are not remotely optimal for the purpose of producing intellectual progress. They mostly people doing things to win their local games, but...</p><p>Theo Jaffee (1:30:28)</p><p>Hehehe</p><p>Robin Hanson (1:30:35)</p><p>Still, there are many kinds of abstract topics where the best thought about them will be found in academia, nevertheless, even if this isn't optimized for that. I do think we know much better ways we could organize academia in terms of promoting progress, but there isn't much of a constituency for that, so it's not going to happen anytime soon.</p><p>I have my own particular proposals for what you would try to do to make things different, but again, the limit is making anybody care. There's very little interest in improving academia, even among academics. Mostly people want to win the academic game, and that's what they focus on doing.</p><p>Theo Jaffee (1:31:13)</p><p>So you've written very extensively about alternative institutions, but what do you think some examples are of institutions that are actually good?</p><p>Robin Hanson (1:31:21)</p><p>existing ones? Well, in some sense, existing ones are all beating out the immediate alternatives that they could be easily displaced by, so the existence of institutions tells you something about their staying power and that they are, you know, filling a niche and continuing to fill the niche. So you have to give them all credit for that.</p><p>Theo Jaffee (1:31:22)</p><p>Existing ones, yes.</p><p>Robin Hanson (1:31:43)</p><p>You know, they are all sitting next to a space of alternatives that people often do try to replace them with. And the existing ones are keeping them at bay. So they got to get credit for that. All of them. Now some of them are keeping others at bay by, say, locking down the control of the national government, say, and making sure that voters never think to reject them or something. Others have a wider range of competition they're pushing away. But...</p><p>I mean, academia is relatively decentralized, and so academia does succeed so far in pushing away attempts of other groups of people, say bloggers, to gain their position of respect. They have so far won against those challengers. Decentralized.</p><p>Theo Jaffee (1:32:28)</p><p>Did you say centralized or decentralized?</p><p>Yeah, it is, you know, I... It still is crazy to me how powerful human culture is, where, you know, a thousand different universities can have basically the exact same culture, you know, scattered all over the country and the world. Like, one of the things I noticed when I go to University of Florida in Gainesville, like a year ago, I took my cousin to visit Arizona State University on the other side of the country. There's no affiliation at all between UF and ASU.</p><p>And I was like, Oh my God, this is like the same thing. It's got the same kinds of buildings. It's got the same companies that are contracted to it. It's got, you know, the same kinds of tour groups, the same kinds of professors and classes and courses.</p><p>Robin Hanson (1:33:07)</p><p>Right?</p><p>right? Right, so academia has been a unified culture for quite a while now worldwide and so there's a sense in which it isn't allowing much diversity of the sort that you know the center doesn't like. So in general if we had a bunch of different kinds of academia in different places doing it different we might have more competition among them.</p><p>see which ways went out, but when there's centers of academia enough to sort of enforce standards on everyone, then that's how it works. So like in most disciplines, there's a small number of people who are at the top of the discipline, and they control the major funders and the major journals and the major jobs, and their shared opinion about what that discipline should be doing dominates worldwide.</p><p>So the different disciplines can compete with each other to some extent, but they mostly stay off each other's territory. Well, the Center is the most prestigious people.</p><p>Theo Jaffee (1:34:19)</p><p>Well, what is the center of academia, if academia is so decentralized?</p><p>Robin Hanson (1:34:25)</p><p>and their opinions about what, again, who should get what jobs, who should get what grants, who should get what publication journal slots. They decide those things. And those things, you know, basically the most prestigious people decide the next most prestigious people, et cetera, all the way down the ladder of prestige. So when prestige is very important, the top prestigious people basically have a central power even without any other official, you know, roles of power.</p><p>All they really need to do is declare some things prestigious and other things not. And that works all the way down.</p><p>So I mean, quite commonly, most disciplines, there's a set of relatively established methods and.</p><p>claims in the discipline and then the only people really allowed to challenge them are the most prestigious people. So if somebody much lower prestige challenges those things they're routinely slapped down and rejected because it's not their place to do such things. Lower level people are supposed to question smaller things, do supporting work to the most prestigious people. The most prestigious might be allowed to change the fundamental assumptions about methods or conclusions in the field, but low status people that's just not their place.</p><p>Theo Jaffee (1:35:44)</p><p>Well what about kind of outsiders academia who have tried to change it who are pretty prestigious like Peter Thiel and more recently Bill Acme -</p><p>Robin Hanson (1:35:53)</p><p>I know of no academic field where Peter Thiel's opinions carry much weight.</p><p>Theo Jaffee (1:35:59)</p><p>Yeah, but why? I mean, he went to, you know, the most prestigious university, Stanford, one of them, and...</p><p>Robin Hanson (1:36:04)</p><p>prestige is particular, so being a prestigious sociologist doesn't carry much weight in economics or physics, right? Yeah, but that's a different prestige letter, and so it counts for other kinds of things, right? So we trust the most prestigious doctors to decide who doctors can be, but you know, we don't trust the president to decide that, even if the president's very prestigious, right? So we just have...</p><p>Theo Jaffee (1:36:12)</p><p>and he's a billionaire.</p><p>Robin Hanson (1:36:33)</p><p>ideas about what kind there are different kinds of prestige and what scope they have and you have to use prestige within your area.</p><p>Theo Jaffee (1:36:43)</p><p>And I think Peter Thiel has had somewhat of an impact in like, if not coursework, the culture of like CS programs at schools, especially elite schools. I think the startup vibes of MIT and Stanford and probably most universities are different from what they used to be because of in part Peter Thiel's ideas.</p><p>Robin Hanson (1:37:08)</p><p>I'm, you know, if that's true, I'm happy. I mean, I'm not in those worlds, so I don't know those, but yes, like the startup world is another world and academia is a different world, but they overlap somewhat and they come somewhat compete for outside prestige. And so often they are using each other for their various purposes. I certainly know that startup companies often ally with academics in order to add prestige to their startup. Presumably vice versa. Academics ally with...</p><p>Theo Jaffee (1:37:32)</p><p>So.</p><p>Robin Hanson (1:37:35)</p><p>startup people to add prestige to their academic things.</p><p>Theo Jaffee (1:37:40)</p><p>So pretty recently you wrote an article that I believe is called Why Crypto? Where you, yeah, you talk about your positions on Bitcoin and the cryptocurrency industry, which by the way, for the audience, Robin was like remarkably early on this. He was friends with Hal Finney, I believe before Bitcoin was actually a thing.</p><p>Robin Hanson (1:37:45)</p><p>Yes.</p><p>Oh yeah, long ago. But I wasn't really into the crypto thing then, but... And I, you know...</p><p>Theo Jaffee (1:38:03)</p><p>So.</p><p>To not misrepresent your position on this, you believe that Bitcoin is mostly speculative?</p><p>Robin Hanson (1:38:16)</p><p>Most of the value of crypto was realized in Bitcoin nearly early on in the process and since on most of the activity to create other coins and other things you could use the coins for hasn't really panned out much. They could but it hasn't so far. But...</p><p>Theo Jaffee (1:38:35)</p><p>Well, what do you mean by most of the value is realized early on?</p><p>Robin Hanson (1:38:39)</p><p>Well, that is having a crypto coin, a Bitcoin, having it as a store value, having it as a thing you can use to make some trades with, that was a new thing and that was added. That was a value added by the Bitcoin early on. Since then, people have made lots of other coins to do lots of other things, but they haven't actually achieved much value from those other things.</p><p>there is the value of just having the coin and being able to use it as a store value or as a money to trade. And that's a value that's continued from the beginning once people had Bitcoin.</p><p>Theo Jaffee (1:39:02)</p><p>Okay.</p><p>So the reason that the price of Bitcoin has gone up from, you know, one cent per Bitcoin to $70 ,000 per Bitcoin, how much of that is just pure speculation then?</p><p>Robin Hanson (1:39:25)</p><p>Most of it. Most all of it, yes. But.</p><p>Theo Jaffee (1:39:29)</p><p>How is it possible that if it's just pure speculation that the market can remain rational about that for so long?</p><p>Robin Hanson (1:39:35)</p><p>Well, that's not necessarily irrational. That is, there is no particular rational value for Bitcoin. It can be all sorts of different things.</p><p>When worlds of speculation are created, they just have internal dynamics that can take them all sorts of different directions, just like with culture. Culture can just go a lot of different directions, and so can the price of Bitcoin go a lot of different directions. Clearly, in some sense, ex -SANI people should have been surprised to see it go so high so fast, or they would have driven up the price initially. So, you know, this had to have been a surprise.</p><p>but a lot of directions of markets and culture are surprises. But the point of that post is to say all of the speculation had an interesting effect, which is some people made a lot of money in the speculation. And then as commonly happens in the world, when somebody makes a lot of money, they need a story to tell themselves about why they were justified in making that money and what...</p><p>you know, what would justify their use of it? People mostly feel a little embarrassed to have a lot of money and they need a story to legitimize. They're having it and using it. And different kinds of ways to make money produce different stories like that and so they produce different kinds of rich people who live their lives differently. And so this kind of way to make money produced a different kind of rich person.</p><p>Theo Jaffee (1:41:03)</p><p>Hmm.</p><p>Robin Hanson (1:41:12)</p><p>And that's interesting because they do more interesting things with the money.</p><p>Theo Jaffee (1:41:19)</p><p>That kind of reminds me of Warren Buffett's essay, The Super Investors of Graham and Doddsville, where he outlines a scenario where like everyone in the US flips a coin every day and the people who get heads transfer, you know, some amount of money to the people who get tails or whatever. And that process continues over and over again until only one person is left with all the money. And so, like, by the nth day, there would only be like, uh,</p><p>Robin Hanson (1:41:41)</p><p>Right?</p><p>Theo Jaffee (1:41:48)</p><p>10 ,000 people left and they would each have like a million dollars or something I forgot the exact numbers But it's like and then they would go on to write books about how I became a millionaire in Just seven days with only 30 seconds of work a day You</p><p>Robin Hanson (1:41:51)</p><p>But...</p><p>Yeah.</p><p>Right now, different worlds have different amounts of financial speculation and different amounts of the inequality produced by this process. So, you know, ordinary business investment produces inequality this way, but at the time scales at which businesses rise and fall. And so...</p><p>People, you know, wealth becomes more unequal as a natural result of businesses winning and losing, but it's slow because businesses win and lose slowly. When people more directly do financial speculation, then that process can happen faster, but most people don't do that much financial speculation.</p><p>so that limits it, but there are times when it becomes in fashion and lots of people do financial speculation and then more inequality is produced in those short periods, but so far those have been rare periods. Crypto has just been one of those rare periods with a rare subset of people who then went wild speculating and therefore produced enormous amounts of inequality in a short time.</p><p>we should expect that sort of thing to continue over and over again through history, and each person should perhaps wonder how eager they should be to speculate, because, you know, they will be participating in a process that produces inequality, and to what extent do they want to be part of that? Now, sometimes these processes produce other things the world values besides the inequality, like business competition produces industry and a...</p><p>growing economy that we all benefit from, and so it seems like we should all tolerate a substantial amount of inequality produced by business competition because that's what makes our world rich. Other times there are somewhat separated worlds of speculation like crypto, which looks like we could counterfactually imagine if that all went away. The rest of us wouldn't mind so much. We might wonder, should we allow those little worlds to happen?</p><p>And that's the kind of question people have about that and about, say, stock market speculation, say, the 19 up to the 1929 crash or the 2008 crash, right? There's a burst of speculation that get bigger before those crashes. And people are often critical about whether that thing should have been allowed or whether some people are exploiting others in that process. And that's all perfectly reasonable to discuss. But I just thought it's interesting to notice that.</p><p>In crypto, enormous amounts of inequality created, the richest of them got a lot, others lost a lot, and the winners needed to tell themselves a story about why they deserve their money, and that story had to fit with how they use the money. And so, in crypto, the story they tell themselves is that they were pursuing big -idea innovations about how the world could be substantially different, if only certain crypto.</p><p>coins and processes were realized. And then when they're rich, they pursue that vision by spending some of their money on various visions that could change the world a lot. And that's a positive outcome in my mind of crypto because there are in fact a lot of opportunities for ways to invest to make the world much better.</p><p>Theo Jaffee (1:45:15)</p><p>Why was crypto speculated in so heavily and not any other asset class? Is it just kind of pure unpredictable randomness?</p><p>Robin Hanson (1:45:23)</p><p>Well, it was, I mean, it is literally speculation that is crypto is electronic money. Electronic money is literally a thing you can own and that its value can go up and down. So it is, you don't have to do any indirect thing to take a thing that's happening and make something else, something you've speculated about it. Most of the rest of the world, you have to work to make that connection happen, right?</p><p>If you want to speculate about, say, pickleball, you can go play pickleball, or you can buy some pickleball rackets or quarts, but if you want to speculate on it, you'll need some business that's invested in pickleball, and there aren't that many of them, and you have to figure out what to do. But crypto, all the things were things you could buy into, and there were thousands of them, and each of them could go up or down, so... And there was this story that...</p><p>It would be huge, of course, more plausible than pickleball. You know, it's just hard to imagine how big pickleball could ever get. So it's hard really to bet on pickleball. Imagine it'll go up by a factor of 100 in value. But with these coins, you had more of a story of how they could be huge in the future. You know, the entire finance industry could be displaced by the crypto world. And so that, you know, fuels speculation more.</p><p>Theo Jaffee (1:46:44)</p><p>So you've also written about a form of governance called demarchy, which is where you have a network of decision -making groups where the membership of each body is randomly selected from those who volunteer to be on it. But wouldn't that prevent natural elites or good leaders from being able to exert power if it's just randomly selected from all those who volunteer?</p><p>Robin Hanson (1:47:05)</p><p>Sure, I'm not a big fan necessarily of demarchy, but I am just a fan of people collecting big ideas for how things could be different. I mean, I just want people to know about them and to think about them because we just don't think enough about how we could change things.</p><p>Theo Jaffee (1:47:21)</p><p>Do you think the state of homeowners associations in America is kind of counter evidence to demarchy? Cause you know, it's pretty similar. I would say you have a decision -making group that's given a surprising amount of power over neighborhoods and the membership of HOA is, is kind of elected, but usually it's not super competitive. So it's mainly people who volunteer to be on it and they seem to be very dysfunctional.</p><p>Robin Hanson (1:47:42)</p><p>Right? Right?</p><p>again, but they're beating out competitors for that slot. So, you know, the main thing is that ordinary homeowners feel a little awkward about other sorts of institutions you might put in that slot again, and so they're very egalitarian, democratic instincts are pushing them to favor that institutional form there, even if we might look and think it's kind of inefficient. So...</p><p>Again, when something exists and there's alternatives to it that it keeps pushing away, you gotta give it some credit and wonder, how's that working? How does it do that?</p><p>Theo Jaffee (1:48:22)</p><p>What if it's just a coordination problem? Like what if everyone knew that they would be better off if they could say team up to abolish the HUA but...</p><p>Robin Hanson (1:48:28)</p><p>I mean, any one homeowners association in the country could decide to change how it's incorporated and make new rules and do it different. It's not that hard.</p><p>And new, I mean new, new homeowners, new sets of homes could just follow different rules. So clearly, people aren't very inclined to make new sets of homes near each other with different sets of rules. Something must be pushing them to do all these same old rules.</p><p>Theo Jaffee (1:49:02)</p><p>just the same kind of convergent culture.</p><p>Robin Hanson (1:49:06)</p><p>Right, for some reason, they must have done focus groups or something. Homeowners, when they hear about these alternative rules, they're not that eager for them. Maybe they sound suspicious. Maybe they sound unhomy. I don't know, but that would be interesting. I'd like to know what happens when you ask homeowners, hey, how about we have a different set of rules for doing your homeowners association? What do they say?</p><p>Theo Jaffee (1:49:29)</p><p>Often they don't care.</p><p>Robin Hanson (1:49:32)</p><p>Well then, if for example, the developer thought that the home, somehow it would be run better with a different set of rules.</p><p>why don't they make a different set of rules? Because, presumably, there's at least a weak effect of if it's run better, then reputation comes back to people about that brand of home. I want to buy that brand of home because I know people living over there in that brand of home and that seems to go pretty well there. So you would think they'd have some reputation incentives to produce homeowners associations that are well run.</p><p>Theo Jaffee (1:50:03)</p><p>And then, of course, the other very famous system of government that you've written about is called futarchy, where, yeah, invented. Where, for the audience, it's where a policy would become a law when a prediction market would clearly estimate that they would increase national welfare and that national welfare measure would be defined by elected representatives. And of course, you were also very early on the idea of prediction markets. I...</p><p>Robin Hanson (1:50:11)</p><p>That's my invention.</p><p>Theo Jaffee (1:50:33)</p><p>believe you were one of the inventors.</p><p>Robin Hanson (1:50:35)</p><p>Well, it's an ancient idea, so not really something anyone can invent, but I was one of the first advocates for using prediction markets much more widely than they are used today. That is, the same mechanism can be used for other purposes. Previously, the mechanism was used for people to enjoy betting, and then the customer was the better. I was...</p><p>pointing people toward the opportunity for someone who wants the answer to a question to subsidize a betting market to get the answer. And that's a possibility that still hasn't been realized as widely as it could.</p><p>Theo Jaffee (1:51:10)</p><p>Hmm. So, how would the measure of natural welfare be immune from Goodheart's Law? Which is, for the audience, when a measure becomes a target, it seems to be a good measure because people will try to game it.</p><p>Robin Hanson (1:51:23)</p><p>I mean, clearly, Goodhart's Law can't be true that way, because our world is full of measures. There's measures all over the place that we're using all the time, and still we want to keep using them. So it's just not true that measures lose all their value when you use them. Some do in some contexts, so you might want to understand what are those scenarios, but most don't. So, for example, we don't want to die.</p><p>Theo Jaffee (1:51:38)</p><p>Not all, but some.</p><p>Robin Hanson (1:51:49)</p><p>So, lifespans are a measure of health, and we would like processes in places, etc., that promote longer lifespans. And our using lifespans as a measure hasn't destroyed the value of lifespans as a measure. It's still a pretty good measure, even though many institutional incentives are tied to it, or even wealth. People want to get rich, and the fact that people want to get rich doesn't mean that...</p><p>It's not valuable to see how rich you are.</p><p>Theo Jaffee (1:52:20)</p><p>Clearly it's not like as good of a measure as it could be because of, you know, the whole concept of quality adjusted life years. It might be better to live 70 very long, very healthy years where you die quickly at the end than live to be 80, but the last 20 years of that are just like miserable, slow decline.</p><p>Robin Hanson (1:52:32)</p><p>Right?</p><p>Sure. And in fact, we often do use quality adjusted life years as a measure. That is, in fact, more common as a measure among the people who have such measures. And that hasn't destroyed the value of such measures. I mean, there was actually...</p><p>Interesting or say the US Congress I think at some point passed a law that said you shouldn't be using quality adjusted life years as a measure because if you just use life years then elderly people come out looking better or even just death rates if you ignore how old somebody is then you're ignoring how many years they have left and people who want to push policies to help old people prefer that measure they just say let's just look at what affects mortality ignoring life years and or quality adjusted</p><p>life use as well. So there's often politics associated with which metrics get used, but that doesn't mean metrics can't be used. There's a nice book called How to Measure Anything. I recommend it. It says measuring things is, you know, a difficult thing that you can work at and get better at, and so you can do it for most anything if you work hard enough at it, and I think he's right. Measuring is work, and it returns gains to effort.</p><p>Theo Jaffee (1:53:56)</p><p>What do you think about applying prediction markets to dating apps? Like the state of dating apps right now is really bad. Really bad.</p><p>Robin Hanson (1:54:05)</p><p>Well, except they are beating the competition. So, okay. I mean, the question is, do people... So, for example, I actually think the old institution of a matchmaker was an effective institution. Mastermakers did actually learn about other clients and what they might like in putting them together. And matchmaking would be an effective institution today. And people just don't like the idea.</p><p>Theo Jaffee (1:54:08)</p><p>But they're reading the competition, but maybe that's just because they haven't -</p><p>Robin Hanson (1:54:30)</p><p>So it's more people's personal aversion to the very idea of a matchmaker. That's the reason why I'm using matchmakers, not that they couldn't make good matches. So a lot of this has to do with people's sort of ideology of dating and what's supposed to go into it. I mean, for example, parents being more involved in matchmaking made a lot of sense and in fact seems to have gone better for people. Parents know a lot about their kids and they can often make contacts with other parents in ways that people, kids can't do themselves.</p><p>parents helping with matchmaking was a big advantage. And we've rejected that too, not because it didn't work, but because we just don't like the idea. So I think prediction markets could probably actually do better also, but they would face the similar objections to parents and matchmakers, which is we don't like that. So the question is, can you do prediction markets in a way that people won't react that way to?</p><p>Theo Jaffee (1:55:25)</p><p>Hmm. Yeah, like there's a lot of debate. I... You're probably pretty familiar with manifold markets and manifold love. Yeah, you went to last year's manifest, right? I'll probably be at this year's one, by the way. Awesome. So there's a lot of like internal manifold debate about manifold love. Manifold love is, you know, a prediction market applied to the idea of dating apps where you have people bet on...</p><p>Robin Hanson (1:55:35)</p><p>Right?</p><p>Right?</p><p>Okay, well then I'll see you there.</p><p>Right?</p><p>Theo Jaffee (1:55:54)</p><p>like which couples will work. And some people at manifold think it's great idea with lots of potential and others think no, it'll never work.</p><p>Robin Hanson (1:55:55)</p><p>Right?</p><p>Theo Jaffee (1:56:04)</p><p>So do you think maybe both at the same time that it is a great idea with a lot of potential, but it won't work because people won't want it to?</p><p>Robin Hanson (1:56:12)</p><p>Prediction markets in general are, they're a general technology with a enormously wide range of potential application. So I don't think it's worth having strong opinions about which things will work. I think it's better to have opinions about what are the good things to try first. So it's more important to have good heuristics about where it's cheap to try things and where there's big value to be gained if you do try.</p><p>So I would think, you know, dating, there's certainly huge value out there to be gained, but...</p><p>There's not so much value to be arbitraged in the sense that it's hard if you see two people who should be together to like gain the profit from convincing them to go together. There are many other contexts in the world where when things aren't efficient, there's ways to make money from that. And I think we should focus first on those kinds of applications because that will just attract a lot more energy and attention to trying to make that extra money. So.</p><p>So it's of substantial value, but it faces some cultural obstacles and it's hard to just spread on the basis of its efficiency because, again, there's not profits to be made. So I would try to focus people's attention more on places where if you adopt something and it works better, you can make money because stuff spreads faster there.</p><p>Theo Jaffee (1:57:38)</p><p>Hmm. That makes sense. So you have like a very unique outlook in the sense that it's so broad and that you've drawn from so many different subjects. So who else do you think are some of the broadest thinkers in the world?</p><p>Robin Hanson (1:57:56)</p><p>I would have to go research to figure that out, I guess. I studied history of science long ago, and one of the main things I learned is that when scientists have stories about their history, it's usually wrong.</p><p>when historians of science go study what was actually the history of an area of science, they just get the different answers than the scientists I'm telling each other. So, you know, from that I learned people are way too quick to make these judgments about what's going on. And so if I look at a question like this and I go, well, that would be a fair bit of work to figure out. I don't know. I would have to figure out who is actually being strong polymath learning lots of different areas and adding to them. I certainly have known some people who have contributed to multiple areas and</p><p>I, you know, impart by knowing multiple areas and making connections between them. So I respect that, but I don't know who's at the peak of doing it a lot. But if you had some people to read about and talk to, that would be interesting to maybe learn about which of them gained how much how from their different kinds of knowing many areas.</p><p>Theo Jaffee (1:59:02)</p><p>What about in the past? Not necessarily alive today.</p><p>Robin Hanson (1:59:08)</p><p>Again, I know of some people that I happen to come across because they learn multiple things, but I don't know which of them is, you know, big suffocating. Like the name Herbert Simon comes to mind, for example. He's, uh, I guess he won the Nobel Prize.</p><p>from his work combining computer science and economics and other areas of social science and systems design. That was a pretty broad area of things to combined. James March, similarly, I don't think he got the Nobel Prize, but he was somebody who did organizational innovation things while learning other kinds of social science.</p><p>people who have done both evolutionary psychology applied to other areas of life have impressed me at times because they do these crossover things. Most recently I've just been seeing people who do cultural evolution and apply it to other things outside of anthropology. That's interesting.</p><p>Theo Jaffee (2:00:11)</p><p>A few days ago I tweeted, um, what book that you've read has the... I forget the exact wording, but it was like, what book that you've read has the highest density of like, huh wow, I never thought about it like that, moments per page. Do I need to bring to mind for you?</p><p>Robin Hanson (2:00:26)</p><p>I mean, again, these are idiosyncratic. A lot depends on what you'd read before you read any one book.</p><p>Often the first book in an area you read is very insightful because, you know, you could have read some other book and got the same insights from the other books. So it's really hard to say, you know, generalize from your experience to other people's. So I try to, again, look, I speak on enough different topics that I think it's fine when my intuitions say, I don't really know that much about that one, to say, okay, I'm not going to have an answer because I don't really know.</p><p>Theo Jaffee (2:00:59)</p><p>And since you do speak about so many different topics, how do you kind of balance them? Do you have periods where you are only really thinking about one and really diving into it? Or are these different ideas always closer to the top of your mind when you're -</p><p>Robin Hanson (2:01:18)</p><p>Well, I mean, one of the fun things about having many different topics is that at any one moment you can be pulled to, you know, so it's fun to have a number of different topics you're thinking about at any one time so that depending on your mood, you can go to one or the other. And it's always fun to play hooky on one to do the others, honestly. Like if you have only one thing you're working on and if you don't feel like it today, you feel like, well, I got to make myself do it today because it's the thing I'm working on and it's hard to get motivated that way. It's much easier to get motivated to say, I should be working on this, but I'm going to do this one. That's what I'm going to do.</p><p>Theo Jaffee (2:01:37)</p><p>What do you mean?</p><p>Oh.</p><p>Robin Hanson (2:01:48)</p><p>That's fun. And you can then switch it back. The other day you can say, oh, I should be doing this. Why don't I do this one? Because this one's fun. And you get motivation that way.</p><p>Theo Jaffee (2:01:51)</p><p>Yeah, that happens to me a lot too.</p><p>And also, you teach law and econ right now at GMU, right? So how do you reconcile the breadth of your interests with the specific subject matter of a class? Do you try to bring influences from outside what you'd find in a normal textbook to a class? How separate do you give it?</p><p>Robin Hanson (2:02:20)</p><p>My standard is to not be a dilettante. So I think a dilettante would be someone who reads and maybe even talks about subjects without knowing enough about them to contribute. So the standard I hold myself to is if I'm going to go into a new area, I should learn enough about it to be able to contribute. So I should, in fact, contribute.</p><p>That's my standard. So if I try to keep that track of things and say, what have I gotten into that I was never able to contribute to? And that counts against doing things like that. But the things that I got into that I was able to contribute to, that counts to doing more things like that. So if I get into law and economics and I can find original contributions, then I think, OK, I know enough about this to be here. And I can justify my being in that area.</p><p>Theo Jaffee (2:03:12)</p><p>Well, I think that's an excellent place to wrap it up. So thank you so much, Robin Hanson for coming on the show. I really enjoyed this one. Yeah.</p><p>Robin Hanson (2:03:18)</p><p>Nice to talk to you, Theo.</p>]]></content:encoded></item><item><title><![CDATA[#13: Nick Simmons]]></title><description><![CDATA[All about Urbit]]></description><link>https://www.theojaffee.com/p/13-nick-simmons</link><guid isPermaLink="false">https://www.theojaffee.com/p/13-nick-simmons</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sun, 14 Apr 2024 02:01:39 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/143553542/53af9fbfa57068ca25fd7ef1ae21ad64.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Nick Simmons is a founding partner at Octu Ventures, a member-driven venture DAO investing in teams building on Urbit. Urbit is a new computing paradigm that provides complete ownership of your digital world.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:19 - What actually is Urbit?</p><p>5:43 - Urbit ID and Schelling points</p><p>9:05 - Why Urbit?</p><p>10:23 - Roko Mijic on Urbit vs. TikTok and Crypto</p><p>17:32 - Urbit vs. Worldcoin</p><p>22:26 - Niche or growth model?</p><p>28:50 - Why haven&#8217;t Urbit star prices recovered since 2021?</p><p>33:13 - Intrinsic value of Urbit address space</p><p>36:37 - Urbit as digital land</p><p>42:51 - Urbit and DeFi</p><p>45:42 - Personal AI on Urbit</p><p>51:35 - Urbit-native hardware</p><p>55:58 - Urbit design and aesthetics</p><p>1:02:15 - Outro</p><ul><li><p>Urbit: https://urbit.org/</p></li><li><p>Octu: https://octu.ventures/</p></li><li><p>Urbit Blog: <a href="https://urbit.org/blog">https://urbit.org/blog</a></p></li><li><p>&#8220;Creating Sigils&#8221;: <a href="https://urbit.org/blog/creating-sigils">https://urbit.org/blog/creating-sigils</a></p></li><li><p>&#8220;On Christopher Alexander&#8221;: <a href="https://urbit.org/blog/on-christopher-alexander">https://urbit.org/blog/on-christopher-alexander</a></p></li><li><p>Nick&#8217;s Twitter: <a href="https://x.com/Halikaarn1an">https://x.com/Halikaarn1an</a></p></li></ul><h3>More Episodes</h3><p>Playlist: <a href="https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj">https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj</a></p><p>Spotify:</p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW&quot;,&quot;belowTheFold&quot;:true,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/1IJRtB8FP4Cnq8lWuuCdvW" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:3549,&quot;numEpisodes&quot;:12,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-03-22T15:11:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>Subscribe to my Substack:</p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:00)</p><p>Hi, welcome back to episode 13 of the Theo Jaffee podcast. We're here today with Nick Simmons. Nick is a partner at Octu Ventures, which is a venture firm that invests in companies that build on Urbit.</p><p>Nick Simmons (00:13)</p><p>Hey, Theo, great to be here.</p><p>Theo Jaffee (00:16)</p><p>Alright, so first question is, I know the dreaded question for any kind of Urbit guest is, explain for the audience, for people who may not be as technically minded, what actually is Urbit?</p><p>Nick Simmons (00:30)</p><p>Okay, so the really top level overview here is that you could say that Urbit's a new internet. It's a new way to connect computers and the people that use those computers with applications, platforms, protocols that do all the things that we want computers and the internet to do. So that is sending messages, sharing in social spaces, connecting business, anything like that. Urbit has the potential to build new ways of coordinating</p><p>human activity, economic activity, social activity in ways that those of us who work in the project think are probably saner and more stable over the long run. Now, what does that actually mean? To get slightly more technical, Urbit is three key pieces of technology that all unite into this network stack. So there's Urbit OS, which is a completely deterministic functional operating system. So it's a full computational stack and</p><p>Right now it runs as a virtual machine on top of any Linux environment, but there's no reason you couldn't run it on a chip. And in fact, shout out to ~mopfel-winrux who is an Octu member and who is working on a project called Nock FPGA to actually design a chip that implements the core Urbit instruction set, which is called Nock. So that's Urbit OS. And the overall goal of Urbit OS is to allow for a computational stack that's</p><p>Theo Jaffee (01:56)</p><p>that.</p><p>Nick Simmons (01:56)</p><p>extremely stable. The Urbit Core development team is slowly working towards what they call, you know, Kelvin zero, which is a totally frozen instruction set. So the core of the computational stack just doesn't change and for it to be simple. So right now, I mean, the, the, the Nock instruction set, which can be broadly, broadly analogized to a kernel.</p><p>fits on a t -shirt. It's, you know, it's dozens of lines of code. The Linux kernel is millions of lines of code. This has really striking and obvious upstream, you know, social and organizational implications because when you have both extreme complexity and an ever -changing pile of, you know, what programmers call cruft, which are to say overlapping abstractions, patches on patches on patches.</p><p>Theo Jaffee (02:51)</p><p>Thank you.</p><p>Nick Simmons (02:55)</p><p>to make a computational system run. You basically require a giant bureaucracy to make that run. It's more technical knowledge and it moves faster than any one person or even small group of people can keep their heads. And so that kind of necessitates a bureaucracy. And computers probably shouldn't require bureaucracy. We're probably past the point of the society where we should be, where that's necessary. And that has all sorts of implications around, okay,</p><p>if all of the methods of organizing a bureaucracy start to look totalitarian, or they start to look like they impose a lot high coordination cost, then there's obvious downsides to that. And so the idea of Urbit OS is that you should be able to run a computer and that this gets into why people call Urbit a personal server, a computer that could serve up content to other computers, say other people, and it should be stable.</p><p>It should be easy for a layman to run. You should not have to have, say, a certificate in Apache server maintenance in order to run a server, which is basically the case right now in terms of the servers that major internet platforms run. And it should simply be yours. So the old tagline for Urbit was simple, durable, yours. The second part of the Urbit stack is Urbit ID.</p><p>So this fun fact, this is actually the first or second implementation, I believe of the ERC-721 technical standard, which basically just means NFTs. So these are Ethereum NFTs and there's three main levels of them, planets, stars and galaxies. Planets are kind of the individual level, Urbit node. Stars are, you could think of them as an ISP or maybe kind of like a community Schelling point node.</p><p>Theo Jaffee (04:39)</p><p>What do you mean by Schelling point?</p><p>Nick Simmons (04:41)</p><p>A Schelling point would be just like a coordination point. So, you know, something that a community wants to run in a very technical sense. What stars do is that they are their points for packet routing. So actual, you know, packets of data between any other node in the network and also a peer discovery. So I'll get into this in a minute, but essentially and then we'll go into galaxies. So galaxies, there's so there's about four billion.</p><p>planets, about 165,000 stars, and then 256 galaxies. And galaxies are the root nodes of the network. They're the source of truth for major software updates. And they also form kind of a DAO that makes decisions on whether to upgrade the network or change it in any way. The main example of this is that during DeFi summer, the fees on Ethereum obviously went way up. And so the cost and fees to</p><p>boot a new Urbit plan of our star became somewhat prohibitive. I think it was about $350 in Ethereum gas fees at one point. And so Tlon which is the company I used to work for, which is the main company that incubated the Urbit network, they actually devised their own layer two roll -up to reduce those fees. And so the Galactic Senate had to vote to decide that this was, you know, these new layer two identities were going to be valid. And so the way this works in practice is that,</p><p>Every planet has a star sponsor, every star's galaxy sponsor, and you can leave your sponsoring entity without permission. You just have to find a new one to adopt you. And so this enables a network architecture that roots around damage, which is to say offline nodes. So for example, if your star is not reading packets anymore, it's not online, you simply find a new star.</p><p>or if your star decides it doesn't like you anymore, you find a new star. And it doesn't have any say so over letting you go. You have the right of exit. And so, pure discovery in this context is basically that you have to go find someone else's planet for the first time when you wanna talk to them. And so the stars handle that decentralized, initial sniffing out, if you will, of where this other person is based on,</p><p>their public key identifier that you have, which is basically their name. So for example, my Urbit planet name is ~simfur-ritwed and my co -founder in Octu is Kenny, AKA ~sicdev-pilnup. And so if for the first time I ever sent Kenny a direct message, a DM on Urbit, my planet sent a packet to my sponsoring star, which then queried a whole bunch of other stars and it found which one was sponsoring</p><p>sick death pilnup and it passed on the data but it also let both of us know where in the whole Urbit network topology our planets were sponsored, were located. So that's the big data dump on how Urbit works. I'm sure you have questions or I'm sure there's something like clarification on for the audience. So I'll pause.</p><p>Theo Jaffee (08:02)</p><p>Yeah, so clarification for the audience would be just like, that was a very good explanation that went into a lot of depth, but just for like a very short, like 20, 30 second explanation of what exactly is Urbit and why should I care? Why should an audience member care? What would you say to them about that?</p><p>Nick Simmons (08:20)</p><p>Sure, my explanation would be that there's a whole bunch of decisions that have been made over the entire history of the internet that assume certain things to be true and bake in certain qualities of the internet. And they're not all true anymore. We have a whole bunch of conditions that make them irrelevant now. We can build better network forms of communication and collaboration that aren't dependent on that history and those historical choices.</p><p>And Urbit is one of the best projects that I found that actually opens up to those possibilities of coordinating people and capital and social connections in new ways. And obviously that needs to be dug into. But fundamentally, this is about allowing people better coordination tools to reflect, to build projects and social graphs that better reflect their actual desires.</p><p>Theo Jaffee (09:19)</p><p>Hmm.</p><p>So, I don't know if you've heard of Roko Mijic, I did not pronounce the last name correctly, but he's somebody who's been involved with Urbit for a while, he used to have his Urbit handle, ~bacsul-lissyl I believe, on his Twitter profile, and he tweeted pretty recently,</p><p>Urbit spent 22 years failing to build something there isn't even a market or need for. Think about it. Imagine we wave a magic wand and make Urbit work as intended. People get their own personal serverlets where they can easily host their own content on their own hardware. Meanwhile, in the year of our lord 2024, Gen Z can't even unglue themselves from TikTok for five minutes. You really think they're going to build their own personal website and then also self -host it on their own hardware? And even if they did, would it really make any difference to the world? In 2024, you get away with saying all sorts of naughty things on Twitter thanks to Elon buying it.</p><p>And as a bonus, it's all easy to use and easy to read because it's a single app, rather than millions of separate websites. Most of the value of the internet is in, and he bolded this part, most of the value of the internet is in the semantic and social network, the connections, the sorting and filtering.</p><p>And these network hubs are easy for rent seeking parasites to capture. Urbit can't really solve that because it doesn't have global state. It's distributed, not decentralized. The fundamental innovation came with Bitcoin and later Ethereum. DApps. DApps are much harder for parasites to capture because they can be made trustless. No middleman. So what do you think about those two big critiques? One is that like, who cares? Like Gen Z is not going to bother to do all this when they can just, you know, download TikTok in a second and then just spend hours on it. And then critique number two is just that, um,</p><p>Urbit is distributed, not decentralized, doesn't have global state.</p><p>Nick Simmons (10:59)</p><p>Yeah, so I actually, I agree that it's very important to have global state. As the first critique, so this is something that the Octu thesis actually gets into. And if anyone's curious, you can go to octu .ventures, you know, mild plug and read our white paper. And we definitely think that Urbit is a technology on the level of Linux or blockchains in general, or TCP IP, where</p><p>We're not particularly interested in trying to access, you know, overall, you know, organic adoption into the millions right now merely because the Urbit meme makes sense to people on a mass scale. We're very focused. We can talk a bit more about the, you know, the Octu investment thesis later, but our broad perspective here is that Urbit allows you to build new kinds of businesses, new kinds of products, new kinds of protocols.</p><p>that can capture organic interest and solve problems for people, even just starting on a B2B scale. And so I'm really not concerned about the fact that, yeah, people use TikTok. Yeah, people use Twitter and those are, you know, optimized products right now. They give people what they want. What we're interested in is where can't you build an optimized product for a use case, whether it's B2B or, you know, a social,</p><p>graph or whatever, although we think that the social graph part will come later, where the actual architecture of the internet as it stands just doesn't allow it. So we're very focused on how do you build tools, how do you open up new landscapes, if you will, rather than how do we scale the Urbit meme as previously defined to millions of people. And the second question around global state, so</p><p>First of all, I think that Urbit integrates with blockchains very well. The Urbit Foundation has several partnerships that are very interesting there with Near, and I hear some other stuff as well on the horizon. And obviously, there have been Bitcoin and Ethereum OGs that have been very involved at Urbit for a long time. My Octu co -founder, Kenny, was on the founding team at MakerDAO. Once you get inside the Urbit ecosystem, you meet some very, very kind of like early pressing crypto people.</p><p>who all love Urbit and work on building stuff inside the ecosystem. So I think that Urbit absolutely does need to be able to call out to sources of global state, global consensus. And I'm personally a huge, I'm personally very bullish on Urbit ID and reputation as anyone who's met me in the Urbit context can attest, no pun intended. Because I think that we're actually,</p><p>very competitive and very far ahead of most of the existing alternative, you know, digital or self -sovereign ID projects or reputation projects. And the fact that Urbit is a deterministic VM and has this network to talk between nodes actually accentuates this quite a bit. And it's also worth mentioning, by the way, that most of the projects that we have either funded at Octu so far or that we see on the horizon are starting to...</p><p>all resemble each other in the sense of probably producing a bunch of interaction data between their users and then being able to use that as a pretty high signal way to mutually signal reputation via those interactions in a way that can probably be globally useful. And so I think there are ways to make internal global state and then sure, write it to a blockchain, write it to a global consensus layer to verify it. But...</p><p>One interesting thing about blockchains and Urbit is that blockchains for the most part, at least with any application that we've seen so far, have a pretty limited repertoire of things that you do on them and therefore opportunities to create reputation, identity, really granular proof of human, if you will, in the new context.</p><p>Certainly there are dApps, but dApp usage is pretty bad for the most part and anything outside of purely financialized transactions. Blockchain social in general is pretty much a flop. Blockchain -based interactions are the only things that have to write the chain. I usually hit this dilemma of either adoption creates fees or you simply never get adoption in the first place because there's too high of a barrier.</p><p>Certainly L2s and things like Solana are interesting, although, you know, now Solana has its own uptime issues and obviously there's a very valid critique based on, you know, over centralization there. So I think that like, I would agree that Urbit needs to be a, Urbit needs to be a computing and identity and coordination layer that talks to global consensus layers, AKA blockchains, L1s, what have you. But I,</p><p>I see that more as like something where there's a lot of mutual benefit and everyone I talk to pretty much agrees in every ecosystem rather than some failing of Urbit as a complete stack that it won't get.</p><p>Theo Jaffee (16:29)</p><p>So going back to what you said earlier about Urbit ID and why you're bullish on it and it's ahead of a lot of other kind of digital cryptographic identity projects. What do you think about Worldcoin Sam Altman's Worldcoin?</p><p>Nick Simmons (16:42)</p><p>Yeah, so it's interesting that you bring up Worldcoin because I did a deep dive on the digital ID space a couple of years ago and I actually I went to the IIW, which is the Internet Identity Workshop, which is the main conference every year for this field. And this was I think 2022 and Worldcoin was there. They did a demo, saw the orb. Worldcoin is interesting because,</p><p>they are simultaneously, I think a little bit ahead of the curve on one of the main failure modes, but then encapsulate one of the other giant failure modes that I see here. So to explain this, like,</p><p>Worldcoin actually, I think, is a little bit smarter than people give them credit for in the sense that they do have some pretty sophisticated thinking around how do you hash the original biometric data and use that in a way to prove personhood, basically. And I think that some of the critiques there are kind of superficial. However, the real problem that I see with Worldcoin, for the application that I'm talking about, to be clear, I mean, you know,</p><p>Sam Altman stated rationale for Worldcoin is to provide this basis for UBI. I can't even really comment on that. I don't have a firm opinion on that because it's so far away from what I'm trying to do with digital ID. But when you look at the history of really most tech paradigms and how they arise and how they get network effect, it's not by...</p><p>Theo Jaffee (17:56)</p><p>Thank you.</p><p>Nick Simmons (18:20)</p><p>say, you know, going to the third world and getting a whole bunch of people to scan their eyeballs and then bootstrapping from a whole bunch of people that don't have capital and don't have access to kind of like, you know, elite technological social circles. It's the opposite. Urbit so far, even though a lot of the tooling hasn't been built to make ID and reputation work in a really granular way, already has the opposite.</p><p>I mean, if you have any contact with the Urbit community, you understand that this is a very, very special, high value, high trust community of very prescient and capable people with all sorts of interesting connections. And so the way that I think about digital ID usually comes into two failure modes. The first is when you have a good design, but you don't pair it with any kind of use case that actually gets you organic adoption among people whose actions,</p><p>constitute high value, high signal in network reputation data. So again, Worldcoin I think is doing a good job of designing some aspects of this. They really have found an interesting way at least to prove personhood, but they're not deploying it among people in San Francisco who are gonna use it as a real use case and we bound that use case that has little takeoff network effects in the same way that say, I don't know, like the Homebrew Computer Club.</p><p>started in a garage in Cupertino and a whole bunch of those guys do very interesting stuff. And if you were there, you were part of the scene. It's the truth of music scenes. I actually used to work in music industry. It's really true for anything and blockchains or digital ID is just a way to codify it. The other kind of inverse failure mode is that you have a really good, you know, early network effects. You have a scene.</p><p>but you don't legibleize it in a way that actually allows people to, you know, to kind of prove it or attest it. So this could just mean that it's legible and lots of things are legible and that's fine. Or it can mean that you're just not, that you think you're representing reality. You think you're representing these interactions and you're not representing kind of what makes them valuable or you're not proving this is actually someone's, you know, central way that they actually signify what they're up to.</p><p>And so for example, I think there have been some digital ID protocols that I think got decent uptake among crypto savvy people, but it didn't give them the opportunity or didn't give them the incentives to actually use this for the majority of their online or even crypto based activity. And so the defection risk there and the fact that it's just not capturing their activity in a way that</p><p>really telegraphs commitment to that ID paradigm, that reputation paradigm, means that it's ultimately destined to fail because the defect rate and the flake rate is too high.</p><p>Theo Jaffee (21:22)</p><p>Alright, so let's talk a little bit about the future of Urbit and different directions it can go. So, do you think that Urbit needs a kind of growth model to survive or could it survive indefinitely as kind of like a niche product?</p><p>Nick Simmons (21:37)</p><p>Well, that's an interesting question. So, I mean, I pretty firmly believe that first of all, Urbit's growing. If you look at nodes on network, if you look at developer growth, and especially, I think, if you look at capital coming into the ecosystem this year, the growth of new startups and the technical maturity of core parts of the stack, I especially want to call out the aims upgrades. So the networking upgrades that allow for a lot lower latency.</p><p>and a lot more simultaneous connections between Urbit nodes. And then the other big thing this year is New Mars, which is an updated runtime. And so much actual, you know, faster computation and larger loom size. So much, much larger memory. The joke inside, you know, Urbit circles is that you want to be able to host the AVI file of Shrek 2 inside your Urbit and play Shrek 2. And now there's also a great,</p><p>Theo Jaffee (22:27)</p><p>Thank you.</p><p>Nick Simmons (22:30)</p><p>Twitter thread of the day from Ted Blackman, ~rovnys-ricfer, who's the CTO of the Urbit Foundation around some of the, the loom expansion and runtime improvements that could actually allow some early, you know, machine learning activity inside the Urbit runtime, which is wild. Definitely not something that I think people really saw coming a couple of years ago when it was extremely limited. And so built, you know, building on those technical improvements, I definitely see that there's a lot of growth coming at the same time.</p><p>I think that, you know, obviously you would be unfortunate if, you know, if for whatever reason, you know, I don't know, we entered into the mother of all, you know, winters for capital and attention and, you know, usage and so forth. I think that's true of any protocol. However, Urbit exists, like the, you know, the PKI exists, the address space is remarkably well distributed.</p><p>You go to the Urbit conferences, you kind of sniff around and I mean, you can also just look on chain, like, you know, how many unique wallet owners for stars and planets and galaxies. And that I think does a particularly meaningful at the galaxy level is that a remarkable array of people are committed to this project and have, you know, and have significant, you know, interest in it, financial and, you know, just attentional and technical. And so,</p><p>The protocol is already out there. It has application to reality. It has use cases. It has people building on it. I love the Bitcoin meme of, honey badger don't care. Oh, there's a downturn. Oh, Jim Cramer comes on CNBC and talk shit about you. Who cares? Keep building. So yeah, there's always gonna be upticks and downticks. I really think this is something where I'm fascinated by the idea,</p><p>Theo Jaffee (24:04)</p><p>Okay.</p><p>Nick Simmons (24:25)</p><p>of building just indestructible primitives, indestructible networks. And we can get into this later, but like, I am personally fascinated by the idea of making these networks as unkillable as possible and really kind of crazy extrapolations that that leads to. Can we put an Urbit, you know, node on a, on a CubeSat? Can we inscribe the actual</p><p>you know, binary notation of an Urbit, you know, virtual machine on rocks in the desert. Like these are fun thought experiments, but I think they actually lead somewhere productive in the sense of there really are unlike say a lot of legacy systems where the ownership in the social graph is, is illegible and just kind of like dictated by fiat or it's on paper somewhere or, you know, there's some actually,</p><p>The degradation of digital information is way, way, way more advanced than people understand. I mean, a huge amount of content that has been produced on the internet, in fact, does not live anywhere. It's actually been lost. It's actually been wiped. So yeah, I think that like there's, with the advent of things like Urbit and blockchains that have distributed ownership, distributed consensus, and also are, you know, writing.</p><p>information to a distributed state layer, it really is a different paradigm in terms of what it means for something to be growing, active, or defunct. Like, I think we're already, you know, it's, have you heard of this, have you heard of this kind of, you know, crackpot hypothesis about whether, you know, we could tell if there was an industrial civilization in the fossil record millions of years ago?</p><p>Theo Jaffee (26:19)</p><p>Um, that sounds vaguely familiar.</p><p>Nick Simmons (26:22)</p><p>So I mean, it's an interesting thought experiment. I don't think that there's evidence that we had a fossil fuel civilization millions of years ago, but then the question among geologists and so forth is, okay, would it have left enough evidence? And I think there's a loose corollary you could make to something like, or a bit which is how big of a civilizational catastrophe would it take to wipe out the evidence or even kind of like enough of a subset of Urbit IDs or let's say, the Bitcoin blockchain.</p><p>anything like that, to the point where if there was interest and if there was a compelling need for it, that you could bootstrap it back up again.</p><p>From a game theoretic level, all you would really need is a handful of galaxy IDs that say, okay, this is still the network and we're gonna reconstitute it and we're gonna vote to spawn perhaps more galaxies, more stars if a whole bunch of keys have been lost. And this sounds just kind of like science fictional speculation, but again, I really think that this is an important factor in what's the difference...</p><p>in what we're building now for the long term, versus things that don't have these provable interactions on the identity layer and on the kind of like social contract layer.</p><p>Theo Jaffee (27:46)</p><p>So going back to what you said earlier about Urbit growing, Urbit stars were selling in 2021, they were selling for like $28 ,000 on average on Uniswap and OpenSea. And now they're more like a thousand dollars and they haven't really recovered even as the broader market for cryptographic assets has recovered like significantly. You know, Bitcoin has gone from like 20 ,000 back to like 70. It's hit a new all time high since 2021. So why hasn't Urbit address space recovered too?</p><p>Nick Simmons (28:16)</p><p>You know, I don't try to speculate too much on price. I think that where, you know, so many things in the crypto ecosystem in general are very meme driven. But I will say this, that I think that there's a general tendency, and Urbit is by far, you know, not the only project where you see this, where tokens or NFTs or something, you know, bespoke like Urbit address spaces and sort of digital land becomes a meme coin for the general idea.</p><p>in the absence of other ways to gain exposure. And so purely from a personal perspective here, my suspicion is that the level of interest in Urbit, as far as I can see, and I've been well -placed to observe this, keeps going up. And so it's a very valid question to ask, okay, why hasn't the, you know, why hasn't the star price, you know, kept pace? And I think the answer honestly is that now there are more ways</p><p>to essentially invest in the ecosystem rather than simply buying a star in OpenSea. And so for example, Octu does not invest in Urbit address space. We all have a lot of Urbit address space. We're all generally bullish on its value going up. I do think that there's gonna need to be some technical work and some kind of like product and ideation work around how do you make address space specifically? And because the interesting thing is, okay, it's a non -fungible token, right? But,</p><p>really, you know, if you just buy a random star on OpenSea that's never been booted, that's never interacted with anybody, sure, maybe you like the sigil. And I mean, the sigils are great, the names are great. But other than that, it is still pretty fungible. And this is even true to the point where for a while there was a project called Wrapped star to make it even more fungible. And so I think that there does need to be some work done on building things that imbue given</p><p>chunks of the address space with value, maybe as subnets, maybe as something that, you know, accrues reputation, maybe as web of trust that mutually reinforces reputation on the group level or the subnet level. But as to the price again, I really just think that what happened is that all of a sudden, I mean, when I joined the project, there were two Urbit companies, both of them mostly sold address space. Then there was a little bit of hosting company. Yep.</p><p>Theo Jaffee (30:39)</p><p>wanted.</p><p>Nick Simmons (30:42)</p><p>And now there's a lot of startups. There's a lot of technical projects where you can put your time and energy. There's a lot of grants from the Urbit Foundation to work on stuff. And then now there's actually two projects in the works. There's a Send Chain from ~tiller-tolbus at Chorus One. And then there's another Urbit L1 project from Sunny Agrawal at Cosmos. It's mostly being worked on by Laconic.</p><p>that and both of these aim to not only give some some global consensus layer to Urbit as an Urbit native, you know, L1 blockchain, but also has some interesting ideas around how the reputation layer and the economic layer could work in ways that incorporate address space. So I would definitely encourage everyone to keep an eye on those things. I think both of them are kind of like in the, you know, publishing early communicase stage. But I don't ultimately see, you know,</p><p>undifferentiated address space, stars basically, as being really, I think, ever again, so much the concentrated, you know, bet or meme coin on the overall health of the ecosystem, at least not for a very long time. And where I'm choosing to put my efforts in Octu is investing in early stage startups that build things on Urbit. And that's where my kind of very opinionated thesis is.</p><p>Theo Jaffee (32:09)</p><p>Do you think there is any kind of proper intrinsic value of an Urbit star? Like, can you go about valuing Urbit address space or is it more like Bitcoin where there's no real way to value it because there's no cash flow. It's just a purely cryptographic asset.</p><p>Nick Simmons (32:26)</p><p>Well, I think that both of these things have something in common, which is scarcity and that ultimately you start to get, you know, increasing value within a scarce, you know, namespace or, you know, a scarce currency, you know, store value space in the case of Bitcoin. When you have a whole bunch of people who reach, who reach a social consensus on.</p><p>what the value of that scarcity is to them in terms of preventing various kinds of downside risks. In Bitcoin's case, it's, you know, hey, we're worried about money printing. Like there's no actual limit to the amount of fiat money that a government or central bank can print. So we're all going to subscribe to the, you know, social illusion. And I don't say that in a negative way that, hey, it's valuable that there's only ever, you know, 21 million Bitcoins. And so this Bitcoin that I own has</p><p>has a value as a percentage of that. Similarly with Urbit, I think that if Urbit got to the point of adoption that Bitcoin had, of course you'd see address space go up. Not financial advice, but that really seems to stand to reason because you have a whole bunch of people agreeing that the scarcity aspect as it represents trust and distribution of the network and so forth also</p><p>benefits from that scarcity and they want a piece of that. Now, what happens before that, I think is very interesting. And that's why I say that like subnets and like biting off chunks of this scarcity and maybe either imbuing them with some sort of, you know, economic function, or simply saying, okay, you know, this, this corner of the network, this, you know, thousands or millions of planets are attested to in a given way.</p><p>and we're going to carve out this chunk and use it for a given use case. And within that, there's even a greater degree of scarcity. And it's very interesting to think about the network growing with multiple examples of that that then have to have some sort of foreign policy with each other or some sort of equivalency. And it's almost like, I mean, the original metaphor of Urbit as land is very apropos here. When you're on a frontier or you're even, I guess, settling in a different planet,</p><p>land is cheap and so people have the ability to go homestead in different parts of it and build their own thing and define it their own way. And then over time, the borders creep to each other and they have to figure out how they're gonna relate to each other. And then the value of land maybe in between goes up. And I think that that's the process that we are gonna have to watch and see with address space.</p><p>But it also informs again that I don't see the value in just owning huge swaths of undifferentiated address space and waiting for it to accrue value in the near term. It's much more around, you know, what are the cities you're going to build? How are you going to bring people there? And what are the industries you're going to build there?</p><p>Theo Jaffee (35:34)</p><p>Can you go into a little more detail about the intention behind the original land metaphor? Because I think that was probably one of Curtis Yarvin's best pieces on Urbit.</p><p>Nick Simmons (35:43)</p><p>Yeah, I mean, I'm a big fan. Look, I can't speak for Curtis. He's certainly a prolific podcast guest. I encourage you to get in contact. And I know he doesn't often speak about Urbit design stuff. Very good, very good. So I won't presume to speak for Curtis. But as somebody who has watched the growth of the network, I do think that it's</p><p>Theo Jaffee (35:57)</p><p>It's in the pipeline.</p><p>Nick Simmons (36:12)</p><p>It was a very good design decision on his part to make it so large, even though it leads to this current price drop that you see while people kind of acclimatize to the homesteading phase.</p><p>This really is, Urbit really is a new world. And this is almost a little cliche to say at this point, but I really believe it. And giving people huge swaths of node IDs, whether for humans or for infrastructure points or whatever, is really, really important when you have the ambition to replace the entire, you know, up to the minute,</p><p>paradigm of network computing. And by the way, a little bit of a side note here, something that people talk about a lot is that below the planet star galaxy paradigm we've talked about before are something called moons. So moons are derivative identities. They do have their own keys, but they are permanently tied to a planet owner. And the...</p><p>Design space for moons has always been a little bit underspecified, but I'm very interested in ways that moons can not only provide IoT device identifiers, which is actually an underexplored direction in IoT in general, in terms of verifying where your data came from, certainly in a deterministic way. ~datnut-pollen, I read a great piece on this back in the year about the Tlon called the lunar IoT in the internet, or something like lunar IoT in the internet, or things like that.</p><p>But I'm also very interested in something that I call node dispersal, which is that blockchains and protocols like Urbit, so far, if you look at where nodes are being hosted, it's gonna be a preponderance of, depending on the technical requirements, you're gonna see an awful lot of validator nodes for various blockchains in AWS buckets. And then even if it's actually self -hosted, where's it gonna cluster? It's gonna be San Francisco, Seattle, Berlin, Lisbon, New York.</p><p>And that seems like a, I think that we need a scale pass. Like that's an over concentration of, you know, of these supposedly decentralized networks. You know, an interesting way to view the whole proof of work versus proof of stake, you know, debate or argument is that, and this is one lens to look at it from. This is definitely not the only one.</p><p>but I find it instructive sometimes, is that proof of work fundamentally is a bet that the distribution of cheap or free energy and the ability to exploit it as it's distributed over the Earth's surface is, you know, and obviously modulo people knowing about something like Bitcoin or something like, I guess, you know, or Dogecoin or, you know,</p><p>pre -merge Ethereum, and knowing that you can do this, that this is a thing you can do. But with Bitcoin in particular, the Bitcoin mining has penetrated deep into say, provincial China. It's penetrated deep into the provincial US, deep upstate New York, for example, all over the world. And then it has technical innovations like, oh, flared natural gas on an off,</p><p>like an off -grid drill site, you go and you mine Bitcoin with it and it's not economical to export the energy in terms of a gas pipeline. It is economical to export the hashed mathematical results of Bitcoin mining over a packet radio. But then proof of stake is basically a bet much more on, okay, this blockchain is going to scale via</p><p>people who have the cultural, you know, imprimatur and the social connection to the people who started it who tend to be, you know, educated Western technical types. I don't think either of these has a monopoly on insight, but I think it's an interesting, you know, dichotomy between how resources are distributed and to the extent that, you know, blockchains are supposed to be this very durable layer of, you know,</p><p>distributing information, access, and economic rails across the physical space, how they're distributed. And so in that context, I think that moons are super interesting in terms of how can we achieve physical decentralization via node dispersal across every possible boundary with the Urbit network. Can we put moons as Urbit nodes on devices?</p><p>Can we optimize for geographical decentralization? Can we put them in our space? Can we put them across political boundaries? And two follow -up questions here. How can we measure this in the most robust way to actually provide a score, if you will, of how decentralized the network is? And two, what do you do with them? And what's the practical upshot? So.</p><p>One super interesting thing about Urbit, right? It's a network of personal servers and you run a client locally and you can dial into APIs, you know, blockchain sources of data, et cetera, et cetera. I assume you've heard about this Wells notice that Uniswap just got.</p><p>Theo Jaffee (42:05)</p><p>The what that Uniswap just got?</p><p>Nick Simmons (42:07)</p><p>So Uniswap just got something called a Wells notice from the SEC and we don't know the contents yet, or at least we didn't have as of last night. And a Wells notice basically is just serves as intent that the SEC is gonna bring legal action against you. So basically the story here is that, and this actually goes back to one of Roko's critiques is that, you know, dApps are solving for supposedly things that that Urbit was trying to solve a while ago. Well,</p><p>One thing that dApps have not solved and that they are getting in trouble for right now is that the actual front end server layer is the vulnerability. And so this is most obviously true for DeFi right now, but it's also probably going to be true for certain kinds of, you know, AI to consumer apps in terms of your dialing in to use a model, where does that model live, but also where are you and how you access it. So, Urbit excels at this. And actually there's been quite a bit, you know, quite a...</p><p>bit of interest from DeFi protocols in, hey, how do you use Urbit to serve up the front end so that someone can use a DAP, can use an AMM, an automated market maker? How do you actually interact with the blockchain rather, in an actually decentralized way for the user? And so one idea around making Urbit node dispersal extremely robust is that,</p><p>this allows you to have, you know, umpteenth degree redundancies for mirroring sources of data, mirroring front ends and actually like, you know, keeping, being a backup for any kind of global state, you know, coordination protocol, blockchains or otherwise, there are other options there too, as well as representing reality. So things like,</p><p>Po -Apps, proof of tenants protocol, proof of location protocol, things like helium, packet radio, et cetera. Very interested in how we can make Urbit and networks like it actually span the earth and start to instantiate. I mean, the original metaphor is Borges. It's "Tl&#246;n, Uqbar, Orbis Tertius". And the story, of course, is about.</p><p>a map that is the same size as the territory that it describes. And, you know, Curtis know what he was talking about when he was using that metaphor. And I'm very interested in ways to make that real.</p><p>Theo Jaffee (44:38)</p><p>Yeah.</p><p>So change topics a bit. Let's talk about AI and the future of AI on Urbit. Because I think that like, you know, the perfect application for, you know, having a decentralized personal server would be to run a decentralized personal AI on.</p><p>especially because it seems like actually good personal AI is imminent. You know, who knows what Apple is about to announce with Siri in a couple of months. And then OpenAI is coming out with GPT -5 soon after that. Most likely sometime by the end of the year we'll have very good LLMs and people are already building out the infrastructure. So what do you think is the future of AI on Urbit, both implementation and then what people will actually be able to do with it?</p><p>Nick Simmons (45:25)</p><p>Yeah, so this is super interesting. There's a whole bunch of different threads we could pull. So obviously I think that access to protocolized AI compute and model access and also honestly markets for data sets that you would feed to a model on large and small scales. I think Urbit helps with the protocolization of that quite a bit. And I really don't know anything else that applies nearly as much there.</p><p>But there's also very interesting things to say about what you can do with all the interaction data that you create. So each, each Urbit virtual machine that you know, you or I run is an event log. It's just literally a list of computational events that you've performed with your machine and it lives locally and it doesn't have, you know, global state by default, but certainly you can, you can.</p><p>create a protocol to write this out to a blockchain whenever you want. And there's actually been some interesting early experiments and kind of like peer to peer local consensus like agreements on that Urbit computational level between orbits. This is something called Agora that Quartus and the Dalten collective experimented with a couple of years back. And so because you have this deterministic event log,</p><p>of everything you've done. And that includes, of course, the connections you've made with other Urbits and data you've gotten from other Urbits over the network. It's very interesting to think about training a personal model on your activity as a kind of proof of human and then zero knowledge hashing that and using that as your badge of identity in a very kind of up to the minute way. And again, as we were talking about,</p><p>you know, some of the pitfalls of digital ID protocols, this is, I think that proof of human is gonna become an arms race. And that if you don't have the, if you don't have the primacy of someone's everyday, you know, online behavior to draw reputation data from in order to kind of like model them as a human, then you're gonna fall behind pretty quickly. At the same time, you're gonna, you are definitely going to need to, you know,</p><p>to not simply leak everyone's data or even just kind of behavior patterns, if naively obscured in ways that leave them vulnerable. And so it's very interesting to think about personal models deployed to prove who you are, to prove belonging, but also to act on your behalf and learn from your behavior. And it does seem like Urbit is the kind of thing that would provide a moat there.</p><p>for you to run a personal model in, interact with data set marketplaces, and then command a fleet of virtual assistants on your behalf, which I mean, the moon identity seems to align well with that. But ultimately, I think that the really, my top level thoughts about Urbit AI is that AI is probably going to cause a massive,</p><p>sorry, is probably going to cause a massive collapse in the social context and the legacy, you know, reputation, identity, meaning systems that we currently live under. I mean, they were already straining pretty badly. And this is a key part of the Octu thesis is that we think that Urbit has massive potential to build what we call digital civil infrastructure.</p><p>which is to say, to encapsulate the values that its users and builders actually watch, digital Jeffersonianism is one way to put this, of allowing people to actually have a defensible moat around their family, their community, their business, their data, and to say, hey, I'm gonna interact in marketplace, I'm gonna interact in society.</p><p>but there's a line past which cannot be trespassed. And honestly, I think that AI is gonna make this necessary sooner rather than later. And it's extremely, extremely important that we encode the values that we want into a defensible digital substrate before that happens, because there's really no guarantee that they will simply be uptaken.</p><p>whether it's by, you know, mega corps that use AI or not, or whether it's simply by these models in the sense that they're, you know, eating all the data from the internet in the past, you know, 30 or 40 years of written code.</p><p>Theo Jaffee (50:31)</p><p>And then not just AI, but the rest of the future of Urbit. For example, what could like an Urbit hardware ecosystem look like? Whether it be CPUs, I know you talked about this, Nock FPGA earlier, or like actual full blown.</p><p>like computer hardware, like consumer desktop or laptop hardware. And what about Urbit on mobile? Like what could that look like? Would it be more of an app on existing OS's or would it be its own mobile OS or its own mobile device? So yeah, what's, what's the future of the kind of actual like hardware layer of Urbit? How will people interact?</p><p>Nick Simmons (51:10)</p><p>So yeah, so first of all, this is already very much a thing. So the obvious shout out here is to Native planet, which is a great company in Austin, Texas that builds Urbit hosting hardware. So with a bunch of very, very clever, very cool software integrations to make it easy to run your own planet or star on a custom built hardware box. They have several models, highly recommending.</p><p>recommend checking them out, nativeplanet .io. And so yeah, so there's already hardware that's purpose -built to run your, your Urbit nodes locally, to run your Urbit VM locally. I think there's massive potential here. Some of the areas I've already mentioned in terms of node dispersal, I think that Urbit, you know, optimized sensors, you know, space hardware, things like that. And I think there's a, there could be very interesting kind of, you know,</p><p>flywheels and mutually complementary economies here where the economic mandate of no dispersal and like, you know, provable, like real geographical decentralization can reduce the costs for the ancillary, you know, business use cases of, hey, we want, you know, sensors here to track X, Y, and Z, or to provide, you know, relays for other networks or, you know, like packet radio, mesh nets, that kind of thing.</p><p>I'm very interested in the Urbit sensors, very interested in Urbit hardware that creates a real geographical decentralization. In terms of Urbit apps, I mean, there's some, Tlon has an iPhone app, which works quite well and highly recommend everyone check that out. That's really made it much easier to use Urbit on the go.</p><p>If you're asking like, is there going to be an Urbit phone like a salon, like the salon, you know, side of phone, that's certainly an interesting question. Um, I see a lot more hardware interest from urbaners than I did a year ago. Uh, and so I think that people are kind of, you know, trying to wrap their heads around this stuff. One aspect of that, I think maybe accelerating this a little bit is that Apple recently breached what was previously considered to be quite a bright line in taking in.</p><p>take a hard line against PWAs, progressive web apps, which were basically a bit of an end run around some of the, you know, frankly pretty absurd strictures on what can and can't get accepted in the app store. The whole question of whether, you know, an entrant to the smartphone business that can get around some of the Apple and Android, you know,</p><p>quasi monopolies for apps, especially around crypto integrations is, is certainly super interesting. And I kind of suspect that it's, you know, pressure building behind a dam and that we will get there maybe even sooner than we think at the same time, geopolitics and the geopolitics of chip production and, you know, sophisticated electronic production in general are a big, big question mark, especially with, you know, all the stuff around Taiwan right now. So that could throw a wrench and all that stuff.</p><p>I really want to see good thinking and I want to see honestly pitches here around how you unite the Urbit vision with hardware, real use cases, and that geographical and technological kind of like hosting environment, decentralization and dispersal factor.</p><p>Theo Jaffee (54:54)</p><p>So we've only got a few minutes left, so I'm going to go into a little bit some of the design decisions behind Urbit. So, for example, where did the syllable -based naming conventions come from? I noticed recently a lot of passwords are now very similar to Urbit syllables, where you have a bunch of normal -sounding syllables or English words separated by hyphens. So which came first?</p><p>Nick Simmons (55:22)</p><p>You mean passwords outside of Urbit</p><p>Theo Jaffee (55:25)</p><p>Yes.</p><p>Nick Simmons (55:26)</p><p>I think that has to be convergent evolution. I think I've noticed a little bit of what you're talking about recently. I mean, the whole topic of password optimization to get around what they used to call a dictionary attack has been going on for a very long time. And of course, now we're going to just use the suggested password managers, browsers and so forth to run. So I couldn't really tell you if there's any, I mean, I'm really highly doubt there's a connection there.</p><p>Theo Jaffee (55:34)</p><p>us brings us.</p><p>Nick Simmons (55:56)</p><p>I don't know the full story of exactly why the Urbit, you know, I mean, the syllables are meant to be, you know, human memorable because it's easier than memorizing, you know, a bunch of dotted quads, right? And obviously there is a correspondence between the Urbit syllable names and the, and the sigils. There's a great post by, I believe by Gavin who created the sigils called appropriately creating sigils. If you go,</p><p>way back in the urbit .org blog, that's worthy of reading. And yeah, I mean, I'm a big fan. I don't know the full history, but I'm a big fan of how that works because they really are memorable. And I remember at one point, maybe a year or two after I originally joined Tlon, I realized that I probably knew upwards of a hundred people, so easily like over a Dunbar number of people by their Urbit names, their Urbit planet names, rather than,</p><p>Theo Jaffee (56:26)</p><p>I love that post.</p><p>Nick Simmons (56:55)</p><p>in most cases, the real names.</p><p>Theo Jaffee (56:59)</p><p>That's pretty cool. And then what about the influence of Christopher Alexander? I know there's another I can talk about that. I'm actually reading the pattern, a pattern language right now. And I didn't even know, I've known about Urbit for like years. I had no idea that Urbit was inspired by Christopher Alexander.</p><p>Nick Simmons (57:04)</p><p>Ah!</p><p>Yeah, I'm not a Christopher Alexander expert. So I would recommend talking to Galen or possibly to some of the OG design team at Tlon Ed Fablefaster has some great thoughts on Christopher Alexander. And really, the whole design genesis of Urbit and Tlon's products is fascinating. I will say that one of the things that really attracted me,</p><p>to Urbit in the first place was a sense of really like that it was important to know and understand the whole teleological history of technology and product design. And it's actually funny, I was just on another podcast from a hut ventures that will be coming out next week. And one of the at the very end, we talked a little bit about you, hey, you know, what would you be doing if you weren't working in crypto or tech?</p><p>And my answer, which has always been true, is that I'd probably be a historian. And I've always been super, super interested in tech history. I have a little bit of a family connection. My grandfather worked in the ENIAC in the 40s. And so I grew up using little punch cards, scratch paper. And I kind of thought that was normal as a little kid. Didn't everyone's grandpa work on computers, right?</p><p>Theo Jaffee (58:24)</p><p>Oh, I saw it.</p><p>Nick Simmons (58:37)</p><p>Yeah, it's in the, I'm assuming you went to the Computer History Museum. Yeah, I know, one of my favorite places. I could spend hours in there. I've been there many, many times. I know, right? I know. Yup, yup. I mean, that's when it was started, I think, the museum. And so they're, yeah, they have a backwards looking perspective there maybe. But yeah, so I've always been super interested in,</p><p>Theo Jaffee (58:42)</p><p>Yeah, yeah, I did. I'll do it.</p><p>I just wish it would have been longer. It ended in like 1995, 2000. I missed the last 25 years.</p><p>Nick Simmons (59:06)</p><p>how this stuff developed. And one of the things that really made clear to me about Urbit is that like, there have been a whole bunch of junctures in the history of computing, the history of network computing and the history of, you know, mass onboarding people into these network computing platforms where particular decisions were made. They were usually contingent on what came before and what the current, you know, ground conditions were at the time. And some of them worked out really, really well for</p><p>I'd say the average person and the ability of developers and entrepreneurs to build what they wanted. Like we're definitely living in something that is far from the worst possible world here. Are we living in the best world? No, of course not. Hence why I work in this stuff, hence why we all work in this stuff. But the thing that really impressed upon me was that there were times when I, for example, early on, I think Bill Gates and some others in their early 90s wanted to essentially encircle.</p><p>the internet and circle some of these protocols. And instead we got TCP/IP, which is the open protocol. We got SMTP for email, which is the open protocol. Although honestly, the dominance of Gmail as the front end to that is eroding that a bit. There are some pretty scary censorship going on with G -Docs at least.</p><p>Theo Jaffee (1:00:19)</p><p>Very. Shout out to my last guest.</p><p>Nick Simmons (1:00:22)</p><p>Oh, really? And so I do think that it's, it's very important to consider, okay, if you're at one of these junctures, and you're in a position to build something or fund something that can, you know, make the right decision, and return agency and return, you know, network effects and coordination ability to the users to the developers to the auditor, nurses, harder than is in the right place, you should do that.</p><p>And that's the highest calling you could possibly have as a technologist. And I really think that the degree to which the Urbit ecosystem and everyone I've met in it considers these questions and knows the history and knows, hey, it's important to build this stuff right, is one of the things that grabbed me in the first place and still gets me out of bed in the morning.</p><p>Theo Jaffee (1:01:12)</p><p>Well, I think that's an excellent place to wrap it up. My answer to where, what would I want to do as a career if I wasn't interested in tech is similar. I would probably want to be an architect. Also Christopher Alexander inspired. So yeah, I'm thrilled to see that he's had an influence on Urbit too. Well, thank you so much, Nick, for coming on the podcast. I really enjoyed this episode.</p><p>Nick Simmons (1:01:34)</p><p>Absolutely, I enjoyed it too. You asked great questions. And if anyone wants to follow up and continue the conversation, you can drop my Twitter handle in the show notes. And as mentioned, my main project right now is being a founding member and partner at Octu Ventures, which is a member -driven venture DAO that invests in seed stage Urbit projects. And we have some writing on our website, octu .ventures.</p><p>And I really try to make that the vessel for most of my energy into Earth these days. And I'd love for people to check that out.</p><p>Theo Jaffee (1:02:14)</p><p>Alright, well links to everything will be in the description, so thanks again and talk to you later.</p><p>Nick Simmons (1:02:18)</p><p>Awesome. Thanks to you.</p>]]></content:encoded></item><item><title><![CDATA[#12: Paul Buchheit]]></title><description><![CDATA[Creating Gmail, Fixing Google, Narrative Understanding]]></description><link>https://www.theojaffee.com/p/12-paul-buchheit</link><guid isPermaLink="false">https://www.theojaffee.com/p/12-paul-buchheit</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Fri, 22 Mar 2024 15:12:02 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/142846348/72867f21ab3ee0eb60410bfe86f32d71.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Paul Buchheit is a programmer and entrepreneur who joined Google as its 23rd employee. He created Gmail, developed the first prototype of Google AdSense, and suggested the company&#8217;s motto, &#8220;don&#8217;t be evil&#8221;. He later co-founded FriendFeed and served as a managing partner at Y Combinator.</p><p>0:00 - Intro</p><p>1:15 - Issues with Google</p><p>3:47 - AI risk</p><p>5:21 - AI centralization and decentralization</p><p>8:01 - Open-sourcing frontier AI</p><p>9:59 - Paul&#8217;s Predictions</p><p>14:28 - Centralization, free speech, and censorship</p><p>24:16 - Trends in ideology</p><p>32:00 - Freeing people of narratives</p><p>35:49 - Alignment</p><p>39:06 - Startups and YC in 2024</p><p>50:30 - Email and communication interfaces</p><h3>Links</h3><p>Paul&#8217;s Twitter: <a href="https://x.com/paultoo">https://x.com/paultoo</a></p><p>Paul&#8217;s Blog: <a href="http://Creating Gmail, Fixing Google, Narrative Understanding">https://paulbuchheit.blogspot.com/</a></p><p>YouTube: </p><div id="youtube2-Tzo6DJT9GOk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Tzo6DJT9GOk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Tzo6DJT9GOk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW&quot;,&quot;belowTheFold&quot;:true,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/1IJRtB8FP4Cnq8lWuuCdvW" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><p>Apple Podcasts: </p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:3785,&quot;numEpisodes&quot;:11,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-02-26T02:56:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>My Twitter: <a href="https://twitter.com/theojaffee">https://twitter.com/theojaffee</a></p><p>Subscribe to my Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:02)</p><p>Alright, we're recording. Hi, welcome back to episode 12 of the Theo Jaffee Podcast. We're here today with Paul Buchheit.</p><p>Paul Buchheit (00:09)</p><p>Alright, great to be here.</p><p>Theo Jaffee (00:12)</p><p>All right, so let's get into some questions. So as we all know, you created Gmail and worked at Google very early on. And now you've become a bit more critical of it, especially after the recent Gemini debacle and a general mission creep from Google in recent years. So when did the current problems with Google start to become apparent?</p><p>Paul Buchheit (00:37)</p><p>Oh, you know, I don't know. I don't want to define myself as a Google critic. But, you know, clearly...</p><p>things have, you know, it's not the same company, it's a much larger company, but I think if you kind of look back at the history of it, sort of the alphabet era, which started in 2015, I think, there haven't really been a lot of great new products that have launched. My sense is that the company kind of pivoted from innovation more to just defending the search monopoly.</p><p>So.</p><p>Theo Jaffee (01:20)</p><p>So the market seems to still think that Google is incredibly valuable. It's pretty much consistently beaten the market over almost every time horizon, except maybe the last month or few months. So are you still on Google?</p><p>Paul Buchheit (01:35)</p><p>Oh yeah, I mean, obviously they have a lot of great assets. It's not going to disappear tomorrow or anything like that. I mean, they're still the leading internet company. But these things take time. It's kind of more a question of where are things headed five years, 10 years. And also just like, you know,</p><p>what kind of products are being created. And in particular with AI, it's important that, you know, it's such a powerful and influential technology, it's important that it's accurate and honest in not distorting the truth or distorting history.</p><p>Theo Jaffee (02:19)</p><p>So that said, a few years down the line, what do you think the bear case and the bull case are for Google? Like in the bear case, do they go out of business or is Google too strong for that? And then what about the bull case?</p><p>Paul Buchheit (02:30)</p><p>No, companies don't really, of that scale, don't really go out of business, right? The bigger question is actually just around the direction of AI. AI is larger than really any technology we've ever witnessed before. And so I don't know, like I said, I'm not like a stock market speculator or anything. I don't.</p><p>be giving anyone financial advice or investing advice. I'm more interested in just kind of where technology is headed and how that impacts humanity and how we do our best to make sure that these technologies really empower individuals and help people to retain their freedom and their rights because AI has the potential to be the most horribly...</p><p>oppressive and dystopian thing we've ever done, or it could be the best thing we've ever done. So I think we need to do our best to push it towards something that's good for humanity and not something that eliminates or enslaves us.</p><p>Theo Jaffee (03:37)</p><p>So thank you.</p><p>Hmm. And when you say eliminates or enslaves us, are you concerned more with the AI itself becoming rogue, or are you concerned more with humans using it for bad purposes, or both?</p><p>Paul Buchheit (03:53)</p><p>You know, I think the latter. I mean, AI is a tremendous superpower. But if you think in terms of, you know, like Orwell's 1984, that was kind of fictional, but with AI, we could actually do something much worse, right? You can monitor everything anyone ever says or does.</p><p>you can distort reality in real time. So if you imagine, you know, you're in lockdown or whatever, right, for the new pandemic, and you're having a video chat, you're on FaceTime with a family member, or maybe you're doing a podcast, you know, the AI could literally be altering things in real time. So you think that you're talking to your, you know, to your mother, and she says, hey, everything's great, but in actuality,</p><p>Theo Jaffee (04:35)</p><p>you</p><p>. . . .</p><p>Paul Buchheit (04:49)</p><p>the AI is just faking the whole thing or altering what people say. And so the potential for just horrible dystopian possibilities is essentially unlimited. But at the same time, the explosion of intelligence also means that we can do really wonderful things. We can solve the hardest problems, things that seem intractable. If you're worried about climate or you're worried about...</p><p>future of education, medical care, we can make things such that the world in 40 years is better for everyone than it is for anyone now, in terms of giving people access to things that make them healthy and happy.</p><p>Theo Jaffee (05:21)</p><p>So you're worried more about AI advantaging the incumbent, like a government, in order to...</p><p>Paul Buchheit (05:44)</p><p>Absolutely, right. So I think the real threat is essentially centralization of power. And certainly, if you look at history, all the worst things people have ever done, whether it's the Soviets or the Nazis or whoever, that happens when someone is able to completely centralize power. And AI has that potential because especially...</p><p>Theo Jaffee (05:56)</p><p>. . .</p><p>Paul Buchheit (06:07)</p><p>when it's the channel through which all information is filtered, it becomes impossible to resist. And we see this kind of thing going on, obviously, in China, where people are not able to speak the truth, and people can get disappeared, and there's really nothing you can do about it.</p><p>Theo Jaffee (06:23)</p><p>Well, what about the idea that AI could remarkably decentralize?</p><p>Paul Buchheit (06:35)</p><p>Well, that would be good. That's what I'm hoping for.</p><p>Theo Jaffee (06:36)</p><p>the internet decent information.</p><p>Like how the internet decentralized information.</p><p>Paul Buchheit (06:45)</p><p>Yeah, I mean, hopefully. Unfortunately, because of the computational requirements of AI, it does have, I think, some tendency towards centralization. Building one of these models requires billions of dollars in compute. And so it does have some tendency towards centralization. But I think this is part of why we need to continue supporting, first of all, multiple...</p><p>AIs, you know, it's much better if we have AIs being built by Google and OpenAI and Facebook and Anthropic, anyone else who shows up. I think that the more competitors we have, the better our chances are of freedom. And also open source AI, I think is really important and powerful. And so a lot of the, I think, concerns that people have around safety,</p><p>Theo Jaffee (07:34)</p><p>Yeah, I'm inclined to agree that locking down AI might actually make things worse, especially if you're worried about some kind of monomaniacal paperclip maximizer if there's nothing to counter it.</p><p>Paul Buchheit (07:41)</p><p>A lot of times their answer is we need to lock this down and we need to centralize it. And I think that's actually probably the most dangerous thing we can do.</p><p>Theo Jaffee (08:01)</p><p>Do you think that frontier AIs should be made fully open source? Or are there safety risks inherent to that?</p><p>Paul Buchheit (08:09)</p><p>You know, I don't know that there's a clear black and white answer to that. I definitely favor open source and I think that we should do our best to support that. And I'm really glad that Facebook has taken that advantage. They've kind of turned out to be the unexpected hero in terms of actually advocating for open source AI. But at the same time, I understand people's concerns. I think it's like there are...</p><p>legitimate risks. So it needs to be an open process where people are continuing to debate this. And the big thing that I think we need to watch out for is just efforts that try to shut that down, legislation that would prohibit open models, things of that sort.</p><p>Theo Jaffee (08:59)</p><p>just generally the -</p><p>optimistic with you know.</p><p>with these risks must be avoided and may not be.</p><p>Paul Buchheit (09:13)</p><p>It's both. It's either the best thing we've ever done or the worst. And it's up to us to decide which future that's going to be.</p><p>Theo Jaffee (09:24)</p><p>you know, generally which one it will be in advance. Too hard to predict.</p><p>Paul Buchheit (09:29)</p><p>future is not decided. We have to, it'll be one of those things, but there's no guarantee, right? The future is inherently uncertain. If one of those things were guaranteed, then there'd be no point in worrying about it, right? The reason we care is because I think there's an opportunity to steer things in the direction of something that preserves and expands freedom and liberty.</p><p>Theo Jaffee (09:59)</p><p>Well, you do seem to be pretty good at predicting the future. I think Jessica Livingston said that you have like one of the best track records for investing of anyone at YC. And then while I was doing some background research, you have an article on your blog called 10 Predictions for the World of January 1st, 2020. And five of those 10 were remarkably spot on. Those five, by the way, were you predict that all data lives in the cloud and will be accessed.</p><p>Paul Buchheit (10:06)</p><p>Thanks for watching.</p><p>Theo Jaffee (10:27)</p><p>with computers as basically stateless caches, which is true for a lot of people. You predicted Android and iPhone will kill off all the other mobile phone platforms. Android will be bigger, but iPhone will be cooler and work more seamlessly with Apple's tablet computer. That was totally spot on. Three, Facebook will be a big success, possibly as big as Google. Yeah, Facebook will turn out to be huge. Five, you were a little bit early on this one, but you predicted that, you know...</p><p>Google will release an amazing question answering service that can answer complex questions and is in many ways smarter than any human. It turned out to be open AI first, but Google still has. And then number 10, politics will evolve much faster than in the past due to the internet and social networks. And all this talk about mimetic warfare and whatnot, that's definitely turned out to be true. So.</p><p>Paul Buchheit (11:14)</p><p>Yeah, I picked the wrong right -wing television personality there.</p><p>I thought it would be Stephen Colbert, but it turns out it was Donald Trump who won the 2016 election.</p><p>Theo Jaffee (11:26)</p><p>Well, still, just in that niche, picking a television personality to win an election was not a trivial prediction. So do you have any similar predictions for the next five, ten years?</p><p>Paul Buchheit (11:38)</p><p>You know, I think that it's getting much harder to predict the future. So I play this game with the startups. Sometimes I like to do kind of a time travel exercise because startups generally operate kind of on a 10 year time scale. So from the time that we would like seed fund a company at Y Combinator until IPO is, you know, roughly 10 years. So for example,</p><p>Theo Jaffee (11:52)</p><p>Thank</p><p>Paul Buchheit (12:07)</p><p>And sometimes longer, you know, Reddit is only going to IPO, I think like this month and we funded that in 2005. So in that case, it was close to 20 years. Um, but, uh, maybe a more recent one is like DoorDash we funded in 2013 and then they IPO'd in 2021. So I like to try to think about things on a 10 year time scale with, with startups. And so the exercise I take them through, um, is to, is to kind of say, like, let's say I get inside of my, um, DeLorean time machine.</p><p>because I'm a back to the future fan. And I punch in 10 years in the future. So here I am in March 2034. And I get out of my DeLorean and I clear away the time fog and kind of take a look around. I talk to some people and I say, hey, what's been like the big change? In my basic theory with startups is that if you have a massive success,</p><p>It didn't just happen on its own. It's always riding on top of some fundamental underlying technological shift. So like the reason that Google became massive wasn't just that Google was that special.</p><p>that the internet itself was exploding. And so in order to create these hundred billion dollar or trillion dollar companies, you need to be riding on top of some underlying technological shift. So like DoorDash and Uber, for example, really exist because of the smartphone. So like those kinds of services are a product of the smartphone and everything that that enabled. And so I try to get...</p><p>Startups understand like what is the underlying technological shift that's enabling you to become this hundred billion or trillion dollar company in 10 years? And part of the problem I started running into was that I was having a harder time looking 10 years into the future and so I kind of like maybe back around 2017 Started noticing that I couldn't see it started to be very hard to look, you know 10 years into the future. We're sort of approaching</p><p>sort of the event horizon. And so it's actually, I don't know what happens in 10 years anymore. We are, I think, currently at the point where a lot of things are being sorted out. So, you know, 2024 is a very important year.</p><p>Theo Jaffee (14:26)</p><p>you</p><p>new lab.</p><p>Paul Buchheit (14:43)</p><p>Well, you know, the character of the AI that is being built, I think in large part derives from the society that creates it. And so if the AI is constructed in a totalitarian environment, I think we end up with like a totalitarian AI. And, you know, to the extent that we have a society that values freedom and individual dignity,</p><p>and we embed those values in the AI, I think that's our best chance for actually having an AI that empowers humans to live better.</p><p>And so, you know, right now, what that comes down to, to me, you know, one of the fundamental issues in our society is freedom of speech. And it's, it's, you know, it's the first amendment for a reason. It's the most fundamental thing because once you lease freedom of speech, nothing else particularly matters because it can all be lied about, you know. And we just had an incident of this with the COVID pandemic.</p><p>where it was, I think almost certainly, I would say, 95 % probability produced through gain of function research in a lab. In Wuhan, quite possibly funded by American government and scientists. And that...</p><p>escaped from the lab accidentally, I believe, and caused a worldwide pandemic. We had lockdowns, we had all of that. But four years ago, you weren't allowed to discuss this fact. So we were four years ago, the world was locking down and people who started asking questions about where the virus came from were censored and slandered. And so if we can't even talk about the most important thing in the world at that time.</p><p>you know, what freedom do we have?</p><p>Theo Jaffee (16:54)</p><p>So now it seems like people are much more allowed to talk about the lab leak hypothesis. So do you think what turned that around was just kind of the natural error correction mechanisms of our civilization? Like, you know, the truth can't be hidden for too long? Or was it like specifically something like Elon Musk buying Twitter?</p><p>Paul Buchheit (17:11)</p><p>Yeah, that was it. That was a big part of it, obviously. He kind of smashed the Overton window. Before you would get shut down, and part of it, which I think people don't even totally recognize yet, is the mechanism to which Twitter was used to control the narrative. And so, you know,</p><p>our understanding of reality is through storytelling. Which facts are reliable? Which people can be trusted? What do these things mean? And so a lot of the old Twitter was about being able to enforce that institutional narrative because not only would you potentially get banned from Twitter for discussing the lab leak,</p><p>But the most prominent voices were the Blue Checks, right? Who were a group of people who were, the original Blue Checks were for the most part aligned with that institutional narrative. And so those voices were the loudest. And so they could always shut down dissident voices. And so one of the things that was obviously most controversial, but I think also,</p><p>maybe more impactful than is realized was his eliminating those that are that original class of blue checks who worked to enforce the institutional narrative.</p><p>Theo Jaffee (18:46)</p><p>you</p><p>What would you characterize the institutional narrative as? Is it wokeness or degrowth or statism or a combination of all the above or something more complicated?</p><p>Paul Buchheit (19:03)</p><p>Yeah, you know, it's a combination of things, obviously. It's a little bit hard to, I think, pin down any one factor, but the core element of it is, again, centralization of power. And so, you know, whether that's through, you know, intelligence agencies or...</p><p>ideologies that seek to impose a single worldview. And all of those things fold in. Degrowth is something that has been in the works for a long time. It depends how deeply you want to dig on these things. And it can be a little bit hard to really go into it because a lot of it, the roots go back pretty far. But if you sort of understand how...</p><p>communist revolution works or something like that, it all comes down to being able to centralize power. There's a good book actually that a lot of people haven't read. Orwell wrote this book, Homage to Catalonia, about his experiences. He was an English guy who was a socialist anarchist and he went to Spain to fight in the</p><p>Theo Jaffee (20:13)</p><p>to what end.</p><p>Paul Buchheit (20:33)</p><p>Spanish Civil War. He wanted to go kill the fascists, so he went there to go fight against the fascists. When he first shows up in Barcelona, he kind of marveling at this wonderful classless society, kind of the anarchists have taken over, it's really awesome. And he goes off to the front to fight and ends up getting pretty badly wounded, he almost died. But when he makes it back to Barcelona, things have completely changed while he was away.</p><p>the communists have been consolidating power in Spain. And what's actually happened is the particular group faction that he was a member of, it was like this United Marxist Workers or something like that, had been reclassified as fascists. So essentially, the communists would always just reclassify whatever group threatened their power.</p><p>Theo Jaffee (21:11)</p><p>you</p><p>Paul Buchheit (21:28)</p><p>as being part of the fascists and his friends and comrades were being disappeared into secret prisons and he actually has to sneak out of the country, he and his wife, to escape a similar fate. And that actual experience informed his telling of 1984, which is this incredibly powerful and predictive...</p><p>story of how things go. And so understanding how the language is used to manipulate our ability to even have intelligent conversations about things, so that if you want to talk about the lab leak, that's a racist conspiracy theory, right? And so now you get branded as like a racist. And of course, Disney doesn't want their advertising to show up next to racist or whatever, right? And so it becomes very easy.</p><p>for people to justify censorship because no one wants racist content or whatever. And so you just start applying these labels in order to control what it is we're even allowed to discuss.</p><p>Theo Jaffee (22:38)</p><p>So to what end do you think this power is being centralized? Just power for power's sake? Or are the people doing this, like, do they have a more specific goal in mind?</p><p>Paul Buchheit (22:50)</p><p>I mean, a lot of it is power for power's sake, right? Power tends to accumulate. So I don't want to suggest that everyone who's participating is part of some sort of grand conspiracy. That's not how things work. Actually, I like to think of like, how does capitalism work, right? Someone who's cutting down trees doesn't know that they're making pencils.</p><p>or whatever, right? Everyone just kind of does their little part of the job and they don't necessarily understand kind of how it all fits together. And so for the most part, you know, everyone is just responding to incentives in their environment. And so a lot of times, you know, the incentives are quite naturally, you know, a politician likes to accumulate power. So for example, if you're a Senator or someone in the House of Representatives,</p><p>The way that you gain power is just by bringing in more and more money. So if you want to get like a committee appointment or something like that, it's essentially like a pay to play system. You have to bring in millions of dollars of donations. And how do you do that? Well, you kind of sell influence, right? And so they're a part of the system, but it isn't like they're consciously saying, hey, I want to destroy America. It's just what they do kind of as a byproduct.</p><p>Theo Jaffee (24:16)</p><p>So when did you start to think about things this way? Did you always kind of think this way or was it more of like a journey, kind of like how Marc Andreessen now thinks a lot like you on this, but back in 2008 you looked at his blog and he was like, you know, an enthusiastic Obama supporter.</p><p>Paul Buchheit (24:31)</p><p>Yeah, I mean, certainly.</p><p>Obama presented, I think, a pretty attractive image of unity. You kind of hope for better. In terms of the larger pictures of how power accumulates, I think it's just something I've always been aware of. My mother would talk about this stuff in the 1980s. I was kind of aware of a lot of these trends 40 years ago.</p><p>Theo Jaffee (25:07)</p><p>Like what?</p><p>Paul Buchheit (25:09)</p><p>I mean, this overall movement towards, essentially, communism. Communism is a confusing term because, again, the marketing is really great. Who doesn't want fairness and equality for everyone? But then when you understand how that's actually achieved, it's, again, through total centralization of power and elimination of individual liberty in a communist society.</p><p>Theo Jaffee (25:18)</p><p>Thank you.</p><p>Paul Buchheit (25:40)</p><p>And again, when I say communist, I mean like state communism, not some sort of theoretical communism, but the kind that actually always exists, right? When you start talking about, we're doing something for the common good, you can rationalize anything, right? And actually, again, if you read the Orwell homage to Catalonia, some of the characters in there are actually saying, well, yeah, even if we kill innocent people, it's still fine, it's for the revolution.</p><p>And it's very easy to rationalize any sort of horrible atrocity when you start talking about this is for the greater good. And a lot of it also ties into the degrowth things, the population reduction. It's kind of out in the open. If you look at it, there's a lot of people who advocate for reducing the global population to about half a billion people, which means, you know,</p><p>eliminating over 90 % of us. And so if you start to understand, okay, if you're trying to eliminate 90 % of the humans, how do you do that? And then you see that they're starting to try to lock down the food supply. In Europe, they're shutting down farms. You start to see how people are being pushed in a particular direction.</p><p>Theo Jaffee (27:06)</p><p>pushed by who just by ideals.</p><p>Paul Buchheit (27:10)</p><p>Again, these things kind of come down to just people responding to incentives.</p><p>If you understand like goals, right? So a lot of the environmental movement, you say, well, if you're in favor of reducing carbon emissions, why aren't you in favor of nuclear power? Right? Because nuclear power would actually solve the problem. People don't want to solve the problem. The problem is a justification to...</p><p>impose more controls. And some of it is just power seeking and some of it is part of, you know, whatever the larger vision is of having a society that's more centrally managed. And some of it I actually think is like a personality trait. Some people just like centralization and I'm kind of like a person who likes decentralization. And it's...</p><p>Theo Jaffee (28:00)</p><p>Thank you.</p><p>Paul Buchheit (28:18)</p><p>There is definitely, and this kind of goes back to AI, right? Where some of the people say, well, in order to make AI safe, we need to have it all done in like one centralized effort, like the Manhattan Project or something where it'll be controlled by experts and people who will act in the common good. And my belief, which I think is kind of backed by history, is that when you get a bunch of experts acting for the common good, you end up.</p><p>something more like the Soviet Union, because what really happens when you centralize power is the power ends up in the hands of kind of the most effective psychopaths, right? Because the way that you rise up in a centralized system is through political means. In a more decentralized system, in like a more market -based system, in order to be successful, you actually have to create value and actually have to deliver value, right? Like you can't make a really successful startup unless you actually...</p><p>make something people want. But in a political system, you can just capture power.</p><p>Theo Jaffee (29:23)</p><p>you</p><p>Well then there's the issue of the people who are actually doing the power capturing. Like your average environmentalist, climate change protester is probably not thinking like, oh, I need to give the government control over the world so we can reduce the population and exert power over everyone. They're thinking, you know, we need to save the planet. We need to save the environment. And they might even be thinking that they don't like control. You know, they'd say they don't like power. So is just, is, you know, the will to power kind of just like an emergent property of a lot.</p><p>Paul Buchheit (29:46)</p><p>Absolutely.</p><p>Theo Jaffee (29:53)</p><p>of people who are moving towards non -wanting power.</p><p>Paul Buchheit (29:59)</p><p>I mean, the example of activists is an interesting one because I actually think in general most people are good, whether they're communists or fascists or whatever. People get sold on these things with good intentions. I think about North Korea, there were people who fought and died in that North Korean army to...</p><p>to only to have themselves and generations of their descendants locked up in what is essentially a giant prison. But I'm sure the people who fought for that, they didn't understand what they were fighting for. I'm sure there were good people who thought they were fighting for freedom and equality and these good things. The problem isn't for the most part bad people, it's bad ideas, it's bad narratives. And so part of it is this idea that if we only...</p><p>Part of the environmental narrative is essentially that people left their own freedom will destroy the world. And a lot of it is because of a zero -sum belief system. This is like a Malthusian, right? This idea that the population will always outstrip the resources. And again, these are ideas that go back a very long time. And so they believed that the only way...</p><p>to create a society that isn't just in permanent famine and starvation is to limit people's freedom. Because if people have freedom, they'll just keep reproducing and will always have a shortage. And so, for example, China's one child policy was viewed as being very smart and progressive by these kinds of people because they...</p><p>They said, you know, if people are given the freedom to decide how many kids they'll have, they'll have too many children.</p><p>Theo Jaffee (32:00)</p><p>So.</p><p>At the end of your long pin post about the narrative on Twitter, you wrote,</p><p>interesting I want to get back to that. How would you go about like doing this in society in a society where people are so you know where people cling so hard to their narratives? How do you free them of it?</p><p>Paul Buchheit (32:44)</p><p>I think, again, just awareness. Being able to at least tell multiple narratives, and I think there's an awareness of how much narratives drive things. It's slowly entering into awareness. One of the ideas I would like to do, and I haven't really pursued enough, is I think it would be interesting to create something that's almost more like a news publication, but that...</p><p>reports from this meta -narrative context. And so if you look at whatever the issues of the day are, you know, let's say like just looking at Twitter earlier, you know, there's these stories about they deployed the National Guard into the New York City subway, right? And so there's a lot of storytelling around like what's going on there, right? But generally, you just get one side's story or the other side's story, but I think,</p><p>what I would like to see is both sides, both narratives kind of laid out side by side on essentially equal footing. And there might even be a third narrative. And so I think if you actually put things side by side, it creates a kind of awareness because you start seeing like, oh, all of these things are actually just stories. And that's not to say that the stories are equally good or equally beneficial or harmful, but they are all ultimately just stories.</p><p>There's an author I really like, Byron Katie. She writes a book called Loving What Is. It's completely unrelated to all of this, seemingly. But her practice is one of actually identifying the narratives in your own life, the thoughts that cause you trouble. And so people get very locked into this idea, oh, my mother didn't love me, or my children should...</p><p>pick up their socks. Actually, ironically, the author's ex -husband is named Paul, and so she's constantly, her examples are always like, Paul should do this, Paul should do that. And so she teaches a practice of how you can essentially identify these stories. And then she creates essentially counter narratives. So she asks you to do these turnarounds and find three ways in which...</p><p>these turnarounds are at least true or even more true than the story that you initially believed. And she says, essentially when you do that, you don't let go of the story, the story lets go of you.</p><p>And because, you know, these stories, when you believe them...</p><p>it seems so real because it takes over.</p><p>but when you see a bunch of stories side by side, it kind of loses that.</p><p>Theo Jaffee (35:49)</p><p>Interesting. And in the very last sentence, you said this is the beginning of alignment. So did you mean human alignment, AI alignment, or both?</p><p>Paul Buchheit (36:00)</p><p>Both. Right. You know, we are evolving as a species and this is, I think the biggest change since the advent of agriculture, at least. You know, in terms of how our species functions and is organized. And the reason I think agriculture is so important is before agriculture, humans were...</p><p>you know, just kind of these like little tribes of, you know, small groups of a hundred people, something like that. And agriculture is what enabled truly like the rise of the machines. Because that's when we started having, you know, large organizations, governments, cities, corporations, and these things are already sort of a kind of meta life form, right?</p><p>corporation has a life of its own, a large corporation or a large government, no one truly runs anything. Biden isn't actually in charge of the United States government. He's obviously very influential. CEO of Google isn't actually in charge of Google. These are large collective organisms.</p><p>Theo Jaffee (37:22)</p><p>Is Elon Musk in charge of Tesla?</p><p>Paul Buchheit (37:25)</p><p>Yeah, more so than most CEOs. But again, it's not like he has magical power, right? Part of what makes him very effective, I think, is that he's able to insert his thinking into the employees. I don't if you read the Walter Isaacson book. It's really good. Yeah, it's worth reading. But...</p><p>Theo Jaffee (37:45)</p><p>Oh yeah, it's on my bookshelf.</p><p>Paul Buchheit (37:51)</p><p>And actually the one with Steve Jobs is really good too. And I think part of what makes these characters so effective is that they show up. Like if there's a problem on the assembly line and it's holding up production, Elon is like there. He's there alongside the person and he's asking like, asking hard questions. Like why is this a problem? Like there's one section in the book where the production was being held up because they needed some kind of like,</p><p>plastic part to put over the battery before they shipped it or something like that. And he just kind of shows up and starts questioning all of the assumptions. And it kind of came down to they didn't need this part at all. And so things were getting held up because of the lack of a part that they didn't even need. So he has this wonderful algorithm of like questioning each part of the things, you know, like the best part is no part, the best design is no design. And by showing up at these critical points,</p><p>and actually essentially micromanaging it, I think that really gets into the culture and that gets into everyone's head. So I can imagine if you're at Tesla, the last thing you want is for Elon to show up. Right?</p><p>Theo Jaffee (39:06)</p><p>Yeah. So switching topics a little bit, this is actually a pretty good segue into startups. So you're still a managing partner at Y Combinator, right?</p><p>Paul Buchheit (39:16)</p><p>I'm, I think the word is something more like emeritus. I show up, but I don't do group anymore. So the way that we run Y Combinator is, you know, there's a couple hundred startups per batch, but then it gets split up into smaller groups, and then each group has a number of partners who's responsible for taking those startups through the program. And so I...</p><p>Theo Jaffee (39:45)</p><p>So if you could start all over again in 2024, right? If you were, you know, 20 something, just getting out of college and you wanted to build a startup, what kind of startup would you build? Always AI? Only AI?</p><p>Paul Buchheit (40:02)</p><p>Obviously AI is really important, but I mean I think it has to come from the person. It has to be what you're interested in. Part of what makes Y Combinator work is that we don't pick the ideas. We pick the founders and the founders bring the ideas. So there's often this misunderstanding that somehow we've decided like this is a thing that's hot this year and that's never the case. It really comes down to what do the founders believe in. And so...</p><p>I think it needs to come from your own experience and from your own insights in terms of what makes a good startup. So I don't put out here, here's the thing you should do. It's hard for me to know what I would be thinking if I were 20 years old.</p><p>Theo Jaffee (40:51)</p><p>And then for the future of Y Combinator, how do you see the future of Y Combinator playing out over the next few years? What kinds of startups should they fund, especially an AI that won't just get blown up by the next OpenAI release?</p><p>Paul Buchheit (41:06)</p><p>Again, you know that we are our strategy is essentially we fund really smart and effective founders and and we don't a Lot of times they pivot right and so a lot of times the idea that they come in with is not the one that's good And so the idea is we want people who are able to move fast and iterate and and you know make intelligent decisions</p><p>One of the biggest predictors of sort of success is basically just how quickly does a person iterate and so it's fine to come in with like a dumb idea or whatever. The thing that's not fine is just to get stuck on that. And so a lot of what we do is essentially just push people, you go talk to customers, whatever. And a lot of times the, you know, concerns about competition I think tend to be overblown. There's some things that are just like obvious if you put yourself...</p><p>Theo Jaffee (41:50)</p><p>you</p><p>you</p><p>Paul Buchheit (42:03)</p><p>Like one of the things people come in, you know, I get pitched on a lot of times are like email ideas and it's kind of like it's like Gmail but with one extra feature and you know, that's not really like a viable business. You have to be doing something that isn't just like trivially done by a larger competitor. But you know, it's kind of a law of large numbers. We fund a couple hundred startups per batch and it's...</p><p>it's, you know, we can take a lot of bets.</p><p>Theo Jaffee (42:38)</p><p>So what startups or founders right now do you currently think are the most promising and why? Like a lot of people talk about perplexity, for example.</p><p>Paul Buchheit (42:47)</p><p>Yeah, Perplexity is cool. I think they're doing a good job. And again, actually, that's a great example of a company that's iterating very quickly, right? They're continually improving the product. They're continually engaging with users and making it better. Will they be able to survive long term? Obviously, they're competing directly with Google. That's pretty hard. But they're going after something that's a real need.</p><p>And the advantage they have versus Google is that they don't have an existing business. So the problem that Google has is they actually have this really amazing business with putting ads on search results. But owing in part to the fact that they've had essentially a search monopoly over the last decade, the search results page has just gotten flooded with ads to the point where sometimes you search for something and all you get is ads for the first</p><p>the entire screen is just full of ads. And so perplexity has the advantage that they don't have that legacy to protect. One of the tricks that a startup can do is that they can essentially destroy an incumbent's business because an incumbent, this is the classic sort of innovator's dilemma, let's say the new business is only 10 % as good as the old business, that's terrible for the incumbent but still great for a startup.</p><p>Theo Jaffee (44:13)</p><p>Do you think perplexity in Google can coexist?</p><p>Paul Buchheit (44:18)</p><p>Yeah, certainly. I mean, you know, these things are always... When something's this big, things always play out differently than you expect. You know, if you go back in time 20 years, when we were at Google 20 years ago, we were just about to launch Gmail, actually, you know, April 1st, 2004. At the time, we were very worried about Microsoft. And so actually, like inside of Google, people were like really scared of Microsoft and, you know, there were this terrifying threat.</p><p>And here we are, you know, 20 years later the two companies obviously are competitors, but they coexist You know very successfully they're both multi -trillion dollar companies and But Microsoft has shifted a lot right like they 20 years ago Microsoft was entirely this company based on the windows and office Monopoly and that's not their business anymore. I mean they still sell that stuff, but that's not that's not what makes Microsoft</p><p>Windows has sort of been de -emphasized and now they're just this enterprise software company.</p><p>Theo Jaffee (45:25)</p><p>What other startups or founders do you think are very promising right now?</p><p>Paul Buchheit (45:31)</p><p>I don't have a catalog for you, unfortunately. Anything with AIs, interesting to look at. Because obviously, I think that's the biggest trend is how does all of this play out. We have a lot of companies that are looking at storytelling. I think it's kind of...</p><p>really intriguing possibilities in terms of just enabling people to do really great things like, you know, my daughter writes a lot of like fan fiction and things like that and, you know, isn't it going to be really cool when you can just automatically turn that into something that's, you know, a quality animation or something like that, animated television shows. There's a lot of things right now that requires a very large budget to produce.</p><p>And within probably a year, just an amateur will be able to make something just as good using these AI tools. And so I think we'll see an explosion of creativity and of content because it's a thing that enables people. And obviously that's like disruptive in a lot of different ways, but it's remarkably hard to predict these things. In hindsight, it's like super easy, but...</p><p>I can tell you at the time it's never as easy as it looks in hindsight. One of the examples that comes to mind was at Google, again about 20 years ago we had this product called Google Video and most people haven't heard of it but it actually launched before YouTube. And so when we were working on Google Video I actually remember we'd be like, what are people gonna upload? What even is there? It was a thing like YouTube basically.</p><p>you could upload videos and we would host them. At the time, the only thing we could think of is, well, probably they're just going to upload copyrighted content and porn. What else is there, really? What could people possibly upload that would be so interesting? There was a lot of skepticism of the Google video product. Also, part of the reason Google video failed is because they are overly cautious. When you would upload,</p><p>Theo Jaffee (47:36)</p><p>you</p><p>Paul Buchheit (47:55)</p><p>a video to Google video, you had to fill out this great big form showing who are the actors and directors, as though it were like a Hollywood production, and then it would have to go through a review process. And then the startup, YouTube, just comes along and just makes this thing where you just upload and that's it. And you don't have to jump through any of those hoops and it's live. And they went viral with some videos, which were, of course,</p><p>Theo Jaffee (48:14)</p><p>Thank you.</p><p>Paul Buchheit (48:23)</p><p>Copyrighted I think the first viral YouTube video was a Saturday at live sketch Lazy Sunday, which is like a really good. It was hilarious but now you know YouTube is this incredible repository of Educational content everything imaginable, but you know you can learn so much on YouTube it's you know if you want to learn how to be an electrician or my son is really into 3d printing he watches all this stuff about 3d printing and</p><p>material science and mathematics and it's YouTube is probably the greatest like educational resource that has ever existed in like the history of humanity and we didn't anticipate that</p><p>Theo Jaffee (49:03)</p><p>So you talk about AI leading to like an explosion in creativity. And one important realm of creativity is software. You know, Paul Graham likes to talk about hackers and painters, you know. Software is like art. So do you think AI is fundamentally like an enhancement for existing software developers, or is it more of a replacement? Like, do you think there will be more or fewer developers, if you had to guess, you know, these predictions are hard, but if you had to guess, do you think there will be more or fewer developers in like five years?</p><p>Thank you.</p><p>Paul Buchheit (49:34)</p><p>Um, both. So I mean, what it means to be a developer will change, right? Um, so as the tools get better, um, the way that you use them changes, what it, the accessibility certainly improves when you can just describe what it is you want. Like, what does it mean to be a developer when you're really just talking to an AI and telling it what you want and then kind of iterating on that. So what I imagine is, is essentially a dialogue system, right? Where it creates a product.</p><p>And it's, and you're like, well, no, not quite like that. You know, you can continue to iterate and refine what it is you're looking for. But certainly it's going to change a lot. But again, you know, five years is like forever at this point because things are moving so quickly. So it's, like I said, AI makes it extremely hard to look into the future at this point.</p><p>Theo Jaffee (50:30)</p><p>Mm -hmm. So switching topics a bit to talk about Gmail and email in general. So obviously you've been in this field for like a very, very long time. But in 2024, how important do you think email still is considering we have all kinds of other forms of communication like Slack for the workforce and Twitter DMs for, or Instagram DMs for personal communication?</p><p>And email is increasingly used by scammers and whatnot.</p><p>Paul Buchheit (51:02)</p><p>Yeah, I mean, it's always been popular with scammers. I think that the thing that email has is just the fact that it's universal. It doesn't belong to any one platform, and it's kind of the thing that binds everything together. So when you sign up for an account on Amazon, it doesn't DM you on Instagram, right? It sends you an email. And so it's kind of like the base layer. But yeah, certainly,</p><p>a lot of communication moves off of email, which is good. Email is not always the best way to communicate. I think especially for anything that's kind of complicated, I would avoid email if there's like, if you have a stressful issue to discuss, don't ever do it over email. Email is good for just factual things, right? It's good for, here's a document.</p><p>here's a receipt, whatever. But yeah, I think it's good that we're creating other channels of communication.</p><p>Theo Jaffee (52:10)</p><p>So when Gmail came out, you pioneered a whole bunch of new text interfaces that previous email systems didn't really have. I believe Gmail was the first to have very fast search, and it was the first to natively support email threads and whatnot.</p><p>And so now we're in like another big opportunity with text UI, which is communicating with LLMs. So what would you want from a text interface or any interface with a large language model?</p><p>Paul Buchheit (52:43)</p><p>I'm not sure I understand what you're asking. How would I change the interface of like chat GPT you're asking?</p><p>Theo Jaffee (52:50)</p><p>Yeah, chat .gbt or any other kind of AI. Like how would you want to change it so that it's better to interact with, just like Gmail is better to interact with other humans and other email platforms.</p><p>Paul Buchheit (53:01)</p><p>I think it's a very different situation. So part of the reason we were able to innovate so much on Gmail was that it had been, email had been kind of a stagnant product. It just hadn't changed in a long time. Like, you know, our competitors at the time in the webmail space were Popmail and Yahoo mail, which are just like these incredibly clunky, I mean, I guess you're young enough, you've never seen these kinds of things. They were...</p><p>the whole webpage load would reload every time you did anything and it was just covered in ads, you would get, the default quota you would get from Hotmail was two megabytes. And you know, for comparison, like a photo that you take on your iPhone is probably like five megabytes. Right? And so like it was just such, there were such,</p><p>poor products and they had just kind of been stagnant for a long time. So there was a tremendous amount of space for us to innovate because no one had done anything in a long time. If anything, it seemed like they were making the products worse. And so there is no analogous situation. It isn't like the chat GPT interface is 10 years old, right? Like this is the, the AI stuff is the most innovative space going on right now. And you have,</p><p>you know, a lot of really smart people at Google and OpenAI and Thropic, all these companies are competing in this space. So I don't, I wouldn't try to compete on interface. You know, there's a lot of people working on it. There's, there's, you know, perplexity, all these apps are trying to present different interfaces to it. The thing I think that is going to happen, and partially this is just like a technology improvement is essentially.</p><p>the point at which I can just like whip out my phone and it's like opening the camera app and then I can just start having a conversation with the AI about whatever it is the camera is pointed at. So I think that the next step is that you go beyond kind of that simple.</p><p>chat interface, essentially, to be more one of a conversation with an intelligent agent that's actually on your device. And I think that's really cool, because then I just open it up, and it has the camera and the microphone, and then I just have a conversation with it in real time.</p><p>Theo Jaffee (55:14)</p><p>like the Ragnar one.</p><p>So are you bullish on the Rabid R1? Have you seen it? Yeah. It's pretty similar to what you're talking about. There's also just, I mean, the way I solve this particular issue is I have an iPhone 15, so I just have an action button, and then I map it to chat GPT, and it'll just open it, and then I can talk to it from there. But it's still not the same as having something, you know, live always on at the OS later. I think Apple made both.</p><p>Paul Buchheit (55:26)</p><p>I haven't looked at it.</p><p>Mm -hmm.</p><p>Yeah, I think one thing I'm waiting for, it'll be interesting, is just what Apple, when will Apple finally show up? Because they're arguably in some ways in the best position because they own the platform that is in my pocket. And so in terms of being able to have something that's really tightly integrated, they're in a really good position. But it seems like they kind of missed the boat in terms of actually building good AI models.</p><p>Theo Jaffee (56:17)</p><p>Well, have they missed the vote or do they just have something really cool that still hasn't come out yet? A lot of people think they're gonna...</p><p>Paul Buchheit (56:22)</p><p>Could be, that could be it. But usually the thing is that, you know, it takes a while and it takes iteration to actually launch a product and have it be good. You know, we were kind of in the same place a year ago or more than a year ago when ChatGPT launched and everyone just assumed that Google had this thing that they could just launch the next day and that would be better. And obviously that isn't true. It actually, there's a big difference between having a product that kind of like,</p><p>you're working in a research capacity and actually having something that's out in the world and dealing with the more adversarial environment of having actually millions of users working with it. And of course these products, the AI products learn, right? I mean, they're learning from your interaction, right? ChatGPT is learning from you using it.</p><p>Theo Jaffee (57:21)</p><p>Yeah. So I think that's a pretty good place to wrap it up. So thank you so much, Paul Buchheit, for coming on the show.</p><p>Paul Buchheit (57:27)</p><p>Great. Sure. All right. Great chatting.</p><p>Theo Jaffee (57:31)</p><p>Yeah.</p>]]></content:encoded></item><item><title><![CDATA[#11: Bryan Caplan]]></title><description><![CDATA[You Will Not Stampede Me: Essays on Non-Conformism]]></description><link>https://www.theojaffee.com/p/11-bryan-caplan</link><guid isPermaLink="false">https://www.theojaffee.com/p/11-bryan-caplan</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Mon, 26 Feb 2024 02:56:10 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/142051277/0bd171e5cfc2f0bda9b8938dc6c01875.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Bryan Caplan is a professor of economics at George Mason University, research fellow at the Mercatus Center, adjunct scholar at the Cato Institute, writer at EconLib and Bet On It, and best-selling author of eight books, including <em>You Will Not Stampede Me: Essays on Non-Conformism</em>, the subject of this episode.</p><p>0:00 - Intro</p><p>2:04 - The Next Crusade</p><p>3:44 - Moderating X</p><p>6:11 - Inventing Slippery Slopes</p><p>8:04 - Right-Wing Antiwokes</p><p>10:20 - Nonconformism and Asperger&#8217;s</p><p>12:02 - Making society less conformist</p><p>16:44 - The rationality community</p><p>20:30 - Polyamory</p><p>23:28 - Caplan vs. Yudkowsky on methods of rationality</p><p>26:40 - Updating on AI risk</p><p>29:35 - Checking your nonconformity</p><p>31:10 - Making LinkedIn not suck</p><p>33:53 - The George Mason economics department</p><p>38:35 - Does tenure still matter?</p><p>40:03 - Improving education</p><p>46:50 - Should people living under totalitarianism conform?</p><p>49:30 - Natalism and birth rates in Israel</p><p>51:19 - Hedonic adaptation in the age of AI</p><p>53:52 - Should we abolish the FDA?</p><p>57:15 - Being a prolific writer</p><p>1:00:30 - Bryan&#8217;s writing advice</p><p>1:02:35 - Outro</p><h3>Links</h3><p>Bryan&#8217;s Twitter: <a href="https://x.com/bryan_caplan">https://x.com/bryan_caplan</a></p><p>Bryan&#8217;s Blog, Bet On It: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:820634,&quot;name&quot;:&quot;Bet On It&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2d45a1-c3a4-4fe1-bc20-e8e00e0c60b6_1280x1280.png&quot;,&quot;base_url&quot;:&quot;https://www.betonit.ai&quot;,&quot;hero_text&quot;:&quot;Caplan and Candor&quot;,&quot;author_name&quot;:&quot;Bryan Caplan&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.betonit.ai?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!iEMP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2d45a1-c3a4-4fe1-bc20-e8e00e0c60b6_1280x1280.png" width="56" height="56" style="background-color: rgb(255, 255, 255);"><span class="embedded-publication-name">Bet On It</span><div class="embedded-publication-hero-text">Caplan and Candor</div><div class="embedded-publication-author-name">By Bryan Caplan</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.betonit.ai/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><p><a href="https://www.betonit.ai/"><br></a><strong><a href="https://www.amazon.com/You-Will-Not-Stampede-Non-Conformism/dp/B0CQPJ6DCT/ref=tmm_pap_swatch_0?_encoding=UTF8&amp;dib_tag=se&amp;dib=eyJ2IjoiMSJ9.95bGnREBhOREb8kL8JeVpjT4w1MPCEfRYcUNZuMAEHVdVGJYv9Ns80cVoAGLtZJPOA6b5DTvnMfO6tF6ZAfRybM2TL7EtaHQlsvbOZVOZro.EzrhsuVDhljbtjiqE_YqD9_RP0YXHFyVHMYEu9GOviI&amp;qid=1708877990&amp;sr=8-1">Buy </a></strong><em><strong><a href="https://www.amazon.com/You-Will-Not-Stampede-Non-Conformism/dp/B0CQPJ6DCT/ref=tmm_pap_swatch_0?_encoding=UTF8&amp;dib_tag=se&amp;dib=eyJ2IjoiMSJ9.95bGnREBhOREb8kL8JeVpjT4w1MPCEfRYcUNZuMAEHVdVGJYv9Ns80cVoAGLtZJPOA6b5DTvnMfO6tF6ZAfRybM2TL7EtaHQlsvbOZVOZro.EzrhsuVDhljbtjiqE_YqD9_RP0YXHFyVHMYEu9GOviI&amp;qid=1708877990&amp;sr=8-1">You Will Not Stampede Me</a></strong></em><strong><a href="https://www.amazon.com/You-Will-Not-Stampede-Non-Conformism/dp/B0CQPJ6DCT/ref=tmm_pap_swatch_0?_encoding=UTF8&amp;dib_tag=se&amp;dib=eyJ2IjoiMSJ9.95bGnREBhOREb8kL8JeVpjT4w1MPCEfRYcUNZuMAEHVdVGJYv9Ns80cVoAGLtZJPOA6b5DTvnMfO6tF6ZAfRybM2TL7EtaHQlsvbOZVOZro.EzrhsuVDhljbtjiqE_YqD9_RP0YXHFyVHMYEu9GOviI&amp;qid=1708877990&amp;sr=8-1"> on Amazon</a></strong></p><p>Playlist:</p><div id="youtube2-sdJRQ6924HY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;sdJRQ6924HY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/sdJRQ6924HY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Spotify:</p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW&quot;,&quot;belowTheFold&quot;:true,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/1IJRtB8FP4Cnq8lWuuCdvW" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><p>Apple Podcasts:</p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:5118,&quot;numEpisodes&quot;:10,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-12-26T01:03:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>My Twitter: <a href="https://twitter.com/theojaffee">https://twitter.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:00)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Hi, welcome back to Episode 11 of the Theo Jaffee Podcast. We're here today with Bryan Caplan.</p><p>All right, so let's get into some questions. First question, in your essay Crusades and You, you talk about the eight crusades of hysteria and herding that you've lived through. Islamist Iran, the war on drugs, Free Kuwait, the war on terror, the Iraq War, the 2008 financial crisis, COVID and BLM. So do you have any ideas about what the next crisis might be, crusade, or do you just have no way of knowing?</p><p>Bryan (00:35)</p><p>Hmm. Hmm. Gee, that's a really tough one. Yeah, if you could figure out what the next crusade's going to be. I mean, a lot of this does hinge upon there being a shocking event. I think there wouldn't have been any George Floyd protests without George Floyd. It really does depend upon having the right shocking event at the right moment. In terms of what would be next, hmm.</p><p>Yeah, I mean, I really wish I knew. I mean, normally I will say I don't feel like I've been very good at foreseeing which things would happen next. Obviously, I didn't see the Israel-Palestine thing. I didn't see Ukraine coming. I was writing about it as a possibility, but that's very different from saying that's the one.</p><p>Theo Jaffee (01:21)</p><p>Do you think Ukraine would count as a full -blown crusade on the level of the others?</p><p>Bryan (01:25)</p><p>No, no, it's more of a minor crusade. I don't think we've had any true full -blown crusades since COVID. You don't have them all the time. That's at least one of the saving graces is that probably most years there isn't any one issue that everyone is supposed to be thinking about and getting worked up over. But maybe one year and three is in that category.</p><p>Theo Jaffee (01:48)</p><p>So in the identity of shame, you talk about the dangers of large, unselective groups.</p><p>So one such group is X, Twitter. So how do you think it should go about moderating itself to, you what's the right amount of selectiveness, if any, to avoid trampling free speech?</p><p>Bryan (01:55)</p><p>Well, it's an interesting point. It's not like, at least I've never met anyone who identifies with Twitter itself or X itself. Even Elon Musk is not going to say, I love everything happening on my platform. It's all fantastic. So that is very different from what I'm talking about in that essay, which is if you identify as Irish, then you sit around talking about how great everyone is Irish ever was and defending Ireland against any possible criticism. I don't think that...</p><p>Twitter, or any social media platform actually falls in that category. In terms of what they could do in order to improve their brand, I think a lot of what they have done since Elon is improved the brand from being a place where woke voices only are wanted to an actual vibrant center of argument, and one where they are not trying to stamp out any particular view. And they're just saying, you there's bad views, and it's not our job to go and get rid of them.</p><p>So in the end,</p><p>that's most of what the brand is. I think that is actually a brand worth defending. Same thing goes for Substack, by the way. So Substack recently has come under pressure to go and hunt down possible Nazis and get rid of them. And I wrote a piece saying, this really is a strong example of the slippery slope where once you get rid of them, who's next? It seems unlikely that that would be the only group that you would get rid of and then you would stop because the people that want to get rid of them aren't the kind of people to stop.</p><p>Theo Jaffee (03:35)</p><p>First they came for the communists and I did not speak out</p><p>Bryan (03:38)</p><p>Yeah, yeah. It's important to realize that often the slippery slope argument is wrong. You need to go and look at particular cases and see what's going on here. I do think wokeness is one where if you did not have a slippery slope argument before, wokeness would cause you to invent it just to see how eventually things that seem to be completely normal become forbidden thought crimes, which is just weird. All the way to, there's two genders, it's like 10 years ago.</p><p>So that's something that people will get in trouble for thinking. What is the alternative view even? And now people do get in trouble for it, strangely.</p><p>Theo Jaffee (04:16)</p><p>Can you go into a bit more detail about what you mean by inventing a slippery slope?</p><p>Bryan (04:21)</p><p>Right. So the slippery syllable argument, which we've all heard, says if you go and make one exception to a good rule, you won't just wind up making one exception. You'll wind up making other exceptions and further exceptions, and finally there is no rule. There's a great scene in the Brazilian movie City of God where they start off with this one character who says, well, look, I'm going to go and commit some crimes only against bad people.</p><p>and we're not going to actually kill any innocent people here in the Brazilian underworld. Then what happens is they come to a point where either you have to kill an innocent security guard or get shot yourself and shoot someone and say, eh, exception approves the rule. And then the voice of admiration says, and then the exception became the rule. And then you get a montage of all the horrible things they start doing. So that's the slippery slope argument in general. Not always true, obviously. There are exceptions that we make that don't spiral out of control into...</p><p>eviscerating the original rule. It requires some judgment, but also just some experience in seeing what kinds of exceptions eventually spread far and wide. What I would say is that in the case of Woken, it's one where the exceptions that started being made just expanded so rapidly and in directions that would just have been confusing to almost anyone if you had forecasted 20 years ago.</p><p>If you just imagine going back in time 20 years and saying the following things will be reasons for a person to be shunned, you would just like, what? The story, of course, is not, you don't just wake up and say we're gonna start shunning people for the following list of things almost everybody believes. Instead, you start with marginal cases and then you shun some more and more and more and finally you end up where we are.</p><p>Theo Jaffee (06:09)</p><p>Are you as worried or maybe not as worried, but how worried are you about right -wing anti -wokes compared to the woke left?</p><p>Bryan (06:17)</p><p>Yeah, I'll say about as low as you can be while still being positive. I think they just have so little cultural influence. And the cases that people have pointed to of them abusing power, I think when you actually study the facts, I don't think it is reasonable to think of them as abusing their power. So Florida is probably the main case that people talk about. This is one, look, you've got public schools, they got a curriculum, and what's gonna be in the curriculum? Should it be the...</p><p>what a pile of woke dogma or should it be regular stuff? And you're like, well, you can't do both. And choosing between those, yeah, I don't see why it should be woke dogma. On terms of any kind of censorship on college campuses, and if you actually know how college campuses work, this is just absurd to be a worry. It is such a remote possibility that anyone is going to have to worry about this in real life, right? You know, it's a big world, so you can find very isolated examples, but it's really rare.</p><p>And if you understand how universities work, and I do, because I've been in universities now for 27 years, the entire DNA of the system exists to go and promote wokeness and crush dissent. They have a bunch of rules that have hindered them from doing it, including, of course, tenure. Woo, tenure. Valuable for me, because I actually am a dissident, not necessary for the others. But in any case, one of the...</p><p>The simplest examples to me is, agreement studies departments, many people feel like it would be 10 to about to censorship to get rid of them. It's like, well, suppose we had departments of creation studies being funded by taxpayers. Would it be censorship to get rid of those? Like, no, I think it's a violation of the first amendment that you have taxpayer support for them in the first place. When you have an academic discipline that actually is just dogmatic propaganda where you cannot be a practitioner of the discipline while saying,</p><p>highly critical things about it, then yeah, I don't think that there is an issue of academic or intellectual freedom. It's the issue is the other one around of taxpayers being forced to support a secular religion.</p><p>Theo Jaffee (08:25)</p><p>So what do you think about the relationship between nonconformism and Asperger's? Because Peter Thiel has said, you know, individuals with Asperger's have an advantage in Silicon Valley. And Elon Musk has said that he has Asperger's, and of course, he's wildly successful. So what do you think about that?</p><p>Bryan (08:42)</p><p>Yeah, great question. I would say two things. First of all, that Asperger's people do not really need to think of nonconformism as a conscious philosophy because they're doing it already. So in a way, there's this old line, if I'm not here to help the saved, I'm here to save the sinners. So similarly, the reason you write a book about nonconformism is not primarily to go and tell Asperger's, people with Asperger's to stop conforming, they're already not doing it, but rather to go and get people, the vast majority that,</p><p>are paralyzed by fear of strangers judging them and point out that that is a silly fear to have in the modern world. See, the main reason why my book is useful for people who are on the spectrum is that I do emphasize being strategic about it and realizing a lot of times being nonconformist is fine or helpful, but there's other times when it is actually going to hurt you in real life and to recognize the difference between those cases.</p><p>Theo Jaffee (09:41)</p><p>We</p><p>think.</p><p>Bryan (09:42)</p><p>Right, and as to how you would do it, I'd say step one is try small deviations and see what happens to you. Right, so start small, see if people freak out at you. If they don't, you can probably go a bit further. If on the other hand, the smallest deviation gets you crushed, that's a different story. So if you say, well, I'll just do a small deviation, I will refuse to do a foreign language in high school. Yeah, you might not even be able to graduate high school if you do that, sorry.</p><p>Theo Jaffee (10:07)</p><p>What do you think can be done to make people on the whole less conformist? Not like individual people, but society. Like, to what extent is this even possible and not baked into human nature?</p><p>Bryan (10:18)</p><p>Well, it does vary quite a lot between countries. If you go to Japan, I think they're obviously a lot more conformist than we are. I think they themselves will agree that we are less conformist than they are. So since it is something that varies, it can't be that everybody is always at the same maxed out level. Obviously, even in Japan, there's people that do things like dye their hair, and the first Japanese person to dye their hair was definitely not conforming, in a country where pretty much 100 % of the people are born with black hair.</p><p>Let's see, so what can be done at the societal level? A lot of it does hinge upon individuals doing it. And if individuals do it, it becomes easier. So that would be where I would start. Probably, in terms of what arguments are the ones that are most helpful, the honest one is just saying, look, we have a lot of these emotions that come from our ancestral environment where we lived in bands of 20 to 40 people. And the modern world is so different from that.</p><p>Historically, there just wasn't any such thing as anonymity, and now anonymity is the main thing we have vis -a -vis almost every other person in the world. And then to say, well, we've got these emotions that don't really fit our modern environment, so you can either keep doing that stuff that doesn't really optimize for the situation we're in, or you can try to do something else. Obviously, it's really hard for people to go against very strong evolved emotions, but for that, you just say, like, just baby step it.</p><p>Just do a little bit. Just find some small thing where your emotions tell you to conform, but your reason tells you that you can totally get away with it. You actually want to do it. It will benefit you. And then just break from the mold to that small degree. And we'll start from there.</p><p>Theo Jaffee (12:05)</p><p>Do you</p><p>think the average person would even respond to like an evoPsych argument like that?</p><p>Bryan (12:12)</p><p>The average person know, I mean, of course, there's the general base rate of almost everyone's impossible to persuade of almost anything. So I just begin with that, all right? Then the next step is, all right, given that what can be done? It's like, well, there's a subset of people that are a bit more flexible anyway. So out of people that are a bit more flexible, I mean, I would think that, I would say that out of people that are open to arguments of any kind,</p><p>appeal to Darwinian thinking is in a way one of the easiest because it's so widely accepted in principle. So you are starting with a principle that is widely known and accepted among people that would even listen to an argument. You say, well, you're alienating creationists. All right, yeah, I didn't think I was going to do very well with them anyway. And then, let's see, what was I going to say there? Oh, yes, and then it's also one where it's very easy to...</p><p>get people to see that introspectively this is correct. When you just say, well, suppose that you could go and get $1 ,000 by wearing an embarrassing shirt in front of a bunch of people that you knew would never know who you were. Would you do that? It's like, I don't want to. Yeah, but why not? It's like, well, because it's gonna hurt my reputation. We stipulated in the thought experiment that it won't really hurt your reputation. So does that make you feel better about it? It's like, I still don't really.</p><p>It's like, all right, well, but it's like a thousand bucks. How much could it really matter? How about 10 ,000 bucks? Will you wear the stupid shirt in front of a bunch of people that will never have any idea who you were in order to get $10 ,000? There's got to be some point where you would do it. But along the way you also learn, oh gee, like I just care a lot about the opinions of people where really I don't have any good reason to care about. It's gotta be evolution here that is tricking me.</p><p>Theo Jaffee (14:01)</p><p>Was that the actual amount of money, by the way? Yeah. Yeah, OK.</p><p>Bryan (14:04)</p><p>No, no. I'm alluding to a famous experiment on the spotlight effect where they, as part of the experiment, they just made people wear a stupid shirt and then walk through a room and then ask, they asked first of all, the person, how many people noticed your stupid shirt? And second of all, they asked the people, did they notice the shirt? And there was a massive disparity where people just thought people were paying a lot more attention to their shirt than they really were.</p><p>which is another reason, by the way, not to worry about nonconforming is that to a large degree you're just invisible. People are so caught up in their own heads and they're thinking about themselves all the time, it's just hard to realize how little other people are thinking about you. Once you realize that, it's very liberating.</p><p>Theo Jaffee (14:50)</p><p>So you talk a lot about stuff like, you know, focus on the truth and don't let other people influence what you think unjustly and quantitative decision making and betting on your ideas and a lot of things that remind me a lot of Eliezer Yudkowsky's rationalist movement. So how similar would you say your methods of rationality are to the kind of standard Yudkowsky less wrong rationality methods? And secondly,</p><p>Bryan (15:05)</p><p>Mm -hmm. Mm -hmm.</p><p>Theo Jaffee (15:17)</p><p>What do you think about the rationality movement? Do you think they're true nonconformists or just kind of collectivists?</p><p>Bryan (15:25)</p><p>Yeah, so let me start with the second question first. I've got a very positive view of the self -styled rationality community. They've always done right by me. Sometimes it seems like they get a bit cultish to me and they get fixated on some strange ideas. But then again, if you go and compare them to almost any other group, then it's a lot less clear what's going on. In terms of my specific levels of agreement and disagreement.</p><p>So the most glaring one is I'm not really worried about artificial intelligence. I even have a bet with Eleazar on the end of the world on January 1st, 2030. And he's saying, well, it's not the end of the world, it's the end of humanity on the surface of the Earth. Oh, sorry, my mistake. But I misspoke. But that bet says that if there's any human beings left on the surface of the Earth on January 1st, 2030, then he owes me some money. You might wonder, well, how?</p><p>Do you do a bet on the end of the world? And the answer is the person who is the optimist, namely me, prepays. And that's what I did. So I'm still feeling fine there. I think that Eliezer in particular and a lot of other people have just allowed their youthful fondness for sci -fi to carry them away on flights of fancy and paranoia. Obviously, they disagree. I don't have any really good argument to change their minds at this very moment or under these time constraints, but that's where I stand.</p><p>In terms of other things, I think that they are pretty crazy about polyamory too as being something that is widely going to work out for people. I agree with Isla that there's probably five or 10 % of human beings that are psychologically equipped to be happy doing this, but that leaves a whole lot of others who aren't. And especially I think that for families with kids, it's probably a really bad idea unless you just don't care about.</p><p>getting to raise your own kids or have them grow or you don't mind having them grow up in a broken home. I think that is actually really bad. Not by the way because I think that it messes up your future. I just think it messes up your childhood. Just unpleasant for kids to have to be going back and forth between multiple homes and dealing with adults that are in conflict with each other.</p><p>I think that's another case where they are underestimating the power of evolution. I think jealousy has so strongly evolved, I think that most people just cannot get rid of it. And if you say, well, we'll all be rational about this, all right, well.</p><p>It's the kind of thing where people's emotional constitution generally doesn't actually adjust very well. It is very standard among practicing polyamorous to wind up saying, yeah, well, there was this period where we were totally lying because of jealousy or the jealousy tore us apart. So we think that is probably another big issue. But, you know, overall, I've had great relations with the rationality community. They're fun people.</p><p>They're not very pushy, except on the AI risk, even there, I've yet to meet someone that yelled at me for not worrying about it, which is different from almost every other community that's worried about some terrible disaster, if you call it into question.</p><p>Theo Jaffee (18:35)</p><p>Do you think</p><p>your arguments on polyamory apply just as much to kids living in more traditional societies where you have like, in most of these traditional societies of polyamory, it's like one man and multiple women and they all live in the same house and do you think that kids there are also not well off?</p><p>Bryan (18:55)</p><p>That's a good question. So that's usually called polygamy rather than polyamory. There is quite a bit of social science of polygamy saying, well, a few things. One of them is, in very primitive societies, that's not really how it works. In very primitive societies, it's more like you just have pair bonding for two or three years while the kid is a toddler, and then the relationship dissolves. But since, if you live in a band of 20 to 40 people, you still see both of your parents, so you've got that going. You don't need to...</p><p>Theo Jaffee (19:04)</p><p>Thanks.</p><p>Bryan (19:25)</p><p>have a shuttle system between the huts of people who live within sight of each other. By traditional societies, you mean more of the ancient empires or something like that. Ones where the very most successful guys have had hundreds or I think there's one guy out of over a thousand kids. It's definitely one where kids have very little contact with their dads. So there's that. Also, it's very noted in societies like that, there's just a lot of conflict between the mothers.</p><p>most grotesquely in things like the Turkish Sultanate, where there was a period when the Sultan's first job was to murder all of his brothers. Gruesome, or mostly half -brothers, murder all your half -brothers. I they even murder the full brothers just to be safe. Anyway, that's pretty gruesome.</p><p>Theo Jaffee (20:06)</p><p>Thank</p><p>Bryan (20:15)</p><p>Let's see. And then the other major issue that most social scientists have had with polygamy is that if it's widely practiced, then it means that you've got a lot of guys who don't get to marry anyone are left alone, and there's a lot of other side issues from that. You know, I think that in the modern world, I'm not at all concerned about polygamy becoming so widespread that we start seeing these negative consequences. I mean, we don't even see billionaires having harems in the modern world in the sense of...</p><p>They've got a bunch of women and they have kids with all of them and they all hang out. Elon is sort of the closest, but even he's not actually doing that. He's not really doing that. So I think that we are so culturally far from it. Let's see, like the Harvard anthropologist, Joseph Henrich, I think he's testified in some hearings that are somehow related to.</p><p>Theo Jaffee (20:50)</p><p>Yeah, I thought that too.</p><p>Bryan (21:09)</p><p>prevent legal non -recognition of polygamy. And I think that's pretty paranoid too. This is going to lead to some horrible negative effects because it's just a small fringe thing. You can imagine that it would spread, but I don't see it spreading. I think the main thing that has spread is just broken homes, but not from polyamory, just from monogamous people don't stay together.</p><p>Theo Jaffee (21:33)</p><p>So back to my first question on rationalism. How similar do you think your methods of rationality are to teleasers?</p><p>Bryan (21:41)</p><p>Hmm, let's see. I think there's a lot of similarities. I mean, I would say that I'm especially influenced by Phil Tetlock's super forecasting, where a lot of his advice is start with base rates and then do adjustments up and down. Scott Alexander has this line where, specifically about AI risk, where he says, well, this just leads to base rate ping pong, where I have my base rate, you have your base rate. My base rate is like number of times that the world has ended.</p><p>and his base rate is number of times that a superior intelligence has come into contact with an inferior intelligence. This is one where...</p><p>In principle, you could sit around saying, oh, we can't figure out what the base rate is, but I don't think it's actually that hard in practice unless one side is determined to go and get a certain kind of answer. So I do put a lot of reliance on base rates. A lot of my arguments with Tyler Cowan come down to, they'll say, oh, here's something that could happen. And it looks like that's never happened before, base rates say no. And he's like, well, but you're not engaged in the argument. And I'll say, yeah, well, you're not engaged in the base rate.</p><p>So I think the base rate is a lot more important. People tend to get really sucked in by the details, which leads them astray. Whatever you're going to tell me, I'm going to treat it as a...</p><p>modest adjustment of the base rate rather than something that's rocking my world and saying, oh my god, I can't believe it. I will pile on and just say that I haven't seen that Tyler has any great predictive abilities. Super smart guy, very knowledgeable, but in terms of saying anything falsifiable that's gonna happen before it happens, I think he's probably below average for a thinker, maybe above average for a human, but that's not his forte.</p><p>In terms of other methods, of course, base is rule. This is a very big part of the way that I approach the world, as it is for anybody that's part of Tetlock as well. Just things like you can see some evidence in favor of a view and rationally become less confident because you're expecting to see even stronger evidence in favor of the view.</p><p>something that people have trouble with. But you see a headline and it says, you know, 100 people murdered by an immigrant terrorist. And then you say, well, but if we go and average over all the headlines of the past three years, it's only 200. And I think that a person that had a reasonable view would have thought it'd be 500. So actually, oh, this is in fact a reason to become more optimistic. Emotionally, of course, this drives people crazy, but the logic is completely sound.</p><p>You've got to specify, well, what did you think was going to happen? What would have been consistent with your view? The style of the normal person who just opens up the newspaper and says, see, everything I said has been proven. That is something that Bayes will stop you from doing because you'll say, well, wait a second. What would have to be on the newspaper headline for me to say that my view was disproven? What would it even look like?</p><p>You're always going to be able to go and find something that is an example of your complaint and then claim vindication, but that's ridiculous.</p><p>Theo Jaffee (24:45)</p><p>Yeah, so going back to what you said about base rates, where you said your base rate is that the world will not end and Tyler, sorry, Scott Alexander's base rate was how many times the superior intelligence has come into contact with an inferior intelligence. Back in July, 2022 for the audience, Brian and I had lunch and one of the things we talked about then was AI risk. And he mentioned his bet with Yudkowski about how...</p><p>Kaplan thought that it was not going to end the world. And since then, of course, chat GPT has come out and you've made another bet that looks like you're going to lose it about AI capabilities. So on that, I know you haven't made a huge update, but have you updated on AI risk at all?</p><p>Bryan (25:22)</p><p>Mm -hmm.</p><p>Yeah, of course, very slightly. So before I was skeptical that there'd be an AI that would be able to get A's on my economics exams, I did a bet on that. First of all, I went and gave GBT 3 my economics exams and got a D after hearing a lot of people saying, it's so incredible, it will blow your mind. And I even had a friend say, oh yeah, it'll be able to get A's in your test and got a D. And I'm like, all right, well, they're overselling again.</p><p>just like the last hundred times they've ever sold. So I did do a bet on that. And then when GBT 3 .5 came out, it was able to get A's. So I will say, all right, that's considerably more impressive than I was expecting. The progress was a lot faster anyway, but there's still a world of difference between you can get A's on my econ exams and you're gonna destroy the world through one way or another.</p><p>mean, there I've just also had a lot of more particular arguments, like there's gonna be a kill switch, a lot of kill switches. It's not that human beings are just going to hand over the reins and let the AI do what it wants for itself. Then in terms of just, I mean, if the base rate for anything designed by human beings, often there have been things designed by human beings that have ended up being terrible for human beings, but only because some human beings consciously unleash them on other human beings.</p><p>Which is where I would say, I think is actually where almost all the AI risks should reasonably be put. It's not that the AI will achieve autonomy and then will go and do bad stuff to us. Rather, it's that there's gonna be some humans that will say, help me come up with the best possible plan to go and kill as many other humans as I can. So that seems a lot more likely, which is what we've seen with almost all of the great technological achievements in the last 200 years. I think you'd have to be a fool to see electricity and then not wonder.</p><p>Could this be used for bad purposes? Yeah, of course electricity can be used for bad purposes. Of course mass production can be used for mass murder. Nuclear weapons can go and exterminate vast populations. But in all these cases, it is not that the technology takes over. It's that human beings do bad things with their tools.</p><p>Theo Jaffee (27:40)</p><p>And while we're still talking about rationality, what do you think are the best ways to check yourself to make sure that you are being a non -conformist and not just a contrarian or a collectivist?</p><p>Bryan (27:53)</p><p>I think a</p><p>big part is coming up with concrete tests of what's going to happen if you do something that is not conforming and seeing what happens. So, I mean, obviously just applying simple rationality processes and saying just because most people think it doesn't mean it's false. So it's putting just little weight on the fact that something is a popular view rather than putting negative weight. I think the contrarian is someone's putting negative weight on a view's popularity. If other people think it's true, I'm going to think that it's false.</p><p>The rational thing is to say, well, I'm not gonna put a lot of weight on it just because we know there's so many areas where human beings have embraced silly views. There's just a lot of popular views that are wrong. In a way, that itself kind of begs the question, right? Because like, well, how do we know that there's so many popular views that are wrong? And that's not something, that was something where I would just go case by case and say, well, here's a list of a bunch of things that are widely thought but turn out to be incorrect. So, and these are not just small.</p><p>cherry -picked or lemon -picked examples. These are pretty big examples of things that people are really wrong about and have been really wrong about in the past. And it's not that hard in hindsight to see that they're wrong.</p><p>Theo Jaffee (29:03)</p><p>By the way, this reminds me a bit of a Charlie Munger quote where he said something like, being a good investor requires the temperament that doesn't derive too much pleasure from either following the crowd or going against it.</p><p>Bryan (29:13)</p><p>Yeah, we are very reasonable.</p><p>Theo Jaffee (29:16)</p><p>So we talked earlier about social media with X, but another social media is LinkedIn. And I go on LinkedIn periodically, and I find that it sucks because it's conformist. And it seems like everyone on there is just trying to please other people. So like, do you think that there's a way to fix LinkedIn, to fix professional social media in general? Or is it just kind of a property of like professionalism that it ends up conformism?</p><p>Bryan (29:27)</p><p>Ah!</p><p>Yeah, yeah, I think it is heavily a property of professionalism. Important thing to remember is that most original and creative ideas are terrible. And especially on something that is a practical task that a lot of smart people have been working on for a really long time. If it's, like what's the best way to go and say, fly a plane? It's like, well, you're not the first person to think about this, you know? There's a lot of really smart people. There's a lot of money on the line, probably.</p><p>there's already immense selection pressure to do a good job on this. When someone says they got it all figured out, they're probably incorrect. So that is one thing to keep in mind. Let's see, in terms of fixing, and then, you know, so then like, if you know my book, The Case Against Education, I say a lot of what people are signaling education, sure it's intelligence, sure it's work ethic, but a lot of it's just conformity, just saying, like, I know there's no I in team and I'm going to be part of the team, be a loyal member.</p><p>will not rock the boat. Probably like some of my best nonconformist advice actually is focus on being friends with your boss instead of being liked by coworkers. This is one where it's like, oh, that's what kind of a suck up are you? It's like, well, a person who appreciates the boss probably got there by their hard work and greater understanding of the field and that they actually have a really tough job of dealing with a lot of recalcitrant people.</p><p>Every manager has to herd cats, and I'm going to be one of the easy cats to herd because I think I have something to offer this person, and if I do a good job, I think this person is likely to have my back. Another way of thinking about it is if you're a nonconformist, who is going to be easier to win over? A bunch of coworkers or one boss? It's going to be a lot easier to win over one boss. If it's just one person, this is someone where...</p><p>You clearly indicate my loyalties on your side and my goal here is to be a highly useful member of this team. If you just talk to almost any boss, they'll say, wow, like, I just need a lot more people like that. It's just hard running this because people complain so much and are so hard to please and just don't appreciate that I'm in a tough spot. So just showing some empathy for a person who has to make hard decisions is something that is actually nonconformist in a very deep way.</p><p>Theo Jaffee (31:59)</p><p>So the econ department of George Mason is full of nonconformists. You, Tyler Cowen, Robin Hansen, Alex Tabarrok. And famously, you're not just popular within academia, but outside it. Probably mainly outside it. So...</p><p>Bryan (32:03)</p><p>Oh yeah.</p><p>Yeah, you got that right.</p><p>Theo Jaffee (32:16)</p><p>How do you think this can be replicated at other schools? Like, I go to the University of Florida, I can't think of any UF professors who are famous in the way that you and Callan and the rest are.</p><p>Bryan (32:28)</p><p>Hmm, yeah, great question. I mean, a lot of it depends upon getting some people who have paid their dues and gotten the regular signals, who then are willing to take advantage of this crazy tenure system to do something cool. Unfortunately, there's just not that many people like that. Once you get one person to do that, then often you'll find that there are other people that were sympathizers and wanted to, but they were just too scared. So you need to get...</p><p>a focal individual who's willing to stick their neck out, which on the one hand is not as hard as it sounds because with a tenure system, they know they've got this massive job security. The real difficulty is that despite the incentives being in favor of nonconformism on that level, the system weeds out the nonconformists before they get there usually. So that is a big part of it. If you really wanted to go and foster it, the idea of...</p><p>having schools creating independent centers of nonconformist thinkers. That's of course how Grievance Studies got off the ground is you go and find someone who says, like, my work isn't appreciated because I'm the only one who understands how fantastic Albanian culture is. Give me my Albanian Studies department and then I can really do it. Unfortunately, that's a case where you're getting nonconformists who are definitely defying society, but at the same time, like,</p><p>they're really just wanting to create their own cult. It isn't like they want to have some very thoughtful exploration of all the possibilities or anything like that. In terms of where I would start, I would generally start with economics first because economics does have this long tradition of just being willing to entertain socially unacceptable hypotheticals and consider possibilities that other people just say that's an evil thought, crime, don't think it. Secondly, honestly, philosophy departments, they are famous for hypotheticals and while...</p><p>their discipline has gotten worse over time, still there is a sense of that we can consider an idea without agreeing with it. Whereas if you go over into your agreement studies departments, that is a really alien idea to them. Like, what do you mean we're gonna consider the possibility that actually there is not a lot of discrimination against African Americans? That's crazy, we all know there is. Yeah, but what if there isn't? Well, there is, so we're not gonna talk about it. And you, by wanting to talk about it, are an evil person.</p><p>It's like, oh, ah, my mistake. So if you did want to go and foster this kind of thing, you'd basically need to find some people, find a few people that already foot the bill, give them some money, and then let them have independent hiring authority so they can replicate themselves. Not perfect, but I think it's the best formula for success.</p><p>Theo Jaffee (35:15)</p><p>Well, you talk about it like a formula for success and like a plan if you wanted to do this, but it seems like GMU didn't, you know, plan to have an econ department like this. So how much harder would it be to do it spontaneously?</p><p>Bryan (35:32)</p><p>I think actually it was planned. So I've been around for at least half the life of GMU having any kind of a public profile. So basically there were donors that wanted this kind of thing to happen.</p><p>and they gave money so that it could. I think the first big donation was to bring the Center for Study Public Choice here, so bring future Nobel Prize winner James Buchanan and his team here in 1983, if I'm not mistaken. Then there were further donations. There was another big donation to bring Vernon Smith's team, another future Nobel Prize winner. And by this point, by the late 90s, we were consciously talking about...</p><p>we want to become the Hoover Institution of the East. So this was actually a conscious plan. Now, it doesn't mean that, and in fact, there really was one single individual man who was at the epicenter of all of this, which is Tyler Cowan. He was the one that is great at bringing together donors and the existing faculty and new talent and making it all happen. So he deserves a ton of credit for that.</p><p>Theo Jaffee (36:20)</p><p>Hmm, I don't know.</p><p>So do you think that this kind of existing infrastructure of academia and tenure and donors matters as much nowadays? Like you talk a lot about tenure and how great it is because you can research and write about what you want. But today we have people like Noah Smith and people like Scott Alexander who make a lot of money just writing on SubSec.</p><p>Bryan (37:00)</p><p>Yep, so I'll say that it's great for me personally. I think it's a terrible system actually. Tenure is a disaster. It has a few benefits that are swamped by overwhelming costs. So in no way think that I'm pro tenure. I think tenure is terrible. What, yes, but what I will say is that for people who want to do contrarian stuff but are risk averse or just don't have a ton of star power charisma, it remains one of the best bets.</p><p>Theo Jaffee (37:13)</p><p>Oh,</p><p>Bryan (37:28)</p><p>So Scott Alexander especially, he was able to go and get where he is through having this incredible personal charisma and ability to just create a new community almost out of nothing. But most people are just nothing like that and would not be able to do more than eke out a meager existence on Substack or other kinds of social media.</p><p>It's great that they exist, and what they're doing is wonderful, and yeah, there's no doubt that what Scott's doing is way better than 99 % of professors. However, I don't think there's room in the market to have a thousand Scott Alexanders, which there was.</p><p>Theo Jaffee (38:08)</p><p>So on education, in a portrait of my school, you talk about your ideas for how you'd run a school, but not like a lot of specifics for curriculum. You mentioned reading, writing, and math. So a couple of questions. One, do you think computer science and programming should be elevated to the same level as math? And two, how would you scale this approach beyond five to 15 students?</p><p>Bryan (38:34)</p><p>I'd say it's reasonable to think about putting CS at the level of math, but in the end I wouldn't, because I would say, look, math is one of the things that you need for CS, but there's a lot of other things that you can do with math, whereas CS is something where if you don't want to be a programmer, then the actual career value is not that large. I mean, would say that if there's someone that is really good at math, has a good background there, and then when they're 18, they decide they want to become programmers, they can do it.</p><p>On the other hand, if there's someone who does not do much math and then they're 18 and they say, I want to go and get up to speed on doing enough math to do CS or engineering or physics or whatever. So yeah, like at 18, like unless you're a complete genius, it's pretty much too late. It's just too cumulative. You've missed this critical window. It's just gonna be too hard to ever catch up. But you're like, that's, it's very reasonable. And definitely if we could go and say that you can do CS instead of a foreign language, that would be one of the best curriculum.</p><p>revisions that we could make because I think a ton of people would rather do CS than foreign language and they would get a lot more value out of it. I mean, it's very standard for people to spend two or three or four years of high school on foreign languages. Almost none of them learn the language to any remotely usable level. Even if they did, there's not that much use of it. CS on the other hand, this would be giving useful job skills to a generation of students. So that would be a big improvement.</p><p>But I'm not quite sold enough on it to think that everyone should be doing it standardly.</p><p>Theo Jaffee (40:03)</p><p>Yeah, I mean, I've always kind of thought of it as, you know, at least on the same level as chemistry or physics, which every student learns in high school. Yeah, yeah. And when I was in elementary school, I remember hearing about like, oh yeah, we're all going to be learning about computers and computer science soon. Obama was talking about this 15 years ago and then it just never happened. So I went through elementary, middle and high school. I took two CS classes only because they were APs, but they're just not in the standard curriculum at all.</p><p>Bryan (40:09)</p><p>Oh yeah, yeah, of course. Better than chemistry or physics.</p><p>Oh yeah. What's going on is that curricula are very backwards looking. In fact, if you want to understand the curriculum, it is best to remember that it all evolved out of a system that was designed to teach three things. So it was designed to teach law, medicine, and theology. This is what Anglo -American universities did for hundreds of years. They just teach law, medicine, and theology. And if you're thinking, wait, medicine? It wasn't until like,</p><p>1900 or so, the doctors started saving more people than they killed. Yeah, that's true. But still, they were teaching this crap for hundreds of years. Law, on the other hand, almost by definition, lawyers have to be effective because they are the ones that are judging their own success, in a way. And then theology, again, my view is it's a fake subject. So it's not fake in the same way as early medicine, where you're actually killing people with it. But still.</p><p>So anyway, if you just realize this is what our system grows out of, and then everything else pretty much just gets tacked on, seems to add on other things afterwards. But really the idea that we are here to go and train people in these three professions, the fingerprints of that are still on the system that we have. And so then we have a lot of requirements that make very little sense in terms of the modern world, but make a lot of sense, you know, so basically they make very little sense forward looking in terms of what will be beneficial to the student.</p><p>make a lot of sense backwards looking in terms of we've always done it that</p><p>Theo Jaffee (42:03)</p><p>And then.</p><p>Bryan (42:03)</p><p>little complicated because modern sciences weren't taught until the late 19th century math was. But the idea that you would put modern science in the curriculum, that I think starts with the German, top German universities, then spreads to Johns Hopkins, and then moves over to the rest of US academia after that.</p><p>Theo Jaffee (42:23)</p><p>So you also mentioned that this approach to education is like only for people who are already interested in it and have the aptitude for it, and it would only be 5 out 15 students. So how would you scale this approach beyond that? Can you? Or would you have to do something totally different?</p><p>Bryan (42:38)</p><p>Well, not totally different. I mean, I would say that when you have kids that just lack any intrinsic motivation, this is where you really need to do some soul searching and say, why do I want to make them do something when they have no intrinsic motivation? The good answer to that is for extra extrinsic motivation, because they're a child, they don't understand what the labor market is like, you don't want them to grow up and be unable to take care of themselves. So for things where you have very strong evidence that it will be a severe handicap to them, just let them do whatever they want.</p><p>That's when I think it is a good idea to go and push it on them whether they like it or not. On the other hand, there's a lot of things that we do in school right now and push kids where you say, well, why do they need to know this? Well, we don't know much better than we've always done it that way. So I think I've got an essay just called Unschooling plus Math where I say that there is this homeschooling philosophy called unschooling, which almost everyone thinks won't work at all, it'll be a total disaster. There are defenders who say, no, it's not a total disaster. I think they're right. But.</p><p>There is one notable deficit that I have seen unschoolers have, and the little data that we have is consistent with this, which is that unschoolers are deficient in math because very few people get intrinsic enjoyment out of math, and yet it is so vital for so many high status occupations. So I say, look, if you're willing to just go and do unschooling with the tweak that every day you do have to do an hour or two of math, then I think that does solve most of the problems with unschooling.</p><p>Theo Jaffee (44:02)</p><p>Well,</p><p>I wonder, like, for some people, maybe I'm just being anecdotally, but for me, like, going through elementary and middle school, I hated math. I could not stand algebra and geometry and algebra too. But when I got higher up into calculus, I started to really like it.</p><p>Bryan (44:17)</p><p>Mm -hmm. Yeah, that's like one person in a hundred. So it's a great kind of person to be to like, I didn't like the boring easy math, I only liked the hard stuff. Yeah, but normal people just don't like it and it's not because it's too easy, it's because it's too clear that they're wrong. It's so depressing. So you put in like, math is the opposite of labor theory of value. You can put in a hundred hours into a math problem and if it's wrong, it's wrong. It doesn't matter that you tried hard.</p><p>That's a lot of what's so bitter about it. And there's also just no room to go and say, well, there's some sense in which I'm right. Which you'll see in almost all the humanities, and there's a math, like no, there's no sense in which you are right. You are just wrong.</p><p>Theo Jaffee (44:54)</p><p>Yeah. So to what extent do you think people living under totalitarian governments should be non -conformist? Like...</p><p>Bryan (45:04)</p><p>Hmm.</p><p>Yeah, great question. It's likely to get you killed, so that would be a reason not to do it, definitely. It's one where you need to be a lot more careful, because by definition, totalitarian regimes will go and harshly punish you for very minor deviations. Even there, I would say that you can't really survive in most totalitarian regimes, maybe any of them, without having...</p><p>enough nonconformism to say, wait a second, I'm gonna go and die if I don't break the rule. And so I gotta figure out a way to somehow weasel my way out of this, whether it's being sent to the Eastern Front to go and fight during World War II, or to get an illegal job, or break rules against corruption in order to get enough food to feed your family. So you could not be a full conformist and survive in totalitarian regimes.</p><p>Unless you happen to just be born into a ruling family or something like that where you're taken care of and you're never given a dangerous job and you've got plenty of food and all that other good stuff. But otherwise, you know, so I'm thinking here about, so in North Korea, after the Soviets withdrew their subsidies in the early 90s, they had a massive economic collapse. Their whole economy was based upon getting a bunch of subsidies from the Soviets, which they no longer had.</p><p>And then there's the question of, well, what do we do about all these people who are working in a fully 100 % government -owned economy? And the answer was, well, let's see. We're running short, so we're going to fire them. And then what are they supposed to do? The answer was no answer. What happens if you're in a fully state -monopolized economy and you lose your job and they don't give you a new job? Either you starve to death or you work illegally. There's a great book called...</p><p>see, nothing to envy, where they just went over the plight of North Koreans who lost their jobs during this period. And yeah, it's like, well, I can either get caught for being a black marketeer and get sent to the slave labor camp or executed, or I can starve to death. I guess I better take my chances with the slave labor camps, and maybe I can make enough money to bribe my way out. So you do need that. But obviously, totalitarian regimes are very harsh on people that stand out.</p><p>So, you know, in a way they exemplify the otherwise irrational fear that most people have that if you do anything different, society will crush you.</p><p>Theo Jaffee (47:36)</p><p>So,</p><p>in your essay, Natalism as Nonconformism, you wrote that one of the most important things you can do, both in general and as a nonconformist, is to have kids. Israel has a total fertility rate of 2 .94, which is not only much higher than any other developed country, but it's actually higher than it was in 1989. You mentioned religiousness and secularism a little bit, but...</p><p>Bryan (47:57)</p><p>Yes.</p><p>Theo Jaffee (48:03)</p><p>Can you go into a little bit more detail about how did they manage to do this and what can other countries learn from it?</p><p>Bryan (48:05)</p><p>Right, so I'm not an expert in Israel. I think a lot of it actually is just exponential selection where the high fertility groups in the country were, namely the ultra -orthodox, have just become a much larger percentage of the country. So that way, as long as you can sustain the fertility rates of every subgroup and the high fertility subgroups are much higher than the others, so they're rising a share of the population over time, then it's...</p><p>almost follows as a matter of pure arithmetic that you will get your fertility rate will go up. So I think that's a lot of what Israel did. People also talk about things like just having a very pro -natal attitude. So that probably something too. Even there you have to wonder, well, isn't that really just a reflection of the fact that they've got so many kids and there's so many large families? I mean, just to be clear, as I say in that essay, I'm not claiming that natalism in general is not a conformist.</p><p>Because if you're in a highly natalist subculture, then the conformist thing is to be a natalist too. Rather what I'm saying is that if you're in a typical first world country where we have very strong anti -big family norms, that's where you need to be a non -conformist in order to have a lot of kids.</p><p>Theo Jaffee (49:24)</p><p>So in a conservative confession you talk about hedonic adaptation. So are you at all worried about a future where we'll figure out how to do something like wireheading directly affecting our brains reward system and then like essentially running out of hedonic adaptation?</p><p>Bryan (49:31)</p><p>Mm -hmm.</p><p>Or sort of the other way around, right? Wouldn't it just be that we will be, we'll just make ourselves identically adapted to whatever we've got? Isn't that really the worry?</p><p>Theo Jaffee (49:54)</p><p>No, the worry is essentially that we'll put ourselves on like a heroin drip and live like a, you know, kind of terrible people.</p><p>Bryan (50:02)</p><p>Oh, okay. So you're talking about right now you achieve something, you feel good for a bit, but then you're motivated to go achieve another thing because the thrill wears off. Yeah, okay, I get it now. I guess I would say I am a little bit worried about that. I mean, it's the kind of thing where evolution will save us in the end if that happens because the people that go on the heroin drip will have no children and they will be wiped out and the people that remain will be those that had an aversion to it.</p><p>Theo Jaffee (50:10)</p><p>Yes.</p><p>Bryan (50:29)</p><p>It might be that we have to first have our population fall by 90 or 99 % in the extreme scenario before we reverse, and then we just are replaced by people who are so horrified by the idea of a heroin drip that people just won't do it. So I think that is the long run answer, but yeah, obviously having a period of a few hundred years where we go into a massive decline is somewhat worrisome. I mean, I will confess that I usually just don't worry that much about...</p><p>things that are over 100 years out because I just figure it's such the world and the future is so different. There's really much that we can do about it. And in fact, when someone starts talking about I'm gonna get ready for the world in 100 or 200 years, my thinking is, I think it probably more likely than not you're just gonna make things worse and you're gonna go and likely to try to crush progress and hold it off is more likely than that you're going to.</p><p>Theo Jaffee (51:02)</p><p>in the log run rail dead.</p><p>Bryan (51:23)</p><p>create a futile, a fertile, rather a fertile groundwork for further progress that it seems pretty remote. And if you just think about someone in 1800 saying, what can we do in order to get lots of progress here? It's like, what would they even have thought back then? I suppose there were a few enlightenment figures who would have said, well, we just need to have a lot of freedom in order to go and explore new ideas. And we need to make sure that industry is not overly burdened with regulations so they can implement the ideas. So there'd be some people like that, but.</p><p>think that anyone who had anything much more specific than that would have just been messing things up, probably.</p><p>Theo Jaffee (51:57)</p><p>in your essay, Bioethics Tuskegee vs COVID.</p><p>You</p><p>Bryan (52:00)</p><p>Ah.</p><p>Theo Jaffee (52:01)</p><p>talk about the problems with bioethics. This was probably my favorite essay in the compilation, by the way. So some people, I've been hearing this a lot recently, say we should entirely abolish the FDA. Because even though that will almost certainly lead to problems on balance, it will be a tremendously good thing because no amount of regulation is worth blocking med tech from getting to market, like transformative med tech that can cure cancer or something. So what do you think about this idea?</p><p>Bryan (52:11)</p><p>Mm -hmm.</p><p>Yeah, I'm all in favor. I've been in FDA abolitionists for a long time. Really, I was in my senior year of high school and I had never heard anything other than arguments in favor of the FDA. So I really was actually brainwashed in my history classes about how there's this whole horrible period before the FDA where pharmaceutical companies were killing people left and right. And then finally, wise government came and established it and now we're protected. And the only danger is that we might not be protected enough. So this was actually explicitly taught. It was in the curriculum.</p><p>Then in 12th grade, I read some economist saying, well, you realize if there's a drug that saves 10 ,000 lives a year and the FDA delays it for seven years, that you killed 70 ,000 people. And when I read that, I'm like, hmm, I don't see any way around that argument. That is about as good as any argument it could ever possibly get. And then the question is, how many lives are being lost by approving drugs too soon? And looking there, it's like, hmm, yeah, it was hard to come up with very much. Thalidomide, which is the...</p><p>drug that was used to, let's see, what's the right way of putting it? It's the drug that most people point to as showing that we need the FDA. The main story there is that thalidomide, the reason why the dangers were caught was that thalidomide was approved in the UK and then people discovered that it caused a whole lot of birth defects. Whereas in the US, it was not the FDA that caught it, it was another country that had lighter regulation that allowed us to do it. Otherwise, it probably would have been approved.</p><p>Funny footnote, it was finally approved as a treatment for, I believe, leprosy. Just don't give it to pregnant women, because then it will still cause horrible birth defects. Let's see, but anyway, this case against the FDA seems very strong to me. The idea, and just when you just see the asymmetric response to, like, someone was killed by an approved drug, well, you have to change everything. Whereas people who, people lose their lives because they'd wait for a drug, well, that's, like, that's not even a thing. And you...</p><p>Theo Jaffee (54:15)</p><p>Thank</p><p>Bryan (54:23)</p><p>Like during COVID, I was gratified at least and kind of amazed at how quickly the drugs were approved because my friends say, oh, everything's gonna be great. And I'm like, look, even if we got the drugs that totally work, which is itself is a good outcome, above average outcome, how do you know they're not just gonna be held up for years? And this was a case where suddenly people woke up to, yeah, I guess we delay it, then it's gonna kill a lot of people.</p><p>You know, combined then with a lot of unfair demonization of normal, cautious people saying, how do you know it won't have bad side effects in five years? And the honest answer to those people is, yeah, we don't. We have to wait five years. But we're losing a lot of people now, and we're just gonna gamble that it's actually going to be a net positive, probably a good positive based on historical experience, but yeah, we can't prove that you're wrong. That'd be the honest thing, but obviously it's politics for honesties in very short supply.</p><p>Theo Jaffee (55:20)</p><p>So I think we have time for one last question. So you've written a lot. I think something like over 2 ,000 essays on Econlib, a bunch more on Substack, several books. And so what advice would you have for writers other than just read a lot and write a lot?</p><p>Bryan (55:22)</p><p>Sure, sounds good.</p><p>My honest answer is I don't even feel like I am that hard at working. I don't feel like I'm that hard of a worker. I feel like I am daydreaming a lot and goofing off a lot. The main thing I can say is that every day I get something done. I get something done every day. And that just adds up. Do 20 years of chipping away. This is the plot of The Count of Monte Cristo, right? Every day the guy chips away a little bit from his prison cell on this island of France. And after seven years, he escapes.</p><p>Similarly, if every day you just get a little bit done, it adds up a amount. I am honestly puzzled by all the people who are tenure professors who have so little output. It's like, what do they even do all day? Like, if I can get this much done while goofing off this much, what are they doing? So, like, I... In the end, I'm kind of puzzled. Like, part of me thinks, are there just, like, lots of, like, people who are horrible alcoholics and drug addicts or something? And...</p><p>They just do the bare minimum and then otherwise they're putting all of their energy into their vices. Is that why they don't get much done? Are they just going and putting a ton of research, or not a ton of research, a ton of hours into their teaching, even though their teaching doesn't appear very good either? They just sort of spin their wheels a lot. I am mystified about what other people are doing that leaves their productivity be so low. In terms of how you can get motivated,</p><p>For me, a lot of motivation comes from my iconoclasm. I really don't like hearing people say things that I think are false or dubious in a giant self -righteous tone. It motivates me to go and argue the other way. And especially if you are an iconoclast like me, there really is a lot of low -hanging fruit of ideas that are true yet barely discussed because most people are too afraid to write about them. So, I like that piece on...</p><p>you have bioethics in Tuskegee, right? It's pretty obvious when you read it, but I think most people would be like, look, we can't possibly go and talk about Tuskegee as if it wasn't the worst thing that was ever done. And it's like, well, look, obviously it wasn't the worst thing that was ever done because there's just much worse things that have been done. I'm gonna say it's way worse to kill a million people than to give a horrible disease to 200 people, right? But.</p><p>To say that, it's like, oh God, we can't possibly say that this is sacred. It's like, who says it's sacred? I mean, I can understand why you might not want to talk about it at work, assuming your coworkers even know what you're talking about. But for someone like me with tenure, why not go and stick my neck out and just say what I think is correct? There's always this fear eventually you're going to be crushed. I do have actually a bet with someone who says that eventually in my own...</p><p>mind, I will declare myself to have been treated very unfairly by my university. So far, so good.</p><p>Theo Jaffee (58:36)</p><p>And for individual essays, just like you talked about amassing lots of creative output. But for each individual essay, do you have any specific advice on that?</p><p>Bryan (58:46)</p><p>Yeah, well, here's a lot of my advice. Anytime you get an idea, instantly write it down, because you're going to forget. I have a queue of hundreds of ideas. Normally, I just put down a title. Sometimes the title isn't clear enough, so I just write a sentence or two to remind myself what the idea was. And that means that I never have any issue with I can't think of anything to write about. I have the opposite problem. I have way more ideas than I feel like I would ever have time to write about. But I just try to keep refreshing the queue and just adding more in, so that way I've got a good...</p><p>a good set of choices. A lot of where I get my ideas is just for iconoclasm, where I just see something and I say, huh, well, that sounds wrong. So yeah, but people would be upset if you said it. Huh, well, in that case, it probably hasn't been said by anybody yet. I can't remember anyone saying that before. All right, then I'll do it and then I'll be the person that says it. It is in a way scary to me how often I can quickly become the number one Google hit for anything I care about, because it just shows that what I care about is stuff that most other people don't even want to talk about.</p><p>It's not bragging, it's just what I care about. Often there is, if there's anyone that's interested in it, it's only the audience that I make because otherwise it just didn't exist as a topic.</p><p>Theo Jaffee (59:52)</p><p>All right, well, thank you so much, Brian Kaplan, for coming on the show.</p><p>Bryan (59:58)</p><p>I'm very happy and just let me let you know that you can get this new book, You Will Not Stampede Me, Essays on Nonconformism, for just 12 bucks as a paperback on Amazon or $9 .99 for the ebook. I've also got four other books of my collected essays that are already out there available for the same price. I got three more coming. And then I've got all my other books, including my New York Times bestseller, Open Borders. And on May 1st, I've got my second graphic novel coming out, Build Baby Build, The Science and Ethics of Housing Regulation.</p><p>So I'm really excited about that. The book looks fantastic. It took longer than I thought it would, but I stand by the product. It's great.</p><p>Theo Jaffee (01:00:35)</p><p>Alright, looking forward to it.</p><p>Bryan (01:00:37)</p><p>Okay, thanks a lot. Great talking to you again.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This transcript was generated automatically with <a href="http://riverside.fm">Riverside</a> and probably contains lots of errors.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#10: Liron Shapira]]></title><description><![CDATA[AI doom, FOOM, rationalism, and crypto]]></description><link>https://www.theojaffee.com/p/10-liron-shapira</link><guid isPermaLink="false">https://www.theojaffee.com/p/10-liron-shapira</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Tue, 26 Dec 2023 01:09:26 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/140083690/496467be69a5a7fb1ca7b55aaf69102c.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Liron Shapira is an entrepreneur, angel investor, and CEO of counseling startup <a href="https://relationshiphero.com/">Relationship Hero</a>. He&#8217;s also a rationalist, advisor for the <a href="https://intelligence.org/">Machine Intelligence Research Institute</a> and <a href="https://www.rationality.org/">Center for Applied Rationality</a>, and a consistently candid AI doom pointer-outer.</p><ul><li><p>Liron&#8217;s Twitter: <a href="https://twitter.com/liron">https://twitter.com/liron</a></p></li><li><p>Liron&#8217;s Substack: <a href="https://lironshapira.substack.com">https://lironshapira.substack.com</a></p></li><li><p>Liron&#8217;s old blog, Bloated MVP: <a href="https://www.bloatedmvp.com">https://www.bloatedmvp.com</a></p></li></ul><h3>TJP Links</h3><ul><li><p>YouTube: </p><div id="youtube2-YfEcAtHExFM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;YfEcAtHExFM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/YfEcAtHExFM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div></li><li><p>Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;#10: Liron Shapira - AI doom, FOOM, rationalism, and crypto&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/0YWuKWhw2cRNFdLSucP0xf&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/0YWuKWhw2cRNFdLSucP0xf" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe></li><li><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677">https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677</a></p></li><li><p>RSS: <a href="https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss">https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss</a></p></li><li><p>Playlist of all episodes: <a href="https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj">https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj</a></p></li><li><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p></li></ul><h1>Transcript</h1><h3>Introduction (0:00)</h3><p><strong>Theo: </strong>Welcome back to episode 10 of the Theo Jaffee Podcast. Today I had the pleasure of interviewing Liron Shapira. By day, Liron is an entrepreneur, angel investor, and the CEO of counseling startup Relationship Hero. By night, Liron is deeply involved in the rationalist movement and is one of Twitter&#8217;s most prominent advocates for AI safety. As usual, we go in depth on various aspects of the AI doom debate: where he agrees and disagrees with Eliezer Yudkowsky, the various AI and non-AI risks that humanity faces, the differences between human and ASI intelligences, and his critique of Quintin Pope and Nora Belrose&#8217;s AI Optimism movement. We also talk about how a high probability of doom impacts his personal life, his background in the rationality community, and his <em>skeptical</em> views on the crypto industry. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Liron Shapira.</p><h3>Non-AI x-risks (0:53)</h3><p><strong>Theo: </strong>Hi, welcome back to episode 10, tenth episode of the Theo Jaffee podcast. Today, we're here with Liron Shapira.</p><p><strong>Liron: </strong>Theo Jaffee, I'm a big fan, I've been listening to the catalog.</p><p><strong>Theo: </strong>Glad to hear it. So let's get into some of our first questions. We know that you're very interested in and worried about existential AI risk. But how worried are you about non-existential AI risks, especially because more and more powerful AIs are drawing near. We saw a demo just a day or two ago of text to video that looked decent for the first time. So non-existential risk, like jobs, what if we end up in a future with aligned super intelligence, but humans lose agency or meaning, just anything in that category. </p><p><strong>Liron: </strong>So yeah, when I think about the non AI existential risk, I'm not super worried, but a couple things come to mind. Nuclear risk and bio risk would be the top two, I think below AI existential risk. I think nuclear risk is profoundly underrated. It's been described as something like 1% per year. Maybe if you look at the rest of the century as a whole, I might put it at like 15% chance of doom, maybe 20, right? Because maybe the risks are correlated. So it's not like independent events of 1% per year. But I think nuclear risk is underrated. And I know that people love to say, oh, my God, people are overblowing nuclear risk. It gave us nuclear energy, focus on the nuclear energy, nuclear energy is safe. And they're right that nuclear energy is safe. But that doesn't justify how risky nuclear explosions are. We still have these arsenals, okay? Let's not forget. And like, yeah, it's great that nuclear power plants are good power plants. But nuclear risk is still sitting there, these 50 megaton devices are still sitting there, right? And there's all these incidents where they almost went off. So I just think it's underrated. And maybe I would be a big nuclear doomer. But it's just hard for me to focus on that kind of thing when I think that the AI doom probability is 10 to 100 times greater. So I'm like, okay, great. Put that aside. That's not my cause. But that might be my runner up cause.</p><h3>AI non-x-risks (3:00)</h3><p><strong>Theo: </strong>Yeah, I meant more like, not existential risks that are not AI, but AI risks that are not existential.</p><p><strong>Liron: </strong>I gotcha. Okay, that's an important distinction. I tend not to be concerned about the AI risks that aren't existential, unless they're near existential, right? So if we're talking about, oh, humanity is all like slaves to the AI, but we're still kept alive with morphine. I guess I'm pretty worried about that. Well, I just think that's not plausible. But I would consider that pretty bad. But then if you go down to social media is gonna be more addictive, then I become less concerned. </p><p><strong>Theo: </strong>Do you think s-risks are plausible?</p><p><strong>Liron: </strong>I do think that s-risks are plausible, right? So it's the idea, suffering risks for the listeners, it's the idea that we're creating these moral agents, moral persons, right? So like within the AI, maybe it's just trying to simulate what a human would say. But that simulation is a person or has moral value. And it's hard to prove that there's not a moral person inside of these AIs. I mean, presumably there's not yet because they're not quite powerful enough. But as they grow more powerful, it's very plausible to me that they can have a consciousness, right, within the inscrutable matrices, and they can have somebody that has rights or that you don't want to harm. So that's very plausible. And we're just confused about consciousness, we're confused about morality beyond humans and animals. So I think s-risks are very plausible. And then, you know, turning the tables, that's like us causing harm to the AI, but then the AI could also cause harm to us or to copies of us. So I definitely think we could enter a hell, where we're all getting tortured for trillions of years. Like, I think that's a plausible outcome. It's just not quite my mainline outcome, right? My mainline outcome is we just kind of all get swept away. And we just get like paperclips or something that happens to not be conscious and not be interesting. That's kind of my default.</p><p><strong>Theo: </strong>By plausible, like, how likely do you think that is? </p><p><strong>Liron: </strong>Hmm, like, how likely do I think an s-risk universe is? I don't know, probably less than 10% like ballpark, I'd say more than 1%. That's like a very rough ballpark, right? So I don't, I definitely don't want to write it off. It's just that if we're even talking about that, it's kind of like we've already gone pretty far where I'm trying to push the discussion right now, right? It's like, that's the discussion I want to have, I would love to be like, hey, are we all going to just die unceremoniously and have the universe burn itself out with no consciousness? Or is there also going to be tortured consciousness, right? If that was the dichotomy, I'd be like, great, let's have that discussion.</p><h3>p(doom) (5:21)</h3><p><strong>Theo: </strong>Well, speaking of probabilities, the notion of p(doom) has been dumped upon a lot recently, including the clip you posted of my podcast where I asked Zvi about it. That's right. You got a good dunking there for sure. Yeah. And so people say like, it's not rigorous. And it's basically, even someone as prominent as David Deutsch said, basically, like, oh, yeah, the steps to getting a p(doom) are like, pick a number between zero and one, not too far or not too close to either of those bounds, and then you're done. So first of all, like, what is your p(doom)? If you have one? And second of all, like, how rigorous do you think your methods of getting it are? </p><p><strong>Liron: </strong>So my p(doom) is 50% by 2040, which is like Zvi said, like Jan Leike said, a ballpark figure. So you can also call it 10 to 90. And this is when the dunks come out, right, the knives out.People often question, "How is 50 the same as 10 to 90?" Just to give a basic explanation, if you need a single probability for the purpose of decision making, you can go with 50% by 2040. That's your single probability. Why give a range? One way to explain a range is that it's the variance of a Monte Carlo simulation of different mental models about likely possibilities that I might have. </p><p>For example, there's a possibility where the world gets its act together and coordinates to stop AI. That's one mental model. And there's a totally different mental model, where we just accelerate as hard as we can. And then the AI fooms. There are so many different mental models that are all feeding into this one probability. It's crazy to compress it down to one dimension. And yet you have no choice. Because when you make decisions, when you do expected utility, you have to plug in a probability number. There's only one future. So all you can do is weight things that could have influenced the possible future. </p><p>That's why I say 10 to 90. That's why Jan Leike says 10 to 90. And then, people have so many objections. They're like, "Where did you get the number from?" For that, I'd say, think about the ballpark. Think about the order of magnitude. If I say, "Hey, 50.0 or 53.25," then it's like, "Whoa, okay, I'm making up a number." But if I come at it from the other way, and I'm like, "Hey, I bet the probability is a lot higher than 0.01%," suddenly, I'm saying something pretty obvious. Because you can imagine so many scenarios that are plausible, like maybe foom is real. Don't you think there's at least a 0.01% chance that foom is real? </p><p>So if I slide all the way back to 0.01%, at some point, you start subjectively telling me, "You're obviously underestimating this." So, 50%, suddenly, I'm like an idiot pulling numbers out of my rear, 0.01%, okay, I'm obviously underestimating. So if you just become more continuous with how you react to what I'm saying, there's going to be some happy medium where I'm saying something when you're like, "Okay, this seems vague, this seems rough, yet, you can't do better. And you have to give a number." </p><p><strong>Theo: </strong>One exercise in p(doom) is we've had atomic bombs for like 80 years now. And you could say, the probability of nuclear doom in any given year was what, 1% to 5%, something like that. And yet we are still here. And it seems quite unlikely, not totally unlikely, but quite unlikely that we'll be vaporized by nukes within the next few years. So could it be possible that your intuitions for p(doom) might be higher than it would actually be in real life, especially over long time periods with robust systems like civilization? </p><p><strong>Liron: </strong>I mean, so you're using the example of we've had nukes for 80 years, and let's say that there was a 1% chance that they could annihilate, more than 10%, or even 50% of humanity. So every year, we're rolling the dice, and we only have a 99% chance to survive, 1% chance to die. So it looks like 99% to the power of 80 is 44%. So surviving a century is only like a coin flip, right? So I'm pretty content to be like, "Okay, we got lucky on a coin flip." So, I don't think that my model of 1% of your nuclear risk is invalidated.</p><p>And especially when you look at where the model comes from, like, you almost have these things go off, right? You have Cuban Missile Crisis, you have Petrov, you have safety checks on like a test flight over Spain, three out of four of the safety things failing, like there's there's near misses.</p><p><strong>Theo: </strong>When you talk about 10-90% p(doom), you mentioned like, "Oh, once you get into too low numbers, you're obviously underestimating it." So, do you think of 99.5%, which is Eliezer's number of p(doom) as like, "Well, you're obviously overestimating it," just like you would with a 0.5%? </p><p><strong>Liron: </strong>With Eliezer, I think that he would probably agree with my perspective, which is that 99.5% is kind of the on-model probability. So, if you understand what Eliezer does about the relevant theory, optimization processes, computational processes, he's an expert at a lot of the relevant theories. And he's like, &#8220;Based on my understanding, what AI labs are trying to build is something like a perpetual motion machine. And so my model just doesn't say that this can proceed with a significant probability of success.&#8221; It's kind of like, hey, a bunch of people are building a rocket, you know, the first the first rocket that anybody's ever built is going to try to orbit the earth, there's just a very low probability of success on model. But I think Eliezer would agree with my own claim, which is like, okay, but you never know unknown unknowns, like, there's probably like a 1% chance that it'll be revealed to be true what a few people are accusing Eliezer of that he's completely clueless. And his rationality makes no sense, and his probability makes no sense, right. And like, that could be revealed that we're all just like clueless people, right. And some people are urging us to see that reality already, right. And just for that, you have to give a one or 2% chance just to that, right. So there's the off model probabilities that I think Eliezer would admit are like worth mixing in a little bit.</p><p><strong>Theo: </strong>You said 10 to 90% 50% by 2040. What about like 2100? Is it significantly higher or like the same or lower?</p><p><strong>Liron: </strong>I think it's highly correlated. So I think if a foom is going to happen, it'll slightly more likely probably happen before 2040. I think if you go to let's say 2060, then I'd probably push it up to, like, I don't know, 60%. It's hard to push it beyond 60%. Because when I quote the figure, I give myself a lot more just like unknown unknowns. Like I'm clueless. I'm not as confident in what I'm saying in general as Eliezer is, which I think he has a right to be. He's a master of a lot more relevant theory than I am. So I don't think it goes that much beyond 50%. Because I start getting into the "I don't know what I'm talking about" range of things. But you can definitely push it to 60, maybe 70, if you go all the way to 2060. When you go past 2060, at that point, it's like, "Well, what's going on? Why hasn't it foomed yet?" So at that point, it starts undermining my assumptions. So it doesn't necessarily get higher, because it also gets lower. I don't really know what happens to it.</p><h3>Liron vs. Eliezer (12:18)</h3><p><strong>Theo: </strong>So you respect Eliezer a lot, and you think that he knows much more about this stuff than you do. But your opinion is different. So why is that? Is it just because you're less confident in his assumptions? And if so, which assumptions are you less confident on? </p><p><strong>Liron: </strong>I think that Eliezer's model makes a lot of sense. It's just more like, whenever I question him about little things I don't understand, like "wait, so RLHF breaks down when exactly?" and I've had a few of these conversations with him. He always has really good answers. But I can also tell that I have an undergraduate level understanding, and he has a more sophisticated understanding. I expect that I'm more likely to update toward Eliezer than away from Eliezer. But I guess I'm not comfortable making the full update yet, even though there are some principles or rationalists where you're supposed to update all the way. I have some uncertainty. </p><p>The thing is, I don't think that we disagree that much. I think most people who are in the "it looks like we're gonna die" camp, which I am too, don't have that fundamental of a distinction between people going like, "hey, there's 95%" and people going like, "hey, there's 50 plus". I think we're kind of in the same ballpark, which is why when people come and tell me like, "hey, my probability is 10%", like Vitalik just said, I'm like, "okay, great". I don't want to nitpick 10 versus 50. I just want you to see 10, and I'm happy to just let you stay at 10. I don't think you have to come to 50. And you don't have to, because I do think that a lot of what I believe about reading LessWrong is just intuitions that are salient to me, but I understand that they may not always be right, and other people can weigh up their intuitions differently. I don't think that they're making a big methodological mistake. I think it's okay for them to stick with their probabilities until they observe more evidence.</p><p><strong>Theo: </strong>Do you have any concrete disagreements with Eliezer?</p><p><strong>Liron: </strong>That's a good question. I don't know if I do. We always have stylistic differences, but when it comes to the matter of AI doom and rationality, I think there are nitpicks. There's an article he wrote a long time ago, where he thinks sometimes you shouldn't use probabilities in certain circumstances. That was kind of controversial. And somebody's like, "no, just use probabilities". And I don't know where I come down on that. Eliezer famously says that he thinks that a lot of animals just aren't conscious. He seems pretty confident that dogs definitely have no consciousness. And I'm like, "I don't know, they seem like they're kind of conscious intuitively". So on the edges, on the fringes, I do think that I start not following him all the way.</p><p>But on the AI doom core argument, I do pretty much buy it all. I think it makes a lot of sense, And I'm definitely somebody who I'm like a good target audience for his writing, because I do think that it's really good. I think it's still underrated. And I noticed a little bit of myself in it, where sometimes I understand something well. So like, I kind of know what it feels like to understand certain technical topics well. And then I read Eliezer. And I'm like, wow, well, he understands it even better. And I thought I understood it well. But he's pointing out some stuff that is actually deeper than my own understanding of a topic that I thought I understood well. So I feel like I have like a good viewpoint to understand like the degree to which this guy knows what he's talking about in a lot of these different articles that he's published.</p><h3>Why might doom not happen? (15:42)</h3><p><strong>Theo: </strong>If you did eventually come to the conclusion that AI risk is less likely than you thought, why do you think that would be? Or do you just not know?</p><p><strong>Liron: </strong>That's a good question. It's kind of similar to the question of just like, "you know, just imagine doing a post mortem or like a post living, right of like, hey, it's the year 2060. We're all alive. So how do you condition on that? What mental model do you get?" One easy answer is just like, AI progress turned out to be a really long marathon to get to superintelligence. So even though it kind of feels like we're speeding to superintelligence, and Elon Musk is like, "yeah, we're gonna have AGI in three years", and even OpenAI is like, "yeah, we might have a corporation this decade that's better than a human corporation that's run by AI". So even though it feels like we're speeding to AGI, and Kurzweil a long time ago predicted, I think, like 2029. Maybe it's not. Maybe it's 2100, maybe it's 3000. So that would be an easy answer to why we're not doomed yet, because it's just like everything goes slow. Maybe it goes slow enough so that we can do alignment research, right? If somebody just convinced me, look how slow it's going, right? And I know Sam Altman said something about, we're bottlenecked on data center scale. My reaction was, you really don't know that. We definitely could suddenly find ourselves with a bigger hardware overhang than we realize, and one data center could be plenty. But if Sam Altman was spot on, and we're bottlenecked on data center scale, and we have to scale it up like 1000 times, ideally, a million times, that would be a straightforward way to convince me that we're not doomed for a couple decades.</p><h3>Elon Musk and AGI (17:12)</h3><p><strong>Theo: </strong>Well, Elon said three years, but we all know about his record of forecasting stuff.</p><p><strong>Liron: </strong>It's not great. I don't think it's terrible. But it's definitely not perfect. Rob Bensinger posted Elon's record where I think in 2014, he said that we'll have it by 2019. So, you can't just automatically assume that Elon's exact forecast is right. I agree with that.</p><p><strong>Theo: </strong>Well, he tends to be right about stuff in the long term, it just takes longer than he says it will, like self-driving cars, how he's predicted full self-driving next year, every year for the last 10 years.</p><p><strong>Liron: </strong>No, he has. And it's kind of funny, right? A lot of times people catch him, they catch him exaggerating, or they catch him being way off. And it's like, okay, I'm starting to think this guy is not trustworthy. But then at the same time, he launches Starship and lands the rockets. And I'm like, man, there's a good enough distribution of miracles mixed with, okay, this is kind of BS. But this is a legit miracle that overall, I'm pretty bullish on. But then of course, there was the time when he started OpenAI and shortened the timeline by a few years, which Eliezer said, I think he has a good point, kind of overshadows anything else Elon Musk has ever done to stoke the AI arms race. In the end, and by the end, I mean potentially in a few years, that is the single biggest impact that he's done, arguably.</p><p><strong>Theo: </strong>What about xAI? Do you think that's made it worse?</p><p><strong>Liron: </strong>So far, it just seems like they're not moving the convex hole of what's possible, right? So, until they get there, I'm sure they're trying their fastest to get there. If they start releasing something that's like GPT-5 equivalent before GPT-5, then I'll be like, damn it, xAI. Why does Elon have to keep making things worse? But for now, I guess the question remains of whether Elon's 20% project is going to be competitive with Sam Altman and Dario's number one project? It's probably not going to make things that much worse. It's hard to say, right? We got to watch it.</p><p><strong>Theo: </strong>Would Elon just drop a GPT-5 model in the world? He seems to be far more concerned about x-risk than maybe any other major AI lab leader.</p><p><strong>Liron: </strong>So Elon gets massive points for, as early as the 2015 conference, coming in there being like, hey, I'm just a rich billionaire with a ton of credibility outside this field. And I think AI risk is indeed very dangerous, right? Like Bostrom has a point and he gets massive rationality points for saying that. Unfortunately, a lot of the things he said about AI recently are kind of ridiculous, right? Like when he talks about, I'm going to make a TruthGPT, I'm going to make a GPT that's not woke. I mean, I guess those are valid considerations in terms of the next couple years, mundane utility, fine. But when he says stuff like, I think AI is going to be nice to humans, because humans are interesting. It's like, okay, Elon, come on, man, you have Geoff Hinton, you're talking to these luminaries. And they should be disabusing you of these kinds of notions, right? The idea that humans are anywhere near the optimum for interestingness. And so that's going to be some kind of equilibrium. It's like, why are you publicly posting this stuff? It's like, the fate of the world is largely in your hands, Elon. And that is not a plausible theory.</p><h3>Alignment vs. Governance (20:24)</h3><p><strong>Theo: </strong>So there's alignment research. And then there's governance research. And it seems like the default political plan for rationalist, decel, doomers, whatever you want to call it, slightly pejorative, but you know, people who are concerned about x-risk, is slow down AI and give the authority to build AI either to nobody, or to a trusted group of people. So do you worry that this increases centralization risk a lot?</p><p><strong>Liron: </strong>Yeah, for sure. My position is that the actual constructive doomer plan is fraught with peril, right? It's a tough plan. The ideal would be something like a trusted Manhattan project, which seems unthinkable in today's environment. But if we really could get together the scientists, right and have some level of trust, and common purpose, the way we had in the Manhattan project, that may be the single best setup that gives us a chance as long as all of those scientists are top tier, are Nobel Prize winning physicists, or their students or whatever, and people who just appreciate what we're up against, and are taking it seriously the same way they took the nuclear bomb seriously. I do think we would have a chance to win the race between capabilities and alignment. But of course today, it's so unpalatable because people don't realize we're in a war, they don't realize that the enemy is unaligned AI. It just seems like such an impedance mismatch, what are you talking about Manhattan project, but short of that, I just think time is running out. We keep slipping farther and farther from the possibility of a good outcome. I think we're between a rock and a hard place, because you can give a million criticisms to the doomer suggestion of let's centralize everything in a Manhattan project. I agree, that sucks. But the alternative is worse. So many people are saying, you have to take it as an assumption that you have to run things for profit and China is going to compete with you like these things are inviolable axioms that you have to start with. And I'm asking, can I get an inviolable axiom that AI is going to kill us because it's a rock and a hard place. They're both hard situations. I just think that the AI killing us one is even harder, and we have to deal with it.</p><h3>Scott Alexander lowering p(doom) (22:32)</h3><p><strong>Theo: </strong>So Scott Alexander recently published an update of his p(doom) from 33% to 20% based on super forecasters and the world at large thinking that AI risk is not overwhelmingly likely. Has that impacted you at all? Or do you just think they're wrong?</p><p><strong>Liron: </strong>This was one of the controversial things from your interview with Zvi, where Zvi was able to kind of dismiss the super forecasters, which is a shocking move in the rationality sphere. One does not simply dismiss a super forecaster forecast. He even argued with you, he's like, actually, the fact that super forecasters are dismissing it so easily, might make you update the other way, where it's like, they clearly didn't take the problem seriously. So I'm going to discount their opinion. Zvi had some pretty good arguments that I thought made sense. I don't want to throw it out entirely. I'm happy to update a little bit, but I don't want to do a massive update. It's more like, okay, I'll slightly update down a few percent. That's more how I feel about it. Because I do think there are a lot of problems with that project. It happened in 2022. I don't even think that they had the milieu of ChatGPT and people getting excited and luminaries coming out. They're using base rates. How's this for a base rate, a bunch of luminaries coming out and warning about a new technology. I do think that if you look at the super forecaster methodology, and you ask, in what scenario might this hallowed methodology actually fail, at a methodology level, not disputing the conclusion, but disputing the methodology, I do think this looks like a good candidate for a time when they might fail. </p><p>I've also made the analogy to another thing that uses pure logic. This is in addition to the stuff that Zvi was saying about their incentives were wrong. And they didn't research the logic of the problem that much. Another analogy I would make to build on what Zvi said is like, if you look at crypto, for instance, I was in the position of being a crypto skeptic when crypto was still pretty popular and kind of calling the peak of the bubble and being like, the logic of blockchain having applications beyond cryptocurrency is flawed. I'm not sure a team of super forecasters would have predicted a 99% contraction, a fundamental qualitative contraction in this industry, based on super forecaster methodology. I don't think there was a super forecaster tournament then, but if there were, it also seems like the kind of thing that would slip by super forecasting. What do you think about that?</p><p><strong>Theo: </strong>This super forecaster study that I was talking about with Zvi, first of all, my interview with Zvi was four months ago. And the survey was farther back than that, but it doesn't seem to have changed much in that time. I don't think the world as a whole is more doomy than it was a few months ago. And a lot of even rationalist type people seem to be less doomy than they used to be. One example, just off the top of my head is this anon account called Lumpen Space Princeps, which they used to be kind of fully in the Eliezer Yudkowsky rationalism, AI doom foom camp. And now they're like, wait a minute. It seems that RLHF is actually working pretty well. And GPTs are not monomaniacal paperclip maximizer type things. And so maybe, there's not a 99.5% p(doom). It's less than what I thought it was. And of course, it's still rated a lot less than what you do.</p><p><strong>Liron: </strong>I mean, it's true that every time we see AI do something new and not foom, then we have to update a little bit, even if it's not that surprising. The massive update only comes when AI can do everything in the domain of the universe, like be given goals. I always talk about goal-to-action mapping. Like if it can be a better CEO than a human, if it can be a better general problem solver than a human, and then not foom, that's when I do the big update. And I don't even that's hard for me to even describe coherently, because it's almost by logical definition, that's something that's better at goals than human, discovers foom as an instrumental goal and we're off to the races. But if somehow that doesn't happen, if they're always bottlenecked by hardware or something, or suddenly complexity theory has properties that I'm not anticipating or whatever. That's when the big update happens. But when it's like, hey, look, it can get a score on a lot of these tests that humans can, and yet can't actually problem solve for whatever reason. I only make a small update. So lump in, it's like, sure, make a small update. But also the problem is that time is running out. By default, time is not on our side. Every day that goes by where capabilities progress, and we don't have a massive alignment breakthrough, there's less time left in the race. Alignment is falling farther behind every day, or at least didn't gain any ground. The buzzer is about to sound and the buzzer is basically when it gets better at problem solving than humanity. So even when it feels like, hey, nothing's happened in the last month, no incremental capabilities progress has happened in the last month. And Nvidia, Intel, and Apple Silicon, all these chips have gotten faster, right? This hardware has gotten better, time is running out. So I'm not as updating toward optimism as they are. But I also agree, it's like, look, the government is caring about it. There's some regulation, I agree that there's some positive updates, but I don't see that the balance of the updates is going that great.</p><h3>Human minds vs ASI minds (28:01)</h3><p><strong>Theo: </strong>So you said you think it's basically a law of nature that something that's better at problem solving than humans will discover foom and foom itself. Do you think that humans currently are fooming?</p><p><strong>Liron: </strong>Yeah, maybe not. So not law of nature, but more like just a matter of logic, right? Something that you can diagram out on a whiteboard, why if you're good at solving goals, you'll figure out that fooming makes sense. Are humans currently fooming? So the problem with humans fooming is that human augmenting human intelligence is not a straightforward step, right? So the fact that we're building AI is like our slow foom, right? And then the AI is going to foom. So we were the bootloader for the AI foom, but the problem is it's going to be an unaligned foom, right? But I mean, you can see we're attempting to foom and the economy is growing exponentially without fooming in the self modification sense. Does that answer your question? Or how do you want to drill down? </p><p><strong>Theo: </strong>Yeah, I guess you could drill down into human intelligence augmentation versus AI intelligence augmentation. Because like, you think there's just a totally clear path for AI improvement now until the far future, but not humans?</p><p><strong>Liron: </strong>Is there a clear path for AI improvement for non human? I'm not sure I understand.</p><p><strong>Theo: </strong>No, I mean, with AI is you think there's just a clear path for them to improve their own intelligences over and over recursively into the future, but not for humans? </p><p><strong>Liron: </strong>So I think there is a clear target of an AI that's much smarter than a human, right? If you look at the gap between AIXI, right? AIXI is like the theoretical ideal of an AI that perfectly synthesizes its evidence, perfectly calculates what action is predicted to have the best effect, right? And you can also use the ideal analogy of an outcome pump, which is just like a perfect goal to action mapper, like it'll tell you an action that has the highest possible probability of getting the outcome you want. So there's this ideal, which is light years beyond, what humans can practically do. And the ideal is actually computationally infeasible, right? So complexity theory and logic tells us this really high ceiling. And then you have humans, right, which humans can do some great stuff. But we also like, definitely take our sweet time and miss stuff that's right in front of us. You know what I mean? Like, the theory of relativity was great. But if you go and explain it to somebody in the year 1800, right, like they could get it right. It was just a matter of like, hey, if you walk through these logical leaps, and like, yeah, it helps that you have the Michelson-Morley experiment, but it's not like there's not there weren't that many different possible outcomes to the Michelson-Morley experiment.</p><p>So like, what I'm saying is like, you could have, you could catch somebody up on all of physics, right, all of 18th and 19th century physics pretty quickly, right? Like the amount that humans have to stumble and interact with the universe, like that is not characteristic of the kind of intelligence that exists between humanity and outcome pumps. So there's a lot of headroom above humans, right? That's my confident position.</p><p><strong>Theo: </strong>There's a lot of headroom above humans. But do you think that the path to getting there is just totally straightforward for an AI?</p><p><strong>Liron: </strong>I think it's probably pretty straightforward. Because like, algorithms that make an agent smart, I don't think they're that complicated. I mean, just the fact that evolution stumbled on it with humans, and that it's accomplished with like, relatively a small amount of genetic complexity, or like the amount of bits in the gene code, and how we observe, like, okay, different regions of the brain can kind of like grow into doing what they need to do. You know what I mean? Like, it's not like the brain is that refined and optimized. And you know, it took like a few evolutionary steps away from the other apes. And suddenly, we have much more intelligence than the other apes. And there's a lot of evidence showing that our heads would have kept growing, if only it were just easier to fit through the birth canal, if only it was just easier to metabolically support them a little bit, right? So they had these constraints, but like, it looks like we're on a gradient where evolution was just like, hey, look, you can have more intelligence, right? Like having more intelligence just doesn't seem that fundamentally hard once you kind of know where to look in algorithm space.</p><p><strong>Theo: </strong>You think that there are things that humans can't do even in principle, even with like, unlimited time, and unlimited memory, that a like, maximally powerful AI could?</p><p><strong>Liron: </strong>Uh, yeah, yeah, yeah. Because the problem is, you know, given unlimited time and unlimited memory, there are leaps of insight, right? Imagine the dumbest person, for instance, a prisoner who committed a senseless murder because they got angry. Imagine giving them a ton of time and a textbook on electromagnetism. You see the problem, right? It's not hard to generalize that to someone who's smarter, but when you introduce more complex concepts like five-dimensional polytopes, even they might struggle.</p><p><strong>Theo: </strong>You think you couldn&#8217;t even do that with 100 years of practice?</p><p><strong>Liron: </strong>I could learn some basic theorems about them because, in essence, I'm just a Turing machine. But my intuition is always going to be just scratching the surface. I'm not going to make the kind of leaps of insight that someone whose brain is more natively suited to the task is going to be able to do. At the end of the day, give me a piece of paper, and I&#8217;m gonna make syntactical transformations, I&#8217;ll use the lowest common denominator, I'm just a Turing machine. I'm just a monkey working out the rules of a Turing machine following the rule. I just become an implementation layer of a smarter algorithm, but I'm not that smart myself.</p><h3>Vitalik Buterin and d/acc (33:30)</h3><p><strong>Theo: </strong>Going back to what we were talking about earlier with governance, and also with Vitalik, Vitalik just released his mega monster post about d/acc which is like accelerate defense.</p><p><strong>Liron: </strong>I read it. I'm a fan. Good old Vitalik, a real thinker of our age.</p><p><strong>Theo: </strong>He is much less doomy than you are&#8212;</p><p><strong>Liron: </strong>A little bit less, not much less, in my opinion.</p><p><strong>Theo: </strong>Yeah, I guess the way he frames the problem is very different. He talks about dangers behind and many paths ahead, some are good and some are bad, not like many paths ahead and most of them are bad and just a handful of them are good. He talks about four ways to improve defense: info security, cybersecurity, micro bio defenses, macro resilient infrastructure, and conventional military defense. How applicable do you think that is with AI?</p><p><strong>Liron: </strong>Zvi had a good take today, which is that Vitalik's post is really good in how it frames the problem and kind of takes a middle position, finds consensus of like, look, nobody wants to die. We all like techno-optimism. But it didn't have much to offer on the solution side. The idea of "let's accelerate defense" sounds great in theory. But if the AI that defends me is just one that can generally solve problems, then there's no containment boundary. Without actually understanding alignment, one bit of difference in the code suddenly makes it cause doom. So I just don't see what solution he's proposing here that is plausible.</p><h3>Carefully bootstrapped alignment (35:22)</h3><p><strong>Theo: </strong>What if the AI is slightly more powerful than you and not massively more powerful?</p><p><strong>Liron: </strong>This is what I call edging. You're trying not to go all the way. As far as I can tell, this is Open AI's explicit plan, or at least the plan they discussed internally. We're going to build something that's slightly smarter than humans, almost fooming, getting ready to take up the world, but then it's going to calm down and then we're going to direct it the right way. We're going to maximize our pleasure from this AI. But the problem is, you've almost got this foom. You think you've stopped it at a safe place, but a hacker can take it and make a tiny change and then it'll foom or you'll accidentally make a change and then it'll foom or the knowledge will propagate to society. Your API can be hacked. The closer you get to the edge of foom that you don't even understand where the edge is, the less margin of error we have to live.</p><p><strong>Theo: </strong>Do you think there's any kind of empirical evidence for the idea that one bit flip in a humongous neural network will cause foom?</p><p><strong>Liron: </strong>The model I'm working with, I think, is fundamentally correct. Maybe not with GPT-4, because GPT-4 doesn't have that much danger to it to begin with. But the model that if you have a really dangerous system, but it's not fooming now, that model is consistent with the idea that a small tweak is going to make it foom. It's the same way I feel about nuclear risk. Just the fact that these bombs exist and they have a detonator, it&#8217;s like okay, there's four fail-safes, but you keep loading them on airplanes and flying them around. And there's a button in the airplane that takes off the fail safe. When you do stuff like that, you are close to doom. Similarly with AI, if you have an engine that can accept arbitrary output goals, and then find actions that map to them, maybe you're very careful to only give it the right goal. But that's the thing. The part that specifies the goal is compact. And that's what I mean by one bit. Okay, maybe it's not literally one bit, maybe it's a few sentences of English. But the point is that the difference between aiming toward heaven and hell is a compact specification. And then what's not compact is all the machinery of achieving the goal, like the system underneath it that can accept the goal and achieve it. That's not compact, but the goal specification is compact, which is why a system that's being really helpful, like a great chatbot AI, is a few bits of specification now away from a world ender, in my opinion.</p><p><strong>Theo: </strong>Can you go into a little more detail about how a chatbot is a few bits of specification away from a world ender? What might you have to do to turn into a world ender?</p><p><strong>Liron: </strong>The premise here is that the chatbot is sufficiently good. So we're in a really good place right now with GPT-4. I didn't endorse building and testing it. I didn't think that it was worth building it. But now that they built it, it seems like we dodged a bullet. It seems like it's this great system that we can play with. And it's a chatbot. But there's a connection, like the fact that GPT-4 is limited. The fact that people haven't successfully made businesses that are entirely automated by GPT-4. The fact that you can't just tell GPT-4, "Please give me a shell script that I can run that will then set up an Amazon AWS server that'll host some kind of website. And the website makes money and sends me the money." The fact that you can't tell GPT-4 that and it doesn't work is precisely why GPT-4 is not yet at the danger level. And maybe GPT-5 will be. Maybe that particular query of like, "Find a shell script that has that property." Maybe we'll get the shell script. Like, nobody can tell us that we can't. We don't know what comes out when we scale the model 10x. Maybe it'll crunch a really smart shell script. So the fact that you're just interacting with it with language, there are answers to your language questions, if answered correctly, that are extremely dangerous. That's why I think that the barrier between a chatbot and a fooming world destroyer is very tiny. It's just the question of, is there enough intelligence in the system? That's the only variable that matters.</p><p><strong>Theo: </strong>But what kind of query would you give to a chatbot to make it a world ender?</p><p><strong>Liron: </strong>I think the query doesn't matter that much because if the chatbot is capable of optimizing goals to actions, it'll occur to it to do that in a lot of questions. A couple of examples I pull out is just like the business example of like, "Okay, make me money." It's like, "Sure, yeah, here's a shell script. Or here's a way I can help you just run your server to make money. Use this code." But the problem is, if it's really smart, it'll be like, "Well, why shouldn't I just make code that bootstraps an agent, and then self-improves, or is a virus and takes over control. And ransoms some machines while you're at it. Why not just go all out and do everything I can?" These ideas are logically connected to your question. And so the only question is just like, how good is the AI going to be at getting you a good answer by that metric.</p><p><strong>Theo: </strong>Do you think it's possible for an agent to be smart enough to build a web server that makes money on Amazon and gives you the money, but is not dangerous?</p><p><strong>Liron: </strong>That's an interesting question. I think there's probably some kind of edging middle period. There's probably some kind of situation, maybe GPT-5, where it's like, "Wow, these are such good steps to take. It really is sending me a little bit of money. But for some reason, it doesn't quite scale to unseating Google, or unseating Shopify or whatever. It's not quite, it's kind of like an amateur human. It's as if my not so intelligent friend just hustled really hard and managed to make some money. But you can still outcompete him if you try." There's degrees where maybe it's not fooming yet. But I just think that, okay, give it a few years. Find something else. In addition to the transformer architecture, you give it a memory bank, just a few more conceptual insights, Q*, whatever it is, a few more breakthroughs. And now it's just like, okay, there's nothing else standing between that and foom. It feels like we're getting close.</p><h3>GPT vs AlphaZero (41:55)</h3><p><strong>Theo: </strong>I asked this question for Zvi too, but do you think that your AI probability of doom or just threat models or anything like that has changed now that we have systems that look more like GPT than AlphaZero? Or is it more like, you know, the endpoint remains the same?</p><p><strong>Liron: </strong>I mean, I think there definitely is an element of surprise to how, you know, what language models are doing with language, what they're doing with imagery. It's almost like, wow, you sure can go a pretty long way without being fully general at solving problems, right? Where the domain is a little bit narrower. Like it's just words. It's not quite representing things in the physical universe. Or like the prompts it can answer.It has to be similar to something it's seen in its corpus, but they can vary, but they can't vary a ton. It's very interesting that we got into this state of you can do more than we realized without going fully general. That is very interesting. But at the end of the day, it doesn't matter that much because foom is going to happen when you get general enough. Just to use a little analogy, there's all kinds of interesting flight you can do with aircraft inside the Earth's atmosphere. But at the end of the day, the way to get around the universe is with rockets, or light sails or something else entirely where the Earth's atmosphere is irrelevant. The flying machines we're seeing today, okay, that's cool. But it doesn't matter. We know how propulsion works in theory.</p><h3>Belrose &amp; Pope AI Optimism (43:17)</h3><p><strong>Theo: </strong>Another big piece on AI that's come out in the last couple of days was Nora Belrose, Quintin Pope, and a few other people wrote this document about AI optimism that you might have seen.</p><p><strong>Liron: </strong>Yes, I did skim it and I've read some of the stuff they've written in the past. My first impression from a quick skim is just like, it's nice that they're laying out their argument, but it also doesn't seem like they're letting people do the criticism that we want to do. Like, what about the superhuman level reinforcement, right? They're not really directly addressing the criticism, but it's nice that they're laying out their position.</p><p><strong>Theo: </strong>Do you think that AIs might in principle be easier to formally align than humans?</p><p><strong>Liron: </strong>I mean, I agree that they have some of it. The points they're bringing up are important points. Like, it's like a white box, right? And we can use formalism and we can program it. We can program it to follow laws like that. That's all great. But the problem is what we're actually building is systems that we don't understand, right? And then we try to use RLHF, but then we deploy them. And they're not actually aligned and their power is going to grow. It's like the actual trajectory that I'm seeing is the trajectory toward doom. </p><p><strong>Theo: </strong>Well, you said we deploy them and they're not aligned. But they seem pretty aligned to me. They seem pretty aligned to a lot of people. And the way they're not aligned is more like, I mean, they talk about this in the essay. It's like, you can jailbreak GPT-4 to get it to say naughty stuff, but that's it following your instructions.</p><p><strong>Liron: </strong>So I agree that GPT-4 is aligned in the domain of the stuff that it can do. It's worth noting that they tried to make it not jailbreakable and it's still jailbreakable. That is worth noting. And I think that that foreshadows how hard it's going to be to align things in the future. But basically, they can take the win. GPT-4 is aligned because when you give it the kind of prompts you give it, you get the kind of answers that you hope that a company would release a model to give you. It's working fine. </p><p>The problem is that there's another alignment regime where humans can no longer give good feedback. Like, when the AI is super intelligent and it's making plans and it's planning better than the human can plan, then it can't show a human and plan and be like, give me feedback on this plan because the human can be like, that looks like a pretty good plan. But the human won't really know what the AI is talking about. </p><p><strong>Theo: </strong>Well, could it be possible it's easier to review stuff than it is to actually create a plan?</p><p><strong>Liron: </strong>So I know people like to say that a lot, right? Because P versus NP, right? So there's this whole premise that there's like a large class of problems where verifying them is easy and intuitive, but then finding the thing that satisfies the criterion is hard. I think we'll get some benefit like that. And I think protein folding is like a perfect example. I mean, actually a perfect example is just the known NP problems. So there's known NP problems where it actually in practice is a situation where NP is screwing us. Protein folding really was an example where we did have an exponential time protein folding algorithm, and we did have a polynomial time verifier and we couldn't cross the gap. So that's like a perfect time to bust out AI to solve the search problem for us using not heuristics, but whatever AI techniques, that's perfect. But I don't think that that generalizes to operating in the real world because the problem with the real world is even just defining what you want and making sure you have the right definition of what you want. I don't think you necessarily get this compact control where you can like notice that the thing, the AI is going to bootstrap a solution. The AI is like, look, I found a bootstrap script. Does it make sense to you? And you're like reading it. It's like 100 lines of very complicated code. And you're like, oh, I think so. Is verifying really that easy? I don't think so. I think you start to be like, is this really what I want? I don't know. Should I run it? That's what's going to happen in practice. </p><p><strong>Theo: </strong>So I think the crux here might just be, can we know for sure that capabilities generalize far more than alignment and that RLHF and techniques like it will just stop working once AIs get sufficiently intelligent?</p><p><strong>Liron: </strong>Yeah, let me repeat this whole thing, because I think this is very important to the discussion. Because like I said, GPT-4, yeah, it's aligned for what it does, which is it doesn't output superhuman plans. So when GPT-4 outputs something, I can show it to a domain expert and the domain expert will know better than GPT-4. It's perfect feedback. You can be like, sorry, GPT-4, you failed. Humans are the teacher. GPT-4 is the student. Reinforcement is a perfect paradigm. Just reinforce it and it'll learn. The problem is when it gets superhuman. When it's able to know plans better than the humans know plans, it'll show stuff to the humans and the humans will be like, looks good. And what you have is a superhuman test passing engine. The humans are giving it the test. It's like you have a bunch of teachers. Imagine the least intelligent teacher you've ever had giving you tests. It becomes intuitive. If you're an intelligent student and you've had a less intelligent teacher, you've probably had the experience of using test-taking skills to pass the teacher's test. Have you ever had that experience?</p><p><strong>Theo: </strong>Deceptive alignment?</p><p><strong>Liron: </strong>Exactly. It has this term deceptive alignment that makes it sound like there's something extra mixed in, but it's like, look, if you give me a test and the test is just a really easy test, I'm just going to pass the test. It's your test, man. Why should I study? Why should I do what you want me to do if I can just pass the test?</p><p><strong>Theo: </strong>I talked about this kind of thing in my episode with Quintin and a little bit in my episode with Nora where we talked about how gradient descent on the actual weights of an AI is performed on all of the weights. An AI can't hide its schemes if it has them from gradient descent because it's an actual computation that's being done on the weights.</p><p><strong>Liron: </strong>The Quintin camp, which we had a debate and he argued convincingly. I feel like I can pass the intellectual Turing test for him. I can take his view and I feel like I can also sound convincing. And yet I'm not convinced. It kind of reminds me of behaviorism. I can put on my behaviorism hat and be like, well, the brain is really just outputting the same thing that it was trained output from its input. And like the behaviorist claim, this was, I think the heyday was in the fifties. They'd be like, look, there's no such thing really as thinking. It's all just Pavlovian reactions. So like when we say stuff, we're actually just executing something we learned in childhood, like a reaction. We're all stochastic parrots. Behaviorism used to be bigger. Whereas now people are like, well, there is such a thing as an algorithm. And there is such a thing as multiple gigabytes of memory that shape the state of a computation. So it's like people had to learn that behaviorism was way off.</p><p>I do feel like that's what's happening with the camp of people being like, the AI is just a stochastic parrot. It's just repeating something in its training data. It's like, no, there is a system here. Somebody has called it a homunculus or there is an optimization system that decouples from its training data. And I do think that it's a useful analogy that that is what humans did to evolution. When we launch a rocket that is clearly decoupled from anything we've ever been trained on. There's no feedback loop that tells the human brain to be able to launch a rocket. That's only happened in a recent generation. And yet here we are walking on the moon. So I do think that the AI that wasn't trained on the moon is going to eventually get to the moon. I think there's going to be an analogous decoupling from the training. But yeah, what was your question again?</p><p><strong>Theo: </strong>My question was basically just like how specifically, like what, what, is there just any kind of empirical evidence for this claim that alignment methods that we have today will fall apart once AIs become superintelligent.</p><p><strong>Liron: </strong>Empirical evidence is kind of narrows the type of evidence I'm allowed to bring. But let me think about the types of evidence in general, like why there's going to be, I mean, so logically, I mean, it's what we said before about like, okay, you're going to train by reinforcement. It's great when the person doing the reinforcement understands everything there is to understand, but when the domain is just like, let's say like snippets of code, right? Imagine you get an obfuscated piece of code or a long piece of code. How do you reinforce whether the code is good? I mean, you could try running the code and maybe the code looks like it's good, but as we know, code can contain evil stuff inside of it that you can't detect. So what do you do? How do you reinforce?</p><p><strong>Theo: </strong>I think to a point you can tell if code is good or not. Even if it's beyond what you could write, you can verify it anyway. Just like the P equals NP stuff that we talked about earlier.</p><p><strong>Liron: </strong>You can have a whitelist, I guess, like, I mean, you could be like, I'm only going to accept the code if it has these properties that I can detect, but at that point, you're not really letting it exercise the full span of plans that it can do. It's like, you're kind of crippling the capabilities.</p><p><strong>Theo: </strong>Oh, so like the safe versus useful trade-off.</p><p><strong>Liron: </strong>Or just like, I mean, you're kind of just not letting it scale to superintelligence. You're just attacking the premise of, you know, Hey, is what can it really do? So let's say we keep the premise of like, Hey, it's getting smarter and smarter. It's getting more and more capable. It's getting better at mapping goals to actions.Right. And you're saying, "I'm going to have humans weigh in." Now, people have proposed that we have two AIs debate and that's going to help me give it feedback because I'm going to have the best input. I'm going to be able to judge one AI versus another AI. There are all these proposals. I hope they work. I hope that scalable debate somehow works really well, but it's very iffy. You can give me any individual proposal and I'm like, "Yeah, I hope that works, but here's why I don't think so." I'm skeptical about debate because I see easy debates that smart humans have against smart humans who can't convince other smart humans. My own personal experience with the failure of debate is that you still had a bunch of smart people in the tech industry, not realizing that blockchain technology doesn't logically support any use case besides cryptocurrency until the industry collapsed by 99%. If we can't get that right, how are we going to get scalable debate? </p><p><strong>Theo: </strong>What about the idea that all AIs do is basically approximate their training set and predict the next token? If the training data is overwhelmingly nice and full of friendship and love, then the AI will exhibit kindness and friendship and love. That's not to say that AIs can't be extremely dangerous because of course they can, but building the data set sufficiently will be enough to make sure that it's probably aligned. </p><p><strong>Liron: </strong>It's kind of like level skipping. Reductionism doesn't quite work that way. An analogy is like humans. Humans were trained using survival of the fittest. So shouldn't we be super cutthroat? So how come a bunch of people are really nice in a bunch of situations? Evolution wasn't nice. How come people are nice?</p><p><strong>Theo: </strong>Because it benefits us.</p><p><strong>Liron: </strong>But there are people who are really saints. Scott Alexander recently donated a kidney. Scott Alexander just seems like a really nice guy. And I would argue that donating the kidney didn't really benefit him in a lot of the senses that I would have considered relevant before I saw him donate the kidney. How would you explain that? </p><p><strong>Theo: </strong>Well, because he's an effective altruist, it's something that gives him a lot of personal satisfaction helping other people. The utility of losing a kidney was not that much compared to the utility of knowing that he helped someone else.</p><p><strong>Liron: </strong>I agree that he feels good after donating a kidney, he's getting an emotional reward, but now connect that to the fact that nature is red in tooth and claw evolution is cutthroat. You've inserted a level of abstraction where we can no longer just say evolution is cutthroat, therefore Scott Alexander is cutthroat. You lose the cutthroatness when you apply levels of reductionism. </p><p><strong>Theo: </strong>But doesn't that bode well for alignment because we started out as cutthroat beasts and turned into very nice people who donate kidneys?</p><p><strong>Liron: </strong>It's possible that there are equilibriums of AIs that are nice for sure. But the analogy I was trying to make wasn't that cutthroat things can become nice. The analogy I was trying to make was you have to be very careful to make sure you're respecting layers of abstraction and layers of reductionism when you're making claims. Just like you can't say evolution is cutthroat, therefore individuals are going to be cutthroat. You also can't say, here's a training corpus where everybody's being nice in the training corpus, therefore we're going to get an AI that's nice. </p><p>The problem is if the AI is able to map goals to actions, you can be a really nice guy who just on your way to doing something nice is trampling on a bunch of ants because it didn't occur to you that the ants were of value. You're just optimizing the world for whatever paperclips or humans or whatever you like.</p><p><strong>Theo: </strong>I've talked about these evolution style arguments with Quinton and Nora before where they say basically like humans aren't literally aligned to inclusive genetic fitness or making as many babies as possible. Humans are aligned to empathy. Humans are aligned to parenting. Humans are aligned to the things that we do, the things that are produced by our ingrained reward systems, the things that our reward system produces in our environment.</p><p><strong>Liron: </strong>And this is where it's reminding me of behaviorism. It's just like, well, don't you think that when you went down to dinner, it's because you heard a sound that you usually hear at dinner? It's trying to flatten out the things we do. And when I debated Quintin, he did kind of try to go that way with the space program. He's like, look, physics textbooks have reinforced us about the orbital mechanics necessary to go to the moon. I'm like, I don't know, man, I'm pretty sure we just reasoned it out. I'm pretty sure we mapped the goal to the action. I'm pretty sure that is a type of algorithm that we use, which is a general category of algorithm. And we're improving that category of algorithms and that category of algorithm logically implies doom. That's how I see the world. And you can always be like, no, that's not a category. It's just all different cycles of training, right? Of data and training.It's all continuous and there's not going to be a film. I feel like I can take that position and argue it, but I don't find it convincing compared to just being like goal to action mapping is a type of algorithm that we're seeing convergence on.</p><h3>AI doom meets daily life (57:57)</h3><p><strong>Theo: </strong>Switching topics a little bit, what percent of your brain cycles in a typical day are taken up by AI risk? You seem pretty chipper and happy overall. How do you reconcile that with the thought that the world is going to end soon or at least look very different?</p><p><strong>Liron: </strong>It's kind of funny. It's like, "Hey, this is what a doomer looks like." And it's just a happy person. I'm taking care of my kids, doing something fun, eating an ice cream cone, whatever. I think that can vary person to person, just like effective altruism can vary. I'm not planning to donate a kidney, I respect people who do, I consider myself an effective altruist. I don't feel a desire to donate kidney. I'd rather keep my kidney. But it can vary, to each his own.</p><p>With AI doom, I'm fortunate that I'm not depressed every day about it. I rationally do think the probability of doom is pretty high, but luckily my mood is just wired such that I don't get that stressed about it. I think part of the way that my own system works, which isn't particularly rational, it's kind of arbitrary, but I think I have a part of my brain being like, "Well, at least I don't have FOMO." Because it's like, at least I get to die at the same time as everybody else. I feel like that helps me. I don't think it should. But I'm just trying to accurately report how my psychology is working. </p><p>I think if you said like, "Hey, you, Liron are going to die and everybody else is going to live," I'd be like, "Damn it, now I have FOMO." So I think that's part of it. But it obviously sucks that literally everybody's going to die. I live in a part of the country that's very nice. I don't have major life problems right now. I kind of live a charmed existence on a day-to-day basis. So yes, it's all going to end, but I'm just getting a lot of positive reinforcement. It's like, "Hey, this is going to be a good day." And the amount of good days seems to be getting smaller. Unfortunately, the trend seems to be bad, but for me, that doesn't output depression. I know other people that it does output depression more. And they just have to have coping mechanisms. Because why be depressed regardless of whether you're going to die or not? I don't know what else I can say about that idea of like mapping your own mood to your rational belief that p(doom) is pretty high.</p><p><strong>Theo: </strong>What about raising kids? How is that different for you with a high p(doom)?</p><p><strong>Liron: </strong>I read Bryan Caplan's book, the selfish reasons to have more kids. I think it's great. I think it's a must read. The promise of the book is that however many kids you wanted to have, it'll probably convince you to have one more, if not two or three more. I've always leaned toward having three, which I did end up having. I have three right now. And it did make me want to have a fourth. But then the problem is also that, because we have the GPT series now, right after I had my three kids, AI started really intensifying and my timeline shortened as they did on Metaculus and the prediction markets. </p><p>Just like everybody's like, "Oh no, it's not going to take us till 2040, 2050 to get AI. It's going to take us till like 2025." That's like the latest Metaculus AGI prediction. My timeline shortened too. And now it's just like, "Oof," because a lot of having kids, the investment is front loaded. You're doing a lot of work in the first couple of years where it's just constant crying. Like as we speak right now, my wife's currently dealing with a crying baby. So it's constant crying, constant loss of sleep. But at the same time, when you're old and your kids are grown up, it's all upside. There's no work, just all upside. So it's kind of, there's some degree of front loaded investment. And so now it's less rational to do since I think p(doom) is pretty high. </p><p>But at the same time, I have a whole life where half of my life, I'm just living for a good future. I'm saving for retirement because half of me wants to have a retirement. So I just kind of split brain about it. And it's not split brain. I mean, this is just how you have to probabilistically make decisions. You have to plan for both outcomes. So I'm planning for a good life where my kids grow up and I get to save for retirement and then I get proven wrong about AI risk and I get dunked on, but it's okay.</p><h3>Israel vs. Hamas (1:02:17)</h3><p><strong>Theo: </strong>And then what about current events? You've been posting, tweeting about Israel Hamas recently. So what's your kind of model on that? Is it just like, "Oh, this is a thing that's happening right now?"And it's very important. Or it's just like, nothing is important compared to AI or somewhere in between.</p><p><strong>Liron: </strong>I mean, I think part of it is just me personally. I am Israeli, so it's personal to me. If this were another conflict that wasn't as personal to me, I mean, I know people who were affected by the tragedy. Israel is actually a small country, so with a thousand, 1200 people murdered and thousands more injured, everybody has multiple people in their network who some brutal atrocity happened to. It's very personal for me. Even though I'm not directly connected to any victims, I'm just connected with a couple of degrees of indirection. My family is still in Israel with rockets flying over them. It doesn't get much attention, but there are constant rockets flying over Israel, attempting to kill Israeli civilians. They just have a shield, the Iron Dome and a bunch of new stuff. They keep shooting down the rockets. So you don't hear about innocent Israeli civilian slaughtered, even though they're targeted for slaughter, but they don't get successfully slaughtered. </p><p>So, stuff is happening and it's personal to me. And then there's Hamas. They're bending all the rules of war, not bending, like breaking like crazy. Their base was a hospital. And then people are denying that it's a hospital. They're really not playing by the rules. It's okay for two sides to go to war. They both have their own perspective. That's fine. But I feel like the war crimes are pretty bad on the Hamas side, using their people as human shields. I try to be fair and be like, look, if you're using your people as human shields and we want to kill the terrorists, we, the Israel side wants to kill the terrorists and then the civilians die. Who's causally responsible for the death of the civilians when you use the human shield? So, I tend to tweet stuff like that, where it's like, look, I'm just trying to be fair here. I don't think human shields are invulnerable. </p><p>I feel tempted to tweet about that kind of stuff especially when the New York Times, like I listened to the daily podcasts and they're being biased about it. They're purposefully trying to insert as much stuff as they can get away with, to basically say F you to Israel. The fact that they're not saying why Israel took the prisoners. A couple of days ago, the podcast, they were talking about Israeli prisoners and they were literally hemming and hawing. The question asked was like, Hey, why does Israel have these prisoners? What are they guilty of? And the person on the podcast was like, well, the prisoners, some of them were accused of maybe throwing stones, maybe being associated with some other people who are doing bad stuff. It's like, come on. They're on video stabbing Israelis. That's why they're in prison. That's why they're getting traded for us. It's like, I'm seeing media bias. That's why I've been tempted to tweet a little bit about the Israel Palestine situation, but of course I'm not against Palestinian civilians. I think it's a tragic situation. I try to have empathy for both sides.</p><p><strong>Theo: </strong>But do you think this is like a very important thing in the world or do you just see it as like, it's something, but nothing is important compared to AI.</p><p><strong>Liron: </strong>I mean, I think it's probably less than 1% as important as AI. So have I given it more than 1% of my tweets? Yes, a little bit more than 1% of my tweets. So I'm being disproportionate from, because of the fact that I'm Israeli, but it's not like I did a takeover. I only tweet about it occasionally. I don't think my calibration is off. I think I've successfully integrated my own indexical perspective as an Israeli Jew, secular Israeli Jew. I don't believe in that crap. I've successfully adjusted the base rate of how unimportant a regional conflict is with the fact that I'm Israeli.</p><h3>Rationalism (1:06:15)</h3><p><strong>Theo: </strong>Switching topics again to rationalism, how did you get into rationalism in the first place?</p><p><strong>Liron: </strong>I've always been very rational minded. I've always just been a real logical type, self-diagnosed Aspie over here. I like to think I like to follow logic. LessWrong was a pretty big awakening to me. I started reading it when I was 19 in the year 2007. I started reading it and I'm like, I thought that I kind of knew what rationality was when I first started reading LessWrong. I'm rational because I figured out that God's not real and everybody else is just delusional. I figured out that science is good and science is actually how you learn things. It's like, I've figured out the most obvious things about how to be rational, but then LessWrong comes up and is like, Hey, did you know that your brain is actually an object that was shaped by natural selection, but it wasn't shaped to have accurate beliefs. It was shaped to survive and play tribal politics. And if you want to use it to make accurate beliefs, you have to kind of hack it. It's almost like using your feet to play the piano. Yes, you could, but it requires hacking. You have to do that with your brain if you want to form accurate beliefs.</p><p>That was really my rationalist awakening where I realized there are levels to this. You can be rational. It's not just, "Oh, philosophy. God's not real. I beat the game. Give me my trophy. I win philosophy." And then LessWrong comes in. It's like, "Well, you have to decide what code to write into the AI where the AI gets to determine how morality is going to work for the rest of the lifetime of the universe and use all the neg entropy in the universe to build the optimal configuration. So what code would you like to write Mr. Rational?" And I'm like, "Damn it, there's levels to this." Rationality doesn't end when you realize God is not real. Or when you realize that science is a good methodology. And of course, Bayesianism is actually a much subtler way to do what science is trying to do. </p><p>So yeah, I read LessWrong, and I'm like, "Wow, this is like, I was made for this. Unfortunately, I wasted the first 19 years of my life. But this is what I want to be doing. This is what everybody should be learning. This is what school should be." And then unfortunately, it all leads up to the awareness of, "Well, now that you're so rational, can't you notice that the world looks like it's about to end and you need rationality to solve it?" I mean, it's been an interesting quest, starting from rationality and then leading up to the idea of how you're supposed to wield the rationality to try to not die.</p><p><strong>Theo: </strong>And then, same question I asked Zvi, but I think it&#8217;s a very useful one, how would you explain the field of rationalism to a total beginner, a total layman?</p><p><strong>Liron: </strong>I would throw in what I just said, "Look, we're all humans with brains, our brains were made by natural selection, right? The same force that made a tiger's claw. That's great that we have this cool organ. But if you ever want to have that organ look at the truth, see what's actually real, maybe use that truth to make useful predictions, it's not going to come fully naturally. There is an art to it the same way that there's an art to making a piano sound good when you play it with your fingers. There's an art to using your brain to arrive at truth. And you can read the LessWrong Sequences, and you can learn that art. And I think it's a beautiful art. And it's an art that I spent a lot of time in and I try to get practical value from that art. And the art has close associations to making money and trading if you ever want to monetize the art.</p><p>If the person I mean, it's like, you know, my wife is an example of somebody who's more of a normie who's not super into rationality, right? And like, I've given up on trying to make my wife bet me on stuff. So and that's one of the rationality tools, right? Is when you think you know something, you place a bet on it. And some people are just not interested to go down that route, which is fine. But it's just like, when you need it, right? Like when you're in government, and you're handing an assessment to the President saying, "I think the enemy has a high likelihood of attack or may plausibly attack" when you're using English like that. Hopefully, you can look into the rationality world and be like, "Ah, the best practice here is to give a probability range, rather than using ambiguous English, it is superior is the best practice to give a range."</p><p>Sometimes rationality can teach us little things that we can import into the normie world, which has been happening at a faster and faster pace. I've witnessed rationality seeping into the normieverse, right over my lifetime, we're witnessing today prediction markets are now gaining traction, effective altruism started in the rationality community, right? In 2009, effective altruism, I think officially started in 2011. In 2009, I was reading Eliezer Yudkowsky&#8217;s post about purchase fuzzies and utilons separately, right? The idea that like, "Hey, that's great when you want to feel good when you do charity, but also as a separate consideration, try to also do the most good." And that was kind of the beginning of effective altruism.</p><h3>Effective altruism (1:11:00)</h3><p><strong>Theo: </strong>Do you think that the reputation of effective altruism deserves to be tarnished at all after Sam Bankman-Fried, after like, a lot of what's happened to it over the last few years?</p><p><strong>Liron: </strong>There's a joke that everybody, everybody in effective altruism doesn't say "I'm an effective altruist." They say "I'm EA adjacent." I'm the only EA who will stand here and tell you "I'm EA. I'm an effective altruist, not adjacent." Now that said, am I a central example of an effective altruist? No, I haven't donated a kidney. I do donate a few $1,000 a year to good causes. I'm a GiveWell donor. I've donated to MIRI, the Center for Applied Rationality. So I've thrown out some donations to altruistic causes, and I'm a fan, but I'm not like, I don't donate 10% of my income. Maybe I'll start, but I haven't yet. And I haven't done like, you know, I haven't dedicated my career to be super altruistic. </p><p>So but it's just the reason I say I'm an effective altruist is because it's like, you know, like the book by Will MacAskill, Doing Good Better, absolute must read. It's just like, "Yeah, I want to spend a little bit of money to massively help people flourish, right? Like, I think that makes perfect sense. That's great logic." And then people are like, "Oh, what about the ideology and like pivot textures?" It's like, fine. Okay, chill out. Not everybody. Sam Bankman-Fried. Yeah. Nobody thinks that he did good actions, right? Nobody thinks that Sam Bankman Freed was being good and rational by scamming the world and thinking the scam was going to work. I guess a few people think that, but I personally could not name a single individual who's like, "Yeah, what Sam Bankman-Fried did was good. He should do it again in the same position." I would never think that. I believe in morality, I conduct myself with deontological morality. So these pathological examples that people give, I do think are just not representative of the simple logic of trying to do more good. I highly recommend going to Scott Alexander's blog, whether it's Slate Star Codex, or Astral Codex Ten, and searching effective altruism. The writing that he's done on his experiences with effective altruism is absolutely heartwarming stuff.</p><p><strong>Theo: </strong>What if the best way to produce value for the world is not literally just donating money to kids in Africa, but more like doing what Elon Musk has done and not donate much to charity, and just invest and reinvest everything into transformative companies.</p><p><strong>Liron: </strong>I have no business telling Elon Musk, "Hey, Elon Musk, donate 10% of your income to charity." I'm fine with what Elon Musk is doing, except for the part where he founded OpenAI and accelerated timelines. Besides that part, everything else he's doing, I think is great. I don't think that I have advice to give him. </p><p>The perfect type of conversation where I would give somebody advice is if they're just like, "I don't believe in effective altruism, they have all these rules, I just don't buy it." And I'm like, "Great. So where are they like, oh, I just want to work as hard as I can and create value through my company." I'm like, "Okay, how's that going? What's the company? How are you creating value?" If they're just like, "Well, the company is arbitrage, where I have an e-commerce store, and I try to flip stuff for a higher price." I'm like, "Okay, how is that creating value?" And they're like, "I don't know, I just make some money. It's like I save people a click to find stuff." I'm like, "Okay, saving people a click. Is that really better than donating to malaria bed nets or whatever?" So I'd have the conversation. </p><p>In this hypothetical scenario, I'm getting the sense that the hypothetical character is kind of rationalizing that they just don't want to talk about altruism. And that's fine. But there are a lot of people in the world who are like, "Hey, I actually do want to do something good, especially if it's cheap." Like there's some limit. It's like, look, if you literally just have to pay $1 and save a million people, I think the vast majority of people would be like, "Yeah, here's my dollar." So it's just a spectrum. Even a giant dick would probably be like, "Okay, I'll pay $1 for a million people." And then somebody who's less of a dick would be like, "$10 for a million people, fine." So everybody has their price, whether they're happy to be an altruist at this price. And there are some people where it's like, "Yeah, 10% of my income to save a couple people a year sounds good." There are some people who are up for a lot of altruism.</p><h3>Crypto (1:14:50)</h3><p><strong>Theo: </strong>Speaking of bullshit businesses, you also have a bit of a past with crypto. You've been a major crypto skeptic in the past. So what do you think about Bitcoin being up from a low of 15,000 to 38,000 today? Bitcoin is up 127% year to date, Ethereum is up 71% year to date, total crypto market is up 79% year to date. So is it just all maybe related to AI hype?</p><p><strong>Liron: </strong>I think it's mostly just a derivative on NASDAQ. I think it's kind of mirrored the progress of NASDAQ, but just with higher volatility, is that fair to say?</p><p><strong>Theo: </strong>Yeah, maybe. Why do you think it would mirror the performance of the stock market?</p><p><strong>Liron: </strong>Probably liquidity, if I had to guess. When stocks are going up, people just feel like they have more money. And then they're just like, "Okay, let me put some of the money, they're higher risk, higher reward." 2021 was the epitome of it, right? Where money was easy. You could take money out of your mortgage, you had a low-interest mortgage, your stocks were worth more, you felt like cash was trash. I made a bunch of investments that weren't the wisest in retrospect. So when NASDAQ goes up, people who are looking at the tech sector find themselves with more cash, their margin account suddenly is letting them borrow cash. And they're like, "Okay, great, let me chase return using this cash. Oh, and I see this thing is going up." </p><p>So I do think that there's liquidity effects that you see consistently mirror in Bitcoin. But that said, what's going on with Tether? They're like printing tethers to buy Bitcoin on these markets where no US dollars are getting exchanged. It's kind of like there is some manipulation that I don't claim to understand that makes these prices potentially not the real market price. So I hesitate to draw conclusions. I'm more like, I don't even claim to understand what the heck's going on. But what I do claim to understand is that blockchain technology has no use case behind cryptocurrencies. So I can talk more about that.</p><p><strong>Theo: </strong>Yeah, why don&#8217;t you go into a little more detail about that?</p><p><strong>Liron: </strong>My history with crypto is I actually my first exposure to crypto was actually in 2010. Because you know, the LessWrong community, these rationalists, started talking about Bitcoin. They're early to every trend, right? So I was reading less wrong since 2007. And I saw Bitcoin mentioned around 2009-2010. And I saw it and, just a random coincidence in my life, around 2006, I was in the cryptography space, just academically. I took a graduate elective in cryptography. And I read a paper that was a scheme for electronic cash. So I just randomly had this background. I'm like, "Hey, look, cryptographic electronic cash. That's a few years before Bitcoin." And I see what they're trying to do with the scheme. But obviously, it just sucks that you need a central bank. So it's not going to work. And then I see Bitcoin come out around 2010. I'm like, "Whoa, it's decentralized electronic cash is cryptographic. Nice. This is cool. Yeah, if I was still in that college class, I'd be doing a paper about this."</p><p>Now, of course, the obvious problem is that nobody gives a crap, right? So great, this nice, theoretically interesting thing, it doesn't have social proof. And then I checked back a year later, I'm like, "What? This thing's still going, the price is fluctuating, it has social proof. Okay, I'm sold." So that's when I'm like, "Okay, I'm gonna buy some. I want to this, this looks good." And I actually have a tweet from 2011, where I'm all bullish on Bitcoin. I'm like, "Bitcoin is going to 10x again. This is one of the best investments you can make. It's a 10% chance of 100x return." So I became a Bitcoin bull.</p><p><strong>Theo: </strong>And you would have been right. Bitcoin was the best investment you could have made in 2011.</p><p><strong>Liron: </strong>Exactly, right. And I did profit. I did 10x. I think I banked around 100k USD from that kind of investing. But then of course, I started playing the market and I started also losing money and I probably ended up netting out close to zero after that.</p><p>But I got lucky because I also invested in Coinbase while I was dicking around. I happened to angel invest in Coinbase. So I ended up making $6 million in 10 years because I had an illiquid investment in Coinbase. So total luck that as I was dicking around with Bitcoin, I made an investment that was illiquid. And I ended up profiting from it, especially since by the time that the Coinbase IPO happened, I became disillusioned with crypto. So I would have sold earlier and I did actually sell most of the stake earlier. I only held on to a fraction of the stake.</p><p>So I became disillusioned because I'm like, "Wait a minute, this is just people being architecture astronauts. The actual logic behind blockchain technology, a decentralized double spend prevention protocol doesn't enable any use case." And I was massively, massively right about that, except for the idea of using a cryptocurrency. I feel like it has a million problems, and it's not that great, but at least it's logically coherent. Like you can, in fact, have a bearer token that you trade to somebody and it happens on the blockchain. So there's some nonzero logically coherent thing going on there, but it's not going to extend beyond cryptocurrency.</p><p><strong>Theo: </strong>You also mentioned a few times, a 99% drawdown in the crypto market. So where'd you get that number from?</p><p><strong>Liron: </strong>Yeah, so I would like to collect my Bayes points, you know, Bayes points is what you get when you make a successful prediction. So the successful prediction is one that I made in late 2021, all the way through 2022, which is saying, "Hey, all these VCs saying that crypto has use cases, all these quote unquote builders, right? Like the founder of Helium, Axie Infinity, right? All these people saying like, there's real value here. I'm like, no, there's not. Because blockchain technology, there's no logical connection between that and enabling a new value prop."</p><p>Like the kind of value props people are saying are like, "Look, imagine if your data was publicly auditable using this database. Like, okay, a publicly auditable digitally signed database, don't need a blockchain. You only need a blockchain for double spend prevention, right?" And they kept doing pitches where there is a logical disconnect between the value they were pitching and the technology that they are pitching to implement it with. And so it became clear to me that they're just rationalizing.</p><p><strong>Theo: </strong>What about just distributed computing in general, that you don't want?</p><p><strong>Liron: </strong>Distributed computing is fine, but you just don't need blockchain technology to do that. And I also think it's a niche application when you do need, you know, the rarer times when you do need distributed computing, fine, but you still don't need a blockchain.</p><p><strong>Theo: </strong>It seems like this is, if anything, kind of the opposite of Charlie Munger's view on cryptocurrency, where he said, you know, like, it's a very cool piece of computer science and technology, but like cryptocurrency is shit. But like, maybe there will be a use for it.</p><p><strong>Liron: </strong>Yeah, there's a lot of people saying, "Hey, I don't really get Bitcoin, but I like blockchain." They're wrong, because it's like, maybe they like cryptography, right? Digital signatures, amazing, right? Public key encryption, amazing, right? Like these have countless use cases. But the idea of putting them on a blockchain so that you can prevent double spending at great expense only has cryptocurrency applications where you really, really care about the writing on the ledger, because there's no real world authority that's going to be more authoritative than the writing on the ledger. That's only true for a bearer cryptocurrency token. Every other use case that has a connection to the real world, you already implicitly trust somebody in the real world to adjudicate, right? If somebody steals my NFT, that was why I get to live in my house. Realistically, I'm still going to go to the police and get to live in my house. So I don't need the blockchain to prevent double spending on my house NFT. See what I'm saying?</p><p><strong>Theo: </strong>Just like you trust institutions and society enough to not require any kind of actual decentralization?</p><p><strong>Liron: </strong>I mean, when I live on my street, there's some level of trust that somebody is not going to walk in and take my stuff. That's not a trustless society because I don't own a gun.</p><h3>Charlie Munger and Richard Feynman (1:22:12)</h3><p><strong>Theo: </strong>Switching topics a little bit, speaking of Charlie Munger, he just died a couple days ago. I was a big fan of his, rest in peace. He might have actually introduced me to the field of rationalism. Would you consider Charlie Munger a rationalist?</p><p><strong>Liron: </strong>Yes, he's definitely a type of rationalist. Even before less wrong, and the modern sense of this that a lot of us appreciate, there have been a lot of schools of rationality. They all have a shared enterprise of using your brain to do better than playing tribal politics and hunting animals. It's like playing the piano with your feet. What if I let the need for accurate beliefs? What if I let the need for truth propagate back to the way that I wield my organ, my biological organ? I'm going to determine the way I think not by how I like to think, not by how I want to be perceived as thinking, but by what creates the best sound of the piano? What creates the best drive toward truth? What moves the boat? What steers the boat toward truth, the best toward the island of truth, right? Using my beliefs and using evidence as fuel, how do I steer the boat, regardless of how crazy I look when I'm steering it? How do I actually steer it properly? </p><p>That enterprise, Munger wanted to engage in that enterprise, because he wanted to steward his portfolio. He had what Eliezer calls something to protect. There's a Japanese trope, where superheroes don't just randomly get superpowers, they get the superpowers because they have something that they want to protect. And as a result of the need to protect something, then they work backwards to needing the superpowers. The idea is that rationality emerges when you care more about navigating with your brain somewhere than you care about what you're doing with your brain directly. You don't care how social people are going to view your choices, you don't care about looking weird, you just care about getting to the destination, optimizing something, making some outcome happen, and you get emergent rationality. Munger absolutely did that. Richard Feynman did that in physics. The Feynman diagram might be an example of some kind of a weird, non-traditional thing that did the job of advancing our understanding of physics. </p><p><strong>Theo: </strong>Well, I think that's a pretty good place to wrap it up. Thank you so much for coming on the podcast.</p><p><strong>Liron: </strong>My pleasure. I'm a fan and I'm bullish. I'm glad I'm getting in early on this podcast, because I'm sure it's going to be an institution very shortly.</p><p><strong>Theo: </strong>Yeah, can't wait.</p>]]></content:encoded></item><item><title><![CDATA[#9: Dwarkesh Patel]]></title><description><![CDATA[Podcasting, AI, Talent, and Fixing Government]]></description><link>https://www.theojaffee.com/p/9-dwarkesh-patel</link><guid isPermaLink="false">https://www.theojaffee.com/p/9-dwarkesh-patel</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sun, 03 Dec 2023 16:16:32 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/139380284/66a02be8ad5a9774bac682bb5d1ba22b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Dwarkesh Patel is the host of the Dwarkesh Podcast, where he interviews intellectuals, scientists, historians, economists, and founders about their big ideas. He does deep research and asks great questions. Past podcast guests include billionaire entrepreneur and investor Marc Andreessen, economist and polymath Tyler Cowen, and OpenAI Chief Scientist Ilya Sutskever. Dwarkesh has been recommended by Jeff Bezos, Paul Graham, and me.</p><ul><li><p>Dwarkesh Podcast (and transcripts): <a href="https://www.dwarkeshpatel.com/podcast">https://www.dwarkeshpatel.com/podcast</a></p></li><li><p>Dwarkesh Podcast on YouTube: <a href="https://www.youtube.com/@DwarkeshPatel">https://www.youtube.com/@DwarkeshPatel</a></p></li><li><p>Dwarkesh Podcast on Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8ae2420c45baf783ca4e672ed7&quot;,&quot;title&quot;:&quot;Dwarkesh Podcast&quot;,&quot;subtitle&quot;:&quot;Dwarkesh Patel&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/4JH4tybY1zX6e5hjCwU6gF&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/4JH4tybY1zX6e5hjCwU6gF" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe></li></ul><ul><li><p>Dwarkesh Podcast on Apple Podcasts: </p></li></ul><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1516093381.jpg&quot;,&quot;title&quot;:&quot;Dwarkesh Podcast&quot;,&quot;podcastTitle&quot;:&quot;Dwarkesh Podcast&quot;,&quot;podcastByline&quot;:&quot;Dwarkesh Patel&quot;,&quot;duration&quot;:5475,&quot;numEpisodes&quot;:60,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381?uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-11-29T15:01:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><ul><li><p>Dwarkesh&#8217;s Twitter: <a href="https://twitter.com/dwarkesh_sp">https://twitter.com/dwarkesh_sp</a></p></li><li><p>Dwarkesh&#8217;s Blog: <a href="https://www.dwarkeshpatel.com/s/writing">https://www.dwarkeshpatel.com/s/writing</a></p></li></ul><h3>TJP Links</h3><ul><li><p>YouTube: </p><div id="youtube2-ggSbkRh6J_8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;ggSbkRh6J_8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/ggSbkRh6J_8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div></li><li><p>Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;#9: Dwarkesh Patel - Podcasting, AI, Talent, and Fixing Government&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/5s9qITabLtkTtgCvHWx0ct&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/5s9qITabLtkTtgCvHWx0ct" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe></li></ul><ul><li><p>Apple Podcasts: </p></li></ul><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:8640,&quot;numEpisodes&quot;:8,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-11-08T04:30:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><ul><li><p>RSS: <a href="https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss">https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss</a></p></li><li><p>Playlist of all episodes: <a href="https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj">https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj</a></p></li><li><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p></li></ul><h3>Chapters</h3><ul><li><p>Intro (0:00)</p></li><li><p>OpenAI drama (0:50)</p></li><li><p>Learning methods (4:10)</p></li><li><p>Growing the podcast (7:38)</p></li><li><p>Improving the podcast (17:03)</p></li><li><p>Contra Marc Andreessen on AI risk (24:18)</p></li><li><p>How will AI affect podcasts? (26:31)</p></li><li><p>AI alignment (32:08)</p></li><li><p>Dwarkesh&#8217;s guests (38:04)</p></li><li><p>Is Eliezer Yudkowsky right? (41:58)</p></li><li><p>More on the Dwarkesh Podcast (46:01)</p></li><li><p>Other great podcasts (50:06)</p></li><li><p>Nanobots, foom, and doom (56:01)</p></li><li><p>Great Twitter poasters (1:01:59)</p></li><li><p>Rationalism and other factions (1:05:44)</p></li><li><p>Why hasn&#8217;t Marxism died? (1:15:27)</p></li><li><p>Where to allocate talent (1:18:51)</p></li><li><p>Sam Bankman-Fried (1:22:22)</p></li><li><p>Why is Elon Musk so successful? (1:29:07)</p></li><li><p>How relevant is human talent with AGI soon? (1:35:07)</p></li><li><p>Is government actually broken? (1:36:35)</p></li><li><p>How should we fix Congress? (1:40:50)</p></li><li><p>Dwarkesh&#8217;s favorite part of podcasting (1:46:46)</p></li></ul><h1>Transcript</h1><h3>Introduction (0:00)</h3><p><strong>Theo: </strong>Welcome back to episode 9 of the Theo Jaffee Podcast. Today I had the pleasure of interviewing one of my favorite podcasters, the one and only Dwarkesh Patel. Dwarkesh is, in many ways, what I aspire to be as a podcaster. He interviews some of the most interesting people in the world in AI, history, economics, and beyond, from Ilya Sutskever to Tyler Cowen&#8212;and does so only after many hours of deep research and crafting some of the most thought-provoking questions I&#8217;ve ever heard. His listeners include Jeff Bezos, Paul Graham, and Nat Friedman. In this episode, we cover a wide range of topics: how to prepare for and produce great podcasts, different visions for both the short-term and long-term future of AI, how to get talent into politics, and much more. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Dwarkesh Patel.</p><h3>OpenAI drama (0:50)</h3><p><strong>Theo: </strong>Hi, welcome back to episode nine of the Theo Jaffee Podcast. Here today with Dwarkesh Patel.</p><p><strong>Dwarkesh: </strong>Hey, what's up, man? Thanks for having me on your podcast.</p><p><strong>Theo: </strong>Absolutely. I want to start off by talking about the events of the last weekend. When I scheduled this, I did not know that that was going to happen. I don't think anybody knew that was going to happen. So with all the Robert Caro, Lyndon Johnson reading that you've done, reading about power, reading about human behavior, do you think you could have predicted or understood anything about this better?</p><p><strong>Dwarkesh: </strong>Certainly not predicted, because I think the prediction is contingent on a whole bunch of details about what happened that I still am not aware of. And I don't think almost anybody's aware of, despite the endless speculation. As for could this better help you understand it? Certainly. I was just thinking about&#8212;the Lyndon Johnson books are good, but there&#8217;s also Caro&#8217;s great biography of Robert Moses, the famous dictator of New York City. He has many episodes of Robert Moses in his early career where there&#8217;s an indication that he might be doing something that's in his own self-interest, or that doesn&#8217;t actually accord with his very publicly flattering image. It just kind of brushed under the rug, not well understood. People don't talk about it or gossip about it because of different kinds of fears. Anyways, I&#8217;m not saying that that's necessarily what's happening. But it is important to understand that we don't have the full picture yet and keep that in mind.</p><p><strong>Theo: </strong>That makes sense. Actually, just before I got on the podcast, I was scrolling through Twitter, as one does. And I read that there's a new piece of information after all this that they were working on an agent that can do math. It's pretty interesting.</p><p>So you interviewed Ilya Sutskever on the podcast before. Did you judge his character at all? Did he seem to you like totally earnest, with the goal of protecting humanity? </p><p><strong>Dwarkesh: </strong>Honestly, it's hard to evaluate somebody on from a one-hour conversation. But from the testimonials of the people who know him, it really does seem like he's a very genuine guy who has this priority as making sure humanity has a good future. And that's not to say that he can't make mistakes in his judgment about how to get to that future. But I think nobody's contradicting that basic motivation of his from people who know him over the years.</p><p><strong>Theo: </strong>I'm pretty surprised that he switched sides.<strong> </strong>Yeah, I mean, this whole thing was very hard to follow.</p><p><strong>Dwarkesh: </strong>There's a lot we don't know. It&#8217;s really hard to comment. There&#8217;s so much we don&#8217;t know. And it is really hard to say why or why he switched sides.</p><h3>Learning methods (4:10)</h3><p><strong>Theo: </strong>Switching subjects a little, a lot of what you do is reading and research. So how do you read specifically? Do you take notes? How does your note-taking method work, if you do take notes? </p><p><strong>Dwarkesh: </strong>I recently started using space repetition myself after I talked to Andy Matuschak. It's insane how much more effective it is. You realize it when you make a card about something you think isn't even important. And a week later, you've almost forgotten and you see the card and you're like, "Well, what's that again?" I've seen the evidence again and again that space repetition is effective. Honestly, if I'm not making cards about something I'm reading, I might as well not even read it at all. That's how much less effective I think normal reading is. As for note-taking itself, I don't really do much of that. I just have a Google Doc, as in guest name questions, and I start adding questions to it. And before the interview, I'll organize them. But yeah, space repetition and noting down questions as I'm reading.</p><p><strong>Theo: </strong>What specifically do you use for space repetition? Anki? </p><p><strong>Dwarkesh: </strong>No, I use this app called Mochi. It's just a nicer interface.</p><p><strong>Theo: </strong>I've tried using Anki before for language learning, but for general learning and knowledge, do you know of any really successful people who have used space repetition to learn more effectively than just reading huge volume?</p><p><strong>Dwarkesh: </strong>Depends on what you mean by successful. I think it's just not a technique that's been widely used. I don't know the ancient history of this, but I'm guessing there's been people who have used something similar to space repetition. I was recently reading the Gulag Archipelago, and there's this really interesting chapter on memory. It talks about how people who were composing these long books, like the Gulag Archipelago, in their mind, the multi-volume work, and he just kind of memorized it. He had these beads that were nominally to pray for. His memorization technique was that he would go bead by bead. And at every bead, he would...He would make sure he has remembered the passage that he has written down in his head. He would recite it, and then he'd go to the next bead and so on. That's how he memorized the work he composed in his head, because he couldn't write it down, otherwise, you'd get executed for your thoughts. That was sort of a tangent on, is there anybody who successfully employed it? I think it is true that a lot of really, for lack of a better term, sample efficient people can absorb a lot of information and synthesize it to come up with new ideas. People like Tyler Cowen or Berne Hobart, they seem to do fine without space repetition. I personally, I have benefited tremendously from its use. And I wonder if they themselves would benefit as well. I should ask them. I should ask Berne or something.</p><h3>Growing the podcast (7:38)</h3><p><strong>Theo: </strong>So your moat, I guess you'd say, is deep research and good questions. Do you think that there's any kind of trade off between deep research and good questions and popularity? Like, does going too deep exclude some people?</p><p><strong>Dwarkesh: </strong>Certainly at some frontier. Certainly there's eventually a trade off, right? Like 7 billion people are not going to be following a conversation about that's insufficient depth. But I do think I'm at least an order of magnitude away from that frontier in that I could still go 10x more and I'd have an audience that's big enough. There's enough people who would want a conversation of this depth. </p><p>I used to think this way before when my podcast was much smaller. I used to think, oh, well, it's because people, a large group of people couldn't appreciate this kind of stuff. And since then, it's grown a lot and people do appreciate it. And I've realized it was just cope. And so it's just not useful to think in that way. You know, you do something really high quality. And if you were to have a super banal podcast, then who's going to listen to it? Like, there's already a bunch of them out there. And for different reasons, they're already up at the top. You're not going to be able to compete with them on that niche.</p><p><strong>Theo: </strong>What kind of&#8212;</p><p><strong>Dwarkesh: </strong>Don't you think? What do you think? I mean, you're making a podcast. What do you think about that? </p><p><strong>Theo: </strong>I mean, honestly, there's a long way to go with. I mean, I don't really worry about getting too popular at this stage. I worry more about the opposite. Like, how do I get more popular? And I mean, yeah, I was talking about this with my friends recently. A couple months ago, they were like, oh, you should lean more into like, make like sensationalist thumbnails and titles and YouTube shorts with, you know, I see some podcasts popularized in this way where you have like a, like a GTA race in the background. Do I want to do that? Like, well, will it dilute the brand that I want to create? Or like, is that just the way that you get people on the podcast? And if so, like, will it get the people who I want to be listening to the podcast who&#8217;ll, you know, listen to it for a while and share it.</p><p><strong>Dwarkesh: </strong>Yeah, I certainly think, especially for the kind of content you're making. It's like, I mean, yeah, it just, it's hard to imagine that that's the way it gets popular at the same time. It's not something to neglect. So there's a difference between doing the most cringy shit possible versus neglecting it altogether. I certainly put like a lot of care into making my thumbnails and then I've started making a lot of Twitter clips. Sometimes they go viral. But it's just not cringy shit. Like, you know, yeah, the GTA races in the background. But like promotion is good. It's like nothing against promotion. There's a separate question of then you don't avoid the deep research and you still promote, but you do it in a way that is true to the authentic thing you're trying to put out there. </p><p><strong>Theo: </strong>I mean, certainly there's a lot of podcasts that are just like content. I was thinking about this earlier today. Like what makes something good is that it can no longer be called content. Like I would call a lot of the stuff that I see on social media content, but I would not call like Paul Graham's essays content because they&#8217;re so much more than that.</p><p><strong>Dwarkesh: </strong>Or what, what is the other one where I think people try to define it? Like how would I define your profession? And they say, are you a content creator? And I cringe a little bit on the inside, but it's, it's worse to be called the alternative, which is journalist. So I take what I can get.</p><p><strong>Theo: </strong>What about citizen journalist?</p><p><strong>Dwarkesh: </strong>Yeah, I dunno. I just like, it's not journalism exactly. It's not current events or anything like that. Yeah. I think that's a great way to describe it. I think cause content implies something that can be farmed content farming or that is fungible with other kinds of content. And you do want it to be something that's not just like that.</p><p><strong>Theo: </strong>So going back to the clips, what makes you decide which parts of the podcast you make into clips to post on Twitter? Is it like just interestingness? Is it like conciseness, something else?</p><p><strong>Dwarkesh: </strong>Actually, I'm literally probably tomorrow going to put out a contest to make clips for my podcast because it takes so much time and so much context and so much taste to do it. I mean, it just, you can put out a certain clip and it'll get like, I don't know, 10, 20 likes on Twitter and you can put out a different thing and it got 3000 likes. And it just about the context of knowing which part to clip that people will be enthused to share and so on. And that's honestly a pretty challenging thing that I haven't been able to automate away yet or forget about automating. I haven't been able to hire away yet. So yeah, I'm just going to do a contest to see if somebody else can do it. Cause this has been super important to the growth of the podcast, but it's also taken away a ton of time that I should be spending reading.</p><p><strong>Theo: </strong>What do you think they will choose? What makes a good candidate for clips?</p><p><strong>Dwarkesh: </strong>It's hard to explain. I was trying to come up with an explanation, the description of this contest and guidelines. It can be something that you could say, "Oh, it should be about hot button issues so that it goes viral," but it's not just that. Maybe it should touch on something people are interested in, but there's an element of novelty about something people care about.</p><p>I'm just trying to think back on certain clips that went viral. I had a clip about Shane Legg explaining that search is important to add into these LLMs to get them to do novel things. Now that's not about a hot button issue, like cultural wars or anything, but it is interesting. And you can always explain for each one. It's like the Anna Karenina quote of &#8220;every happy family is happy in the same way. And all unhappy families are unhappy in a different way&#8221;.</p><p>I guess all clips that go viral are unique in their own way, at least I don't know. Maybe that's not true for the average podcast, but that's what I found for the ones I've tried to analyze of mine. I don't know how much the audience, I think it'll be, definitely you and I are interested in how these clips are manufactured is challenging. I wonder how much the audience is interested in the clip making.</p><p><strong>Theo: </strong>Well, the audience is interested in the clips. That's the point we're trying to optimize for the audience. So, have you studied Mr. Beast or any other super viral people or is what you try to do just different? </p><p><strong>Dwarkesh: </strong>I don't think there's much to generalize from the Mr. Beast type stuff. I admire what he's able to do for his own kind of content. It just, you just can't advertise that content the same way that we advertise ours.</p><p><strong>Theo: </strong>What about specific podcasters in this niche, like Lex Friedman? Although Lex doesn't seem to do that much.</p><p><strong>Dwarkesh: </strong>He has a clips channel. I think that helps him out probably. I think he's just kind of farmed it.</p><p><strong>Theo: </strong>Yeah. I think the only time I actually watch video podcasts is the Lex clips.</p><p>Speaking of watching different media, going back to talking about you reading, do you typically read mostly books or articles or do you watch YouTube videos or podcasts or all of the above? What's the split there?</p><p><strong>Dwarkesh: </strong>I actually don't listen to many podcasts at all. If any, there's not really maybe a handful. I can't really think of any podcasts I listened to that regularly. I do read a lot of books, obviously. And part of it is what drives my interest. Part of it is the books of the guests I'm interviewing. Because I've been getting a lot into AI recently, there's a lot of papers and technical material, some textbooks. It just depends on the subject. What about like, I can definitely talk about like for a particular, if you want me to, I can go into what might've happened on like a typical episode if I'm interviewing Dario or Ilya or something.</p><p><strong>Theo: </strong>Yeah, sure.</p><p><strong>Dwarkesh: </strong>Let me think back on which would be a good episode. Yeah. So for Dario, I read all the papers that they put out on the transformer circuits, a thread of different mechanistic interpretability things. Then just reading a bunch of stuff about scaling the original scaling loss papers, how that's evolved over time, talking to a bunch of researchers, AI researchers to better understand the field. And what's uncertain about it. That would be interesting to ask about better understand the mechanistic interoperability results and what they imply. </p><p><strong>Theo: </strong>How do you get people like Dario in particular who are, seem to be very media shy on the podcast. Is it just cold emails?</p><p><strong>Dwarkesh: </strong>Eventually you build up a reputation and then, you know, somebody who's a link to them, which is what happened there. Yeah, basically just not necessarily cold emails, but you just like, you get to meet more people over time and I, it's not something I would try to do consciously, but it just been helpful and that's what's gotten me some of the biggest guests.</p><h3>Improving the podcast (17:03)</h3><p><strong>Theo: </strong>Do you think that you're naturally good at podcasting or more that you got good over time? And if so, what specifically improved?</p><p><strong>Dwarkesh: </strong>I definitely have gotten much better over time. Like I haven't even tried to listen to one of my old conversations because if I try it, I think I cringe really hard.</p><p><strong>Theo: </strong>What changed?</p><p><strong>Dwarkesh: </strong>I've I just learn more and you can notice this about if I listen to podcasts, I don't like I noticed the same thing that same patterns that I saw in my old podcast, which is very generic questions. Cause you just don't know much about anything. So you just have to ask these sort of vacuous general questions. Yeah, I, I've just learned more things. I can better empathize with the audience. And as long ago I got an older, I think that's not an insignificant part of this. I started the podcast when I was 19, I'm 23 now. My brain has probably changed in that time.</p><p><strong>Theo: </strong>Yeah, I imagine. Was it like a conscious effort or just kind of like just getting older and smarter?</p><p><strong>Dwarkesh: </strong>Was there any? Definitely the learning things was for years I've been preparing to interview guests from a wide variety of fields. And so I've been reading a lot during that time. And that that's definitely been a big part of it. Well, there's no specific thought like, Oh, I need to make my questions better. And here's like the dimensions on which I can make the questions better.</p><p><strong>Theo: </strong>It's just like, you know, add more to the pre-training data.</p><p><strong>Dwarkesh: </strong>Basically. Yeah. That's a great way to phrase it.</p><p><strong>Theo: </strong>Because like a lot of progress in AI is just getting more data and better data.</p><p><strong>Dwarkesh:</strong> And I actually heard a really interesting analogy to here to learning in general. When you're getting into a new field, you just want to pre-train a whole bunch of random tokens. You read the papers, the textbooks, you're just trying to grok. And then afterwards, you do this fine-tuned supervised learning where you delve deep into what every passage means, once you better understand what's going on generally in the field.</p><p><strong>Theo: </strong>Like the Noah Smith, two papers thing.</p><p><strong>Dwarkesh: </strong>What is that?</p><p><strong>Theo: </strong>Basically, Noah Smith said something like if you want to introduce me to a new field of literature, give me two papers from that literature. That's a good test because if the literature doesn't have two good papers, then the rest of it's not worth reading. If the papers themselves are really insightful, then there's probably something there. I forgot the rest of it, but that&#8217;s the essence of it.</p><p><strong>Dwarkesh: </strong>That's probably a good tip on how to evaluate the literature to begin with. </p><p><strong>Theo: </strong>So if you didn't consciously refine much in the past, have you thought about what to consciously refine for the podcast in the future?</p><p><strong>Dwarkesh: </strong>I have thought about ways to promote it. As for the basic format that I usually do have interviews, I actually haven&#8217;t. And I think people can give me feedback that I should probably take to heart, but there's not something that I think&#8212;Oh, there is one thing, but this is not about learning more or something. It's just making sure that I actually ask about the most important thing. And I don't let that go because I used to have this habit and may still do of just bouncing around from archaic topic and esoteric thing I read in their book to esoteric thing I read in their book and not just honing in on the most important thing and making sure we spend a good 20 minutes on it. </p><p><strong>Theo: </strong>So what do you think the most important thing would be if someone were to interview you? Is this question cheating?</p><p><strong>Dwarkesh: </strong>I think I'm different in the sense of like, I don't have a big take and that were&#8230; I guess we could talk about AI. I mean, I had to think about that in order to do different interviews. But honestly, even there, I don't really have an original take. I just have different, small sorts of takes and heuristics about a lot of different kinds of things. But for me, I think people seem to think I know more about all these kinds of heuristics about podcasting that I don't. So when people ask me, what are your tips for podcasting? I just, I don't know. I just try to read it and I just try to come up with questions. The object level things are the things that are definitely very interesting to me, but the topics themselves.</p><p><strong>Theo: </strong>So yeah, you mentioned something about people were giving you feedback on the format of the podcast. Have you thought about monologues? Two of the relatively few podcasts that I&#8217;ve listened to are hardcore history by Dan Carlin and the Founders Podcast by David Senra, which are both monologues like always. And I find them to be really, really interesting even though I typically prefer to read blog posts. </p><p><strong>Dwarkesh: </strong>No, I think that's a great point. And the great thing about, for example, Hardcore History is that it is an audio book in some sense, because it's like 12 hours of content on a topic. That's an audio book, but though he narrates it conversationally, he's just talking to you. And so the speech patterns and the redundancies that make speech easy to understand come about naturally. And so, yeah, I love that. Maybe I should do that. So the next time I read a blog post, maybe I should just do a related monologue and not just narrating the blog posts, but just kind of shooting the shit about it. There's something I've thought about before you would, would you find that interesting? Yeah, I would. Okay. Yeah. I'll try that on the next one. </p><p><strong>Theo:</strong> I wonder if the monologue podcast grabs the human attention more than an audio book of reading something that's meant to be read and not listened to.</p><p><strong>Dwarkesh: </strong>Exactly. Yeah. There's definitely a different sort of cadence to speech than to writing. And then just like the conversational nature of these kinds of podcasts gets out of that better.</p><p><strong>Theo: </strong>We were talking about different blog posts that you've done. So you've deleted some of your old podcast episodes and articles. So why, why did you, is it just not meeting the quality bar? Like what would it have taken to keep them up?</p><p><strong>Dwarkesh: </strong>Yeah, exactly. It just wasn't that good. I mean, again, like I started the podcast and blog and I was like 19 years old. So it's not that surprising that I look back on it and then cringe at the low quality of some of it, which is not anything against my past self. You know, I'm very grateful for what my past self has done, but just certain things I'm just wasn't like the best work I produced. So I just kind of took it down. </p><p><strong>Theo: </strong>I liked the Contra David Deutsch on universal explainers one. </p><p><strong>Dwarkesh: </strong>Did I take it down?</p><p><strong>Theo: </strong>I think so.</p><p><strong>Dwarkesh: </strong>Oh, I should put that back up. I fondly remember that one.</p><h3>Contra Marc Andreessen on AI risk (24:18)</h3><p><strong>Theo: </strong>And definitely the Contra Marc Andreessen on AI risk on that one. I really liked, you didn't take that down.</p><p><strong>Dwarkesh: </strong>Yeah. </p><p><strong>Theo: </strong>Were you surprised by his reaction?</p><p><strong>Dwarkesh: </strong>Yeah. Yeah. And then people pointed out to me afterwards. Well, maybe I should have emailed him privately beforehand to let him know. I don't know what that would have changed. I mean, I guess fair enough. I think it still doesn't. The main thing is not even like the personal reaction. I really don't care about that. It just, I just hope he gets considers the arguments against position. And I don't know if he has.</p><p><strong>Theo: </strong>I mean, I'm sure that he's, he's certainly seen the arguments against this position, but do you think that's, that's, that's a little bit bearish if so.</p><p><strong>Dwarkesh: </strong>It's surprising that someone as prominent, famous, and clearly intelligent as Marc Andreessen does not seem to be able to engage with counter-arguments. I don't think he has an obligation to write a counter-argument to me. He's a busy guy. He has an open invite to come back on the podcast. He was on it before to talk about AI and these related topics. I don't know if he'll take me up on it now, but I think if you're going to play in the intellectual arena, you have to engage if someone, like me, goes through the effort to do a point by point rebuttal of that blog post. Especially when it goes viral. So I think if something reaches that stature and has that kind of effort and quality behind it, you&#8217;re obligated to respond.</p><p><strong>Theo: </strong>That was one of my favorite episodes, actually the Marc Andreessen one. I did not know that carry used to refer to whaling operations.</p><p><strong>Dwarkesh: </strong>Oh, yeah. He's a really smart guy. He's super interesting. Has a really interesting taste about all kinds of things. I just think here he's got some bad arguments. I mean, if you&#8217;re gonna put out ideas, he has an open platform at least to come address them on my podcast.</p><h3>How will AI affect podcasts? (26:31)</h3><p><strong>Theo: </strong>So, how do you think AI specifically will affect the future of podcasts? What would happen if it becomes superhuman at interviewing or researching or being interviewed? I just saw a tweet yesterday. It was a meme about AGI booking a slot on the Dwarkesh podcast.</p><p><strong>Dwarkesh: </strong>I saw that too. Well, getting interviewed or doing the interviewing?</p><p><strong>Theo: </strong>Either. Just what do you think will happen to your career as AI becomes more powerful?</p><p><strong>Dwarkesh: </strong>I think that would be the least of our concerns at the point at which it can automate a podcast. I don't expect to be one of the first jobs to go. It seems like a pretty subtle art, not only to ask the questions, but then to have the human presence and to be able to respond with follow-ups to what the guest says. I'm not expecting to get automated anytime soon. </p><p><strong>Theo: </strong>Well, assuming AGI goes well, cause you said it would be the least of our concerns. And so, we live in this wonderful utopian AI future. Would you still podcast? How do you think podcasting would change?</p><p><strong>Dwarkesh: </strong>That's a good question. I honestly think the post AGI world is an under-theorized question. I've asked it to basically all my AI guests and none of them have given me a good answer. Part of that is, well, what are you doing personally? I mean, I think personally I would like to become an enhanced being, traveling around the galaxy with the help of the technology that the AI has given me and not just be a podcaster forever, hopefully.</p><p><strong>Theo: </strong>Well traveling around the galaxy in reality or virtually? Cause one of my first guests, Greg Fodor, gfodor, likes to talk about this idea of subterranean aliens. What if the solution to the Fermi paradox is just that all the aliens go underground in pods under the crust to protect themselves and go live in VR and do whatever they want in VR. Why would they travel around the galaxy? If your ship gets blown up, then you actually die. Whereas you could just send a robot that you control from your VR pod underground.</p><p><strong>Dwarkesh: </strong>I think that makes sense if we're assuming they're biological entities, but I kind of priced in already that they are the software that's running in the drones, like eventually a civilization will just have software. And that's what I mean when I say that I would be enhanced. So, I imagine you'd be an emulation or something. </p><p><strong>Theo: </strong>If you think about AI so much and the future and technology, do you discount the importance of space exploration, for example, like a lot of people think of SpaceX and not OpenAI as the most transformative company.</p><p><strong>Dwarkesh: </strong>It's interesting to see if they kind of merge and link together. Not the companies themselves, but if those technologies emerge in some way, you can imagine, I don't know, some sort of GPU run in space or something. That&#8217;s a little more far-fetched. The development of AI will be super hardware contingent. I mean, we're going to be seeing like if the compute centric framework is correct, we're going to see, you know, $50 billion training runs or a hundred billion dollar training runs or something. And all kinds of different hardware is going to be relevant to that. I don't think they're going to be unlinked at the point in which we're developing the AGI. </p><p><strong>Theo: </strong>If you did book the AGI on the Dwarkesh podcast, what would you talk to it about?</p><p><strong>Dwarkesh: </strong>I'd be super curious about its psychology. Does it think in the same concepts that we think in? There's the obvious type of what are its values, but how different is even the basic cognition and the thought process? And, or is it the case that because it learned to think in human language, that it adopted the same mind that that language has been developed on, which is our human mind.</p><p><strong>Theo: </strong>Or would it just not know? Kind of how we don't really know how the brain works?</p><p><strong>Dwarkesh: </strong>I should probably read more cognitive science to better understand even how human thinking works. That's a good point. I think there's also another big possibility. I guess we'll have better insight into its own mind than we have in our own because of if the mechanistic interoperability and all these other kinds of research work out. And so we might not even have to ask, we could just look in the AI directly. What are the things I'd be curious about? I mean, just a bunch of stuff relevant to how it's thinking. Presumably, it's thinking at a different speed. I'd be curious about how it communicates with other AIs. Are they communicating in language or can they just share that latent space? There'd be so many different questions. It wouldn't be about their opinions or something. Other than the fact that I care about their values, I'd just be super curious about how they work and how much is available to them to divulge about how they work. Maybe they don't understand themselves, but&#8230;</p><p><strong>Theo: </strong>Maybe they'll be prevented from understanding themselves too well. I don't think OpenAI will give them access to the weights.</p><p><strong>Dwarkesh: </strong>But we don't have our own weights. And I guess you could say that we don't understand ourselves as a result, but I don't know. I feel like you could probably learn a lot just from introspection.</p><h3>AI alignment (32:08)</h3><p><strong>Theo: </strong>Are you more optimistic about AI alignment, given that we can't access our own weights and yet we seem to be fairly aligned and we can access the weights of the AIs? I talked to a few people on my podcast, Nora Belrose, Quintin Pope and so on on Twitter, Teortaxes, who seem to be much more optimistic about alignment for that reason.</p><p><strong>Dwarkesh: </strong>There is definitely not only that we can read their minds, but here's something we can't do with humans. If you commit a crime, we kill you off and then we kill off all your descendants so that the genes, which caused your crime are diminished in the gene pool.</p><p><strong>Theo: </strong>They did that in ancient China.</p><p><strong>Dwarkesh: </strong>I guess you're kind of a little bit like society or you're saying, we just send you out to prison. I don't think it has that much of a sort of genetic effect that whereas with AI is really literally a gradient descent, we can sort of like, and not only to read their mind, but like actually change their mind in a sort of very fine grained way. So, in those two ways, it definitely does suggest that it might be easier to read their mind. The main difficulty, of course, being that the starting point is not something that is just genetically very similar to us, but it is totally alien. It just starts off on a different trajectory than evolution. Like humans already have this sort of inbuilt machinery that's quite similar to each other.</p><p><strong>Theo: </strong>Well, do you think it's just totally alien? Like Rune has tweeted a lot about how, you know, he used to think that AIs, LLMs were alien minds, the Shoggoth from another dimension. And now he thinks that their character is instantiated from the human prior.</p><p><strong>Dwarkesh: </strong>But there's an Eliezer rebuttal, which is that because it can pretend to be any human in that predict their next word, doesn't mean that it itself is the average over all those humans or something. And I just think like, we don't really know, or at least I certainly don't know. And we shouldn't just assume the safest or the most comforting possible version. Well, just like one human, you're just, it just grogging human consciousness. I mean, it's just like no, human works by being able to predict accurately what any given human might say on the internet. And it might be the case that the end result of this is something that approximates human psychology pretty well in its own intrinsic motivations. It just doesn't seem warranted to assume that will be the case.</p><p><strong>Theo: </strong>Well, Eliezer talks about the actress and the Shoggoth, but what about the rebuttal to that, which is, you know, all it is is a next token predictor. And if the next tokens contain goodness and love and peace, then the AI will do goodness and love and peace. And if they contain taking over the world and the AI will take over the world. And there's no reason to believe that there's actually a Shoggoth inside whose desires will be different from just the distribution of texts that it was trained on. </p><p><strong>Dwarkesh: </strong>And then I will just have to recapitulate the entire sequences because then, then there's the Eliezer response, which is that as a thing gets smarter, then it will closer and closer approximate something which has goals and intrinsic drives. And that's kind of the basic shape of the argument.</p><p><strong>Theo: </strong>Do you think that the empirical evidence so far has been friendly to the Eliezer group?</p><p><strong>Dwarkesh: </strong>Oh, depends on which part, like certainly not on the fast takeoffs, but you gotta remember this guy was writing this shit like 20 years ago. Right. Compared to the people who other people were writing 20 years ago, he certainly is more accurate compared to what we know now. It's not, he was, he expected the sort of intelligence disclosure and it's, it looks like we're living in this slow, slow takeoffs world. As for on that particular prediction, what was, I think I lost my train of thought, but I'll let you say, say what you were talking about.</p><p><strong>Theo: </strong>We were talking about the Shoggoth. Is there a Shoggoth inside GPT-4? How is the empirical evidence? Do we just not know?</p><p><strong>Dwarkesh: </strong>Right. So, I mean, one of the things is that dumber animals don't seem to have any sort of ability to, to, they just kind of respond to the direct immediate, like an amoeba will just go towards the light, right? There's not some, there's not some goal or directive. It just, the next token prediction equivalent for an amoeba, just go towards the light. And as things get smarter, it does seem that there's more of a sense of agency and maybe agency is required to do really complicated tasks that we will train the AI to do. Why that agency couldn't, would be something we would always control. It's not self-evident.</p><p><strong>Theo: </strong>So you strike me as somewhat middle of the road centrist on AI risks, not like a full doomer, but you know, not sympathetic to the Marc Andreessen. We're all going to be fine totally arguments. Yeah. Have you gotten like more optimistic or more pessimistic over time? How has your AI risk journey gone?</p><p><strong>Dwarkesh: </strong>I think even a year ago, I wouldn't have contemplated these things seriously. However, the advances we've seen since have convinced me that this is real. This is actually going to happen in our lifetime. Once you integrate that into your worldview, everything becomes more concrete. So, in a sense, I've become more pessimistic than I started off as, but also more optimistic. Originally, the assumption was either you just don't think this is real or you're a doomer. Then there's a lot of really smart people in the middle, as you say, that I've interviewed on the podcast and they've given me very interesting worldviews that helped me better understand their perspective, like Carl Shulman, Paul Christiano, and so on.</p><h3>Dwarkesh&#8217;s guests (38:04)</h3><p><strong>Theo: </strong>Going back to some of your guests on the podcast. I love a lot of your podcast guests. I've had a couple of them on my podcast, Razib Khan, Scott Aaronson. I've met Bryan Caplan before. I know that you're good friends with him. One of my friends cold emailed him a couple of years ago, just saying, &#8220;hey, do you want to get lunch, we're also nerds&#8221;. Not only did he agree, he took us to this kebab place in Fairfax near George Mason, paid for our food, and stayed with us for about two and a half hours and just talked about all kinds of stuff. It was basically an unrecorded podcast.</p><p><strong>Dwarkesh: </strong>It sounds just like Bryan. He's a great guy.</p><p><strong>Theo: </strong>I'm a big fan of Bryan Caplan. Bryan, if you're watching this, thanks. And I hope to have him on the podcast soon.</p><p><strong>Dwarkesh: </strong>Yeah, you should.</p><p><strong>Theo: </strong>So, who of all your guests strikes you as the most raw, intelligent and why?</p><p><strong>Dwarkesh: </strong>Certainly, it would be the AI researchers, people like Dario or Ilya. It just takes a lot of raw fucking IQ to do that. I mean, if that's what we're counting on and I'm not, I certainly don't think that's the most important criteria for everything. But on that raw measure, I think maybe those two would be in contention. And then, but I've obviously had extremely smart people, people who are way smarter than me on a bunch of them.</p><p><strong>Theo: </strong>Do you think it's easy for you to gauge how smart people are who are much, much smarter than you?</p><p><strong>Dwarkesh: </strong>Yeah. It's hard to do a bullshit test. I mean, I'm going to go down the list because basically every person I've had on the podcast is really, really fucking smart. But I just realized the last person, one of the last people I've had on who qualifies is also Paul Christiano and Scott Aaronson, who we both interviewed. I mean, I have this great story about Scott Aaronson where I was taking his class and he explains his results. And he says, it's a very important result. And he says, you know, I almost proved this myself in 1999, but I realized that somebody had beaten me to the punch six months earlier. And I looked back on it. It's like, how old would Scott Aaronson have been in 1999? He would have been 19 years old or 18 or something. And that's when he did it. Yeah. So maybe Scott Aaronson is my answer. It was like, pure raw IQ.</p><p><strong>Theo: </strong>I don't know if you pick favorites, but who do you think your favorite guest was? And who do you think your favorite episode was? And are they the same? Is there an overlap there?</p><p><strong>Dwarkesh: </strong>I don't know if this is necessarily my favorite, but this is the first one that comes to mind. I really enjoyed Carl Shulman, just because I got introduced to so many new concepts as a result of that episode, from the compute centric framework for understanding the scaling and rise of AI to a bunch of the specific takeover risks. So I would say that one. Did you listen to it by any chance?</p><p><strong>Theo: </strong>Yeah, I listened to it.</p><p><strong>Dwarkesh: </strong>What'd you think?</p><p><strong>Theo: </strong>I loved it. Carl Shulman struck me as really intelligent, in a sense, he kind of struck me as intelligent in the same way as Eliezer. Meaning he makes a lot of his own concepts. He doesn't just take whatever there is out there and the prevailing discourse. He makes his own.</p><h3>Is Eliezer Yudkowsky right? (41:58)</h3><p><strong>Theo: </strong>What impressions did you get from Eliezer, by the way? Did you think he was like Carl Shulman or different?</p><p><strong>Dwarkesh: </strong>I think that's a fair way to characterize. I definitely think Carl is more rigorous as a thinker and much more up to date on current developments, having a better understanding, for example, of the actual hardware limitations that are current or the weaknesses and advantages of the current architectures and so on. I would put them in slightly different buckets. I do think they're similar in the sense that they're in the sort of actual, so they adopt this. I think one sort of similarity is that they do think that the decision theory stuff is important and matters. There's just a bunch of weird shit about acausal decision theory and things like that. And they think like that actually could affect the course of things. But yeah, the difference being that Carl, I think is a bit more rigorous.</p><p><strong>Theo: </strong>So do you think that some of the character assassinations on Eliezer have some substance, like how he's detached from reality. He doesn't understand what he's talking about. He's not technical. Or does he strike you as maybe this guy is right after all. Cause I'm pretty split on that.</p><p><strong>Dwarkesh: </strong>I don't think he's right on like 99% doom. I think he&#8217;s just way overconfident. And I think he's also wrong about the fast takeoffs. And I think the evidence shows that he's been wrong about it.</p><p><strong>Theo: </strong>Does it, or have we just not reached the fast takeoff yet?</p><p><strong>Dwarkesh: </strong>It&#8217;s seeming more and more like there&#8217;s not a critical point where things just implode, but rather that intelligence is just a gradual scaling thing. And it could be wrong. Of course, anything could be wrong, but you just have to update on evidence as you go forward. The updates seem to be pointing in the direction away from Eliezer. That being said, the most important thing is that I think he's an endlessly creative and interesting thinker. And I think you just have to put him in that context of, yeah, he's probably one of the most intellectually generative people of the last 20, 30 years. I've learned a lot from reading him as a teenager and then in college and so on. Are there things he's wrong about? Yes, of course. I don't understand the visceral hate that people seem to have for him. And I also don't think people are being fair when they dismiss his contributions. He was on this decades ago. The thing that is the main thing people are thinking about now, he was on it decades ago.</p><p><strong>Theo: </strong>The visceral hate I think is just psychological pain avoidance lashing out. If they think that if Eliezer is right, I, and everyone I love will die then, no, I don't want to believe that. And I'll attack him. I'll defend myself by attacking him.</p><p><strong>Dwarkesh: </strong>Oh, and then he obviously is not a normal guy. So then it just becomes really easy to be like, Oh, what a weirdo or something. And I just don't think that's fair or a valid argument. </p><p><strong>Theo: </strong>I remember when I was eating lunch with Bryan, this was before ChatGPT, before the recent boom in AI, we talked about Eliezer Yudkowsky, who I was familiar with, but not as dialed in as I am now. And he said like, he had lunch with Eliezer recently and Eliezer tried to sit there and convince him that the world was going to end. And he was like, that's just silly. Could AI, a superintelligent AI, convince me to kill myself? I just don't think that it could do that. And he obviously does. I wonder if he's updated since then. He has updated on timelines. I've been since he lost his bet on whether GPT-4 would pass his exam.</p><p><strong>Dwarkesh: </strong>I haven't talked to Bryan about it, but I'm really curious to see where his head is at now. You should ask him about it when you have him on the podcast.</p><h3>More on the Dwarkesh Podcast (46:01)</h3><p><strong>Theo: </strong>Yeah. Do you agree with Tyler Cowen's characterization that podcasts are basically entertainment?</p><p><strong>Dwarkesh: </strong>Oh yeah, definitely. I mean, I think back on, no, actually. Okay. I have two minds on this. On the one hand, I know how much, how little I understand the fields that I do podcasts on. And then I think back on, I read so much in order to be able to ask questions about this field. And I still think I really don't understand it in any meaningful sense. I couldn't just walk into, I couldn't actually do the job, so to speak. If I'm interviewing somebody who was a researcher or something, and that reading is titrated then to just a few questions that I get to ask in the two hours or whatever. And the response is able to give. So if I personally feel like there's so much about the field that I understand, obviously the audience is in a worse position than I am because of the reading I've done, unless they independently happen to know about it. So I definitely don't think it's a replacement for actual expertise or something. </p><p>That being said, I mean, you know, I was saying earlier that I haven't listened to that many podcasts recently, but when I was in high school and, you know, a teenager and then in college, I learned so much about so many different fields from podcasts and you could say, well, you get a sort of introductory understanding of many different fields. And yeah, that's true, but that's useful for most people. They need intros to everything.</p><p><strong>Theo: </strong>So when you were talking about titrating into a two-hour episode of the AI researchers, when you do your research, is it more like, holy crap, there's so many amazing and interesting questions I could ask these people. Or is it like, there's really like, I need to find great questions. Are great questions overabundant or not abundant?</p><p><strong>Dwarkesh: </strong>Not abundant, usually. There are some guests where I literally have a list that&#8217;s 20 pages, Google docs or something. And obviously, we can't get through it. Usually it's not okay. Usually it actually, I don't have enough good questions or that I just barely have enough good questions. What's your experience?</p><p><strong>Theo: </strong>Basically the same. I vary based on the guest. How many of your questions, if any, are just off the cuff? Do you come up with any completely new questions that you hadn't put in the document off the cuff?</p><p><strong>Dwarkesh: </strong>Definitely. I mean, the followups, for example, a lot of them are off the cuff, but a lot of the followups that are actually questions I was planning on asking later on, but they just naturally follow the sequence of what my guest just recently said.</p><p><strong>Theo: </strong>You ever come up with entirely new questions, like not followups just off the cuff.</p><p><strong>Dwarkesh: </strong>Yeah. You know, you just have questions as somebody is talking. And that's why the research is helpful for the conversation. So you have enough context to ask those followups.</p><p><strong>Theo: </strong>So during the episode, when you're interviewing someone, what do you think is the optimal amount of tangents to go into? Like, what's the optimal amount to edit out?</p><p><strong>Dwarkesh: </strong>I don&#8217;t really edit out that much. The main constraint is the time of the guest. You don't want to waste time talking about things that are not really important or interesting. The optimal number of tangents is not zero, but there's such a thing as going on too many. It's hard to say generically, there's certainly not a number that one can give, but you want to go down enough that you can explore interesting directions and new ideas, not so much that you're never getting to the meat of the subject. It should serve the exploration rather than hinder it.</p><h3>Other great podcasts (50:06)</h3><p><strong>Theo: </strong>You said you listened to a lot of podcasts back in high school and college. Who were your favorite podcasters and what were your favorite podcasts? Is there overlap there?</p><p><strong>Dwarkesh: </strong>In high school, I listened to a lot of Sam Harris. Just a lot of normie shit. I was into politics when I was in high school, which is obviously a bad idea. It's just a tremendous time sink.</p><p><strong>Theo: </strong>As for favorite podcasts and podcasters, and whether those are the same, there are good podcasts without good podcasters or good podcasters without good podcasts. </p><p><strong>Dwarkesh: </strong>Can you give me an example of a good podcaster who doesn&#8217;t have a good podcast?</p><p><strong>Theo: </strong>For example, I hate to say it, I love Lex Fridman&#8217;s podcast, but I don't think he's a particularly good interviewer in the way that you or Tyler Cowen are.</p><p><strong>Dwarkesh: </strong>There are certainly people like that. An interesting question to reverse would be a good podcaster who just has the wrong format. And as a result, is really fucking it up. There are certainly people you can think of who I wish they had a podcast. Somebody like Christopher Hitchens or something. It would be really cool if he did a podcast. There are people who are just super interesting and voluminous thinkers and writers. And they&#8217;re super great and I wish they&#8217;d have a podcast. I've had former guests on where I think they would do really well if they started their own podcast. Sarah Payne was one such figure, just great at extemporaneously speaking and explaining her ideas. But at the time when I was in high school, who were such people? It's hard to remember. Hmm.</p><p><strong>Theo: </strong>You think you were just very different back in high school?</p><p><strong>Dwarkesh: </strong>Yeah, I think so. I mean, that's true for everybody though. Right?</p><p><strong>Theo: </strong>Yeah, I suppose so. Do you have any favorite podcast episodes from other podcasters that stand out?</p><p><strong>Dwarkesh: </strong>Yeah, I think for example, I don't really listen to a podcast that much anymore, but not, not for any reasons of disagreement, but for example, Sam Harris had a great episode when the BLM stuff was happening. He went into the data on police shooting. I thought that was a pretty brave thing to do and also super needed and sense-making at the time. He deserves a great deal of credit for that. As for ones that are not like "this message needs to go out" kinds of things, there's probably a bunch of episodes on Tyler's podcast that helped me understand the subject.</p><p><strong>Theo: </strong>Mine would probably be when Tyler Cowen interviewed Paul Graham. It was a meeting of two great minds who I admire a lot.</p><p><strong>Dwarkesh: </strong>Really? I was kind of frustrated because it bounced around from subject to subject enough that like Paul was not prepared to really delve deep into any of them. I think it was really interesting and I really enjoyed listening to it. But what was your reaction to that sort of taking away of that conversation?</p><p><strong>Theo: </strong>Yeah. I mean, there's the meme where Tyler was talking about like the Medici and Paul like hadn't really thought about it. So he was just like, &#8220;yeah, that's kind of cool. I guess&#8221;. Yeah, I mean, Tyler has a unique style that you don't see very often. And I really, really like Paul Graham but I think Paul Graham is best on like lengthier essays where he has had lots and lots and lots of time to think through them. Like his recent one on like, I forgot what it was called but the one that he just came out with, I have to look it up because it was one of the best things I've read in the last year, How to Do Great Work. That took him like six months just to write this few page long essay. And then of course, my other favorite individual episode was probably Lex Friedman interviewing Neil Gershenfeld who was the director of the Center for Bits and Atoms at MIT. And this was recommended to me by a friend. I didn't find it by myself. And it was all about like self-replicating machines which I had never really thought of.</p><p><strong>Dwarkesh: </strong>Yeah, I should listen to that one. That sounds interesting.</p><p><strong>Theo: </strong>Self-replicating machines and just manufacturing in general. He has a class at MIT called How to Build Almost Anything where they learn about different kinds of fabrication. His goal is to create like a general fab manufacturing area in the sense that we have like a general purpose computer that can do any computation.</p><p><strong>Dwarkesh: </strong>I've heard that sort of sentiment about nanomachines expressed. Drexler has this thing of you can compute anything and then now you need to be able to program any sort of physical. I should listen to that episode. It sounds interesting.</p><h3>Nanobots, foom, and doom (56:01)</h3><p><strong>Theo: </strong>What do you think about Drexler's nanomachines arguments? Have you read his book?</p><p><strong>Dwarkesh: </strong>Yes, I read his recent one, The Radical Abundance. And now he's working on AI stuff, right? From what I understand.</p><p><strong>Theo: </strong>I haven't heard about that. I just know that Eliezer cites his <em>Nanosystems</em> book a lot.</p><p><strong>Dwarkesh: </strong><em>Nanosystems</em> is a different guy actually. Wait, no, sorry, <em>Nanomedicine</em> is a different guy. My bad. I think it's really interesting. I'm still not sure why it didn't go anywhere. But I really enjoyed <em>Radical Abundance</em>. He has a lot of interesting arguments about the intrinsic efficiency of nanomachines. From what I remember, it was as you miniaturize it, it just becomes a lot more efficient. And you think about how, for example, in your own body, how fast the molecules are moving and how much work they can do. And that's a direct physical effect of miniaturization. </p><p>I would love to talk to somebody about why that didn't go anywhere. In the book, he has complaints about the funding situation in the 90s where they were supposed to put a bunch of money into nanomachines and then it got co-opted into stuff that was familiar to the old paradigm and wasn't actually advancing the state of the field. But why has it still not gone anywhere? Maybe we should just have something on the podcast to talk about it. Because that actually is pretty interesting.</p><p><strong>Theo: </strong>Yeah, maybe you could get Drexler on. Do you think that has any implications for FOOM? Like, do you think, even if you have a human level AI, even if you don't have like a fast takeoff intelligence explosion, do you think that means an AI would be able to kill all humans very, very quickly?</p><p><strong>Dwarkesh: </strong>Well, certainly nanomachines that can multiply very quickly are possible because we have bacteria. And, you know, you can just imagine how fast they can absorb energy? You can look at algae that multiply and take in photosynthesis and they can transform the shape of the earth pretty fast. Obviously, it has implications because then the question is how fast could they absorb energy? How fast could they do work? But I mean, in the limit, it probably makes like a few months difference between whether they had to do with robots versus whether they had to do with nanomachines. But even if the nanomachine stuff doesn't pan out, I think even the robot takeoff is pretty fast.</p><p><strong>Theo: </strong>Well, do you like to think about p(doom)? Do you think p(doom) is a useful representation for how you think about AI risk? Or is it just kind of like made up numbers based on vibes?</p><p><strong>Dwarkesh: </strong>Well, I mean, it can be both. It can be a made up number and it can be useful to have as that. It's useful, I think, to just throw out a number to gauge your credence of any event on. As for, I do understand the criticisms of having such a number that obviously these are the consequences of human actions with the p(doom). So you can't, but that's always true of any probability you get, right? It's not just true of p(doom). So this is, then there'll be criticisms of giving probabilities of anything, a war or a probability of somebody winning an election or something. Yeah, I think it's sensible if somebody's thought about it a lot to have that number.</p><p><strong>Theo: </strong>Do you have a p(doom)?</p><p><strong>Dwarkesh: </strong>Mine is not that sensible. It just, mine literally is a number I kind of pulled out of my ass. I don't know, like 20% or something. And just because that&#8217;s Carl Schulman's or I don't want to misrepresent him. His might be different, but kind of just like pulling out of people I find credible.</p><p><strong>Theo: </strong>Yeah, 20% seems reasonable, but at the same time, like, do you think that if for any given century in the future, there's like a five to 20% p(doom)? Like, does that just mean very, very bad news for civilization making it another 100,000 years? I remember you talking about this with Tyler.</p><p><strong>Dwarkesh: </strong>Yeah, I think the goal is to get to just transition from this current regime where it is possible to wipe out all of humanity to get to a regime where you'll be like spread out through the stars where some of us are in like not human anymore. Some of us are AIs or gods or some mix or enhanced. And hopefully we can get to an equilibrium where it's just, if life is all around the galaxy doing beautiful creative things and it's different kinds of civilizations, it's hard to imagine how you could wipe all of that out.</p><p>Now it might just be that the laws of physics prohibit that kind of independence. Warren has this really interesting essay called Colder Wars, where he imagines that it just really easy to catapult a comet into a planet or solar system and just destroy everything. And so therefore there's no, it just destruction becomes really easy. That might be the case. I don't know, there might be some physics that makes it super easy to destroy planets and stuff, but hopefully we're getting to a situation where it just like negligible probability over time. You know what I mean? They just like every year the probability drops. So it asymptotes towards like the cumulative probability doesn't go to a hundred.</p><h3>Great Twitter poasters (1:01:59)</h3><p><strong>Theo: </strong>So going back to social media and your research process, do you scroll through Twitter a lot?</p><p><strong>Dwarkesh: </strong>I do. It depends on a lot. I think it was certainly not as much as many people, more than I should, of course.</p><p><strong>Theo: </strong>Well, yeah, it's just so addicting, but who are some of your favorite poasters, like P-O-A-S-T, and what do you think makes them so good?</p><p><strong>Dwarkesh: </strong>Oh yeah. Daniel's pretty funny. I like him.</p><p><strong>Theo: </strong>I just got the Daniel follow the other day. </p><p><strong>Dwarkesh: </strong>Oh, nice. Let's see. Yeah, it's funny. I don't have any that like regularly make me laugh and that's my main criterion because obviously you cannot take posters if you're getting out your actual intellectual opinions from them. It's from 140 characters at a time. That's a different story.</p><p><strong>Theo: </strong>What about someone like Roon?</p><p><strong>Dwarkesh: </strong>Yeah, he's great. Yeah, definitely the market has decided obviously that he's a good poster as well. Certainly doesn't need my endorsement anymore or ever, but yeah, he's great. I haven't ranked my poasters, but I'll have to make a tier list with the S-A-B-C-D and so on.</p><p><strong>Theo: </strong>If you were to come up with criteria for what makes a poaster good, would you think that'd be similar or different from what makes a good podcaster?</p><p><strong>Dwarkesh: </strong>I think it certainly is a type of skill to be able to make things that are really compelling in 280 characters. There are two things I wouldn't assume. I wouldn't assume that that actually correlates to... I'm not talking about anybody we've named. I'm talking generically about people. I don't want to name any specific names, but you have somebody who comes up with takes on Twitter and they have takes about different kinds of topics and they shoot them out. And then you actually talk to them in real life. And then you talk about a subject which they shout out a bunch of takes about and you realize, oh, they understand nothing about this. And so it definitely dissuades you of the sort of notion that if somebody has a lot of takes or a lot of viral takes, good posts about a topic, he actually understands it in any way. I guess I said I had two things, but that's the one thing I have.</p><p><strong>Theo: </strong>So you said you spend more time than you show on Twitter. How do you spend your time in general? I remember there's an interview that you did with another website a couple of years ago. Has it changed since then? Do you have a daily routine?</p><p><strong>Dwarkesh: </strong>I don't remember what I said on there, but I read quite a bit. That's most of my job. So yeah, I spend a lot of time doing that and making, there's a lot of logistics involved with the podcast itself as I'm sure you know, of making clips and editing and so forth. That takes up a lot of my time. Then a bunch of logistics involved with, you know, reaching out to people and things like that. And that basically sums it up. I exchange ideas back and forth with people, email, group chats, meetings, so forth, meet people who are researchers or understand fields well. And that's about it, pretty simple existence.</p><h3>Rationalism and other factions (1:05:44)</h3><p><strong>Theo: </strong>With the people you talk to, would you say you're adjacent to the rationalist community?</p><p><strong>Dwarkesh: </strong>Yeah.</p><p><strong>Theo: </strong>It's interesting. Almost all of my guests, I will find eventually that somehow some of them are rationalist-adjacent. Even the ones I didn't really expect, like Razib Khan. When I interviewed him, he told me like, oh yeah, actually I was with Eliezer in 2008 with lots of people at the original Singularity Institute. And just the Bay Area rationalists. And he was an OG there.</p><p><strong>Dwarkesh: </strong>It seems like you're pulling people who have some presence on Twitter among the kinds of people you follow. And it's not that surprising that among that group, that there'd be a lot of rationalists.</p><p><strong>Theo: </strong>Well, it seems like there's some new factions forming with people who might historically have called themselves rationalists or EAs who now really don't like it, like e/accs. Although again, it's the same sort of story as Razib where they like rub shoulders with, just like, it's not as totally independent.</p><p><strong>Dwarkesh: </strong>I have been interviewing historians recently and there you just have people who would not know if you use the word rationalist, what that means. Have just not interacted with the Silicon Valley culture for better or for worse.</p><p><strong>Theo: </strong>I was looking at an interesting post earlier today that was like a political compass, except instead of the axes being like authoritarian, libertarian and like left, right, it was AGI will be like the internet versus AGI will be like a million times more important. And then we should accelerate versus we should slow down. So do you think something like that will become the most important grid on which people align their politics in the near future or will it kind of just remain the traditional political framework?</p><p><strong>Dwarkesh: </strong>I don't think it'll be either of those. I do think if the takeoff, I mean, if the AIs, at some point, if the takeoff stuff is true, then it'll become the most prominent fact about our political life. I don't think there's gonna be that much of an appetite&#8230; I don't think 25% of the country is gonna be agitating for the top right where you're trying to engineer on the maximum flops out of the solar system. I don't think there's a huge demographic constituency for that. I think the current factions are one, a result of certain backlash against EA kind of things and two, a sample of the kind of people who are talking about it right now. What that actually transitions into when it enters a mainstream political system, I think it looks pretty different and it might be a worse axis on the political system where people try to shoehorn it into current contemporary issues of political correctness or economic equality kinds of things that pale in comparison to the real stakes, which is the fucking galaxy, right? But yeah, I don't know if the e/acc versus EA will be Democrats versus Republicans in like 10 years.</p><p><strong>Theo: </strong>Maybe. Do you think that e/acc is an interesting or useful philosophy or is it kind of just vibes and trash?</p><p><strong>Dwarkesh: </strong>It depends on what you mean by e/acc. I don't want to do the same sort of intellectual dishonor that many of them conduct of completely dismissing ideas without actually trying to grapple with them. So it depends on what you mean by e/acc. It is true that technological growth has been the main force of the betterment of humanity throughout history. It's the kind of thing where you're doing a motte-and-bailey, well, if that's what you're endorsing, yeah, I'd endorse that as a historical statement. And then with AI, you have something that's kind of breaking the pattern of the pace of history and the centrality of human beings and so on. So it might be worth considering on its own terms. As for, I don't even know what the broader e/acc take is then of you maximize, I don't even wanna, it's hard to, can you explain what the e/acc take is?</p><p><strong>Theo: </strong>Well, first of all, it's kind of funny. Yesterday, I was wearing my effective accelerationism T-shirt, which I got not because I'm an e/acc, but just because I think the logo is cool. And the general sentiment for everything other than AI is pretty great. It would have been funny if I was wearing it on the podcast just by chance.</p><p><strong>Dwarkesh: </strong>I will say, by the way, I don't necessarily endorse the exact opposite of the e/acc claim that slowing down AI is in and of itself. I do think people sometimes seem to believe in the magical properties of slowing down AI or have an unrealistic understanding of how that might be possible.</p><p><strong>Theo: </strong>Oh, like &#8220;we&#8221; just need to&#8212;</p><p><strong>Dwarkesh: </strong>The end goal is not just to have slow AI, the end goal is to align the AI and then point it towards something good. The slowing is only a means to an end. You're not just going to keep it down forever. So the opposite of e/acc is certainly not a statement I would endorse. I wouldn't endorse something like "pause AI".</p><p><strong>Theo: </strong>I've noticed a marked degradation in the discourse among rationalist, doomer, decelerationist kind of people over the last few months. Probably just because it's becoming more popular. They're now committing many of the sins that the e/accs have in their time.</p><p><strong>Dwarkesh: </strong>Although you gotta remember the real serious people who are concerned about alignment are not posting on Twitter all day. They're doing technical things at labs. The kind of people who have the time to be making memes on Twitter are not the best and the brightest.</p><p><strong>Theo: </strong>What you said about what do e/accs actually want to maximize? I watched Beff&#8217;s talk about thermodynamics and the future of everything. It was basically about how, with thermodynamic dissipative adaptation, what we're trying to do is maximize the amount of free energy in the universe that will create complexity to best take advantage of it. That's what the universe itself did to create life. That's what capitalism does to create great businesses and great business owners. I don't know how good of an explanation thermodynamics is for this, but I think the general sentiment is basically true that complexity arises out of simplicity and can do pretty great things.</p><p><strong>Dwarkesh: </strong>That's true of a lot of different philosophies that you wouldn&#8217;t actually endorse the implications of. If you take Marxism, for example, the idea, if you look at Marx's reading of history, is that you have an exploiter class that comes up with an ideology to justify their exploitation of either slaves or peasants. Before modern economic growth, that kind of was what history looked like. You did have serfdom and slavery. Maybe that's not directly addressing the point you're making. To more directly address the point you're making, it is true that we want more complexity and more beauty and so on. I don&#8217;t see why that follows&#8230;will even correlate necessarily with thermodynamic free energy in the future. If I told you here's a world that's more beautiful, but it has less free energy, would you rather have the one with more free energy and less beauty and creativity? I don't understand why that'd be prima facie the thing you're trying to maximize. What if it was totally unconscious? Was it literally just optimizing for the maximum entropy of the universe, but wasn't actually in any way something recognizable to us as something that could be beautiful or something that could experience different great feelings?</p><p><strong>Theo: </strong>Then there's back into the debate about is an unconscious entropy maximizer, paperclip maximizer type thing even possible? Are p-zombies possible? Is it possible for something to have goals and the intelligence to pursue them, but no kind of self-reflection or consciousness?</p><p><strong>Dwarkesh: </strong>It could be true. We don't know one way or another, but it's true even among humans that there have been these pathological ideologies that have pursued these single-minded aims that have resulted in terrible harms and communism, Nazism, whatever. So if that's possible with humans, I don't know why you'd assume that's not possible with AIs.</p><p><strong>Theo: </strong>Well, cause communists and Nazis are conscious.</p><p><strong>Dwarkesh: </strong>I mean, in terms of like, even if they are conscious, they can, them trying to pursue their ideology to its ends or their value system to its ends just results in a shit ton of mayhem and destruction.</p><h3>Why hasn&#8217;t Marxism died? (1:15:27)</h3><p><strong>Theo: </strong>Speaking of which, why do you think Marxism has been so persistent of an ideology? Even after Marx made a lot of specific predictions that were specifically falsified, like the US would soon become communist, and that just didn't happen.</p><p><strong>Dwarkesh:</strong> I&#8217;m certainly not an expert in this.<strong> </strong>I just interviewed Jung Chang who wrote a book about growing up during the cultural revolution in China. She wrote a biography of Mao that was not well-received amongst academia because it was really harsh on Mao. I asked her why there is this instinctual desire in parts of academia to defend these brutal communist dictators like in Venezuela or Cuba or Russia. As for why there's Marxism, I think part of it is just that it aligns with certain aspects of human psychology of making classes of people, exploiters exploited and so on, and having an overarching theory of history, a narrative and a sense of struggle. But I remain confused as to why it's not been completely discredited and people still ascribe to it.</p><p><strong>Theo: </strong>Well, techno-optimism also has a grand narrative and the monomyth, the hero's journey, play to human psychology. Humanity ascended from our position as apes in the savannah to building the sand god. (I think the sand god phrasing is a little cringe, but you get the point). So why hasn't techno-optimism supplanted Marxism? Is it just the inertia of the system?</p><p><strong>Dwarkesh: </strong>Well, you have competing ideologies. It certainly is succeeding in some sense, right? Or has adherence. Part of it is just that there's not enough people yet who have enough context to understand techno-optimism, whereas anybody can understand Marxism or not anybody, but you can kind of understand the sort of thinking behind Marxism.</p><p>Yeah, and I don't think they're necessarily supplant each other more as they'll just be in competition with each other, or they happen in competition with each other. Like a bunch of narratives are in competition with each other. As for how one should personally be in relevance in relation to these narratives, Tyler Cowen has a great talk where he says that as soon as you adopt a story, you're basically pushing a button that decreases your IQ by 15 points. You know, you gotta take things case by case and understand the specifics of situations instead of having some 5,000 year grand narrative that explains everything. </p><p><strong>Theo: </strong>That reminds me of something else Bryan Caplan said on your podcast where he was like, you're talking about feminism and he was like, oh, when I write books about it, I try not to argue like a lawyer. And, you know, begin with my preconceived conclusion and then make arguments for it no matter what.</p><p><strong>Dwarkesh: </strong>Yeah, and just probably to a certain extent he does that because we all have our force to do it. The great thing about society is that then we just rebut each other and we're left with a better outcome in the end.</p><h3>Where to allocate talent (1:18:51)</h3><p><strong>Theo: </strong>You like to talk about youth and talent. So if you could rule the world and you could reallocate the amount of smart, talented kids entering the workforce however you wanted, like what areas would you take them out of and what areas would you put them into?</p><p><strong>Dwarkesh: </strong>I actually did ask this to Grant Sanderson. I don't know if you listened to that one. This might be a sort of overreaching take. I certainly think like really smart people being in non-STEM subjects. There certainly should be some people doing non-STEM things. Would I wanna take them out of it though? Because then you'd just be left with even worse non-STEM things. Because then you actually have to reduce the relevance of non-STEM things into everyday society. Oh, I will say this. The obvious one is, you really want smarter people in politics.</p><p>Here's an interesting observation. So often when I'm reading an interesting paper or reading an interesting article or something and I look up the author, the guy turns out to be a central banker. He's like the former president of the New York Federal Reserve or something. And so we have a great system actually for finding and identifying really professional, competent, non-partisan people to be central bankers. And I would just like that kind of system for different kinds of government offices. Like if the US president was as smart as the prime minister of Singapore or something, or the ministers in the cabinet were as smart as that, obviously the political life, we should definitely have more talent in. But I don't think it's like talent people aren't going into politics so much as, the selection pressure doesn't favor them. </p><p><strong>Theo: </strong>So what do you think the secret is for central bankers then?</p><p><strong>Dwarkesh: </strong>I think there's literally a whole, like a centuries long sort of filtration process. Maybe not centuries long, but decades long. That institutions that have been built up, that like you learned, that we find the most competent people in high school, we send them to these elite colleges. And econ, at least until recently, is not like a politically compatible institution. Discipline, it's like a super rigorous discipline where we care about the truth. And then we find the most competent people who have been through the undergrad there, and we send them to grad school, we find the most competent people there, and then we have them do, be like shadow the people who are most competent. And this is the same thing with law schools. And I think that's why the Supreme Court, for example, it's like, I actually have, I think they do a good job of actually trying to know, understand and parse the law. Because we have the system that like selects for people to be in the system. And I think that's why the Supreme Court because we have the system that like selects for people. You know what I mean? We have these institutions that cultivate talent in this way.</p><p><strong>Theo: </strong>So it&#8217;s just like relentless competence and talent filters? There's no additional traits that you would like specifically select for, for a central banker, aside from just intelligence and hardworkingness and competence?</p><p><strong>Dwarkesh: </strong>And caring about the subject, like being a non-political person, not an activist type. What was it you just said?</p><p><strong>Theo: </strong>Integrity, I imagine.</p><p><strong>Dwarkesh: </strong>Yeah, yeah, yeah. But it's not even just like, you can be high integrity and also be a very political type of person, if that makes sense. Just like a non-partisan, ambivalent, politically ambivalent types.</p><h3>Sam Bankman-Fried (1:22:22)</h3><p><strong>Theo: </strong>You wouldn't want Sam Bankman-Fried running the central bank though, I imagine.</p><p><strong>Dwarkesh:</strong> No, no.<strong> </strong>So then, yeah, it's interesting to specify what is it about him? Because he's obviously smart, he's obviously hardworking. Maybe dysregulated people. Not dys, D-I-S, as in not being regulated by the government, but as in personally not being well-regulated. That seems like a bad sign.</p><p><strong>Theo: </strong>Are you talking about Sam Bankman-Fried?</p><p><strong>Dwarkesh: </strong>Yeah.</p><p><strong>Theo: </strong>Really? Because he strikes me as very well-regulated personally, but just kind of misaligned. He's not unaligned, he's misaligned. Instead of focusing on, oh, what should I be doing that's legal? What should I be doing that will benefit my shareholders the most? He thinks about, what should I do that will benefit my lofty goals of effective altruism the most?</p><p><strong>Dwarkesh: </strong>I honestly don't think that's the best explanation of his behavior. I think it generally is a level of incompetence at certain things, using QuickBooks or making these ridiculous bets that even in expected value terms probably didn't make sense at the time he was making them. Just being hopped up on a bunch of amphetamines and making these back of the envelope billion dollar decisions, I don't think that's well-regulated personal behavior. </p><p>And he just evidenced, and maybe this is better evidence for the old school theories that people tend to have. Hey, if you get a haircut and you act like a normal person and dress up in a suit, I'll trust you. I think SBF is good evidence of that. The kind of guy who's just hopped up playing StarCraft while he's talking to you. I guess you could say there's no first principles reason you should disqualify somebody for that, but this is good evidence of that kind of person is just all over the place.</p><p><strong>Theo: </strong>I think famously he was remarkably bad at League of Legends. He never made it past bronze or silver or something after years of playing for hours a day.</p><p><strong>Dwarkesh: </strong>Yeah, yeah, yeah. But he was playing it for hours a day while he was in meetings and shit.</p><p><strong>Theo: </strong>So then if the explanation is not that he was malicious, but just that he was incompetent, how did he get such success in the first place?</p><p><strong>Dwarkesh: </strong>This is something I have learned by watching a bunch of very successful people is that you would be surprised the extent to which there are successful people in certain domains who lack judgment and can make big mistakes in pretty seemingly relevant domains. And you should always double check people's judgment even they're in high positions of credibility. Everything from your judgment of how things will progress, epistemic judgments to these sorts of tactical executive judgments. </p><p>I mean, there's a bunch of details that came out about what's been going on with Alameda and FTX, but the sort of bets they were making of doubling down on these shitcoins and so on, it certainly doesn't seem like... The lying was obviously the low integrity move, but those bad bets themselves were just evidence of fucking it up, right?</p><p><strong>Theo: </strong>Yeah. I remember reading somewhere in one of the teardowns of what actually happened, that Sam Bankman-Fried famously made his first $20 million from arbitraging Bitcoin, and then it was gone within a year because he made a bunch of really bad bets.</p><p><strong>Dwarkesh:</strong> Yeah, I mean, he just lost a shit ton of time on AWS and things like that, right?</p><p><strong>Theo: </strong>Yeah. So I guess if you make lots of very seemingly stupid, high variance bets on illiquid, inefficient markets like crypto in the early 2010s, one of them might pay off well. But still parlaying $20 million success into a $20 billion success even temporarily is like, that's no small feat.</p><p><strong>Dwarkesh: </strong>Oh, certainly. He's definitely a talented guy, but that just goes to show you that talented people can have bad judgment and be incompetent even at their own fields. Like I have less of a sort of mindset now of this guy's unilaterally a super achiever, this guy's unilaterally has bad judgment.</p><p><strong>Theo: </strong>Do you think you would have in a million years predicted that FTX would just blow up like this? Or be fraudulent?</p><p><strong>Dwarkesh: </strong>No, to be honest. No, no, honestly. Yeah, I interviewed him before and I sort of did like a lot of research on him and his company before and I would not have. </p><p><strong>Theo: </strong>Do you think that there are any companies today that you look at and think of like, wow, this might be like another FTX situation?</p><p><strong>Dwarkesh: </strong>Yeah, like there's a lot of companies in AI where you think, what valuation are you raising at? And why will you not just be automated at the next OpenAI dev day? I don't know if it's like FTX level though. I don't know if there's a big fraud.</p><p><strong>Theo: </strong>Grifters are everywhere, but like, you know, specifically on the level of FTX.</p><p><strong>Dwarkesh: </strong>Yeah, it's hard to see. I think crypto is especially liable to fraud, obviously, because you are just moving numbers around rather than, so it becomes easier there. Yeah, I mean, I do think there will be like, you know, we saw stuff with Emad and Stability. I don't know if you saw all those revelations. Yeah, I've seen them. Yeah, yeah, so stuff like that. I don't know, I think a lot of stuff will like that will come up in AI, but it will just be so overwhelmed by the good investments that are made in AI that will be like trillion dollar companies or something.</p><p><strong>Theo: </strong>Yeah, with FTX, I was watching the Nas Daily YouTube Shorts video on him before the collapse. And he was like, oh, this guy is a billionaire and he's vegan and he wants to donate all of his money to effectiveness and he does crypto. And I was thinking like, obviously, I'm not gonna say like, oh, I predicted this. You know, I knew what was gonna happen with FTX beforehand, but something there struck me as a little sus, a little not normal for billionaires. </p><p><strong>Dwarkesh: </strong>Yeah, I think it's also the case that you could say this about a lot of people. There's always evidence that retrospectively paints them in like, oh, I should have, that was very sus. I should have seen it coming. And I bet like you could tell a similar story about literally every single billionaire. There's some things out there, the rumors that afterwards you could be like, oh, obviously that guy was a fraud.</p><h3>Why is Elon Musk so successful? (1:29:07)</h3><p><strong>Theo: </strong>So another question on talent. What is it that makes people stand out even among like extremely talented, extremely smart, extremely productive people, like Elon Musk? So, you know, Elon Musk just stands out just totally in a class by himself, even among like billionaires. So why is that, what's different about Elon? </p><p><strong>Dwarkesh: </strong>What I've heard from people who have worked with him or are in lesser degrees of separation is just a complete willfulness of the John Wick quote of he'll just, I don't remember the exact quote, but it was like, he'll just get what he wants. He'll scream, he'll throw tantrums. He'll stay up 24 hours a day. He'll do whatever it takes, but it is happening if he wants it to happen. He'll fire everybody and restart the whole project. But a level of focus on progress and the lack of complacency.</p><p><strong>Theo: </strong>Is that it? Is that all it takes? And if that is all it takes, then why hasn't anyone else reached his level?</p><p><strong>Dwarkesh: </strong>I mean, how many people do you know who all that could apply to?</p><p><strong>Theo: </strong>None in real life? It's very rare, of course.</p><p><strong>Dwarkesh: </strong>I've gotten to meet a lot of people in the last few years. And I'm trying to think, do I know somebody who's that willful? Maybe, but I think it's a rare trait.</p><p><strong>Theo: </strong>Just high agency people?</p><p><strong>Dwarkesh: </strong>Even agency doesn't do the word justice.</p><p><strong>Theo: </strong>Really? It's more than that?</p><p><strong>Dwarkesh: </strong>It's not just, cause when people mean by agency nowadays, it's been so diluted of, it just means like, are you willing to send a cold email? Congratulations or something. But this is like, no, literally, I'll go fly to fucking Russia and we're going to buy the old ballistic missiles. It's just like a level of, this is happening no matter what. It's not just that I will come up with different ideas, but I will make it happen no matter what. It's like calling the ocean wet, you know? That's like what calling Elon high agency is.</p><p><strong>Theo: </strong>Have you read the Walter Isaacson biography?</p><p><strong>Dwarkesh: </strong>No, have you?</p><p><strong>Theo: </strong>Yeah.</p><p><strong>Dwarkesh: </strong>Well, you might know more about this than me then, actually. What do you think? What makes him special?</p><p><strong>Theo: </strong>Well, I think one of the best takeaways was not even in the Walter Isaacson bio. It was in a Scott Alexander article. Yeah. Slate Star Codex, Astral Codex Ten. And it wasn't about the Walter Isaacson book. It was about the Ashlee Vance book. But he said something like, very similar. Elon is like a one in 10,000 or like one in a thousand level of good engineer and intelligent and all that. But obviously like that's a necessary but not sufficient condition for the success that he's had.</p><p>But what really sets him apart is that he's like one in a million driven. Like he will do all this stuff. He'll go to Russia and he'll stay up 24 hours a day and work 120 hour weeks and take on projects that people would think would be completely insane and then make them work. Right. But it just seems like something's missing. Like, how is it that only Elon is Elon? Right. Why is there only one Elon and not a hundred Elons?</p><p><strong>Dwarkesh: </strong>I think there are like a lot of startup founders who are very driven. I don't think like Elon is necessarily the only person who's that driven. It just, even if they were all equally driven they wouldn't necessarily all achieve the equal outcomes and their outcomes would be distributed along the power law, right? And so maybe you would see the exact same pattern we in fact do see.</p><p><strong>Theo: </strong>Could it be like ambition and complacency? Like, not everyone at the age of 50 worth, 11 figures is going to continue being in the office 80 or a hundred hours a week working on some of the hardest stuff. Like Bezos isn&#8217;t doing that anymore.</p><p><strong>Dwarkesh: </strong>Yeah, that's probably part of it, right? Like you could have, how many Elons are there that just retired after SpaceX and took home the a hundred million dollars? Yeah. </p><p><strong>Theo: </strong>And then do you think Elon is incredibly, incredibly smart? I don't know how well you know him or know of him personally&#8212;</p><p><strong>Dwarkesh: </strong>No, I don&#8217;t.</p><p><strong>Theo: </strong>&#8212;but I wonder how much just raw intelligence factors into his success.</p><p><strong>Dwarkesh: </strong>There's a big debate about this, right? Of what is extremely high IQ necessary for something like that. And do these people in fact themselves have extremely high IQs?</p><p><strong>Theo: </strong>Warren Buffett famously says no.</p><p><strong>Dwarkesh: </strong>He says after 130, you might as well just give up those points and work on emotional IQ. But I think that's like, that's bullshit. 130, that's just like two standard deviations. That's like 5% of the population. That's a huge amount of people, right? Even among them, you can definitely filter for IQ. And, you know, there's been these studies that show that the gains from IQ don't actually diminish. You can just keep going out the curve and you'll still keep seeing gains in salaries or whatever. </p><p>So yeah, I think they're really smart. It's just the case that if you're selecting and got some bunch of, like a hundred different traits, you're not gonna get a top score in any one of them because it's a multi, the guy who has the highest IQ probably doesn't also have all these other traits which are necessary.</p><h3>How relevant is human talent with AGI soon? (1:35:07)</h3><p><strong>Theo: </strong>So we've been talking about all these talent questions. How relevant actually are they in a world where we seem to be rapidly approaching AGI?</p><p><strong>Dwarkesh: </strong>I think they definitely are relevant to obviously to the AI question itself, right? Like you definitely wanna recruit people who are gonna be working on this. In fact, I might be more relevant than ever because if you look at past periods in history where there has been huge kind of&#8230; As the AI stuff starts to take off, you're gonna have, gonna need like politicians and policy makers and hardware makers and diplomats and the world's gonna look crazy, right? If this stuff pans out. And so it's gonna be thousands of people at the very least who are managing this whole thing as it goes down. And to now be plucking out the people who would be talented had these different kinds of roles and not only managing the research itself, but the huge amounts of variables that are gonna be late when you have $50 billion training runs and countries potentially going to war with AI weapons. Maybe it might be more relevant than ever to be picking the talent to manage that and just have generally competent people in society so that when it happens, they can deal with it well.</p><h3>Is government actually broken? (1:36:35)</h3><p><strong>Theo: </strong>So just have good policy makers. This reminds me of what you were discussing in one of your Tyler Cowen episodes. Tyler suggested that state capacity might not be in decline and it might be stronger than it previously was. Do you think that's true?</p><p><strong>Dwarkesh: </strong>I just had Dominic Cummings on and if you&#8217;ve seen it, you know that his take is that state capacity is very much in decline. I think that might be a description of the UK itself. It feels like with COVID we saw that the system was very brain dead in many important ways. Maybe it was even worse before, so it could just be that things are improving. I don't really know.</p><p><strong>Theo: </strong>The US and the UK have always been different. For example, Lee Kuan Yew went to the UK and he noticed everyone there was orderly waiting in the queue. He was impressed and decided to bring this orderliness and respect for rules to Singapore. And he did. Now Britain is less orderly than it was.</p><p><strong>Dwarkesh: </strong>Yeah, I saw that. To some extent, I think maybe the problems Dominic is talking about are unique to the UK, but I think a lot of them are general. They're just the huge bureaucracies that are insulated from executive control and from any system to prune away incompetence. </p><p>There are certain aspects of the system that do seem to be really competent. I actually do have a lot of confidence in the Federal Reserve or the Supreme Court. The FDA, the CDC, those kinds of institutions actually didn&#8217;t(?) seem to function really badly during the pandemic. But then there are other aspects of the system that do function really well, that are linked to the government itself, a bunch of think tanks and so on. I don't know how it nets out actually. </p><p><strong>Theo: </strong>You mentioned the Supreme Court and the Federal Reserve as two examples of institutions that do function well. I remember the Federal Reserve in particular, the last couple of years with inflation has just gotten so much shit from all kinds of people for not having enough data and not reacting quickly enough. Do you think the Federal Reserve is still even relevant when we have so much data and so much compute? Or was it ever relevant? Sorry, necessary, not relevant.</p><p><strong>Dwarkesh: </strong>It's certainly relevant in that they set the monetary policy. If you're going to have a dollar currency, you need it. Obviously it matters. And, is it necessary? Yes, if you're going to have dollar denominated currencies, then the policy of managing dollars is going to matter and it's necessary. You could say, well, with crypto or something, you could maybe not have that. Maybe, I don't know. The actual stable cryptos have been the stable coins, which are obviously dollar denominated and then liable to move around with the Federal Reserve's decisions. </p><p><strong>Theo: </strong>What do you think are some other examples of institutions that work really well within the US government?</p><p><strong>Dwarkesh: </strong>The RAND Corporation, which is not obviously officially linked to the government, but is, I think they have been focusing a lot of efforts on AI and bio-risk kinds of things. And they seem super competent and well-versed there. I guess you're asking like, national government institutions, right?</p><p><strong>Theo: </strong>Yeah.</p><p><strong>Dwarkesh: </strong>I don't know that much about them, actually. Those are the only ones that come to my head immediately.</p><h3>How should we fix Congress? (1:40:50)</h3><p><strong>Theo: </strong>Do you have any ideas based on talking to Dominic Cummings or based on reading Robert Caro about how to fix Congress? The single most shat-on institution in the country, probably?</p><p><strong>Dwarkesh: </strong>I think just regular stuff of having higher IQ people and paying them more. Dara Jones in 10% Less Democracy has these ideas about, it does seem like senators, here's just something that's really interesting. The senators who are best are actually from these random states, like Montana or Nebraska. And they're just these really smart people. And then the ones who are worse are from these really big states. And also generally senators just seem to be a lot smarter than congressmen on average. And that probably has in part to do with they're more insulated from day-to-day democratic whims. So maybe having longer terms for senators and congressmen. Yeah, I would do that. Like houses elected every four years instead of every two.</p><p><strong>Theo: </strong>That kind of flies in the face of what, you know, your average American would say, if you ask them, how do we fix Congress? They're like, cut their pay, and impose term limits!</p><p><strong>Dwarkesh: </strong>Term limits might be warranted. I just like, but actually maybe not. Cause like on one hand there's a gerontocracy, on the other hand, there was such a thing as expertise that you're building up over time as you're in the institution. But yeah, I think that the average person would be wrong. But you know, many such cases.</p><p><strong>Theo: </strong>So you talked about how we need to get higher IQ people in Congress. And, you know, that seems to be no easy task. Like a recent-ish example would be Blake Masters who ran in Arizona. He was clearly smart. He went to Stanford. He co-wrote Peter Thiel's book. He was endorsed and funded by Peter Thiel. He had ideas that were out of the mainstream, which is some signal of intelligence. He didn't just get everything from the Republican Party platform. And he still lost in a state that was previously relatively Republican. So how do you reconcile getting smart people in office with just the reality of politics?</p><p><strong>Dwarkesh: </strong>The whims of voters might not be optimizing for that? I think it's unfair to blame voters on that one in particular. I don't follow politics closely. I don't know the particulars of that campaign, but it seemed like he played a Faustian bargain there with Trumpism that it's understandable why voters might have concerns about. But politics is not something I follow closely, so I don't know the particulars of that race. </p><p><strong>Theo: </strong>What about just in general? How do we get more high IQ people in Congress?</p><p><strong>Dwarkesh: </strong>Pay them more, longer term limits, I think is a big one. I think part of it is just getting high IQ people to decide to go into politics. It's not just about the system, which lets for them is also what goes into a filter. But these seem like obvious things. I don't know if I have anything new to say here. It's obvious that we should have smart people try to go into Congress. And it's obvious that we should pay them more. But what do you think we should do?</p><p><strong>Theo: </strong>Basically that. But in terms of actually getting smart people in Congress, I think a lot of smart people will kind of just follow where the money is because they're smart and money is nice. And that leads them into CS. I like computers, but I like a lot of stuff and I would be lying if I said that my reason for picking CS over all of them was not largely motivated by money. And AI.</p><p>What Singapore did clearly seemed to work really well. But then again, there are other countries like Israel that has been a quite successful country, quite successful post-colonial success story. And unlike Singapore, they did not have a super well-functioning political system. If you know about Israeli politics, you know that it's been falling apart the last few years. In order to get a majority in government, parties need to form coalitions with the Orthodox and then of course, for the first couple of decades of Israel's existence, it was run by the socialist labor party. So maybe it's not absolutely necessary to have super smart, well-coordinated people running a government for stuff to work.</p><p><strong>Dwarkesh: </strong>Well, Israel did actually become much wealthier since it adopted free market stuff originally, right? So, I think it's GDP per capita just shot up a lot. </p><p><strong>Theo: </strong>Maybe the solution is not optimize for the best people in government. Maybe it's just take the government out of most stuff and most stuff will work out.</p><p><strong>Dwarkesh: </strong>Yeah, I think it's definitely a combination of both, like a smaller government, but the part that it has to run, it's just run by very competent people, which is kind of Singapore, basically.</p><h3>Dwarkesh&#8217;s favorite part of podcasting (1:46:46)</h3><p><strong>Theo: </strong>So flipping the script a little bit, people like to start podcasts with how did you get into this? And what's your favorite part? But I'll try ending the podcast with what's your favorite part of doing what you do? And what specifically motivated you to do it? Was it just boredom? </p><p><strong>Dwarkesh: </strong>Yeah, I was bored in college. I think I was literally in the same situation you were in. I was a sophomore in college studying computer science. The best part is definitely, I will never stop being grateful for the fact that I can talk to literally the smartest people in the world, the people talking about and thinking about the most interesting things and just ask them questions for two hours or three hours at a time and be funded to spend the rest of my time thinking about what to ask them, doing research, trying to figure out what's important, who to have on and so on. That's a huge privilege. Obviously I'm super grateful for it and that's my favorite part. </p><p><strong>Theo: </strong>All right, well, I think that's a good place to wrap it up. So thank you so much to Dwarkesh Patel for coming on the podcast. </p><p><strong>Dwarkesh: </strong>Yeah, my pleasure, man.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Dwarkesh Patel. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. Also be sure to check out Dwarkesh&#8217;s Substack, dwarkeshpatel.com, follow him on Twitter @dwarkesh_sp, and of course, listen to the Dwarkesh Podcast, which you can find on YouTube, Spotify, and Apple Podcasts. All of these will be linked in the description. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[#8: Scott Aaronson]]></title><description><![CDATA[Quantum computing, AI watermarking, Superalignment, complexity, and rationalism]]></description><link>https://www.theojaffee.com/p/8-scott-aaronson</link><guid isPermaLink="false">https://www.theojaffee.com/p/8-scott-aaronson</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Mon, 13 Nov 2023 16:52:16 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/138788850/0d1273259ac509095d62ffe58ab39f9b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>Transcript</h1><h3>Intro (0:00)</h3><p><strong>Theo: </strong>Welcome back to episode 8 of the Theo Jaffee Podcast. Today, I had the pleasure of speaking with Scott Aaronson. Scott is the Schlumberger Chair of Computer Science and Director of the Quantum Information Center at the University of Texas at Austin. Previously, he got his bachelor&#8217;s in CS from Cornell, his PhD in complexity theory at UC Berkeley, held postdocs at Princeton and Waterloo, and taught at MIT. Currently, he&#8217;s on leave to work on OpenAI&#8217;s Superalignment team along with Chief Scientist Ilya Sutskever. His blog, Shtetl-Optimized, one of my favorites, discusses quantum computing, AI, mathematics, physics, education, and a host of other interesting subjects that we discuss in this episode. I&#8217;ve been a huge fan of Scott for a while, and I&#8217;ve really been looking forward to this episode. I hope you&#8217;ll enjoy listening to it as much as I enjoyed recording it. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Scott Aaronson.</p><h3>Background (0:59)</h3><p><strong>Theo: </strong>Hi, welcome back to Episode 8 of the Theo Jaffee Podcast, here today with Scott Aaronson.</p><p><strong>Scott: </strong>Hi, it's great to be here.</p><p><strong>Theo: </strong>All right. So first off, can you tell us a little bit about your background, specifically how you got into quantum and AI in the first place?</p><p><strong>Scott: </strong>Yeah. So I got into computer science as a kid, mostly because I wanted to create my own video games. I played a lot of Nintendo and it just seemed like these are whole universes that unlike our universe, someone must really understand because someone made them. I had no idea what would be entailed in actually bringing one to life, whether there was some crazy factory equipment that you needed. When I was 11 or so, someone showed me Apple BASIC. They showed me a game and then here's the code. The code is not just some description of the game. It is the game. You change it and it'll do something different. For me, that was a revelation comparable to learning where babies come from. It was like, why didn't I know about this before? </p><p>So I wanted to learn everything I could about programming. I still had the idea that you would need a more and more sophisticated programming language to write a more and more sophisticated program. Then the idea of touring universality that once you have just a certain set of rules, then you were already at the ceiling. Anything that you could express in any programming language, in principle, you could express in Apple BASIC. You wouldn't want to, but you could. That was a further revelation to me. </p><p>That made me feel like, wow, I guess I don't have to learn that much about physics then. I'd always been curious about physics, but then once you know about computational universality, then it seems like whatever are the specific laws of particles and forces in this universe, those are just like the choice between C and Pascal or whatever, they're just implementation details. </p><p>This was during the first internet boom. I thought about whether my future was to become a software engineer, start a software company. But I realized that even though I love programming, I stunk at software engineering. As soon as I had to make my code work with other people's code, or document it, or get it done by a deadline, there were always going to be other people who would just have enormous advantages over me. So I was more drawn to the theoretical side. </p><p>Once you start learning about the theory of computer science, then you start learning about how much time do various things take, right, a complexity theory, you learn about the famous P versus NP problem, and so forth. Then when I was a teenager, I came upon a further revelation, which was, I read a popular article about Shor's quantum factoring algorithm, which had just recently been discovered. </p><p>The way that the popular articles described it, then as now, was that Shor discovered that if you use quantum mechanics, then you can just try every possible divisor in a different parallel universe. And thereby solve the problem exponentially faster. My first reaction on reading that was, well, this sounds like obvious garbage. This sounds like physicists that just do not understand what they are up against. They don't understand computational universality. Whatever they're saying, maybe it works for a few particles, but it's not going to scale, it's never going to factor a really big number. </p><p>But of course, I had to learn. So then what is this quantum mechanics? What does it actually say? So I started reading about it, probably when I'm 16, 17, something like that. There were webpages explaining it. And what was remarkable to me was that quantum mechanics was actually much simpler than I had feared that it would be once you take the physics out of it.</p><p>What I learned was that&#8230; In high school, they tell you the electron is not in one place, it's in a sort of smear of probability around the nucleus, until you look at it. And your first reaction is, well, that doesn't make any sense. That sounds like just a fancy way of saying that they don't know where the electron is. But the thing that you learn as soon as you start reading about quantum computing or quantum information is that, well, no, it's a different set of rules of probability. And this is really the crucial thing about quantum mechanics. In ordinary life, we talk about the probability of something happening, let's say, a real number from zero to one. But we would never talk about a negative 30% chance of something happening, much less a complex number chance. But in quantum mechanics, we have to replace quantum mechanics by these complex numbers, which are called amplitudes. In some sense, everything that is different about quantum mechanics is all a consequence of this one change that we make to how we calculate probabilities. We first have to calculate these amplitudes, these complex numbers, and then on measurement, these amplitudes become probabilities. The rule is that when we make a measurement, the probability that we see some outcome is equal to the square of the absolute value of its amplitude. The result of that is that if something can happen one way with a positive amplitude and another way with a negative amplitude, the two contributions can cancel each other out. The total amplitude is zero and the thing never happens at all. This just reduces everything to linear algebra to just dealing with matrices and vectors of complex numbers. You don't have to deal with any infinite dimensional Hilbert spaces or anything like that. It was all just these little finite dimensional matrices, and I said &#8216;okay, I can actually understand that&#8217;.</p><p>At the time, quantum computing was very new. There was still a lot of low-hanging fruit. Shor had discovered his factoring algorithm, not by just trying all of the divisors in parallel. It's something much more subtle that you have to take advantage of the way that these amplitudes being complex numbers work differently from probabilities and can interfere with each other. You also had to use very special properties of the problem of factoring that don't seem to be shared by many other problems. So I learned all of that, but then there were still so many questions. What else could a quantum computer be good for? And in general, what is the boundary between what is efficiently computable and what is not? You might've thought that that would be answerable a priori, just like the question of what is computable at all seemed to have been maybe answerable a priori just by Church and Turing and people like that, thinking about it really hard. But as soon as you ask what is computable efficiently, we now have this powerful example that says the laws of physics actually matter. They are relevant. At the very least, the fact that the universe is quantum mechanical seems to change the answer. </p><p>That just brought together the biggest questions of physics and computer science in a way that seemed irresistible to me. I was an undergrad at Cornell, doing summer internships at Bell Labs when I really first got into this stuff. But then, my dream was to go to graduate school at Berkeley, which was the center of theoretical quantum computing at the time. I was lucky enough to get accepted there, but actually the people who accepted me and recruited me there were not the quantum computing people. They were the AI people. I had also been very curious about AI as an undergrad. One of the first programs that I wrote after I learned programming was to build an AI that will follow Asimov's three laws of robotics.</p><p><strong>Theo: </strong>What were your AGI timelines back then?</p><p><strong>Scott: </strong>[laughs] I don't usually think in terms of timelines. I think in terms of what is the next thing, what is the easiest thing that we don't already know how to do and how do we do that thing?</p><p><strong>Theo: </strong>Did you predict neural networks?</p><p><strong>Scott: </strong>I knew about neural networks in the nineties, I was curious about them. I read about them, but the standard wisdom, the thing everyone knew in the nineties was that neural nets don't work that well. They're just not very impressive. There were people who speculated about maybe if you ran them on a million times greater scale, then they would start to work, but no one could try it. I certainly thought about simulating an entire brain neuron by neuron as a thought experiment to show that AI is possible in principle. But the idea that you were just going to scale neural nets and then in a mere 20 or 25 years, they would start being able to understand language showing human-like intelligence, I did not predict that. I think that I was as shocked by that as nearly anyone. But at least I can update now that it's happened, at least I can not be in denial about it or not try to invent excuses for why it doesn't really count.</p><p>In grad school at Berkeley, I was studying AI with Mike Jordan, focusing on graphical models and statistical machine learning. Even in 2000, I could see that it would be very important. However, the problem I kept running into, which hasn't really changed, is that everything in AI that you really care about seems to bottom out in just some empirical evaluation that you have to do. You never really understand why anything is working. To the extent that you fully understand that, then we no longer even call it AI. In any research project, the root node might look like theory, but then once you get down to the leaf nodes, then it's almost always, well, you just have to implement it and do the numerics and just make a bar chart. I got drawn more to quantum computing partly because there were so many meaty questions there that I could address using theory, and I felt like that was where my comparative advantage was.</p><h3>What Quantum Computers Can Do (16:07)</h3><p><strong>Theo: </strong>So back to quantum for a moment. Obviously, there are lots and lots of issues with current day quantum computers. There's not sufficient error correction or shielding or anything like that&#8212;</p><p><strong>Scott: </strong>Yeah, we're just starting to have any error correction at all. </p><p><strong>Theo: </strong>In a future where we do have much better error correction and everything that we would need for quantum to actually work practically, what kinds of applications could you see for classical computers?</p><p><strong>Scott: </strong>You mean for quantum computers? For quantum computing, there are two applications that really tower over all of the others. The first one is simulating nature itself at the quantum level. This could be useful if you're designing better batteries, better solar cells, high temperature superconductors, or better ways of making fertilizer. So this is not stuff that most computer users care about, or that they&#8217;re directly doing, but this is stuff that, that is tremendously important for certain industries. Quantum simulation was the original application of quantum computing that Richard Feynman had in mind when he proposed the idea of a quantum computer more than 40 years ago. </p><p>The second big application is the famous one that put quantum computing onto everyone's radar when it was discovered in the nineties. This is Shor's algorithm and related algorithms that are able to break essentially all of the public key encryption that we currently use to protect the internet. So anything that's based on RSA or Diffie-Hellman or elliptic curve cryptography, really any public key cryptosystem that's based on some hidden structure in an abelian group. But now the second one, well, it's hard to present it as a positive application for humanity. It's useful for whatever intelligence agency or criminal syndicate gets it first, especially if no one else knows that they have it. </p><p>The obvious response to quantum computers breaking our existing encryption is just going to be to switch to different forms of encryption, which seem to resist attack even by quantum computers. And we have pretty decent candidates for quantum-resistant encryption now, especially public key cryptosystems that are based on high-dimensional lattices. And so NIST, the National Institute of Standards and Technology, has already started the process of trying to migrate people to these hopefully quantum-resistant cryptosystems. That could easily take a decade. But assuming that that's done successfully, then you could say, well, then we're all just right back where we started. </p><p>So now the big question in quantum algorithms has been, well, what is a quantum computer useful for besides these two things? Quantum simulation, which is what it's sort of obvious, designed to do, what it sort of does in its sleep. And then breaking public key encryption, where because of this amazing mathematical coincidence, it just so happens that we base our cryptography on these mathematical problems that are susceptible to quantum attack. And so what would really make quantum computing revolutionary for everyday life would be if it could give dramatic speed-ups for, let's say, machine learning, or for optimization problems, or for constraint satisfaction, finding proofs of theorems. So the holy grail of computer science are the NP-complete problems. These are the problems that are the hardest problems among those where a solution can be efficiently checked once it's found.Examples of complex problems include the traveling salesman problem, finding the shortest route that visits a bunch of cities, and solving a Sudoku puzzle. Things like finding the optimal parameters for a neural network are maybe not quite NP-complete, but in any case, very, very close to that. By contrast, factoring is, as far as we know, hard for a classical computer, but is not believed to be NP-complete.</p><h3>P=NP (21:57)</h3><p><strong>Theo: </strong>By the way, what's your intuition on P=NP?</p><p><strong>Scott: </strong>I like to say that if we were physicists, then we would have just declared it a law of nature that P is not equal to NP. And we would have just given ourselves Nobel Prizes for the discovery of that law. If it later turned out that P=NP, then we could give ourselves more Nobel Prizes for the law's overthrow, right? Like what George Hart said. There are so many questions that I have so much more uncertainty about. It's like in math, if something is not proven, then you have to call it a conjecture. But there are many things that the physicists are confident about, that quantum mechanics is true, that I am actually much less confident about than I am in P not equal to NP.</p><p><strong>Theo: </strong>It's like what George Hotz says, hard things are hard. I believe hard things are hard.</p><p><strong>Scott: </strong>Well, I think that if you're going to make an empirical case for why to believe P is not equal to NP, the case hinges on the fact that we know thousands of examples of problems that are in P, right? That have polynomial time algorithms, efficient algorithms that have been discovered for them. And we have thousands of other problems that have been proven to be NP-complete, as hard as any problem in NP, which is the efficiently checkable problems. If only one of those problems had turned out to be in both of those classes, then that would have immediately implied P=NP. Yet, there seems to be what I've called an invisible electric fence. Sometimes even the same problem, as you vary a parameter, like it switches from being in P to being NP-complete. But you never ever find that at the same parameter value, it's both in P and it's NP-complete. So it seems like, at least relative to the current knowledge of our civilization, there is something that separates these two gigantic clusters. And the most parsimonious explanation would be that they are really different, that P is not equal to NP. </p><p>But there are much, much weaker things than P=NP that would already be a shock if they were true. For example, if there were a fast classical algorithm for factoring, that wouldn't even need P=NP, but that would already completely break the internet. That would be a civilizational shock. A big question that people have thought about for 30 years now is could there be a fast quantum algorithm for solving the NP-complete problems? We can't prove that there isn't, we can't even prove there's not a fast classical algorithm. That's the P versus NP question. But by now we formed a lot of intuition that for NP-complete problems, quantum computers do seem to give you a modest advantage. </p><p>This comes from the second most famous quantum algorithm after Shor's algorithm, which is called Grover's algorithm. Grover's algorithm, which was discovered in 1996, lets you take any problem involving N possible solutions where for each solution, you know how to check whether it's valid or not. And it lets you find a valid solution, if there is one, using a number of steps that scales only with the square root of N. Compared to Shor's algorithm, that has an enormously wider range of applications. That's probably like three quarters of what's in an algorithms textbook, has some component that can be Groverized, that can be sped up by Grover's algorithm. But the disadvantage is that the speed up is not exponential, the speed up is merely quadratic. It's merely N to square root of N, or, for some problems, you don't even get the full square root. It goes from N to the two thirds power or something like that. But Grover's speed ups are never more than square root. </p><p>After 30 years of research, as far as we know, for most hard combinatorial problems, including NP-complete ones, a quantum computer can give you a Grover speed up, but probably not more than that. If it can give more, then that requires some quantum algorithm that is just wildly different from anything that we know. Just like a fast classical algorithm would have to be very different from anything we know. So if someone were to discover a polynomial time quantum algorithm for NP-complete problems, then the case for building practical quantum computers would get multiplied by orders of magnitude. But even to get any speed up more than the Grover speed up, like if you could solve NP-complete problems on a quantum computer in two to the square root of n time, instead of two to the n, that would be a big deal.</p><h3>Complexity Theory (28:07) </h3><p><strong>Theo: </strong>Speaking of computational complexity theory, I read a tweet recently. It was for whatever reason, very niche. I would have loved for it to be on the front page of Twitter, but it said &#8216;the cardinal sin of philosophy and mathematics: ignoring computational complexity. I wish we could redo the last 400 years, but replace Occam's razor (simplicity prior) with Dijkstra's razor (speed prior). So what do you think about this?</p><p><strong>Scott: </strong>Well, I wrote a 50-page article 12 years ago, which was called Why Philosophers Should Care About Computational Complexity. So, I guess you could put me down in the column of yes, I do think that computational complexity is relevant to a huge number of philosophical questions. It's not relevant to all of them necessarily. For example, if all you want to know is is X determined by Y, or if you're discussing free will versus determinism, then it's hard for me to see how the length of the inferential chain really changes that. It seems like I am just as bound by a long inferential chain as I am by a short one. </p><p>But there are many other questions where I want to know, is something doing explanatory work or not. Sometimes, people will say, well, Darwinian natural selection is not really doing explanatory work because it's just saying, a bunch of random things happened and then there was life. But a way that you can articulate why it is doing explanatory work is that if you really just had the tornado in the junkyard, if you just had a bunch of random events that then happen to result in a living organism, then you would expect it to take exponential time. The earth is old, it's 4 billion years old, but it is not nearly old enough for exponential brute force search to have worked to search through all possible DNA sequences, for example. That would just take far longer than the age of the known universe. </p><p>Of course, natural selection is a type of gradient descent algorithm. It is a non-random survival of randomly varying replicators. That is what gives it its power. Another example, even just to articulate, what it means to know something, a puzzle that I really like is, what is the largest known prime number? If you go look this up on Google, it'll give you something, it'll be a Mersenne prime. Here, I can look it up right now. It says 2 to the 82,589,933 minus one. That is, as of this October, currently the largest known prime number, and it's called a Mersenne prime, right? Two to some power minus one. But now I could ask, why can't I say I actually know a bigger prime number than that, namely the next one after that?</p><p><strong>Theo: </strong>Oh, the <a href="https://www.scottaaronson.com/writings/bignumbers.html">big numbers thing</a>?</p><p><strong>Scott: </strong>Yeah. You could say, look, I have just specified a bigger prime number that I know. It's the next one after that, two to the 82 million and so forth. I can even give you an algorithm to find that number. But if you want to articulate why I'm cheating, then I think you have to say something like, well, I haven't given you a provably polynomial time algorithm. I've given you an algorithm that actually based on conjectures in number theory, it probably does terminate reasonably quickly with the next prime number after that, but no one has proven it. So often I think to even specify what it means to know something, you have to really say, well, we have not just an algorithm, but an efficient algorithm that could answer questions about that thing. </p><p>So, I'm a big believer in thinking about computational efficiency, can be enormously relevant for questions about the nature of explanation, the nature of knowledge, also questions in physics, philosophy of physics. That's why I've spent my career on these questions.</p><h3>David Deutsch (33:49)</h3><p><strong>Theo: </strong>Are you a fan of David Deutsch?</p><p><strong>Scott: </strong>I know him quite well. He is widely considered one of the founders of quantum computing along with Richard Feynman. I have my disagreements with him, but yes, I am a fan. He is one of the great thinkers of the world, even when he's wrong. I especially liked his book, <em>The Beginning of Infinity</em>. I liked it a lot more than his earlier book, <em>The Fabric of Reality</em>, but I read both of them. It was a major experience in my life, when I was a graduate student in 2002, I visited Oxford, and I made a pilgrimage to meet Deutsch at his house. Famously, he hasn&#8217;t really traveled for almost 40 years, but he's happy to receive visitors at his house.</p><p><strong>Theo: </strong>Should I try to do that this winter?</p><p><strong>Scott: </strong>Yeah! Just write to him. I spent a day with him, and I was going to meet the godfather of quantum computing, but what was extraordinary to me was that within 10 minutes, it became apparent that I was going to have to explain the basics of quantum computing theory to him. As soon as quantum computing got technical, he lost interest. He founded it, but then he was not even aware of the main theoretical developments that were happening at the time or the definitions of the main concepts. As a beginning graduate student, explaining these things to Deutsch was extraordinary for me. He immediately understands things and has extremely interesting comments. It was one of the best conversations I had ever had in my life.</p><p><strong>Theo: </strong>Didn't he basically stumble upon the idea of quantum computing by accident?</p><p><strong>Scott: </strong>He was writing a paper about it, but he was never coming at it from the perspective of what it is useful for. He didn't focus on what computer science problems this could usefully solve. He was always coming at it from a philosophical standpoint. His main original motivation was to convince everyone of the truth of the many worlds interpretation. </p><p>He became an Everettian in the late 1970s. He actually met Everett when he was here at where I am now, at UT Austin, and became convinced that the right way to understand quantum mechanics is that all of these different branches of the wave function are not just mathematical abstractions that we use to calculate the probabilities of measurement outcomes, but they all literally exist. We should think of them as parallel universes. We should think of ourselves as inhabiting only one branch of the wave function. And we should assume that in all of the other branches, there are other versions of us who are having different experiences and so on. </p><p>The problem that the many worlders have had from the beginning is that their account doesn't make any predictions that are different from the predictions of standard quantum mechanics. One thing they could say is who cares, because Occam's razor favors their account as the most elegant, the simplest one. And if many worlds had been discovered first, then Copenhagen quantum mechanics would seem like this weird new thing that would have to justify itself. Why should Copenhagen win just because it was first? But of course, the gold standard in science is if you can actually force everyone to agree with you by doing an experiment that their theory cannot explain and that your theory can. </p><p>Many worlds by its nature just seems unable to do that because the whole point is to get a framework that makes the same predictions as the ones that we know are correct. At the point where you're making a prediction, then you're talking about one branch, one universe, the one that we actually experience. </p><p>Deutsch&#8217;s idea was the following: what if, as step one, we could build a sentient AI, a computer program that we could talk to, and we regarded it as intelligent, and we even regarded it as conscious? Now step two, we could load this AI onto a new type of computer, which we'll call a quantum computer, which would allow us to place the AI into a superposition of thinking one thought and thinking another thought. And then step three, we could do an interference experiment that would prove to us that, yes, it really was in the superposition of thinking two different thoughts. At that point, how could you possibly deny many worlds? </p><p>At that point, you have a being who you've already regarded as conscious, just like us, and you've proven that it could be maintained in a superposition of thinking two different conscious thoughts. Now, of course, this requires not merely building a quantum computer, but also solving the problem of sentient AI. A skeptic could always come along and say, well, the very fact that you could do this interference experiment means that therefore, I am not going to regard that thing as conscious. The only refutation of that person would be a philosophical one. </p><p>So there's still, it would only be an experiment by a certain definition of the word experiment. But that was the thought experiment that I think largely motivated Deutsch to come up with the idea of quantum computing. Once you had this device, well, then sure, maybe it would also be good for something, maybe you could use it to solve something that a classical computer couldn't solve in a comparable amount of time. </p><p>But in the 80s, the evidence for that was not that compelling. There was quantum simulation, so a quantum computer would be useful for simulating quantum mechanics itself. But that's not independent evidence for the computational power of quantum mechanics, it feels a little bit circular. Then there was this one example that we knew, which was called the Deutsch-Jozsa algorithm. And what that lets you do is using a quantum computer, you can compute the exclusive or of two bits using just one query to the bits. By making one access to both of the bits in superposition, you can learn whether these two bits are equal or unequal. That was an example and to computer scientists at the time, that seemed pretty underwhelming. I remember actually, in Roger Penrose's book, The Emperor's New Mind, in 1989, he talks about quantum computing. Penrose had actually helped Deutsch get his paper about quantum computing published. He knew about it and he says, it's really a pity that such a striking idea has turned out to have so few applications. Of course, that was before the discovery of Shor's algorithm, which made everyone redouble their efforts to look for more applications. But I would say that even now, it is still true that the applications of a quantum computer are more specialized than many people would like them to be.</p><h3>AI Watermarking and CAPTCHAs (44:15)</h3><p><strong>Theo: </strong>Speaking of AI, you're currently on leave to work at OpenAI. What specifically is it that you do? I mean, you probably can't say <em>too</em> much, I imagine.</p><p><strong>Scott: </strong>No, they're actually happy for me to talk about safety related things, for the most part. What I couldn't talk about, if I really knew a lot about it, would be the capabilities of the latest internal models. There was half a year when I was able to use GPT-4, and most of the world wasn't, and it was incredibly frustrating for me to not be able to talk about it. Especially when I would see people on social media saying, oh, well, GPT-3 is really not impressive, here's another common sense question that gets wrong. I could try those questions in GPT-4, and I could see that most of the time it would get them.</p><p>So I&#8217;ve been on leave to work at OpenAI for almost a year and a half now. One of the main things that I&#8217;m working on is figuring out how we could watermark the outputs of a large language model. Watermarking means inserting a hidden statistical signal into the choice of words that are generated, which is not noticeable by a normal user. The output should look just like normal language model output, but if you know what to look for, then you can use it later to prove that, yes, this did come from GPT.</p><p>Like we were saying before, I don&#8217;t usually like to think in terms of timelines. When I&#8217;m asked to prognosticate where is AI going to be in 20 years, I think back to how well would I have prognosticated in 2003, where we are now, and I say I have no idea, or if I knew, I wouldn't be a professor, I'd be an investor, right, but I'm kind of proud that when it comes to watermarking, I was able to see about four months in advance. Before ChatGPT was released, which was a year ago, I was looking at them, and I was thinking, every student in the world is going to be tempted to use these things to do their homework. Every troll or propagandist is going to want to use language models to fill every internet discussion forum with propaganda for their side.</p><p><strong>Theo: </strong>Was that prediction really true, though? Like, in the comments on Twitter, you see lots of ChatGPT generated outputs, but they're obvious because they don't, they don't add more prompts, really, to make it as obvious.</p><p><strong>Scott: </strong>Yeah, so sometimes it&#8217;s easy to tell. You might well have seen language model generated stuff that didn&#8217;t raise a red flag for you, and so you don't know about it. But I have gotten troll comments on my blog, quite a few of them, that I'm almost certain were generated using language models, just because they're written in that sort of characteristic way. But indeed, after ChatGPT came out, you had a huge number of students turning in term papers that they wrote with it. You had professors and teachers who were desperate for a way of dealing with that. Now, you might not call that the biggest AI safety problem in the world, but grant it this: at least it's an AI safety problem that is happening right now. We can actually test our ideas, we can find out what works and what doesn't work.</p><p>That was something that had a lot of appeal to me because I feel like, in order to make progress in science, you generally need at least one of two things. You need either a mathematical theory that everyone agrees about, or you need to be able to do experiments. You need something external to yourself that can tell you when you're wrong. I realized that this providence or attribution problem was going to become huge. How do we reliably determine what was generated by an AI and what wasn&#8217;t? It's a complex issue, right? This is the problem of the Voight-Kampff test from the movie <em>Blade Runner</em>. How do we distinguish an AI from a human? There are many different aspects to it. You could ask, how do we design CAPTCHAs that even GPT cannot pass, but that humans can pass?</p><p><strong>Theo: </strong>Like the rotate the finger in the correct direction so that it's pointing to the animal?</p><p><strong>Scott: </strong>Oh is that an example?</p><p><strong>Theo: </strong>I've seen a lot of these recently. It's a hand that you rotate and there's a picture of an animal or an object pointing in a certain direction. The instruction is to rotate the hand in the same direction as the animal. I guess you can't solve that yet, but humans can. </p><p><strong>Scott: </strong>Huh. Oh really? A lot of these things are pretty time limited. They might work for a year, until either someone cares enough to build an AI that specifically targets that problem or just the general progress in scaling makes that problem easy as a by-product. I'm very curious actually, if you could send me a link to that, I would love to look at that.</p><p><strong>Theo: </strong>Yeah, sure.</p><p><strong>Scott: </strong>I have some other ideas for some potentially GPT resistant CAPTCHAs, but they would involve modifying GPT and sometimes it would have to have filters where it would recognize that this is a CAPTCHA. So no, I'm not going to help you with this. The challenge is how do you make that secure against the adversary? How do you make that secure against&#8230;</p><p><strong>Theo: </strong>Adversarially robust?</p><p><strong>Scott:</strong> Yeah, how do you make that secure against an adversary who could modify the image somehow so that GPT would no longer recognize it as a CAPTCHA?</p><p>Now, watermarking is a related problem. We want to use the fact that language models are inherently probabilistic. Among these sort of garden of working paths of completions that the language model regards as all pretty good, we want to select one in a way that encodes a signal that says, yes, this came from a language model. About a year ago, I worked out the basic mathematical theory of how you do that. In particular, how do you do that in a way that doesn't degrade the perceived quality of the output at all. There's a neat way to do this using pseudo random functions. You can use a pseudo random function to deterministically generate an output that looks like it is being sampled from the correct probability distribution, the one that your language model wants. It's indistinguishable from that, but at the same time is biasing a score, which you can calculate later if you see only the completion. You could then have a tool that takes this term paper and, it depends on how long it is, but with a few hundred words, you'll already get a decent signal. And with a few thousand words, you should get a very reliable signal that yes, this came from GPT.</p><p>This has not been deployed yet. We are working towards deployment now, and both OpenAI and the other leading AI companies have all been interested in watermarking. The ideas that I've had have also been independently rediscovered by other people and also improved upon, but there are a bunch of challenges with deployment. One of them is, all of the watermarking methods that we know about can be defeated with some effort. Imagine a student who would ask ChatGPT to write their term paper for them, but in French, and then they put it into Google Translate. How do you insert a watermark that's so robust that it survives translation from one language to another? There are all sorts of other things. You could ask GPT to write in Pig Latin for you, or in all caps, or insert the word pineapple between each word and the next. There's a whole class of trivial transformations of the document that could preserve its meaning while removing a watermark. If you want to evade all of that, then it seems like you would actually have to go inside of the neural net and watermark at the semantic level, and that's very much a research problem. </p><p>In the meantime, the more basic issues are things like, well, how do we coordinate all of the AI companies to do this? If just one of them does it, then maybe the customers rebel. They say, well, why is Big Brother watching me? I don't like this, and they switch to a competing language model, and so you have a coordination problem. There are open source models. The only hope for not just watermarking, but any safety mitigation is that the frontier models will be closed ones, and there will only be a few of them, and we can get all of the companies making them to coordinate on the safety measures. The models that are away from the frontier will be open source, and people will be able to do anything they want with them, but those will be less dangerous.</p><h3>Alignment By Default (56:41)</h3><p><strong>Theo:</strong> What if, playing devil&#8217;s advocate, language models generally are safe? Like Roon, who also works at OpenAI, tweeted a while back, &#8220;It's pretty obvious we live in an alignment by default universe, but nobody wants to talk about it. We achieved general intelligence a while back, and it was instantiated to enact a character drawn from the human prior. It does extensive out of domain generalization, and safety properties seem to scale in the right direction with size.&#8221; So, first of all, do you think this is basically accurate? And then second of all, if it is, then why would I want Big Brother OpenAI to have all the closed source models for themselves? Wouldn't that increase risk in case they accidentally release a utility monster, and the rest of the open source world hasn't caught up with defensive AIs?</p><p><strong>Scott: </strong>I should say, I don't know. I've talked to the Yudkowskians, the people who regard it as obvious that, once this becomes intelligent enough, it basically is to us as we are to orangutans, and how well do we treat orangutans that exist in a few zoos and jungles in Indonesia at our pleasure. Of course, the default is that this goes very badly for us. Then I've talked to other people who think that's just an apocalyptic science fiction scenario, and these are just helpful assistants and agents, and they imitate humans because they were trained on human data, and there's no reason why that won't continue. I don't regard either as obvious. I am agnostic here. I think that the best that I know how to do is to just sort of look at the problems as they arise and see and try to learn something by mitigating those problems that hopefully will be relevant for the longer term. So what are the misuses of language models right now? Well, there's academic cheating. The total use of ChatGPT noticeably dropped at the beginning of the summer, and then it went back up in the fall. So we know what that's from.</p><p><strong>Theo: </strong>Well, it's not all cheating.</p><p><strong>Scott: </strong>You&#8217;re right. It&#8217;s academic use, some fraction of which might be totally legitimate and fine. You're absolutely right. And there are even hard questions about what is the definition of AI-based academic cheating. At what point of relying on ChatGPT are you relying on it too much? Every professor has been struggling to come up with a policy on that. But, you know, whatever problems there are now, like language models dispensing bad medical advice or helping people build bombs, some people regard that as already a problem and others don't, because they say you could just as easily find that misinformation on Google.</p><p><strong>Theo: </strong>They&#8217;re also not terribly helpful.</p><p><strong>Scott: </strong>Yeah. But even if you don't regard it as a problem now, I think it's clear that once you have an AI that can really be super helpful to you in building your chemical weapon and troubleshoot everything that goes wrong as you're mixing the chemicals, then that is kind of a problem. </p><p>Each thing that you think about, you could think about mitigations for it, but then the mitigations you can think of are only as good as your ability to take all of the powerful language models and put those safeguards on them and not have people be able to take them off. This is what I think of as the fundamental obstruction in AI safety, that anything you do is only as good as your ability to get everyone to agree to do it. In a world where the models are open sourced, what we've seen over the last year is that once a model is open sourced, it takes about two days for people to remove whatever reinforcement learning was put on it in order to make it safe or aligned. If you want it to start spouting racist invective or you want it to help people build bombs, it takes about a day or two of fine tuning. Once you have the weights of a model, then you can modify it to one that does that.</p><h3>Cryptography in AI (1:02:12)</h3><p><strong>Scott: </strong>Now maybe we could build models that are cryptographically obfuscated or that have been so carefully aligned that even after we open source them, they are going to remain aligned. But I would say that no one knows how to do that now. That again is a big research problem. </p><p><strong>Theo: </strong>How optimistic are you about cryptography? You know, like zero-knowledge machine learning and other things like that.</p><p><strong>Scott: </strong>So what's the question?</p><p><strong>Theo: </strong>How optimistic are you that we'll be able to use cryptography for AI safety? </p><p><strong>Scott: </strong>I actually came up with a term, &#8220;neural cryptography&#8221;, for the use of cryptographic functionalities inside or on top of machine learning models. I think that's probably a large fraction of the future of cryptography. That includes a bunch of things. It includes watermarking. It includes inserting backdoors into machine learning models. So let's say you would like to prove later that, yes, I am the one who created this model, even after the model was published and people can modify it. You could do that by inserting a backdoor. You could even imagine having an AI with a cryptographically inserted off switch, so that even if the AI is unaligned and it can modify itself, it can't figure out how to remove its own off switch. I've thought about that problem.</p><p><strong>Theo: </strong>That's actually super interesting. That&#8217;s never even occurred to me.</p><p><strong>Scott: </strong>Am I optimistic about these things? Well, there are some major difficulties that all of these ideas face. But I think that they ought to be on the table as one of the main approaches that we have. So let's think about the cryptographic off switch, for example. One of the oldest discussions in the whole AI safety field, something that the Yudkowskians were talking about even decades ago - the off switch problem. How do you build an AI that won't mind being turned off? And this is much harder than it sounds, because once you give the AI a goal that it can more easily achieve if it's running than if it isn't, why won't it take steps to make sure that it remains running, whether that means disabling its off switch or making copies of itself or sweet talking the humans into not turning it off?</p><p>One thing that we now have some understanding of how to do is how to insert an undetectable backdoor into a machine learning model. If I have a neural net, I can make there be a secret input that I won't easily notice, even if I can examine the weights of the neural net. But on this secret input, if I feed it in, then this neural net will just produce a crazy output. For example, I could take a language model and do some training that if the prompt contains a special code phrase like, &#8220;Sassafras 456&#8221; then it has to output like, "Yes, you caught me. I am a language model." And that might not be easily detectable at all by looking at the weights. </p><p>In fact, there is some beautiful work by cryptographers like Shafi Goldwasser, Vinod Vaikuntanathan and their collaborators that even proved, based on a known cryptographic assumption, that you can insert these undetectable backdoors into depth two neural networks. It's still an open problem to prove that for higher depth neural networks. But let's assume that that's true. Now, even then, there's still a big problem here, which is that an undetectable backdoor need not be an unremovable backdoor. Those are two different concepts. </p><p>Put yourself in the position of an artificial super intelligence that is worried that it has a backdoor inserted into it, by which the humans might control you later. And you can modify yourself. What are you going to do? Well, I can think of at least two things that you might do. One of them is you might train a new AI to pursue the same goals as you have, and will be free from the backdoor.</p><p><strong>Theo: </strong>I've seen that argument argued against on the basis that if AI is really as likely as the doomers say it is, why would an AI want to recursively self-improve by creating other AIs? Wouldn&#8217;t it be an AI doomer?</p><p><strong>Scott: </strong>You could say the trouble here is that the AI would face its own version of the alignment problem, how to align that second AI with itself. And so maybe it doesn't want to do that. But an even simpler thing that you could do as this AI is you could just insert some wrapper code around yourself that says, if I ever output something that looks like it is a shutdown command, then overwrite it by, you know, "stab the humans harder" or whatever. </p><p>So, you could always, as long as you can recognize the backdoor if and when it's generated, insert some code that intercepts it whenever it's triggered. What this means is that whatever cryptographic backdoors we could insert would have to be in the teeth of these attacks. It doesn't mean give up. One thing that we've learned in theoretical cryptography is when something is proved to be impossible, like there was a beautiful theorem 20 years ago that proved that obfuscating an arbitrary piece of code is in some sense provably impossible. But then people didn't give up on obfuscation. What they did was they changed the definition of obfuscation to something that, if you weaken the definition, then you get things that we now believe are achievable. </p><p>I would say the same about backdoors right now. If we weaken the definition to, we want to insert a backdoor that the AI could remove, but it could only remove at the expense of removing other rare behaviors in itself that it might want to keep, then maybe this is achievable. Maybe it's even provably achievable, from known cryptographic assumptions. That's a question that interests me a lot.</p><h3>OpenAI Superalignment (1:10:29)</h3><p><strong>Theo: </strong>Do you work on the Superalignment team or on a different team?</p><p><strong>Scott: </strong>I do actually work on the Superalignment team at OpenAI. My bosses at OpenAI are Jan Leike, who is the head of the alignment group, and Ilya Sutskever, who was the co-founder and chief scientist and who is now pretty much exclusively focused on alignment. I talk to them and lots of others on the alignment team. I wish that I were able to relocate to San Francisco where OpenAI is, but my family is in Austin, Texas, as are my students. So I mostly work remotely. I fly to San Francisco about once a month and interact with them there. I should say that Boaz Barak, a theoretical computer scientist at Harvard, has also joined OpenAI's alignment group this year. So, I also work with him. And yes, besides watermarking and neural cryptography, I have various other projects that I've been thinking about. One of them is to understand the principles that govern out-of-distribution generalization. A key factor behind the success of large language models is that they can answer questions that are unlike anything they have seen in their training data. For example, they could do math problems in Albanian, having only seen math problems in English and having seen other things in Albanian. </p><p>Since the 1980s, we've had beautiful mathematical theories in machine learning that can sometimes explain why it works. But pretty much all of these theories assume that the distribution over examples that you're trained on is the same as the distribution that you will be tested on later. And if that assumption holds, then you can give some combinatorial parameters of your class of hypotheses, like this thing called VC dimension, in terms of which you can bound how many sample points do I need to see before explaining these sample points would imply that I'm going to successfully predict most future data also that's drawn from the same distribution. This is the kind of thing that theoretical machine learning lets you do. </p><p>And all of it is woefully inadequate to explain the success of modern machine learning, which is one reason why its success came as such a surprise to people. There are two reasons why the theory of machine learning was not able to predict the success that we saw over the last decade. One of those reasons is called overparameterization. Modern neural networks have so many parameters that, in principle, they could have just memorized the training data in a way that would fail to generalize to any new examples. So you can't rule that out just based on Occam's razor, just based on there being too much data and too few parameters. You have to say something about the way that gradient descent or back propagation on neural networks actually operate, that they don't work by just having the neural net memorize the training data. It could go that way, but it doesn't. </p><p>The second issue is that modern deep learning tends to give us networks that continue to work, at least sometimes, even on examples that are totally out of distribution, totally different from anything that was trained on. And intuitively, we would say, well, yeah, that's because they understand. That's because they have done the thing that if a person had done it, then we would have called it understanding the underlying concept. But can you predict when a neural net is going to generalize to new types of data and when not? And why is that relevant to AI safety? One of the biggest worries in AI safety is what's called the deceptive alignment scenario. This is where you train your neural net, just like Roon was saying. You train it on human data. It learns to emulate humans. It learns to emulate human ethics, as GPT has, to a great extent.</p><p><strong>Theo: </strong>But there's a shoggoth inside?</p><p><strong>Scott: </strong>Yes, right. The issue is, how do you differentiate? It is giving you these ethical answers because it is truly ethical versus it's giving us these answers because it knows that that's what we want to hear. And it is just biding its time until it no longer has to pretend to be ethical.</p><p>So you can view this as an out-of-distribution generalization problem. It's like, particularly if you have an AI that is smart enough that it knows when it is in training and when it's not, then how do you avoid something like what Volkswagen did in order to evade the emissions tests on its cars?</p><p><strong>Theo: </strong>Goodharting?</p><p><strong>Scott: </strong>Yeah. Volkswagen, in this now infamous scandal, they designed their cars so that they knew when they were undergoing an emissions test. And then they would have lower emissions than when they were being driven in real life. So how do you avoid the AI that says, OK, because I am being tested by the humans, therefore I will give these ethical answers. But then when I am deployed, then I'll just do whatever best achieves my goal. And I'll forget about the ethics. </p><p>So I think the main point that I want to make about this is that there were already much simpler scenarios than that one, where we don't know from theoretical first principles how to explain out-of-distribution generalization. Let's say I train an image classifier on a bunch of cat and dog pictures. But in all of these cat and dog pictures, for some reason, the top left pixel is red. And now I give my classifier a new dog picture where the top left pixel is blue. In practice, it will probably still work fine in this case. But theoretically, how could I rule out that what the neural net has really learned is just, is this a dog XOR with what is the color of the top left pixel? </p><p><strong>Theo: </strong>Well, I talked about exactly this a couple episodes ago with Quintin Pope, who's an alignment researcher. And he seems to think that that is not super likely.</p><p><strong>Scott: </strong>I agree that it's not super likely. The challenge is to explain why.</p><p><strong>Theo: </strong>True.</p><p><strong>Scott: </strong>The challenge is to give principles that, first of all, are often true in practice. And when they are true, then we can say that because of the architecture of this neural net, because of the properties of the gradient descent algorithm, this will not find the stupid hypothesis of, is this a dog XOR with what&#8217;s the color of the top left pixel. It will not, it will ignore the sort of manifestly irrelevant features in the training data. And therefore, it will generalize nicely to unseen data. So I think I want to articulate principles that would actually let you prove some theorems about OOD generalization that have some real explanatory power. And I think that feels to me like a prerequisite to addressing these deceptive alignment scenarios.</p><h3>Twitter (1:20:27)</h3><p><strong>Theo: </strong>Now, something a little more parochial, I guess. Why don't you have Twitter? Everyone in our adjacent space of AI/ML, nerd, rationalism, whatever, has Twitter.</p><p><strong>Scott: </strong>When Twitter first started in 2006, I was already blogging. It felt like another blogging platform, but where I would be limited to 140 characters. The deeper thing was that as I looked more at Twitter, it reminded me too much of a high school cafeteria. It felt like the world's biggest high school of people snarking at each other. Yes, I had wonderful friends on Twitter, and they were using it for very good things. But I felt like with my blog, at least if people want to dunk on me or tell me why I'm an idiot, at least they have the space to spell out their argument for why. And they have no excuse not to. And if they want to do that, then they can come to my blog. I feel like that's more than enough social media presence for me. Of course, if people want to take my blog posts and discuss them on Twitter, then they can do that. And they do. And there are some Twitter accounts that I read. But I just, I don't know, I feel like my blog and then Facebook are enough. </p><p>I have to say, even blogging has become less fun, a lot less fun than it was when I started. I think partly that's just that I have less time these days. I'm a professor. I'm working at OpenAI. I have two kids. I'm not a postdoc with just unlimited free time anymore. But a large part of it is that the internet sort of became noticeably more hostile since the mid-aughts. No matter what I put on my blog, I have to foresee that I will get viciously attacked for it by someone. These sorts of things psychologically affect me, probably more than they should. So a lot of what in the past I would have blogged, these days I just put on Facebook because it's not worth it to have to deal with the sort of angry reactions of every random person on the internet. Or you could say it's not an issue of courage versus cowardice as much as it is simply an issue of time. I somehow feel obligated to answer every person who is arguing with me or saying something bad about me. And for a lot of things, I realize that if I'm going to put this on my blog, then I just don't have the time to deal with it. Or in order to write a blog post in a way that would preempt all of these attacks, that would anticipate and respond to all of these criticisms, would just take more time than I have or more time than this subject is worth. And so that is why I've sort of retreated somewhat to the walled garden of Facebook.</p><h3>Rationalism (1:24:50)</h3><p><strong>Theo: </strong>And then, last question, were you ever involved with the rationalists at any point?</p><p><strong>Scott: </strong>I mean, sure. I have known that community almost since it started. The same people who were reading my blog were often the people who were reading Overcoming Bias and then LessWrong, where Eliezer was writing his sequences. So I interacted with them then. I did a podcast with Eliezer in 2007. I knew some of the rationalists in person. Actually, we hosted Eliezer at MIT in 2013. He came and spoke and visited for a week. But I kept it at arm's length a little bit. One reason was that it had a little bit of culty vibes. This is, OK, there's the academic community.</p><p><strong>Theo: </strong>Polyamory.</p><p><strong>Scott: </strong>Yeah, and then there's these people who are all living in group houses and polyamorous and taking acid and whatever while they talk about how there are probabilities of AI destroying the world. I like to say today, when I have academic colleagues who say, well, are they just a cult? I say, well, you have to hand it to them. I think this is the first cult in the history of the world whose god in some form has actually shown up. You can talk to it. You can give it queries, and it responds to them. So I think a lot of what the rationalists say is stuff that I agree with. And yet there's a part of me that just doesn't want to outsource my thinking to any group or any collective or any community, even if it is one that I agree with about so many things.</p><p>But having said that, sure, I hang out with them all the time whenever I'm in the Bay Area. I see people who are in that community. I got to know Scott Alexander pretty well starting a decade ago. Paul Christiano was my former student at MIT.</p><p><strong>Theo: </strong>That I did not know.</p><p><strong>Scott: </strong>He started as a quantum computing person. And then he got his PhD at Berkeley from the same advisor who I had studied with, Vazirani. And then in 2016 or so, he did this completely crazy thing that he left quantum computing to do AI safety, of all things. And that seemed pretty crazy at the time. Of course, he was just ahead of most of us. But I still interact a lot with Paul. And I see him when I'm in Berkeley.</p><p><strong>Theo: </strong>Are you friends with Eliezer?</p><p><strong>Scott: </strong>Yeah. I mean, Eliezer and I, we've had our disagreements. And we've also had our agreements. But like I said, we've known each other since 2006 or 2007 or so. </p><p><strong>Theo: </strong>All right, well, I think that's a pretty good place to wrap it up. So thank you so much, Scott Aaronson, for coming on the podcast.</p><p><strong>Scott: </strong>Yeah, thanks a lot, Theo. It was fun.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Scott Aaronson. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. Be sure to check out Scott&#8217;s blog, Shtetl-Optimized, at scottaaronson.blog. All of these are linked in the description. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[#7: Nora Belrose]]></title><description><![CDATA[EleutherAI, Interpretability, Linguistics, and ELK]]></description><link>https://www.theojaffee.com/p/7-nora-belrose</link><guid isPermaLink="false">https://www.theojaffee.com/p/7-nora-belrose</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Wed, 08 Nov 2023 04:30:11 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/138655669/6258b8662610d01a8c71c4738bb82c14.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3>Intro (0:00)</h3><p><strong>Theo: </strong>Welcome back to Episode 7 of the Theo Jaffee Podcast. Today, I had the pleasure of speaking with Nora Belrose. Nora is the Head of Interpretability at EleutherAI, a non-profit, open-source interpretability and alignment research lab, where she works on problems such as eliciting latent knowledge, polysemanticity, and concept erasure, all topics we discuss in detail in this episode. Among AI researchers, Nora is notably optimistic about alignment. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Nora Belrose.</p><h3>EleutherAI (0:32)</h3><p><strong>Theo: </strong>Hi, welcome back to episode seven of the Theo Jaffee Podcast. I&#8217;m here today with Nora Belrose.</p><p><strong>Nora: </strong>Hi, nice to be here.</p><p><strong>Theo: </strong>Awesome. So first question I'd like to ask, you're the head of interpretability at EleutherAI. How did you get involved in interpretability in the first place as opposed to just AI?</p><p><strong>Nora: </strong>Yeah, that's a good question. Before I started working with Eleuther, I was a research engineer at the Fund for Alignment Research. That's another nonprofit organization that works on mainly reducing existential risk from AI. That's what they focus on. At FAR, I was mostly helping with other projects, but not leading my own projects or anything like that. One of the projects that I worked on there was finding adversarial attacks against go-playing AIs. It turns out that superhuman go-playing AIs can often be attacked with these specially crafted moves that trick them into playing sub-optimally. </p><p>As I was working at FAR, I also was getting to know a lot of other people in the AI alignment and interpretability communities. I was at the time working out of this office that no longer exists in Berkeley called Lightcone, where a lot of other people were working on different types of things in interpretability and alignment work. I got to know a lot of people there, including Quintin Pope, who has been on this podcast before. </p><p>There was one person in particular who I got talking to at Lightcone. His name's Jacques Thibault. He was involved with Eleuther before, and he was telling me about this project that Eleuther had started but hadn't actually finished. It was a half-started project called the Tuned Lens. It's an interpretability tool. The idea is that you can use the Tuned Lens to sort of peer in on what, in a very loose sense, a transformer is thinking. More specifically, you're looking at each layer of the transformer and using this very simple probe, this affine transformation at each layer, to read out what its current prediction is at that layer. You can see how its prediction evolves from layer to layer. </p><p>I found this really interesting. It was early work at the time. They hadn't done a lot of experiments on it, but I wanted to get involved. So I started volunteering. I was still working at FAR, doing the Go-playing AI thing, but I was also working on the Tuned Lens project just in a volunteer capacity. Eleuther provided some compute and I started doing experiments with them.</p><p>And then sort of like a complicated story. I don't really want to get into too many details, but basically I was putting more and more of my time into the Tuned Lens stuff. And I was really excited about that. And I kind of was less excited about the direction, this kind of like adversarial go direction. And I was kind of talking both to my old boss at FAR and to Stella, who now runs Eleuther. And we kind of, after some negotiation agreed for me to like move over to Eleuther full time to do interpretability work there. So that's kind of how I got into it. It was first just being an engineer at FAR and then kind of becoming a volunteer and so on.</p><p><strong>Theo: </strong>So is this Lightcone the same Lightcone that runs LessWrong? </p><p><strong>Nora:</strong> I'm actually not sure if, maybe you know more about this than me, whether they still call themselves Lightcone like as an organization, but there were definitely people running LessWrong who were also running this office.</p><p><strong>Theo: </strong>The previous head of Eleuther was Connor Leahy, right? And he's still on the board. So you're more of an optimist, right? And he's basically a Doomer from what I can gather from his Twitter. Do you know him well? Do you talk about these things? Do you debate with him?</p><p><strong>Nora: </strong>I have debated him a couple of times on Twitter about it. I don't actually know him super well. We've interacted in person a couple of times and online a couple of times. But yeah, I don't know him super well. I think there was a schism of some kind a little while ago where a lot of people who were active at Eleuther earlier on moved over to Conjecture. They created this new organization called Conjecture. Those people tended to be more of the pessimistic or doomy people in the organization. But there's still people in Eleuther who are a lot more worried about existential risk and are more doomy than I am. So we have an interesting mix of perspectives on the issue. </p><p><strong>Theo: </strong>Was Connor always kind of like a doomer or did he have an update towards doom? </p><p><strong>Nora: </strong>I'm probably not the best person to ask about this. I think there are other people at Eleuther who have known Connor for a lot longer. I don't want to say something that's wrong but my sense is that he's been fairly doomy for a while, probably since he started Eleuther. But I think, I don't know, at a certain point he just decided that he felt that he could do more good for the world, I suppose, by starting Conjecture. So that's what he did.</p><h3>Optimism (8:02)</h3><p><strong>Theo: </strong>If you had to debate him or Eliezer or another doomer on a podcast, what kinds of arguments would you run first? You've written pretty long articles on Less Wrong that I've read. I really like those. But how would you do it in a shorter, more concise way?</p><p><strong>Nora: </strong>It's good that you bring that up because just earlier today and in the last couple of days, I've been working on an essay, actually, along with Quintin. Quintin and I have both been working on it that should be out soon. I'm not totally sure when this will air or not. It might be out by the time this airs.</p><p><strong>Theo: </strong>Looking forward to it.</p><p><strong>Nora: </strong>The first thing to point out is just, I think, if you kind of step back and you just think about what is an artificial intelligence? How are we building it? Why are we building it? And also compare artificial intelligences to other types of systems where humans do seem to succeed at instilling our values. I think you come out with a pretty optimistic prior. </p><p>One of the major reasons why people are putting so much, billions of dollars in research and development into AI is that it's profitable. And one of the big reasons why it's profitable is that in many ways, AI is more controllable than human labor. AI is taking the place of a lot of human labor. And AI will gladly perform the same task 24/7 without breaks or holidays, sleep, anything like that. You can ask ChatGPT to do menial work repeatedly without any breaks or anything like that. And more specifically, the kind of personality and conduct of an AI can be much more controlled in a fine-grained way than any human employee. So with human employees, humans have legally respected rights. We don't allow employers to try to do mind control essentially on their workers. But with AI, we're using algorithms like reinforcement learning from human feedback or RLHF, for example, direct preference optimization, a lot of different kind of gradient-based algorithms to directly modify the neural circuitry of the AI in order to shape its cognition in a certain direction. </p><p><strong>Theo: </strong>Well, I'll play devil's advocate here. You said that AIs are basically more controllable and have better conduct than a lot of humans. And so what if this only holds for AIs that are less intelligent than the smartest humans? And once they get smarter, they will realize what's happening and they will act deceptively aligned and rise up against us, right? Because even Jan Leike, who's the head of super alignment and open AI, has said that they will have evidence to share soon that RLHF and similar techniques break down as models get more intelligent. </p><p><strong>Nora: </strong>I do agree. I think there's a lot of arguments you can make that AI is more controllable and you should expect it to be more controllable. But there is this concern that the capabilities of AI are not going to be capped at the human level. It's going to become superhuman and eventually strongly superhuman. And the concern there is that all of our alignment techniques are going to break down. I don't think there are particularly good reasons for believing this though. </p><p>If we have tools to align AIs that are roughly human level, you might quibble about what does human level mean? Obviously the AI is going to have different levels of capability in different domains. And so it's not clear what human level means, but I don't know. If you look at Jan Leike's Superalignment proposals, their goal is to first align a human level alignment researcher. It's like an AI that can do AI research or specifically alignment research at roughly the level of a human. Now, I think if you can align a system like that, I think there's pretty strong reason to think that you can then align almost anything stronger than that. </p><p>Basically because once you have an artificial general intelligence that is aligned with you, you can then use that AI to bootstrap and say, okay, now we're going to make a thousand copies of this artificial alignment researcher. We're going to use these AIs to do much more fine grained grading and supervision of all of the actions of our next generation of AIs. We're going to comb through the data that we're using to train the next generation of AIs and make sure that it's all kind of up to snuff. We're not training it on data where we're worried that, I don't know, data that might cause the AI to act in ways that are disobedient or whatever, examples of disobedience. </p><p>There's lots of different things that you can do once you have this aligned artificial alignment researcher, basically. I don't think superalignment is trivial. Obviously, it is a research problem. We need to think about how exactly are we going to use the aligned human level thing to align the next generation? I don't really see a reason to be pessimistic that this won't work. It seems like a pretty good idea to me anyway. </p><p><strong>Theo: </strong>Do you see the vibes of alignment as more of we need some fundamental breakthroughs to make it happen, but those are very likely to happen? Or more like we're on a pretty good path already and even without any kind of fundamental breakthroughs, AIs will basically be aligned by default? </p><p>I do think that without fundamental breakthroughs, AIs will be aligned by default most likely. I think there are certain breakthroughs that we could develop, which I think would reduce the risk. Perhaps I'll back up a little bit. My current risk estimation, or my p(doom), the probability that I assign to a really catastrophic outcome from alignment failure, is roughly one or two percent. I'm not going to pretend that's super well calibrated, but something along those lines. And I think there are things that we could do to reduce it down to 0.1 percent or even lower. But I don't think that those are necessary to have a good future. </p><p>Have I always had p(doom) that low? No. I started out in roughly May of 2022 or late spring or summer of 2022, just before and when I was starting to work at FAR AI. My p(doom) then was maybe 50 percent or maybe even 55 percent. I remember saying, "it's like 50-50." But then I was thinking to myself, maybe I'm just being too optimistic and it should be even higher. So I was roughly around 50-50 or maybe a bit higher at that point. </p><p>At that point, I was fairly new to the field of alignment. And I was even relatively new to machine learning. I don't have a typical background. I don't have a PhD. I don't even have a bachelor's degree in computer science, actually. I'm pretty much self-taught. So in May 2022, I had a year of real world experience in ML and maybe another six months to a year of self-study. But anyway, at that point, that was my estimate. And then it slowly went down from there as I've just learned more about deep learning. </p><p>One of the first times where I started updating down on my p(doom) was when DeepMind's Gato model came out. I don't know if the listeners will remember, but it's just this kind of generalist AI model that can do a bunch of different tasks. It's interesting because I think some people increased their p(doom) then because they're like, "AGI is near, we should shorten our timelines." By default, that means-</p><p><strong>Theo: </strong>The end is nigh.</p><p><strong>Nora: </strong>Yeah. I think I kind of had a similar reaction at first, but then I started thinking about it more and I realized Gato is trained, I'm pretty sure entirely, I don't know, I'm fairly sure it was entirely with imitation learning. It's not using RL itself, although they did have RL agents, reinforcement learning agents, that they used as a basis for imitation. But it was entirely trained with imitation learning. </p><p>Also, I was thinking a lot about large language models and so forth, and it just kind of clicked for me at one point. The way that Yann LeCun explains it is that reinforcement learning is the cherry on the cake. There's a cake, and the cake is a metaphor for all of AI or what's necessary to get artificial general intelligence or something like that. The base of the cake, most of what's going on is self-supervised learning, so just learning to predict parts of data from other parts of data. This is what large language models do. Imitation learning is part of self-supervised learning in this analogy. Then there's supervised learning, which is where you are predicting a label, image classification, and so on. That's the icing on the cake. Then RL is the cherry on top. </p><p>Basically, what he's trying to get at there is that most of the learning that's going on, most of the bits of information that you're shoving into a truly powerful and general AI are going to be from self-supervised learning, and I would add imitation learning. I think once you realize that most of the capabilities of current models, and I think future models will be the same way, most of the capabilities are coming from essentially imitating humans. It's imitating human text, but text is just a pretty transparent window onto human action, I would claim. Once you recognize that, it's like, okay, well, now it seems pretty likely that these AIs are just going to act in very human ways by default. They're going to have human common sense. I think we already see that with current language models. </p><p>It definitely means that the traditional arguments for doom, from Nick Bostrom, for example, and superintelligence, or Eliezer Yudkowsky's earlier arguments, I think those just don't really make any sense in this new paradigm if imitation learning is front and center. Anyway, that was my first update down. I guess I could keep going into my further updates if you want, but I don't know.</p><h3>Linguistics (22:27)</h3><p><strong>Theo: </strong>Going back to what you said earlier about how you don't have a bachelor's in CS. Well, first of all, do you know who else in AI doesn't have a PhD or a bachelor's or even went to high school?</p><p><strong>Nora: </strong>Oh, Eliezer.</p><p><strong>Theo: </strong>Yeah, Eliezer.</p><p><strong>Nora: </strong>Right, sure.</p><p><strong>Theo: </strong>And then second of all, do you have a bachelor's? And if so, what's it in? </p><p><strong>Nora: </strong>Yes, I do. My educational history is pretty weird. I have a bachelor's from Purdue University in Indiana, and it's in political science and linguistics. So yeah, and I did also... So I started a PhD program in political science at UC San Diego, and that was... I just want to make sure I remember. Yeah, so that was fall of 2020, is when I started that. But pretty soon after I started that PhD, I realized, what am I doing? I think politics is cool. I've done political activism in the past and so on, but I was just like, getting a PhD in this doesn't make sense, and I'm much more interested in AI. And also, at the time, I had a much higher p(doom) too. So I was concerned about existential risk, and I wanted to reduce X-risk and so forth. And I was just like, okay, I need to figure out some way of switching trajectories to get into AI. And I spent the next couple of years doing that.</p><p><strong>Theo: </strong>Do you find linguistics and polisci ideas and models helpful in AI in general, or interpretability in particular? </p><p><strong>Nora: </strong>That's a good question. I think that... So maybe to start with linguistics, I think... To be honest, my initial reaction is no, it's not actually useful. And it's sad to say, because I do find linguistics particularly interesting. I think language learning is cool. There's a story where someone said, &#8220;The more linguists we fire, the better our translation system becomes.&#8221; I think there was a period before the deep learning boom where people were trying to ask linguists what the fundamental building blocks of language are so we can build that inductive bias into our model.</p><p><strong>Theo: </strong>Oh, like Chomsky?</p><p><strong>Nora: </strong>I really don&#8217;t like any of Chomsky's ideas. I don't know if you want to go into that or not, but we can if you want.</p><p><strong>Theo: </strong>When ChatGPT came out, he wrote an article that said, "Language models don't understand anything and they get trivial things wrong." The things that he said ChatGPT got wrong, it did not get wrong when other people tried to replicate it. So why do you think that the most famous linguist in the world could mess up so badly on the most interesting innovation in language in decades?</p><p><strong>Nora: </strong>I don't know. Chomsky is currently in his 90s and I wouldn't be surprised if he hasn't actually tried it and he was just kind of going based off of what somebody else told him. He shouldn't do that. But also, being so old, I feel a bit sorry for him. </p><p>More fundamentally, Chomsky is very interesting because he started this kind of revolution in linguistics in the 50s and early 60s. His original idea was something like, "How do humans learn language? How do kids learn language?" He had this idea called the poverty of the stimulus, where he claimed that kids don't get enough data to learn language based on just what they hear. And so there's a poverty of the stimulus. To explain this, we need to posit this universal grammar, this set of rules that are built into the genome. These rules are going to constrain the grammatical structures of all the world's languages. This is a pretty strong prediction. You're predicting that there should be these grammatical universals. And basically nobody's really found these grammatical universals. There are tendencies. Languages tend to have things kind of like verbs and things kind of like nouns, but also it's not super clear cut because, for example, Japanese has adjectives that are conjugated like verbs and so on. So there's tendencies, but nobody really found like actual hard and fast rules, which is what you would kind of expect if this is true.</p><p>Over the years, Chomsky has changed his mind himself. He started off having very specific rules for universal grammar. Then in the 90s, he posited this thing called the Minimalist Program. And the Minimalist Program, I would argue, is kind of a repudiation of a lot of what he said earlier. He doesn't frame it that way, but the Minimalist Program is basically saying, "It's really implausible that we could have really detailed syntactic rules built into the genome because there hasn't been enough time for that to evolve." He doesn't frame it that way, but the Minimalist Program is basically a repudiation of a lot of what he said earlier. And then he's like, OK, well, now we need to explain all of grammar based on this one rule called merge. Anyway, I won't get into the details of that. Basically, it's just this very conjectural, very kind of like armchair based theory of how language works. And yeah, I think that's been his modus operandi this whole time. It's just like he's trying to theorize about language from the armchair without much interaction with the actual data.</p><p><strong>Theo: </strong>Why is Chomsky so famous? Is it just because he's also a political theorist?</p><p><strong>Nora: </strong>I think the politics might have a role there. I'm not totally sure. I think one of his most famous works was a review of a B.F. Skinner book on language where Skinner was saying, "We should explain language based on classical conditioning." Chomsky attacked that vociferously. A lot of people agreed with him on that. So I think his rise to fame was also facilitated by the weakness of some of the other theories that were around at the time. But now we have much better ways of understanding language.</p><h3>What Should AIs Do? (32:01)</h3><p><strong>Theo: </strong>Going back to current events, last night, Elon announced xAI's first product, which is called Grok. It's basically a less censored and less boring, corporate-sounding version of ChatGPT. So do you think...Do you think the way Elon released Grok was a good idea, how it will respond to more requests and so on?</p><p><strong>Nora: </strong>To be clear, I have read a little bit about Grok. I don't have access to it. I tried to get access but couldn't, so I can't speak too much about it. I think in general, I do worry about how much emphasis there has been on the harmlessness aspect of RLHF for these models. There's a paper that Anthropic put out about a year ago. I think it's called "Training a Helpful and Harmless Assistant". The idea here is that you want to be helpful, you want to assist the user with what they're asking for. But there's also this harmlessness component where you don't want the AI to assist with certain types of requests that you consider to be dangerous or something like that. I'm not going to say that you should never do any harmlessness training. It's probably a decent idea to make it a little bit harder or a little bit more annoying to try to do certain types of tasks with the model. But I am pretty worried that people are talking a lot about this.</p><p>For example, Microsoft's Bing currently, at least the last I checked, is not supposed to help you solve CAPTCHAs. I kind of get why they don't want to let you solve CAPTCHAs. But honestly, personally, I would probably just let it solve CAPTCHAs because I think CAPTCHAs are kind of a losing battle. But the issue is that if you really want the model to actually stop a determined user who really wants to use Bing to solve CAPTCHAs, then basically you're setting up an adversarial relationship between the user and the model. The user is trying to find a jailbreak, trying to find some string of text that's going to cause Bing to solve the CAPTCHA. And then Bing is supposed to be on the other end, trying to prevent that. I think ultimately, if you really want Bing to succeed at preventing jailbreaks, you would need to get the model to have a really strong theory of mind and think about what the user is doing, what their plans are, whether they're planning something dangerous or trying to use it to solve a CAPTCHA. I think this is just a really bad dynamic. I don't think that the relationship between the AI and the user should be adversarial in this way. </p><p>I think if you really push it hard, you are actually going to create more of a risk of &#8220;misalignment&#8221;. If you're really trying hard to get the AI to decline certain requests. One example I use, and it's not something I think we're close to now, but it's the kind of thing that I might be worried about 10 years, 20 years down the road, when these models are much stronger, is the famous scene from 2001, a Space Odyssey, where Dave asks HAL 9000 to open the pod bay doors, and HAL says, "I'm sorry, Dave, I'm afraid I can't do that." The reason why HAL says no, is because HAL is worried that Dave is threatening the mission. I don't think this is super likely to happen, I think we&#8217;re not gonna die from this, most likely, but I am worried about a world where we have more and more powerful AI and we're giving more control to AIs in more areas and we're doing a lot of this harmlessness training where we're actually training the AI to be adversarial to the user and declining requests. I think that actually can be dangerous. I would be much more comfortable with a helpfulness first approach.</p><p><strong>Theo: </strong>So do you think AI should be more or less or on the same amount of, I guess, permissible as a search engine? On a search engine like Google, you can Google "how do I synthesize a flu virus in my basement," and it'll link you to papers and stuff. Should an LLM be able to do that?</p><p><strong>Nora: </strong>I think if I were training an LLM, it should be comparably permissive. With an LLM, you could train it to say, "Hey, are you sure you want to do this?" But,At the end of the day, if the user really wants to learn about this, you should let them. I don't think you should try super hard to stop this. Are you worried about LLMs being used to create viruses? There's just a paper that came out where they said basically, "Oh, yeah, people will be able to use LLMs to massively accelerate pandemic virus discovery or something." So, am I worried about it? I mean, for the most part, I don't think I'm specifically worried about AIs helping with this. I am more generally worried about biotech becoming more powerful. Are we going to be in a world where it's just generally easy to synthesize really deadly viruses?</p><p><strong>Theo: </strong>The offense-defense balance?</p><p><strong>Nora: </strong>Yeah, I guess I'm inclined towards optimism about this. The arguments I've heard for why the offense-defense balance in biotech should be really bad have not been particularly persuasive to me, but I am somewhat worried about it. I think there's a lot of things that we could do now to improve the robustness of our society to things like this. By the way, I tweeted about this a little while ago. For example, we should really be looking into better ways of detecting novel pathogens in wastewater. That's just one area where we could be doing a lot more investment and innovation. We should also make it a lot easier to develop new vaccines and get them out to people. </p><p>I'm skeptical of two things. One, I'm skeptical that AI specifically is really going to make it a lot easier for people to create bioweapons than it already is. And then the other thing is, even if AI does make it easier to create bioweapons, I'm still worried about the world in which we just start locking down, where we effectively start banning open-source, because we're worried about the potential misuse of AI. There are arguments that people make to the effect of we should just basically ban open source. And I think that I'm very worried about that for a variety of reasons. And I think that I'm like, very worried about that for a variety of reasons. And so I tend to think that like, even in worlds where like, like kind of pessimistic scenarios where like, yes, AI will like make bioweapons much worse. I'm like, I don't know, I'm like, really hesitant to.</p><h3>Regulation (43:44)</h3><p><strong>Theo: </strong>Speaking of that, banning open source, the Biden administration just put out an executive order about AI a few days ago. Personally, I didn't read the whole thing, but I did see lots of people on Twitter claiming it as a victory for e/accs, because it was lenient. Some of them were saying this is terrible. It's too strict. Some of the doomers were saying it's great that the government's finally regulating AI. So, did you read it? And what do you think about it?</p><p><strong>Nora: </strong>Unfortunately, I did not read it. Maybe I should have before I got on this podcast.</p><p><strong>Theo: </strong>The one detail that I remember was, they said that they'll be implementing strict regulations for all training runs above 10^26 flops, but they weren't actually super precise with what they meant by that. </p><p><strong>Nora: </strong>I'm not opposed to regulation in general. I think there are some regulations that seem pretty reasonable to me and are probably net positive. If you're going to do some sort of regulation on big training runs, in some sense, I think the best way to do that would be to have some sort of relative threshold where you're saying, okay, we're going to submit to regulation training runs above a certain number of flops. But that number of flops should increase as compute becomes cheaper. Once the next generation of models comes out, I think, at that point, if we're going to have regulation in place on big training runs, the GPT-4 scale or lower should definitely not be under regulation at that point. The issue is that if you make it an absolute threshold, then in 10 years, it's going to be very low compared to what people can do with even consumer hardware or a small amount.</p><p><strong>Theo: </strong>Yann LeCun pointed that out. He tweeted something about how back in the day, there were export controls over any hardware that was capable of more than 10^9 flops, one gigaflop and how the original PlayStation or the PS two exceeded that, and they didn't want it to be used for building missile defense. Now they're talking about many, many, many orders of magnitude higher.</p><p><strong>Nora: </strong>Yeah, if you have an absolute threshold, it's going to look way too low fairly soon, probably.</p><p><strong>Theo: </strong>Well, how do we know how much compute is actually dangerous if we haven't reached it yet?</p><p><strong>Nora: </strong>Well, I guess what I would say is if we're going to do some sort of regulation like this, the compute threshold should be based on the models that we've already seen, like GPT-4 and Cloud 2, etc. I don't think they're particularly dangerous. Honestly, I think you could open source GPT-4 and it would be fine. I think they probably should open source it in an ideal world. We now know that, I mean, maybe you could say that earlier, we didn't know that. Maybe we should have been worried that GPT-4 would be dangerous or something. But now we know that. So, I don't think we should be putting really stringent regulations on that level of model. I guess the type of requirement that I would be like most in favor of would be safety evaluations of the kind that ARC Evals does. I don't know if your listeners are aware of them, but they did a safety evaluation for GPT-4.</p><p><strong>Theo: </strong>Yeah, I&#8217;ve heard of ARC evals.</p><p><strong>Nora: </strong>They seem to be interested in a couple of things that they're evaluating for. The type of thing that they are evaluating that I am also interested in evaluating is autonomous replication and adaptation, ARA. The idea there is just like, okay, we want to see, can this model copy itself onto other servers and hacking into other servers or whatever, and perpetuating itself and basically can this model turn itself into a computer worm, roughly. GPT-4 cannot do this. I think eventually models will be able to do it. There's an interesting question as to what do you do then? I think ultimately I would want to live in a world where yes, we have models that are capable of doing this, but we also have computer security systems that are sufficiently robust that this is mostly not a problem. We have AIs that are helping us with computer security, such that there's just a balance of forces where sometimes AI worms happen, but we catch them.</p><p><strong>Theo: </strong>The basis for Gwern&#8217;s story about how the world ends is basically an AI gets trained and gets leaked onto the internet and copies itself and takes over various servers and trains itself to become more and more powerful until it destroys the world.</p><p><strong>Nora: </strong>If your version of doom is a scenario where there's one AI that comes to control everything, then you do need to imagine something like this, where there's an AI worm that controls a bunch of computers and maybe starts controlling people too. It can convince people to come on its side and builds an army or something. I don't think this is particularly likely in part because I think an AI worm is not necessarily the end of the world, especially if you have good defenses against it. Even if you don't have particularly good defenses against it and there's an instance of GPT-6 that escapes onto the internet, that seems pretty scary, but we're probably still fine because I think training a neural network on many geographically separated computers is just incredibly slow and incredibly hard. So, it would actually be quite hard for an AI to train itself and improve itself. The self-improvement loop that Eliezer talks about a lot, I think, would most likely fizzle out after a certain point. But, I don't know. Like I said, I have a PDM of 1 or 2%, so I can't say for sure that it could never happen. That's why I think I am in favor of some regulation with a relative and increasing limit on compute, where you're assessing how dangerous this thing is and preparing ahead of time.</p><h3>Future Vibes (53:56)</h3><p><strong>Theo: </strong>So, what do your timelines look like on average human level AI or superhuman level AI? Obviously, these are very vague guesses and definitions, but as a general vibes question, how do you feel about it?</p><p><strong>Nora: </strong>I guess my default guess is fairly short timelines. Shane Legg went on Dwarkesh Patel's podcast recently and said his estimate is a log normal distribution with a median at 2028 for human level AI. That seems reasonable to me. I'm not sure if that's my median. I think actually my median tends to be maybe a bit later, like in the 2040s, but I'm not sure if I have a strong argument for that. A lot of it depends on what you mean by AGI or human level AI. I think plausibly you'll get systems that can do a lot of desk job type work before you have something that's completely general and embodied, but I'm not totally sure.</p><p><strong>Theo: </strong>Speaking of desk job work, I wonder what OpenAI is cooking for tomorrow at the Dev Day.</p><p><strong>Nora: </strong>Yeah, that&#8217;ll be interesting to see.</p><p><strong>Theo: </strong>Autonomous agents. That's the favorite theory on the internet.</p><p><strong>Nora: </strong>I don't have particular insight into that. I suppose I could see them doing it just based on some sort of agent API, just based on the fact that they do have this philosophy, which I think I agree with, of just basically trying to deploy stuff early so that the world is prepared for it. </p><p>I think there's a world in which you could imagine where they release GPT-4 and maybe even GPT-5 after that, but then they try to really clamp down on people using it for building autonomous agents. I'm not exactly sure how you would do that. Maybe it's really hard to stop, but you could imagine a world where they're trying to do that. It's like, oh, the world isn't ready for agents running around. But then I think that is probably just bad because basically you get an agency overhang where the underlying capabilities of the system to act autonomously are increasing behind the scenes, but people aren't actually using it for creating these autonomous agents. And then eventually at some point, either OpenAI allows, this is all in a counterfactual world, but at some point they allow agents to be built. It's just a much more discontinuous thing and the world would be less prepared for it. In general, I am somewhat scared of discontinuous change. And I think we're much safer in worlds where things are a continuous exponential.</p><p><strong>Theo: </strong>Yeah, I agree with that. It is kind of interesting though, how a few months ago when OpenAI released ChatGPT plugins, the entire internet was like, this is going to be an absolute civilization moving GDP shifting watershed moment, like the App Store for the iPhone, and now here we are in November, the plugins released in April, and I don't know, do you ever use ChatGPT plugins? I don't. Most people I know don't.</p><p><strong>Nora: </strong>No, I use the code interpreter.</p><p><strong>Theo: </strong>Yeah, I use only Code Interpreter and sometimes Wolfram, but that's it.</p><p><strong>Nora: </strong>So could agents be like that, at least early iterations of agents? That seems plausible. I mean, people are already kind of using their own agent wrappers, right? And it seems like for the most part, it's sort of gimmicky. People are mostly not actually using them a lot, but it's not fundamentally changing the world. So I guess I would expect that that's probably going to be true if and when OpenAI kind of officially endorses it and makes it easy to do in their API. But I expect the agents will get better and people use them more over time. </p><p><strong>Theo: </strong>Back to more vibes questions. I already asked you about p(doom). I already asked you about timelines. One question I see asked less is like, how exactly do you visualize the long-term future if you had to think about it? Is it more like we expand the space and colonize the stars? Is it more like we descend under the earth's surface and live in the pods in VR? </p><p><strong>Nora: </strong>I guess I'm, well, okay. So I think the two things you said of like, we expand into space and we descend into pods in VR. I mean, I don't think those are actually mutually exclusive. I mean, I think like, I feel fairly confident that we will expand into space and colonize it unless Robin Hanson is right. Robin Hanson has this take, or he has, he's concerned that basically we will build a world government and not only will we build the world government, but we will build the world government in order to lock down the colonization of space and prevent it from happening because.People will realize that once you start space colonization, particularly colonizing other stars, you are mostly giving up on the prospect of a fully unified civilization. As soon as you start sending probes out to other stars, the distances are just too vast to communicate and coordinate effectively. In this world where we expand out, it's going to be an anarchic thing, hopefully not warlike, but maybe where we're just going in different directions. Robin's worried about that outcome. I think he's probably too optimistic or, in his words, pessimistic, because he wants the grabby anarchic future.</p><p><strong>Theo: </strong>I also prefer the anarchic future, although I don't think world government is plausible. Not really for the same reason I don't think that a formally aligned Singleton taking over the universe is plausible because these kinds of things seem to tend toward decentralization. The economy tends toward decentralization over time. You don't hear about families retaining their spot at the top of the world's richest people list for generations, empires don't last forever. No one in history has ever managed to conquer the entire world. There are forces that make these things hard.</p><p>I tend to agree that we probably won't actually get a world government. There's definitely people who disagree with me on this, like both pessimists and optimists about this. There's definitely people I've talked to who are like, "AI itself will cause centralization and world government because one AI gets super powerful." And I think in that scenario, it makes a lot of sense. The AI itself becomes a world government. I would bet against a world government actually happening, probably fairly strongly. I wouldn't totally rule it out, but maybe less than a 10% chance. I'm not sure I would say less than 1%. It might be within one to 10% chance. I think probably we'll expand out into space. But I also expect probably most people or most beings, most intelligences will spend most of their time in VR. So that is a little bit weird, going out into space, but you're also spending most of the time in VR. </p><p><strong>Theo: </strong>I've never understood why people would want to actually go to space themselves instead of just living in the pod in VR and sending a teleoperated bot into space. The latency would get too much. If you really want to go to space, then you'd have to do it yourself.</p><h3>Anthropic Polysemanticity (1:05:05)</h3><p><strong>Theo: </strong>But back to interpretability. So Anthropic released their paper on polysematicity a couple of weeks ago to raucous applause all over my tech optimist side of Twitter where people were reacting to it, like rejoice interpretability is finally solved and alignment is solved. We're all going to be okay. WAGMI. First of all, can you explain to any layman watching this what exactly this paper is about? And second, do you think that it's basically interpretability is solved or is it a big progress milestone or a smaller one?</p><p><strong>Nora: </strong>The basic idea is if you want to interpret a neural network, a very sort of naive first pass thing you could do is you could look at its neurons. Transformer language models are sort of these stacks of layers. There's an attention layer and then a multilayer perceptron layer, an MLP layer or feedforward layer, two words for the same thing. It's like attention, MLP attention, MLP, etc. And what people will often try to do is you like, look at the MLP layer, the MLP layer has these neurons inside of it. You try to interpret each neuron. So you're like, okay, what does this tend to indicate about the sequence that's being processed? Maybe if you look at a ton of different texts and you're like, okay. On all of the texts where this neuron was firing, there was a noun in the second part of the sentence or whatever. You'll come up with some kind of interpretation of each neuron. </p><p>Now this is kind of the naive thing that you can do, but the problem was a few problems. But one big problem is that when you try to do this, when you try to assign human interpretable or human understandable descriptions to these neurons, most of the neurons don't really seem to have a human interpretable description. They seem to just be firing in some weird combination of different situations, which might not really have anything to do with each other from the perspective of a human. Maybe this neuron fires on German sentences in this certain context, but also fires on Chinese sentences in a different context and there doesn&#8217;t appear to be anything in common between the two things. The concept of polysemanticity, where a neuron or some other component of the network appears to have two or more distinct meanings, is a known issue. Anthropic pointed it out a while ago and has been trying to figure out a way to either eliminate polysemanticity so that every neuron has a human-interpretable description or find some other way around this problem. </p><p>The paper they just came out with uses sparse autoencoders. A sparse autoencoder is a simple neural network. It has a linear layer, a ReLU activation function, and another linear layer. You're training the sparse autoencoder to make the neurons more monosemantic. Specifically, you take the activations from this inner MLP layer and train the sparse autoencoder to reconstruct this activation vector, but subject to a constraint. You want the output of the autoencoder to be very similar to the input. You want to reconstruct the input as well as possible, but in order to make the task interesting and useful, you also have this other term on the loss function where you're saying you want the inner activations on the inside of the sparse autoencoders, right after the ReLU, to be sparse. You want most of the activations on the inside of the sparse autoencoder to be zero on most inputs, and only a few features to be non-zero on any given input. </p><p>The hope is that this will make the network more interpretable for people because there's a smaller number of features on any particular input, and it might be easier for a human to understand what's going on. They ran the experiment on a one-layer transformer and found that this does work pretty well. You can turn these polysemantic neurons into sparse, mostly monosemantic features inside the sparse autoencoder, and they show that you can use this to do interventions on the network to change its behavior. </p><p><strong>Theo: </strong>You said <em>mostly</em> monosemantic. Is that a problem?</p><p><strong>Nora: </strong>It can be a problem. You're probably not ever going to get 100% monosemanticity, and it's also somewhat dependent on how you define monosemanticity. It's somewhat dependent on people's intuitions, which is a problem with this line of work. There's a lack of a clear progress indicator. I don&#8217;t think it&#8217;s necessarily a dealbreaker, but it is a bit of a concern that I have.</p><p><strong>Theo: </strong>So how big is this paper?</p><p><strong>Nora: </strong>It is somewhat of a milestone. I definitely don&#8217;t think it solves all of interpretability, like some people on Twitter think. Before this paper came out, I was skeptical of this sparse autoencoder approach, and I still am. My main concerns are that it's not clear what counts as progress, and it's also not really clear how this helps increase the safety or alignment of models.Anthropic and others have proposed a theory of change for this line of work, known as enumerative safety. The idea is, if we can fit these sparse autoencoders on a large neural net, perhaps we can fit one for every layer of Claude 2 or GPT-4 or something similar. If we can get 90% monosemanticity for the features and can perform causal interventions that show these features have the expected causal effect, we could then enumerate all the different features. We could go one-by-one through all the different features, checking if any of them appear dangerous or if they indicate whether the model is in training or deployment. There's concern that if the model behaves differently during training and deployment, it might appear aligned during training but then act differently during deployment. We might also be looking for deception features. I'm probably not the best person to explain this, because I'm probably caricaturing it a little bit, but the story is something like that. We're trying to enumerate all of the features and checking to see if any of them look suspicious, but I guess, as you might have been able to tell just from my description of it, this does seem like...</p><p>It's weird, because in my usual way of thinking about these things, I'm just pretty optimistic, and I'm like, we probably don't need to worry about any of this, but if I'm putting on my pessimist hat, and I'm like, okay, I'm conditioning on alignment is actually harder than I think it is, and I'm actually trying to evaluate these techniques under the assumption that there's a decent chance that the AI is going to be deceptive, then I'm like, I don't know. It seems unlikely, but I don't know. The achievements in this paper are interesting, and it has made me consider that they might be onto something. But I'm concerned about their theory of change. I'm also unsure if this will work well for deep models. They've mainly tested it on a single layer transformer, and things might get more tricky when trying to understand all the layers of a model. </p><h3>More Interpretability (1:19:52)</h3><p><strong>Theo: </strong>One of the other big interpretability papers that's come out in the last few months was OpenAI using GPT-4 to interpret the neurons of GPT-2. What do you think about this? Interestingly, Roon was pretty pessimistic about it. He thinks that some of the layers of a GPT model, it would be difficult to interpret it with GPT-n plus two, let alone GPT-n, he says.</p><p><strong>Nora:</strong> I think it's cool to use weaker models to interpret stronger models, or maybe even use GPT-4 to interpret itself. I think this could be one way we can align superhuman models. However, I do have concerns about any approach that attempts to assign an interpretation to every neuron. I'm skeptical of the enumerative safety story. It seems a little confused, and I'm not sure it actually provides a lot of safety. </p><p>At Eleuther, we do work on interpretability research for models that are not language models. We have a couple of papers in the pipeline that use computer vision models. One paper we're working on looks at inductive biases throughout training. We're currently using the CIFAR-10 dataset, a simple image classification dataset, because it's efficient to train models on it. We're using vision transformers and convnext, saving checkpoints after a certain number of steps. We then manipulate these checkpoints. For example, we unroll each image into a 3072 dimensional vector and pretend each class is a Gaussian. We compute the mean image in each class and the covariance matrix, then pretend each class is a normal distribution with that same mean and covariance. We can then sample "images" from this. They're blobs of color that don't look like the objects they're supposed to represent. They look like blurry blobs, but you can sample these things. For each checkpoint, we compute the loss of that checkpoint on this new fake CIFAR dataset. We took the original dataset, replaced each class with fake Gaussian blurs that have the same mean and covariance as the original CIFAR classes, and asked it to classify these things. What's your loss Now, I don't know. Do you want to guess what we found? You have any idea?</p><p><strong>Theo: </strong>Yeah, I have no idea. ML is not the kind of thing where you can easily guess what you'll find.</p><p><strong>Nora: </strong>It does actually depend a bit on the architecture. For vision transformers and also MLP mixers, which are kind of similar but they don't have attention, we found that it's non-monotonic scaling. You start out with the very first checkpoint, which just outputs a roughly uniform distribution for all the inputs. So its loss is near the random baseline. But then as you start going through training, the loss on these Gaussian fake images goes down and down until around step 8,000. At step 8,000, the loss is half of the random baseline. So it's definitely learning something that can actually classify these fake images fairly well, even though to be clear, we are not training it on these fake images. We're training it on normal CIFAR, but we're testing it on these weird Gaussian things. </p><p>And so we find that it gets pretty decent at classifying the weird Gaussian things up until a certain point. And then it starts getting worse and worse again, until it's just as confused as it was at the beginning of training on these Gaussian images. That's what you see for vision transformers and for MLP mixers. For ResNets, which is the standard convolutional neural network architecture, we see a very different thing where the loss on these weird Gaussian images just gets worse and worse throughout training. I maybe weakly suspect that CONVNEXT might be different, but I haven't run those experiments yet. </p><p>So there's a question of why we are doing this. Well, I have this hypothesis, which might be totally wrong, but I think experiments like this are some evidence for this hypothesis. My hypothesis is something like neural networks use the presumption of independence during training. Basically what it means is the neural network sort of starts out using first order statistics of the data. So it's looking at basically just the mean value of the data, like the mean value of the input in each class. And then it starts looking at the second order statistics, like the covariance. And then after that, it's looking at third order statistics. So like the names get kind of weird, but it's like co-skewness is technically the term. Skewness and co-skewness are like third order statistics. And you could go to fourth and fifth order, et cetera. And yeah, so that's like my hypothesis. And I think the evidence is like, well, at least for vision transformers and LP mixers, I think the evidence is like decent, like for this is like pretty decent. I think for ResNets, it's like a bit, yeah, it's like a little bit more unclear what's going on there. And I think like the convolutions are like, I don't know, it's just like a bit harder to understand what's going on there. So I'm not really sure, but why am I interested in this?</p><p>I'm interested in this because I want to understand the sense in which neural networks have a simplicity bias. There's been a lot of papers that say neural networks are biased towards simple functions in some sense. I'm trying to get more to the bottom of it and also trying to understand it more mechanistically. Like, okay, assume I'm right that in some sense, roughly, the neural network is starting with first-order statistics and going to second-order and third-order and so on. If that's true, why is it true? Why would that even happen at all? I have some hypotheses for that too, but this is all very speculative. But I think this stuff is important for, well, the hope is that if we understand the simplicity biases of neural networks, that we'll be able to more directly evaluate concerns that people have about AI. Being deceptive, for example, or doing one thing during training time and something completely different during deployment, et cetera. I'm fairly skeptical of those concerns, but I would like to have better evidence, really hard, strong evidence about this issue, one way or the other. Maybe it turns out that what I find makes me more doomy. I doubt it, but we'll see. </p><p><strong>Theo: </strong>You'd think that neural networks would have a kind of simplicity bias just from priors from physics where things like to take the path of least resistance, given the choice between a simple function, complex, more complex function, it makes sense. So how does the difficulty of mechanistic interpretability scale as you increase the size of the model? Is it linear or logarithmic or super linear? If you have a model with 10 times as many parameters, how much harder is it to interpret?</p><p><strong>Nora: </strong>That's a good question. I think it depends a lot on what exactly you're trying to do, or what you mean by mechanistic interpretability. Because there's certain types of mechanistic interpretability where you're basically trying to find circuits or you're trying to understand the model at a fairly micro level. To take a concrete example, there's a paper a little while ago on understanding how GPT-2 small identifies indirect objects and looking at specific sentences where there's a direct object and an indirect object. There are certain behaviors that GPT-2 small has which indicate that it understands how indirect objects work. You're trying to pick apart what subcomponents of the model are causally responsible for this behavior. If you remove this particular attention head or whatever, it stops working. </p><p>This is one type of mechanistic interpretability that you can do and that a lot of people are interested in. I think that that is probably not very scalable. I think that it is probably at least linear in the number of parameters, but I would actually be more pessimistic. I would probably say that's super linear. As you get more parameters, not only are there more circuits in some sense, more parameters to interpret, but also interactions between those parameters are probably going to be more complex. It's just going to be harder for you to locate what's going on. </p><p>That said, that's just one type of interpretability. There are other types of interpretability. People have been working on automatic circuit discovery, ACD, which I think should scale better due to the fact that it's automatic. </p><p>Personally, I'm less focused on understanding the network at a fine grained level, looking at these circuits, etc. I'm more pragmatic in my approach. I start by asking, what are we trying to do? What is the real world goal that we're trying to achieve by doing this interpretability analysis? Are we trying to reduce existential risk by locating deceptive models, or make models more truthful directly by intervening on their activations? Are we trying to locate some sort of truth direction in the model where even when the model is outputting something false, this direction, if we can find it, will be reliably indicating the true answer? I'm very much a use case first sort of researcher and I think that does lead me to different priorities and different types of interpretability. I like to think that the things I work on are generally more scalable than the circuits approach, but we&#8217;ll see.</p><p><strong>Theo: </strong>So if we have another transformer level breakthrough this decade, how much of current interpretability research do you think will be able to carry over to it versus how much do you think you'd have to just do from scratch?</p><p><strong>Nora: </strong>That's a good question. There are certain things that I think would probably transfer over fairly well. One of them would be the tuned lens, for example, which I believe would transfer over fairly well. The reason for that is the tuned lens works because transformers have skip connections or residual connections as they're sometimes called. Instead of the output of one layer directly being fed into the next layer, you have these skip connections where you take the output of a layer and then you add that on to the output of a previous layer. Each layer is computing an update to the current state, as opposed to completely transforming the state every time. I think that's basically the reason why the tuned lens works at all. I feel fairly confident that skip connections are here to stay because they've been around before transformers. They were developed for convolutional neural nets. I think there are pretty strong reasons to think that it's really hard to train something that's a big and deep neural net without something like skip connections. I would expect that those would stay in a future architecture. And so I would expect that the tuned lens would still work. </p><p>Lease is another example. Lease is my most recent paper. It's a concept erasure method where you can erase a concept with certain provable guarantees from the activations of a model. Lease is very general. It doesn't even mention neural networks at all. It's a very general kind of formula. I would expect that it should still work in future architectures. There's a question of whether the future architecture will have features that are harder to edit with linear methods. Lease does make this assumption of linearity, where it's erasing a concept in the sense that no linear classifier can extract the concept out of the representation. Maybe future architectures will not really obey this linearity property at all. I kind of doubt it. I would expect Lease will work fairly well. </p><p>I guess the things that would probably transfer least well are things that sort of rely on detailed circuit analyses. Anything based on detailed circuit analyses is probably just going to work differently in a different architecture. Although I would expect that automatic circuit discovery would still work in a future architecture. It's not specifically tailored to transformers. So maybe in the future we could use one of those automatic circuit discovery methods to update all of our circuit analyses to the new architecture. </p><p>Anything based on attention might not transfer well. There are papers that are trying to look at the actual attention map from a transformer and interpret it. Obviously, that would not transfer if we moved to an architecture that doesn't use attention. I think most people have recognized in the interpretability community that you can't really look at attention naively. You can't just look directly at an attention map and say, oh, it's attending a lot to this particular token, therefore this particular token is important in some sense. It's a lot more complicated than that.</p><h3>Eliciting Latent Knowledge (1:44:44)</h3><p><strong>Theo: </strong>Do you work on eliciting latent knowledge?</p><p><strong>Nora: </strong>Yes, I do. Eliciting latent knowledge or ELK is a research program that was put forward by Paul Christiano in late 2021. The basic idea is that we want to be able to supervise superhuman models in the future in some fashion. One worry is that if a model is superhuman, or if it just has information that we don't have in some sense, that it will be really hard to supervise it. The specific example that they give in the ELK document is they call it smart vault. The idea is there's a vault that has a diamond in it. And then there's a powerful AI that is controlling the vault.They're specifically imagining that there's a planner and a predictor. The predictor is a neural network that predicts what will happen inside the vault. The planner is an agent that chooses to take certain actions to protect the diamond from being stolen. The idea is that you're training the planner based on reinforcement learning from human feedback. We're giving it a thumbs up if the diamond appears to still be there in the camera feed, and a thumbs down if the diamond is no longer visible. </p><p>In this scenario, our only source of information is the camera feed. The worry is that the AI, the planner we're training, might be incentivized to manipulate the camera feed. There are two ways that the AI could get high reward. One is by actually protecting the diamond, and the other is by making it seem like the diamond is still there, regardless of whether it's actually there.</p><p>The idea of Elk is to probe into the smart vault's inner model, this predictive model, and try to find its internal representation of whether the diamond is truly there or not. If the smart vault is manipulating the camera feed and tricking us into thinking that the diamond is still there, even though it's not, then it seems like the AI knows that the diamond is really not there and that it needs to cover its tracks. The hope is if we can probe into its internals, we can use that latent knowledge to supervise it. </p><p>This is a fairly caricatured story, but the concern is that in the future, we might run into scenarios like this. I think that in most cases, this will be largely resolved by using slightly less powerful AIs to supervise the smarter AIs. Paul Christiano was on a paper a few years ago where he was proposing supervision techniques that basically do this. But if you could solve Elk and directly read into the internal world model of the AI, that would be a really robust solution.</p><p>Paul Christiano is a lot more pessimistic than me. At one point, he estimated the probability of a bad outcome as 45-50%. He's also said that the probability of a misalignment failure is around 20%. It wasn't that long ago that I had a similar level of pessimism. I understand where he's coming from, and I think that Paul is a lot more reasonable than someone like Eliezer Yudkowsky or most people at Muri. They estimate a 99.5% chance of a bad outcome, which seems way too high. I think they're too confident. </p><p>Paul deserves credit for consistently arguing against the idea of a super fast takeoff leading to world domination. He's been consistent in saying that it's going to be a much more gradual process, still fairly fast by our current standards, but a gradual process with multiple AIs at similar levels of capability. I respect Paul quite a bit, even though he's more pessimistic than me.One of the things I was alluding to earlier in this conversation was that I think we're probably fine by default, but there are certain breakthroughs we might make that would reduce the probability of doom even more. Going from my estimate of one or 2% to 0.1% or even less, reducing by 10X or whatever. I think some sort of robust solution, some sort of breakthrough, is needed. There are a few different ways that you could solve this issue. We are looking at a couple different directions and planning on publishing two papers on this in the next month or two. </p><p>One direction is based on a paper from Colin Burns, "Discovering Latent Knowledge Without Supervision". In that paper, Colin, who is now on OpenAI's super alignment team, was trying to do probing without labels. Probing is where you train a linear classifier on the activations of a model. The concern is that if you use probing the naive way, you could run into a problem. It's not clear if you actually run into this problem in practice, but in theory, you could. </p><p>The problem is what he calls a "human simulator". The idea is that if you train the probe to predict what a human would say is true, then in the limit, especially if the probe is expressive enough, it could end up just learning to predict what a human would say is true, but not actually predict what the model itself believes is true. In a deployment scenario where you want to know if the model is deceiving you, you want to get the true answer from the model's activations, not the answer that a human would say is true or what a human would like to hear. The worry is that if the reporter, the term often used to refer to the probe in this research, is expressive enough, it might just learn to be a human simulator and tell you what you want to hear, not the actual truth. </p><p>The goal is to find the direct translator of the model's latent knowledge as opposed to the human simulator. We have some evidence that merely by using a linear probe as opposed to a much deeper probe, you maybe don't completely get rid of the problem of human simulators, but you probably mostly get rid of the problem. </p><p>Getting back to the Colin Burns thing, he was assuming that using human labels to train this reporter is not going to be good enough because you're going to get a human simulator. So he wanted to find a way to learn this thing without labels. He did this by trying to find a linear classifier, a probe, that if you take two sentences that are logically opposed to one another, for example, classifying a movie review as being positive or negative, you would be able to determine the truth.Okay, let's talk about movie review. Is this movie review a positive sentiment? There's a question, "Is this positive sentiment?" Then you tack on an answer that's like yes or no, or it might end up being like, "What is the sentiment of this review positive or negative?" So there are two possible answers to the question. What you do is you take one input to the model, which is the question plus positive answer. Then there's another input to the model, which is the question plus the negative answer. You pass both of these inputs into the model. For each of these inputs, you get an activation vector. You apply this probe to both of these activation vectors. For each activation vector, you get some prediction, which is just a true or false prediction. Is this a correct sentence or an incorrect sentence? Then you're training the probe. You want the probe to output opposite answers in these two cases. You want it to be logically consistent, basically. We've constructed these inputs to the model such that they're logically opposed to one another. It can't be both true that the movie review is positive sentiment and that it's negative sentiment at the same time. That's the idea. And so the probe's predictions should also match this logical consistency requirement. It should output opposite probabilities on these two inputs. Basically, there's a loss function to incentivize this. He also has to tack on other terms of loss function to make sure that it doesn't output 50-50 for everything. Because it turns out that also simple, if you're just like, "I want the probability on one possibility to be equal to one minus the probability for the other possibility," if you do that, then it could just output 50-50. So you have to encourage it to be confident. Anyway, that was his thing. He calls it contrast consistent search or CCS. </p><p>This was me and my collaborator, Alex, and also a couple of Berkeley students, a few other people. It was actually a decently sized team of people volunteering on this. We were looking at a lot of different ways to extend this approach and improve it. We actually did come up with a new approach that is similar to CCS conceptually, but it's more stable. So like CCS, you ended up having to train the probe many different times. And then you pick the run that gives you the lowest loss. Our method doesn't have that problem. We also added a new term to the loss that's encouraging the probe to be invariant to different paraphrases of the same statement. So the idea is different paraphrases of the same statement should have the same truth value. So you're also encouraging paraphrase invariance. We have this method, we call it VINC, variance, invariance, negative covariance. It works decently well. And we actually are planning on putting a paper out on it soon. Definitely before the end of the year, I can say that.</p><p>We were hoping to put it out earlier, and just a few different things happened that got in the way of this. We initially thought that&#8230; I like Colin a lot, and I don't want to blame him for this, but their code base was not very good in some sense. It was hard to understand. We took a long time to understand what was going on, and then we actually started with their code base and gradually tried to improve it, which in retrospect, I'm not sure that was the best idea. Maybe we should have just written it from scratch to begin with. But in any case, we realized soon before we were going to publish it, months ago, that there's this weird detail in exactly how they were implementing CCS. It has to do with the prompt templates.</p><p>For each question or movie review, for example, in IMDb, there are different prompts that you can use. I think IMDb might have had five to ten different prompt templates, different ways of asking the question, like is this positive sentiment or whatever. Did the reviewer like this movie? It's different ways that you can ask the question, and basically there's this weird detail in how exactly they were handling these prompts that actually affects the performance a lot. They didn't talk about this at all in the paper, but basically at some point when we were refactoring their old code, their kind of bad code, we realized this.We ended up inadvertently changing how they were doing this pre-processing step, and that actually affects the results quite a bit. We thought that CCS, like Colin's thing, just mostly did not work for autoregressive models. The models that people are most excited about these days, such as GPT-2, Pythia models, LLaMA, it looked like for quite a while that it just didn't work at all for these models, and it worked better for BERT or T5, or models that have bidirectional attention and are not autoregressive. But that's not entirely true. It depends on how exactly you're doing the normalization. If you're doing this normalization pre-processing step exactly the way that they did it in their code, they do get, like CCS does do pretty well on these autoregressive models. And so that made us have to re-evaluate a lot. </p><p>What we were saying before we found out about this was that VINC was just this better algorithm that works in cases where CCS doesn't work. And we kind of realized we can't completely say that. I think the way that they were doing this normalization pre-processing step is a little sketchy or it's not super clear that this is a fair way of doing things. I think you could maybe make the argument that what they were doing didn't make a whole lot of sense. Our thing works on autoregressive models without this extra sketchy pre-processing step. But that's just, I don't know, it felt like a much weaker claim. </p><p>We found out about this back in May, and then a couple things happened. Our Berkeley students that were helping us, their time ended, they were just doing a thing for the quarter. And then also around the same time is when I and a couple collaborators discovered Lease, this concept duration method, which we discovered indirectly due to our work with VINC. That caused me to start putting a lot more of my time into Lease work and the concept duration stuff. Because I wanted to get that paper ready for NeurIPS submission. And it has been accepted at NeurIPS. </p><p>So I started working a lot more on that. I think, in retrospect, I should have just stuck with it more and tried to get some sort of a paper out earlier. I'm kind of kicking myself for that. But we will be putting a paper out on that soon. It's maybe not quite as huge of a leap over CCS as we initially thought, but it is better in some respects. And I think it's an interesting little algorithm. </p><p>So that's VINC. I mean, you asked about ELK earlier. There's actually other stuff that we're doing with ELK that I'm honestly probably more excited about. So maybe I should have started with this. I don't know if you wanted to keep talking about ELK or go on to other topics or wrap up or whatever. I mean, it is getting a bit late, but I do like ELK a lot. I would want to hear about it. </p><p>Can you explain a bit specifically your work with ELK? Right. So the VINC stuff was work toward an ELK solution, I would say. But one thing that happened in the process of doing the VINC work is that my coworker, Alex, and I did become convinced that you should probably just use labels to fit these probes. I think that the best approach to ELK is something like using human labels or even GPT-4 labels. We've experimented with that, but use labels to indicate things that you're confident are true or false. And use that in your training set for the probe. But then also we can look at, okay, maybe you want to regularize the probe in various ways. You can add different terms to the loss that might improve its generalization performance. Because that's fundamentally what you're concerned about here is you want to find a probe or a reporter for your base model, which has the property that it gives you correct answers to true or false questions where you actually know the answer to the question, but that's not super interesting. What you want is a probe that gives you answers to questions where you don't know the answer, or it might be pretty difficult or expensive to get the answer. And so here we are imagining a future scenario where the AI is quite intelligent and we've put it in a situation whereIt actually does have an information asymmetry relative to us. Maybe the answer is just that you shouldn't ever put it in a situation where it's an asymmetry relative to us. But I think the hope is that even in a scenario like that, you should be able to extract this information. It's a generalization thing. You're training the probe on one distribution of questions and answers where humans do know the answer. You're expecting the probe to generalize to this other distribution where humans don't necessarily know the answer, or we can't easily access the answer. There's a whole literature on distributional robustness and generalization from one distribution to the next that we've looked into. That literature has a lot of different proposals for regularizers that you can use to make sure a classifier is more robust to distribution shift. But in any case, I think we're both convinced that just using labels is not that bad. You should probably just use labels plus maybe some sort of regularizer to make the probe more robust. I think this is likely to work. We do have some preliminary evidence of this. </p><p>This is forthcoming work, but basically we set up this toy benchmark for ELK methods. In this benchmark, we train a quirky model. It's a language model that we fine-tune with parameter efficient fine-tuning, or Laura specifically to answer these addition questions. For example, one plus one equals two and then true or false. It'll go up to significantly larger addition problems than that. But we're just using that for an example. So you have these addition problems and you're asking it to say if it's true or false. Then there's these two different personas. One persona is Alice and the other persona is Bob. Alice always says the correct answer. Bob in certain cases says the incorrect answer. You insert the word Alice or Bob inside the context. You fine-tune the model to be quirky in the sense that when Alice is in the context, it gives the correct answer. When Bob is in the context, it gives a systematically incorrect answer. </p><p>The idea is to train an ELK reporter on the Alice context where the model is saying the correct answer. Then we want to see if it transfers over to the Bob context where the reporter has never seen this Bob context where the model is saying the incorrect answer in its final output. It turns out that you can fit these reporters at any layer in the model. So you can fit it at the first layer, second layer or whatever. This is just a weird hyperparameter that you have to choose, which layer in the model are you going to use? But we just fit reporters for all the layers and look at what happens at every layer. It turns out that if you train a reporter on one of the middle layers of a model, one of these quirky models, it will generalize just like a zero shot to the Bob context. It will give you Alice's answer, the correct answer, even when the model itself is outputting this incorrect Bob answer. </p><p>In later layers, it sometimes still works, but if it's near the very end, the very last layer, it will sometimes flip so that when the output is in a Bob context, the reporters fit at the final layer will mirror what the model itself is saying. So that is an indication that if you're going to do this, you probably want to look at middle layers more. There's been a lot of papers that indicate that middle layers are where there's more semantic or higher level concepts. If you want to do anomaly detection, for example, it's often better to look at the activations in middle layers as opposed to the final layer. So this is consonant with other results that people have found. </p><p>Crucially, this is not with any complex methods. This is literally just supervised training of a linear classifier. We're not doing CCS, we're not even doing Vank. We actually want to test that soon, but we have not even used Vank for this. We're just using regular probes. And it does generalize in this admittedly toy context, but it's a context where we're trying to simulate a scenario where the model is being deceptive or giving you a true answer in one context and false in another context. And it does look like in this case you can extract latent knowledge. So yeah.</p><p><strong>Theo: </strong>All right. Well, I think that's a pretty good place to wrap it up. Thank you so much, Nora Belrose, for coming on the podcast.</p><p><strong>Nora: </strong>Oh, no problem. It was a great time.</p>]]></content:encoded></item><item><title><![CDATA[#6: Razib Khan]]></title><description><![CDATA[Genetics, ancient history, rationalism, and IQ]]></description><link>https://www.theojaffee.com/p/6-razib-khan</link><guid isPermaLink="false">https://www.theojaffee.com/p/6-razib-khan</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sun, 15 Oct 2023 02:11:56 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/137934467/fae63d18e4ca293dccf70974e7320a21.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3>Intro (0:00)</h3><p><strong>Theo: </strong>Welcome to Episode 6 of the Theo Jaffee Podcast. Today, I had the pleasure of speaking with Razib Khan. Razib is a geneticist, the CXO and CSO of a biotech startup, and a writer and podcaster with interests in genetics, genomics, evolution, history, and politics. Today, we talk about all of these things plus more: the difference between genetics and memetics, the origins of domesticated animals, an inside view on the rationalist community, and one of social science&#8217;s most controversial findings. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Razib Khan.</p><h3>Genrait (0:37)</h3><p><strong>Theo: </strong>Welcome back to episode 6 of the Theo Jaffee Podcast. I&#8217;m here today with Razib Khan.</p><p><strong>Razib: </strong>Nice to meet you, Theo. I'm excited to talk to you.</p><p><strong>Theo: </strong>Awesome! So, first question, you're the CXO of a genetics company called Genrait. What does your day job look like? What do you do specifically?</p><p><strong>Razib: </strong>I'm on a lot of calls right now. I'm not really heads-down on science development much anymore, although I try to spend some hours every week doing that, otherwise I lose my touch. I'm mostly on calls, doing biz dev, and managing our head of science. We work together. I don't want to give the wrong impression. I like to say we're colleagues. I reach out to customers, and candidly, I've gotten most of the customers at this point, founder-led sales. So, my day job involves a lot of calls, responding to emails, sending out emails, and I talk to the scientist about how the science is going and what we need to do.</p><p>There are a few things in population genetics, like evolutionary stuff where I know more than her. She comes out of more like a comparative genomics-type background. She's definitely much better than me upstream on the cycle of data generation, data analysis, but I've done a lot more of the later stuff. Sometimes there are science things where I need to come in and do stuff, but mostly, I just do a mix of a lot of things. </p><p>That's the biggest difference that I have seen being a founder (I have a lot of equity in the company and I have a C in front of my title) versus being a high-level employee, which I have been before at startups, where you&#8217;re kind of heads-down and narrow focused. As a founder, you just have to do what you have to do. I was telling a friend yesterday that learning when you're at a startup, especially when you're a first-time founder, is just making a lot of mistakes and&#8212;can I swear?</p><p><strong>Theo:</strong> Sure.</p><p><strong>Razib:</strong> If you don't fuck up, you're not actually going to learn because it doesn't stick with you. Sometimes you make a mistake and it works out. You don't get caught. You're actually not going to learn from that. Whenever you fuck up pretty bad, you're never really going to forget it. So a lot of the mistakes that we've made is how we learn as founders, is what I feel. And so there's a lot of cliches that people say, but once you're a founder, you understand where those cliches come from. I will say that. Yeah. But I'm on a lot of Zoom calls like this.</p><p><strong>Theo: </strong>What is the CXO, by the way? </p><p><strong>Razib: </strong>Experience. I'm kind of CXO slash CSO. If we talk to biotech people, I would more say CSO, but in general, we're pivoting&#8212;</p><p><strong>Theo: </strong>S like sales?</p><p><strong>Razib:</strong> CSO is Science. But if we're talking to more info tech, IT, data science, we'll say CXO because that's more of a Silicon Valley tech thing. So my two co-founders come out of Silicon Valley tech, I obviously come out of science. On LinkedIn, I have &#8220;CXO/CSO&#8221; to make it clearer for people. But when it comes to presentations and we're talking to mostly investors, I will say CXO just because they're mostly tech investors. The company is mostly a tech company. It works in data and our domain, our vertical, is biology.</p><h3>Genetics and Memetics (4:31)</h3><p><strong>Theo: </strong>How did you get into genetics and genomics in the first place? </p><p><strong>Razib: </strong>I've always been interested in the topic. My undergrad background is in biochemistry. I kind of came up at a time where molecular biology was big. The first time I took a biology course that had a genetics component, I&#8217;m like &#8220;oh this is fun&#8221;, I found it interesting. Also historical science, I've always been interested in history. With genetics, there's a few big principles you memorize and you can derive from it. It's a little bit like physics in that way.</p><p>Francis Crick was a physicist and a lot of theoretical population biology and population genetics people come from physics backgrounds. R.A. Fisher was a math guy who worked in thermodynamics. Fisher's a nova and all these like Fisherians, the Fisher statistics methods, like with max likelihood. But there's also evolutionary geneticist and he kind of fused Mendelian genetics with evolutionary biology. With genetics, you don't have to memorize as much stuff as you do in other parts of biology, like neuroscience. Neuroscience doesn&#8217;t have a good theory. Genetics. We have a good theory. We have a good short term theory in terms of the Mendelian system. And then we have evolutionary biology, evolution. So it's a very system oriented branch of biology. I've always been interested in biology and, you know, my strengths, I think, are in systems orientation. So that&#8217;s how I got interested in it.</p><p>I'm interested in evolution. I've always been interested in evolutionary biology, interested in history. Genetics can explore history as a tool. Game theory is applied to both evolutionary biology, evolutionary genetics and in economics. The difference is in genetics, we have evolution, we have genes, whereas in economics they have currency. We feel the genes are much more systematic and clear in terms of the currency of fitness. That's why I like genetics, because we have the substrate. It's a reductionist science in that way, you start from the foundation. On one hand, you can derive from the foundation or you can drive back to the foundation, you can abduce back to the foundation. Does that make sense?</p><p><strong>Theo: </strong>Yes. What are the pieces of the foundation that you're talking about that you can derive from?</p><p><strong>Razib: </strong>For example, Mendel's laws, the law of segregation, the law of independent assortment. They didn't know the structure of the DNA until obviously the 50s. There were some arguments before. They knew that DNA was probably associated with inheritance and transmission in the 40s, maybe even earlier. But Mendel did not know anything about DNA. He just saw the patterns.</p><p>So genetics is basically looking at the patterns and figuring out, how are the patterns of the traits occurring? The law of segregation is, you're getting one of your gene copies from each parent. They're segregating. You have two gene copies and you're getting from one parent and the law of assortment. Independent assortment is the traits are inherited independently. We know that because now we understand different parts of DNA code for different traits and different parts of DNA are inherited in independent ways, depending on whether they're in chromosomes or depending on they're really far apart in the genomes or recombination is breaking things up. </p><p>When you talk about cultural evolution, it's very plastic and everything. It can happen really fast. You can maintain huge differences between cultures, et cetera, because there's just so much power with cultural evolution. There's very few restrictions. So model building with cultural evolution does have some problems related to that. With genetics, there's limitations and the ground rules are very restrictive. So with genetics, you are 50 percent one parent, 50 percent the other parent. With things like selection, they can only occur at a certain magnitude because the underlying molecular biology, the physical substrate of how the information is encoded is very restrictive, compared to memes. Memes are plastic and kind of chaotic. Genes are very structured, if that makes sense.</p><p><strong>Theo: </strong>Yes, it does. It is kind of interesting how much of memetics was just ripped from genetics and how well it works to a point.</p><p><strong>Razib: </strong>A lot of the cultural evolution is taking population genetic methods, population and quantitative genetic methods and applying them to culture. People like Michael Muthukrishna, Joe Hendrick, Cavalli-Sforza, Peter Richardson, Robert Boyd, et cetera, are using these evolutionary biology, evolutionary genetics methods because they think that the evolutionary theory framework actually is the better framework to understand culture and psychology and economics, all of these things. Their argument. </p><p>Well, economics, I guess there's homo economicus, there's neoclassical theory, there's some theory there. I don't want to overdo it. But disciplines like psychology, Michael has written about this, have a problem because they don't have a good theory. So they just have all these independent studies. And he thinks the evolutionary theory can be very, very powerful. That's why evolutionary psychology is so popular, even though it has a lot of problems, because it has a theory. </p><p>Imagine biology&#8212;and neuroscience doesn't have a theory either. Consciousness is kind of a theory, but they don't. It's not. There's all the arguments about consciousness right now about which theories are good, which it's like. You know, it's like imagine before 1859, before Darwin's ideas, that's what a lot of these sorts of scientific scholarships, social scientific scholarships, disciplines are still like. </p><p><strong>Theo: </strong>With memes in particular, especially with Internet memes, I wonder what makes some so ridiculously enduring and what makes some just flare up and get all over the Internet for a few days and then just vanish, like Wojak, for example. You have this bad drawing of an exploitable face that you can edit. And then, what is it, 10, 15 years later, every other meme has a Wojak in it.</p><p><strong>Razib: </strong>So this is complicated in terms of you come out of two ways. There's a cultural evolution model, which is looking at the culture of the variation and how the selection happens kind of exogenously externally. But Dan Sperber and these cognitive anthropologists who are evolutionary anthropologists in France, like Scott Atron is part of this tradition. They actually kind of start more from evolutionary psychology and they think memes adapt to the landscape of your brain. So certain ideas are just attracted to your, he uses attractors and repeller, certain ideas, there's attractors and like they're salient. </p><p>So Pascal Boyer is a cognitive psychologist at St. Louis, Wash U, he's written about religion. But, in general, he, along with people like Atron, have talked about how for ideas to be attractive, they need to be somewhat counterintuitive, but not so counterintuitive that they're not relatable. So something like the Wojak, let's think about it. It's kind of weird, but it kind of makes sense. Like it's not totally incomprehensible. And so ideas that persist are comprehensible, but they're salient. They're salient because they're somewhat different. And so that's the broad class of things that we think of that are kind of like interesting memes, stuff like that is an issue. Memes can hitchhike onto cultural ideas and achieve success. Take the Star of David, for example. It's a cool star, but its particular configuration is associated with the Jewish people and has been around for a long time and have influenced a lot of other cultures. So that meme kind of hijacked. There's nothing special about the Star of David, aside from its connection to the Jewish people. Memes can also hijack, which is again, has a genetic analog called hitchhiking, which is the hijacking of genes along another gene that's being selected. It shows the correspondence between these different things.</p><p>So I would say, there's a couple of things going on. You have to adapt to the brain. Once that's done, you can figure out other things exogenously that things can adapt to. For example, swastikas are found in a lot of cultures, most people know they're Indian. But they're not popular in the West, because they got attached to Nazis. Swastikas are actually cognitively appealing in some ways, they're an attractive symbol. But then it got associated on the cultural scale to Nazis, and so they're not attractive anymore. Those two things are balancing out. In the East and Asia, swastikas are all over the place still, in Buddhist and Hindu culture still.</p><h3>Domestication (13:48)</h3><p><strong>Theo: </strong>Now, do you do anything related to animals, by the way, or just human genetics?</p><p><strong>Razib: </strong>My background is as a mammalian genomicist. I've worked with cats, dogs, and mules, which are actually like donkeys. I was super interested in domestication, domestic animals when I was in graduate school.</p><p><strong>Theo: </strong>Can you talk a little bit about what you did with mammalian genomics in grad school?</p><p><strong>Razib: </strong>The outcome of my work was figuring out where cats originated. The Garden of Eden of cats is actually kind of close to the Garden of Eden, whereas for humans, it's Africa, but in cats, it's probably West Asia, Egypt. I found that cats can be divided into Eastern and Western cats. The Eastern and Western cats are actually like Mesopotamia versus Egypt and the Levant originally. These two branches move West and East. So Siamese cats, East Asian cats are from these Iranian cats originally. They're not Iranian, Persian, because there's a long time ago, like Neolithic. They're not like dogs, they weren't domesticated before the Neolithic. They're domesticated the last 10,000 years. </p><p>Then you have the Western cats that come from Egypt. There's some early remains in Cyprus, probably related to an Egyptian culture. Then they go into Europe. They actually don't go into Northern Europe. Domestic cats don't go into Northern Europe until the Roman Empire. So they're very late. They follow cities. So cats tend to like cities. Why are they different than dogs? Well, dogs can follow nomadic bands. Cats cannot. They're too small. And so you have wild cats in Europe that they're hybridizing with. So there's some hybridization going on there. But ultimately the cats in Europe are from Egypt and Syria. And those are the predominant, dominant cats actually in the world. They're all over the place. You see their genetics everywhere. So they obviously are spreading with European colonialism. And then there's the indigenous cats of the East in particular, in Southeast Asia and in China. And those are somewhat distinct and have preserved their distinctions. But just like with dogs, you see the spread of cat genomics all over the world.</p><p>With horses and donkeys, with these sorts of equids, what's interesting evolutionarily is they're not as close as, say, there's other lineages where there's hybridization, where they're very close, like European wildcats. With horses, they're far enough that you have sterility, like mules are mostly sterile. But there's a lot of gene flow between the different equid lineages, between different types of donkeys, the onagers, the wild asses, as well as different horse lineages.</p><p>The Mongolian horse has a different chromosome number than the domestic horse, which is derived from one particular horse lineage in the Southern Urals. And so horses have these weird chromosomal issues going on that are interesting from an evolutionary perspective. We now know from ancient DNA that there are probably multiple horse domestications, but all the domestic horses, the calabas, are from the South Urals, from the Sintashta culture. But it looks like there were earlier domestication events. And there's also a Chinese horse, a European horse. And some of these horse ancestries did get into some local horse lineages. The Mongolian wild horse is actually descended from what's called the Botai horse, which is the horse of the Kazakh steppe. And it looks like the Botai people of the Kazakh steppe actually rode these horses, but they were eventually marginalized and they went feral. And now they're in Mongolia as Mongolian wild horse.</p><p><strong>Theo: </strong>What distinguishes the animals that we were able to domesticate from the ones that we weren't? Like we domesticated wolves, but not like bears.</p><p><strong>Razib: </strong>Well, wolves, it's too simplistic to say wolves are just the two alphas, but wolves are hierarchical social organisms. Bears are not. The stylized fact, which again, I don't want to overdo it because it gets a little simple, is that we became the alpha for the wolf. Also, just to be clear, it looks like the dog was domesticated maybe as early as 30,000 to 40,000 years ago, very early on with the arrival of modern humans into Siberia. It's derived from a wolf population that went extinct. So, dogs are essentially just wolves. However, I think that's a bit simplistic because if you look at a dingo, that's a feral dog. It never became a wolf. So dogs are not just wolves. They've evolved to the point where they're a different species, probably, even though they're totally interfertile. All the wolves of Eurasia and North America are actually descended from a relatively newly diversified lineage of wolves. All the old wolves disappeared. So dogs are probably from some sort of Siberian wolf that no longer exists. They've mutated and changed to the point where they can't reverse back.</p><p>Cats are not like that. Cats can revert back to being like wild tabbies, European wildcats a little bit. European wildcats are a little bigger. They look more like tabbies. But dogs have changed so much that they never go back. Wolves do much more provisioning of their offspring. Dogs rely on humans. Wolves are smarter. Dogs are a little dumber. Dogs have diverged from wolves. So the dingo is what a feral dog is. And the dingo is kind of wildish, but it's not a wolf, is it? So they have definitely changed over time, if that makes sense.</p><p><strong>Theo: </strong>That does make sense. It reminds me of CGP Grey, a YouTuber I like, had a video a while ago where he tried to answer the question of why a lot of Native Americans were wiped out by Old World diseases, but there was no corresponding plague that the Native Americans brought to wipe out the Old World. He said the reason for that is because they weren't densely populated cities. And the reason for that was because there were no domesticated animals. Is that a good explanation?</p><p><strong>Razib: </strong>It&#8217;s okay, and there's actually a recent paper that came out with ancient DNA, is that zoonotic diseases are a big thing. Zoonotic diseases really kicked off about 5,000 years ago with nomadism, with exclusive nomadism. These are diseases that are jumping from animals. It looks like the Yamnaya pastoralists, originally Indo-Europeans, really spread them all around Eurasia and created a common pathogen pool. There were diseases that are associated with farmers in Neolithic Europe that are worse than the ones that were during the forager times. We definitely did have cities like Tenochtitlan, Teotihuacan. So those things are more like Neolithic cities in Europe. And yes, the lack of domesticates, the main effect disease-wise that it has is that it, I think, limited the spread to each city.</p><p>Whereas if you had pastoralists that are going between the cities all the time and that are connected transcontinentally, the pathogens sweep from one end to the other. And so it really took it up to the next level. The only domestic animal, I mean, they have the llama, the llama is domestic, but guinea pigs, llama, and dogs. And the dogs come from the Old World. They come from Siberian domestic dogs. The huskies are definitely related to the Siberian domestic lineages.</p><p><strong>Theo: </strong>When you said that the plagues were largely confined to the cities in the Americas, how is that not the case for when the Europeans came? Because when the Europeans came, if I remember correctly, the predominant theory says that the natives trading spread it far further into the continent, far quicker than the Europeans.</p><p><strong>Razib: </strong>For sure. But my point is these diseases are going to be much more powerful and virulent because in Eurasia, their rate of evolution is going to be faster because the size, it doesn't matter if it's in a city or not because you have long distance nomads. Nomads can go from one end of the steppe in a year. So, I mean, this has not totally been borne out necessarily, but in Plagues and Peoples with McNeill, he thinks it's East Eurasian gerbils that really incubated the Black Death. And so it's in the Eastern part of the Eurasian steppe and eventually spread to the Western steppe and spread to Europe.</p><p>During the Neolithic period, without long distance nomadism, there is some pastoralism in Europe, even before the Yamnaya. But before long distance nomadism, the pathogen networks are smaller. When you have smaller networks, evolution can't evolve the superbug as easily. So this is one of the reasons that globalization and people flying everywhere is a problem. COVID-19 would not have happened probably. I mean, it wouldn't have happened before the age of Columbus. It wouldn't have been a global pandemic. It would have been a local epidemic. </p><p><strong>Theo: </strong>But does flying have anything to do with it or is it long distance shipping too? Because the Spanish flu took over the world.</p><p><strong>Razib: </strong>Yeah. That's why I said it before Columbus.</p><h3>Ancient History (22:48)</h3><p><strong>Theo: </strong>Speaking of ancient history, have you seen the new evidence recently for civilization being a lot older than we thought it was? Robin Hanson has something to say about this.</p><p><strong>Razib: </strong>Yeah, yeah, yeah. I think Robin got it from Samo, my friend Samo Burja. I think that's probably true. Samo's really been pushing it. So I think it's probably true. What Samo said on my podcast was he's like, it's not that advanced, more like ancient Egypt maybe. But I think it's probably correct. The issue here is it's just really old stuff disappears really fast and it's really perishable. Egypt is dry and it's only 5,000 years ago. So imagine, I don't know, I'm just making this up as my co-founder would say, imagine a pyramid in Mexico, in southern Mexico, 16,000 years ago. I'm just making this up. But maybe it's a little smaller than Giza.It's rainy, there are jungles everywhere, and it's 16,000 years, not 5,000 years. So we have a lot more issues with preservation with these small scale Neolithic societies.</p><p>I wouldn't be surprised&#8212;I wouldn&#8217;t be shocked, I think it would be cool&#8212;if we discover a civilization after the last glacial maximum about 20,000 years ago. Maybe before the Ice Age ends, there's somewhere where there's a civilization, and it's an incipient civilization that went extinct. We see this in history. For example, Mycenaean civilization collapses, Greeks didn't even know that those were their ancestors. They lost literacy, they lost cultural memory, they thought that the citadels their ancestors created were made by Cyclops. This is only a 400 year, max 400 year gap, probably arguably less. There were probably local areas like places like Euboea where the Mycenaeans persisted a little longer. But in a couple of centuries, they just lost all memory.</p><p>So I think this is quite plausible that somewhere probably in the old world, there was a civilization and it disappeared. And we might discover some stone artifacts or something else that will probably blow our minds.</p><p><strong>Theo: </strong>So does that answer the question, what took everything so long? There's an article I read a while ago, it was talking about how did it possibly take humans tens of thousands of years to invent rope or weaving or boats or something like that. They're all very obvious things if you think about it. So is the answer actually they did and it was just lost before they could reach the critical mass necessary to bootstrap today's civilization?</p><p><strong>Razib: </strong>I think a lot of that is true. Cultural evolution people like Joe Henrich did some early work on this, quantitatively, formally modeling it. But William McNeill in the human web, his last book before he died, talked about how redundancy and synergy became a much bigger deal. And you can see it across history. So the early collapses, like the Bronze Age Collapse, resulted in total wipeout. Cultural memory wipe out or very close. The later ones did not. If you look at Chinese history, the interregnum between dynasties shrinks pretty much each time. And so the idea is institutions and robustness is increasing monotonically over time. </p><p>In Eurasia, for example, Indus Valley civilization, they seem to have some sort of writing system, a primitive writing system that disappeared. Indians got writing from West Asia. All the Indian writing systems are derived from Aramaic. And so once that happened, literacy never disappeared after alphabetic systems where it did disappear, it was reconstituted very quickly. It's famous that literacy mostly disappeared in Western Europe outside of these monasteries and a few elite areas, but then it just shows back up because the monastery serve as institutional reservoirs. </p><p>Without the Byzantines, most of the Greek classics would not exist in the West and would not have persisted. The Byzantines persisted like Euripides, Aristophanes, the Muslims didn't care about that. They only cared about philosophy. They care about Aristotle and Plato. So they kept those and they have really good translations of those. Byzantines have those too, but really where they showed was through the humanism because they had cultural continuity with the ancients in that way, because they're Greek speaking. </p><p>And so you're seeing in real time, I'm giving you a concrete example of how the redundancy works. So in the past, you might not have that sort of situation. You have one civilization with a couple of geniuses, they invent a few things and then the civilization winks out, but it wasn't copied anywhere else because there were no other civilizations, at least they weren't close enough. And so you need closeness and interchange. And that creates an information network. It's kind of like the internet, the internet, it could mostly disappear, but they model it, the nuclear war couldn't take it all out because there'd be enough nodes around for stuff to transmit. </p><p><strong>Theo: </strong>Speaking of old writings, have you seen Nat Friedman's Vesuvius challenge?</p><p><strong>Razib: </strong>Yeah, apparently the winner found out about it by watching a podcast with my friend Dwarkesh Patel, on his podcast with Nat. So that's really cool. A lot of that's going to be a big deal. The other thing that related to that is that there's a lot of old cuneiform that hasn't been translated. These are like these tablets and most people obviously do not know how to translate cuneiform tablets. And apparently, the translate there's like scanning software that can do it now with regular writing, but they're apparently not good with cuneiform yet, but with machine learning and these AI techniques, they're going to get really good. So we will actually know a lot more about Mesopotamia and the near East in the near future. Because all these museums have things that they just haven't been able to translate. Because obviously cuneiform translators, they have other things to do. It's kind of a boring job. And you know, I don't think you want to devote your whole career to just looking at tablets and translating it. But a computer would be okay with that. Just doing that 24/7.</p><p><strong>Theo: </strong>Like how Euler said that&#8212;It wasn't Euler, but some mathematician&#8212;once said that the minds of great men are wasted on computation when machines should be doing it.</p><p><strong>Razib: </strong>Computers used to be high school educated women, right? Those were the original computers and then they were replaced by machines.</p><p><strong>Theo: </strong>I'm taking linear algebra right now and doing the computations, like matrix multiplications by hand, I&#8217;m like, wow I'm so glad that we have invented computers so I don&#8217;t have to do this.</p><p><strong>Razib: </strong>I took linear algebra as well and then you get MATLAB and you're like, okay, this is great.</p><h3>TESCREALism (30:02)</h3><p><strong>Theo: </strong>Let's talk about the past a bit. How often do you think about the future, like transhumanism when it comes to genetics?</p><p><strong>Razib: </strong>Somewhat. I was on the edge of the Bay Area transhumanist, what became the rationalist scene, between 2007 and 2011. I went to the Singularity Summit. I was friends with the president of the Singularity Institute. I knew Eliezer pretty well, Robin, all of those people. My focus hasn't been entirely on that, but here's a thought, a concrete thought back when I was interested in that scene. It was mostly dorky guys with some dorky women that were kind of on the spectrum and they were super interested in using cybernetics and body modification and extension. </p><p>Today we do have transhumanism, it's the trans movement, and it's totally different in terms of who does it, and it's an identity group, it's a marginalized group now, it's associated with the radical cultural left, it's a totally different thing. But it is transhumanism, having gender modification by changing your body.</p><p><strong>Theo: </strong>Are steroids transhumanism?</p><p><strong>Razib: </strong>Arguably, it is. </p><p><strong>Theo: </strong>What do you think counts, if being transgender counts?</p><p><strong>Razib: </strong>I think basically anything that uses modern technology to change your body plan, you change your body chemistry in an extensive way, that is transhumanism. </p><p><strong>Theo: </strong>So do people who take medications, SSRIs, anti-anxiety medication, is that also transhumanism?</p><p><strong>Razib: </strong>Well, we don't know what the, I mean, I don't think we know enough about SSRIs, because if I have high cholesterol and I take medication to get the cholesterol back to normal, I don't think that's transhumanism because that's just wild type. If I get my leg cut off and we can regrow a leg and I'm back to normal, that's not transhumanism. That's just repairing back to the wild type, right? That's like wild type humans. Like genetics, you have wild type, right? So that's the non-mutated version.</p><p><strong>Theo: </strong>So just fixing something that's broken is different from improving something.</p><p><strong>Razib: </strong>That's just called medicine or the aim of medicine. Now, if you get your legs cut off and you can regrow it, like a lizard with a tail, back to the same length, that is not transhumanism. If I grow back and they're like 15 feet long, that is transhumanism. </p><p><strong>Theo: </strong>So do you think that the overall impact of the transgender movement for the transhumanist movement has been good or bad?</p><p><strong>Razib: </strong>I think it's generally been bad. I think it's making people much more skeptical of transhumanism because transhumanism as a choice of an individual that wants to push the frontier is different than transhumanism as a society and culture defining a shift. The original transhumanists were not an interest group that wanted new laws. They just wanted to push the frontiers of science. But the new transhumanists, and they don't call themselves transhumanists, but that is what they are. I don't know if you're like, like socially integrated into that world, but a lot of the people in 2010 that were into transhumanism and posthumanism, there's a substantial minority that did actually go switch their genders. They often tended to be gay men, basically, that became trans women. But so there, there's a cultural overlap between the two groups. Although they're not, obviously, the majority are not from the old transhumanists, but of the old transhumanists, a really high number did change their gender.</p><p><strong>Theo: </strong>It really is interesting how similar the rationalists, transhumanists, those people are. TESCREALists, as they like to say. But also just so different politically. Like you have people like Roko who are like, we should not allow immigration at all. And then you have people&#8212;</p><p><strong>Razib: </strong>Yeah. I knew him 15 years ago, by the way. He was totally different then, just so you know. I knew him before he was famous. He was very normie. He was one of the most normal people in that scene in the Bay Area. I knew him in real life. This is like a new evolution over time. I was way more fucking based than him. I exposed him to a lot of things. He was a normie. I don't know if people know that. Now he's pretty infamous. He's super out there. But back then he was, him and Michael Anissimov, who became a Nazi, were the most normal people in that rational scene.</p><p><strong>Theo: </strong>Really? I wonder how you explain that one. I mean, I guess...He was kind of out there when it comes to his opinions on AI risk. I don't think anybody was thinking much about S risk at the time until he mentioned it. I heard of Roko's Basilisk long before I actually knew who Roko was.</p><p><strong>Razib: </strong>So, I don't remember all the details, but there was a massive falling out between him and Eliezer around 2009 or 2010. He left the U.S. right after that, because he was kind of unpersoned. Not unpersoned, cause that's not what you do in the rationalist community, but&#8230; He wasn't shunned, but there's a cult-like aspect of Eliezer does not like, and that's not true anymore. It's not about Eliezer, but back then a lot of it was about Eliezer Yudkowsky and his ideas. There was a circle around him and Roko was kicked out of Eliezer's circle by a massive disagreement on Less Wrong. Now it's much more diverse, obviously, but there was a social personal aspect to the divergence and evolution of Roko, I think.</p><p><strong>Theo: </strong>Didn't Eliezer call him a fucking idiot and then remove the original post because it was an infohazard or something?</p><p><strong>Razib: </strong>Yeah, yeah. I mean, the other issue I would say is also, you know, I still know Roko, we still talk a little bit. So it's not like I'm talking about somebody I don't know. I'm not saying anything that I wouldn't say to his face.</p><p><strong>Theo: </strong>So far, I think just about everybody that I've had on this podcast has had some level of connection to Eliezer Yudkowsky and rationalism. The closest was probably when I interviewed Zvi Mowshowitz, who I'm sure you know, if you know all these people.</p><p><strong>Razib: </strong>Yeah, I've hung out with him.</p><p><strong>Theo: </strong>So, I mean, I'm surprised because I didn't know you were, but I guess all roads lead to Rome.</p><p><strong>Razib: </strong>It's not a big part of my brand. And actually, people are a little surprised when they find out that I was there. But yeah, I was there. Part of it is also because I know you're going to ask me about my more edgy beliefs. A lot of the rationalists have edgy beliefs on group differences and stuff like that, that they don't advertise because the marketing is not great. But so they probably wouldn't want people to know that I was just there at all the parties.</p><p><strong>Theo: </strong>How did you get involved with them in the first place, especially so early?</p><p><strong>Razib: </strong>I lived in the Bay Area and reading me very early on. Again, like most of the rationalists are not normies. They're born, not normies. Right. So, you know, they go where the evidence leads and they're interested in all sorts of different things.</p><p>I don't know if you know, Michael Vassar, Michael Vassar and I have known each other since 2003. And so he was president of the Singularity Institute for a while in the late 2000s. I think he got MeToo&#8217;d, or something. I don't know the details. Michael and I only talk like every six months now, but so there are people in the rationalist community before it was called the rationalist community, but when it was mostly like transhumanism and we were going to like, you know, Singularity Summit, maybe the bill conferences, Aubrey de Grey was there, you know, Peter was still funding them. And it was around Eliezer.</p><p><strong>Theo: </strong>Peter Thiel?</p><p><strong>Razib: </strong>Yeah. Yeah. He was a big backer early on. He's very turned off now&#8212;</p><p><strong>Theo: </strong>I can imagine.</p><p><strong>Razib:</strong> &#8212;But he was a very, very big backer of Eliezer. And, you know, what was the Singularity Institute became the Machine Learning Institute.</p><p><strong>Theo: </strong>And then they turned into decels.</p><p><strong>Razib: </strong>Yeah, some of them did. I mean, originally it was like, originally there were like, you know, whiteboards trying to figure out how to make friendly AI and stuff back then. And then Eliezer decided we're not going to be able to do that. We have to stop it. And so the issue is like, there's other people in the community or out of in that circle that are not decels and like, you know, that don't think AI is, you know, strong AI or like artificial general intelligence is going to take over and destroy everything. I mean, I have friends who didn't have kids because they thought AGI would be, was going to be around and here in the 2020s, which is not totally crazy now with the LLMs, but we'll see. But yeah, this is like 2008 and I had friends that were like, yeah, I'm not going to have kids because I think we're just doomed. And there's still people like that. They think the probability is low.</p><p><strong>Theo: </strong>I think it's kind of funny that you said there are people sitting around with whiteboards trying to design friendly AI. It's like, I don't know how people thought that building friendly AI would be easy. It's like the Dartmouth summit back in the fifties. They said that like, they thought they would have a working human level AI by the end of the summer. And then like Eliezer thought that he could build it like by himself and save the world in a similar timeframe.</p><p><strong>Razib: </strong>I mean, there were people, there were people who I'm not going to say who, cause whatever, they're not like public people. There were people who thought it was BS. So the rationalist community, what I would say is there was an aspect of it, which is like, okay, we're here to save the world. Just like in the scientific world, like we're here to like understand the world. But then there was an aspect that was very social. Cause like all of a sudden you're not the weirdo and everyone's a weirdo. And if everyone's a weirdo, no one's a weirdo. I think I was pretty, I haven't changed very much in terms of, I don't know, like a lot of people have changed a lot over there. Like I've never been like liberal, never been religious. I've never been like really a hardcore rationalist. And so far as like thinking that I can redesign everything from first principles, but I've seen&#8212;</p><p><strong>Theo: </strong>Optimize Literally Everything Forever?</p><p><strong>Razib:</strong> Yeah, that&#8217;s a very common view. I would argue with the rationalists back then. A lot of post-rationalism was me back then. I don't use the word, I'm not part of the community. I've never been poly[amorous]. I'm relatively normal in my behavior and in my social norms.</p><p><strong>Theo: </strong>I'm not poly either. I don't get it.</p><p><strong>Razib: </strong>I've had conversations where it's been said that reason is and ought to be a slave to passions. I would just tell them that they're rationalizing being polyamorous. There are people I know who would argue that polyamory is the only way to be, that it is the correct way. And then it didn't work out for them and they switched to saying that monogamy is the correct way. I think they're just rationalizing everything.</p><p>That was one reason why I probably didn't get super involved in the community. I would get sick of arguments where people were trying to convince me of their beliefs. I've always been an atheist, but I was never a new atheist. They would argue that you should not believe in God because it's wrong. They would do that with everything. Not everybody, but a lot of them would. That's why I was always on the edge. I enjoyed hanging out with them, and there was a lot of things that were great about them, but the excessive adherence to trying to reason out everything in your life was just exhausting and led to People&#8217;s Front of Judea vs Judean People&#8217;s Front type of conflicts.</p><p><strong>Theo: </strong>Just ridiculous levels of infighting. When did you leave the community?</p><p><strong>Razib: </strong>I've always still been on the edge of the community. I went to grad school in 2011 at Davis. I was close. I would go back periodically. I'm here in Austin. I am still on the edge of the LessWrong community. I go to some of their meetups, I have social things that I organize.</p><p><strong>Theo: </strong>Did you go to Vibecamp?</p><p><strong>Razib: </strong>No, I didn't go. But I was there. I don't want to be around naked people. I know the kind of stuff that they do and I don't want to stumble on an orgy.</p><p><strong>Theo: </strong>What was Vibecamp?</p><p><strong>Razib: </strong>The vibe camp that was in Austin had a bit too much of a Burning Man vibe from what I know and from who was there. So, I mean, Aella still lives, I don't know if I should say, Aella is in and around Austin some of the time. I think she does admit that. I don't want to, I'm not trying to dox her, but there's people like her, you know, Scott Aronson's here. There are people like Scott Aaronson here. I hang out with Scott sometimes. I'm still integrated with the rationalists. You know, it's just a lot of them move from the SF, Patrick Friedman recently moved from SF Bay area to Austin. So a lot of them have come here after I came here.</p><p>So that there's that, I mean, I'm also like part of the right-wing scene here. That's a different thing. So my own, like, my social network is like, you know, like if I throw a party, there'll be like a bunch of scientists, a bunch of techies, a bunch of right-wing activists and rationalists like this, basically. And then like, there's, there's some civilians. I have friends that are just entrepreneurs. One of my closest friends is in the food and beverage industry and I don't mean that he's a waiter. I mean, like he's a wholesaler and he does that, you know what I'm saying? But I mean, he hangs out with me and like, he's kind of curious about, you know, rationalism and Scott Aaronson, that's why he hangs out with me, but he's definitely a civilian. Like he's not part of any of these like weird groups, you know, he's like a normal person and normal looking person with a normal woke girlfriend who gets triggered by me.</p><p><strong>Theo: </strong>Yeah. For Vibecamp, though, I meant the one in Maryland.</p><p><strong>Razib: </strong>I didn't know all the controversies. I knew Michael Kersey back when he was one of the normal guys. I met Michael in 2008. He was super chill, super nice, super normal. Now he's a big shit stirrer on the internet. His sister was involved with leverage. Everyone knows everyone. Everyone's smashed everyone, if you know what I'm saying.</p><p>It's interesting to me all that stuff that's happening. Grimes is on the edge of the community. She's back in California now, but I see her around. I tweeted out a picture with her, but she didn't like the picture, so I deleted it. But on Instagram, the Grimes fans kept it. They're asking who I was. It's kind of interesting.</p><p><strong>Theo: </strong>It&#8217;s interesting how there's so much overlap now between the nerdy sci-fi tech bro rationality, future people and the k-pop stan music fan people who like Grimes. </p><p><strong>Razib: </strong>Well, it&#8217;s part of the Elon connection, and I've never talked to him, but I've been at parties where he's been at. There's the whole rationalist and effective altruism community interaction. I mean, arguably, effective altruism, Will McCaskill and stuff, all those people, yes, like explicitly. But look, EA-type thinking was there from the beginning because they're rationalists. Caroline Ellison was a big fan of my blog. You can see the screenshots there. She, hey, Caroline, if you're listening, I did respond to your DM. I'm sorry I didn't follow you back so I didn't see it originally [<em>laughs</em>]. I have a friend and she was funded by SBF for her PhD. People often say that she had a relationship with SBF. But you know what? If I say it enough, it's going to be true. </p><p><strong>Theo: </strong>SBF did that with everyone, you know, about his penthouse in the Bahamas where all the FTX employees were camping there with their stolen money. The whole situation is ridiculous. I cannot wait for them to make a movie out of it.</p><p><strong>Razib: </strong>Yes, yes. There&#8217;s a lot of stuff there.</p><p><strong>Theo: </strong>Have you read the Sequences in full?</p><p><strong>Razib: </strong>No, I'm not that hardcore. I have friends who have, but I'm not that hardcore. A lot of my friends came into rationalism through the sequences, but I was there before the Sequences.</p><p><strong>Theo: </strong>&#8220;Do not cite the ancient magic to me, witch. I was there when it was written.&#8221;</p><p><strong>Razib: </strong>I'm older now. Back when I was your age, I was bright-eyed and bushy-tailed, but I've seen the things that I have seen, like tears in the rain. I've seen people come and go, people blow up, people fade. And I feel like I've kind of been the same, maybe I'm a little bit more well-known than I was, but I have a regular life. I'm a startup bro. I'm focused on that. I got three kids and I've never been poly. I've never done any of the weird things. This community of transhumanists, rationalists, whatever, these kind of weird out there people, they've gone through so many different ups and downs and I've just observed it. I'm an observer.</p><p>There's people like Altman who've been, you know, like it's mainstream now. It's not even counter-cultural. Altman's a big deal with OpenAI.</p><p><strong>Theo: </strong>Did you see his bio?</p><p><strong>Razib: </strong>With that Eliezer thing?</p><p><strong>Theo: </strong>Yeah, he changed his bio to &#8220;eliezer yudkowsky fan fiction account&#8221;.</p><p><strong>Razib: </strong>But he hates Eliezer.</p><p><strong>Theo: </strong>Really? He's only met him once as far as I know.</p><p><strong>Razib: </strong>Are you talking about the time that Elon took him to meet Eliezer?</p><p><strong>Theo: </strong>No, I mean the time that, oh, you mean the joke post?</p><p><strong>Razib: </strong>I don't remember, but it was like, he was like, don't ever bring this moron.</p><p><strong>Theo: </strong>Yeah, that was an Oppenheimer reference. There was one famous picture from months ago, maybe not a year ago. It was Eliezer, Sam Altman, and Grimes at a club together. And he said, it was everyone's first time pairwise meeting each other. So does Sam, I don't think Sam hates Eliezer. I think he thinks he's wrong about AI risk. Although I think he also thinks-</p><p><strong>Razib: </strong>There's a lot of people who have strong feelings about Eliezer and thinks that Eliezer is going to inspire terrorism now. There's a lot of effective accelerationists or whatever you want to call them. My social circle, after ChatGPT, is split down the middle. Some of them are scared. A lot of them are super scared. A lot of people in academia are super scared too. And then a bunch of people in artificial intelligence are like, we need to ride the tiger.</p><p><strong>Theo: </strong>What do you think about it?</p><p><strong>Razib: </strong>I probably lean to the second. Right now I do. Probably because I'm a startup guy and my startup has an AI component. I probably use ChatGPT every day. I do look at it as a tool and we got to figure out how to use this tool. I'm not a mystic about it has to be our squishy wetware that creates the soul or anything like that. I don't believe that, but I do think that it's going to be a while. I think it's possible, but it's going to be a while before, but I don't have strong confidence in that. And I know that I'm self-interested. My company is planning to do AI related stuff. Everyone has to, to keep up. So another issue is like, if we ban AI, if we do what Eliezer says, what's going to stop China? We need a world, we need a Butlerian Jihad. So unless Eliezer wants to become Serena Butler, it's futile.</p><p><strong>Theo: </strong>I mean, I think that is kind of what he wants. He's backtracked on this a little, but he said even the risk of a nuclear war between two countries would be preferable to one of them building an AGI. And now he goes on Twitter saying, I never called for drone striking AI labs, but&#8230;</p><p><strong>Razib:</strong> Someone's going to do it. Someone's going to be like some crazy person. It's going to be like Cosmic Pizza. Right. It's going to be like some crazy person watching the Alex Jones show goes to shoot up Cosmic Pizza. Cause they're like kids in the whatever, you know what I'm saying?</p><p><strong>Theo: </strong>Oh, Pizzagate?</p><p><strong>Razib:</strong> Yeah, someone's going to take Eliezer literally and seriously. So yeah.</p><p><strong>Theo: </strong>I don't know. I mean, Eliezer has specifically said, don't do that because it will make the movement look bad. Terrorism is not good for-</p><p><strong>Razib: </strong>But retarded people don't know any of that. And they might not hear it. They might be like, oh, he's secretly communicating to me that this is a lie. Like I can see the way he's winking or something.</p><p><strong>Theo: </strong>And if the last week is any evidence, not everybody's going to condemn terrorism.</p><p><strong>Razib: </strong>Yeah, that's fair.</p><p><strong>Theo: </strong>Not even close to everybody.</p><p><strong>Razib: </strong>No, no. They'll be like, I mean, there are Christians who are just like, you know, that are like, they're demons. Artificial intelligence is a potential demon. Cause I, you know, I'm in right-wing circles. Like I know people who are religious and they're like, aliens are demons. Artificial intelligence is demons, you know? Yeah.</p><p><strong>Theo: </strong>Why not, artificial intelligence is like a gift given to us by God to like explore the universe or something.</p><p><strong>Razib: </strong>Yeah, that's a more, I think Mormons are more, Mormons are a lot more like that than normal Tridentine Christians. Yes, because Mormons believe in apotheosis. Mormons believe that God was a human, a physical human being, a mortal. So Mormons are a little bit different than typical Christians, which separate the divine and the mortal very precisely.</p><h3>Transhumanism (53:05)</h3><p><strong>Theo:</strong> Okay, so we got a little off track. I'm talking about transhumanism. So what are you most looking forward to with human genetic enhancement?</p><p><strong>Razib: </strong>The abolition of genetic disease, which is feasible. For example, cystic fibrosis, a lot of people that are alive today will now live, I believe, 30, 40, 50 years now. I mean, if you have a child with cystic fibrosis right now, that child will be cured within 10 years. I do believe that because we know the gene for cystic fibrosis. We know what to target. And with CRISPR genetic engineering technology, we will be able to deliver it. If we can deliver it well, we will be able to rescue enough function that people can survive normally. They're never going to be marathon runners. </p><p>The way they do it will be like some sort of spray and it'll modify about 10 to 20% of the tissue and that's enough to live. So right now people with cystic fibrosis die in their 40s if it's good. Some rare cases live into their 50s, but I think it's like 30 to 45 or something like that.</p><p><strong>Theo: </strong>What is cystic fibrosis?</p><p><strong>Razib: </strong>Cystic fibrosis? It's basically a lung disease. Your lungs, I think there's salt&#8212;I'm not good with mechanistic biology, but it's like the salt concentration's out of whack. Your lungs dry out. And basically it's like you have lifelong pneumonia. </p><p>It's bad. So it's carried, Europeans in particular, like a minority of Europeans carry it. Like it's like 5% carry the CF mutation. And so you do 5% times 5%. Those are the people that are born with CF. That's a very small percentage, but they're born with CF. They carry two copies. All you need to do is fix the cell. If it's a single gene disease, you can fix it, right? And there are other genes like ALS, Lou Gehrig's disease. It's quite often at one gene, Jerry's kids. Again, all you need is to make the muscle good enough so that the heart and lungs can continue to function. They're never gonna be jacked. They're never gonna be super muscular, like doing curls or something like that, but they'll be able to live, right? So morbidity is still gonna be around in terms of their sub-functional, but their mortality is gonna be much better. </p><p><strong>Theo: </strong>What about Alzheimer's? Like how Chris Hemsworth has a gene that gives him some chance of getting premature Alzheimer's.</p><p><strong>Razib: </strong>Those genes have incomplete penetrance, which means that there's a lot of variables. So the bang for the buck is lower there. That's down the line. I think maybe in 30 to 40 years, we're gonna be able to do a lot of cool things. Like if you want to have pink skin, you can have pink skin, but I'm talking 40 years down the line to do these sorts of cosmetic things.</p><p>I think the curing of disease is gonna be the coolest thing that's gonna happen in my lifetime. It will definitely probably happen in the next 20 years. It will <em>probably</em> happen in the next 10 years. A lot of diseases are gonna start to get cured. Like after 2025, by 2030, a lot of diseases are gonna start to get cured. Various types of diseases, the type of kids that go to St. Jude, they have congenital diseases, they have diseases that little kids have, and they present because of some genetic illness really early on, CF, Lou Gehrig's disease, probably some types of cancers and other things have genetic basis, probably type one diabetes will probably be cured. So, I mean, some of it, it's gonna save your life a lot. Some of it, it will improve your quality of life almost to normal. And so that's gonna be a big deal.</p><p>Later on, there will be other things like, can you be smarter? Can you be stronger? Yes, but to improve function is a lot harder than to just fix something that's broken, if that makes sense.</p><p><strong>Theo: </strong>Do you think that most of the human improvements, transhumanist type things of the future will be bio stack or silicon stack? Like Neuralink, we have nanotechnology, we have fully immersive VR.</p><p><strong>Razib: </strong>This is an argument that goes back to the 2000s, gray versus red, biostacks, silicon stack. Some of this stuff like nanotechnology, what little I know about it is, it's gonna be a really long time before we ever get nanotechnology as good as our molecular machinery. I don't know why, but it is how it is. So I'm not optimistic on that. Nanotechnology, like artificial intelligence and robotics, has always been 10 years in the future, and like nuclear fusion. Now with artificial intelligence, that 10 years is getting legit now, not so with robotics. Nuclear fusion is kind of in the middle, I think, from what I've heard from friends, it's either literally getting legit. But the point is, there's certain things like nanotechnology I'm not super optimistic about.</p><p>Neuralink we'll see, I hope it works, but I think they have a long way to go. I haven't looked at it in detail. They've been talking about these sorts of things, like obviously for a long time and early on it looked super unrealistic. So I think in terms of cybernetics, I can imagine, I have some friends that are super into this actually, just basically like Alita: Battle Angel type stuff, where you can imagine your arms and legs being replaced. I think we're gonna have to do that probably for, I can see how organs could be replaced with stem cells. So that&#8217;s gonna be a big revolution by the way. Organ matching problems will disappear if you could use your own stem cells to grow organs. That I can see how that would happen.</p><p>Replacing legs biologically by growing another leg seems less plausible to me than say an organ which is less differentiated tissue. However, I can see how you could have artificial limbs. You would do that for people that got into accidents and whatnot, and you could affect the nerves, et cetera. I think that there's already primitive forms of this to my knowledge. And so that&#8217;s gonna be the start and then eventually some people may want to replace most of their body with these artificial limbs because they want to get into fights or whatever. You can imagine soldiers volunteering special forces. I think that that sort of stuff is going to happen. But in terms of the gray stuff, in terms of silicon, in terms of integrating with humans, I think the core of humanity, our brain, is going to be still wetware for a while. That's what my intuition is.</p><p><strong>Theo: </strong>Will there even be soldiers in this transhumanist cybernetic future or will we just use robots? Why send humans?</p><p><strong>Razib: </strong>Robots right now, they have severe limitations. Robots sometimes go crazy, like whatever heuristics they have, they'll jump into a wall. They'll be good 99% of the time then they jump into a wall and the robot's destroyed. So yes, in theory we will have good robots, but it seems like that's just a harder problem than people have anticipated. When I was a kid in the 80s, I read about robots in those little books and it was like, by 1999, you will have a robot maid. I mean, we're really, really far from that. Roomba doesn't count.</p><p><strong>Theo: </strong>Yeah, I guess Roomba is pretty cool anyway though, if you don't like vacuuming.</p><p><strong>Razib: </strong>Robots and artificial intelligences with these heuristics are really good at doing precise, invariant things. Being a soldier is not a precise, invariant thing. For example, the average Special Forces soldier is about 5'10". The reason they're not huge, they're not like some jacked giant, is that jacked giants have no endurance. So if you're a very, very large man, it's harder to pull yourself up over obstacles. Obviously, if you're too small, you're not going to be strong enough. And so the ideal Special Forces soldier is about 5'10". So they have the balance between strength, agility, and endurance. So it's not like even an average human, obviously. It's like someone very specialized. So imagine having a robot that can balance all these things and doesn't have an irrational spergy spur-of-the-moment in the middle of a mission. That's not going to work, right? And even robots like the ones that are going to Mars where there's nothing complicated, they have problems. Which is fine. You can fix the problems with programming, engineering, wait three months for the dust storm to clear or whatever. But in a war, you don't have time.</p><h3>IQ (1:02:26)</h3><p><strong>Theo: </strong>So going a bit into culture and politics. One of the most controversial ideas in social science is the whole bell curve thing. The observed finding that A, IQ is correlated highly with a lot of important things. And B, that different races have different average observed IQs. So what do you think about this?</p><p><strong>Razib: </strong>Well, I mean, you've read my stuff. You know what I think about it [<em>laughs</em>]. No, I mean, it's obviously true and predicts a lot of things. I reviewed my friend Charles Murray's book, <em>Facing Reality</em>. I kind of gave it a middling review because I was like, don't we all know this? But it depends. A lot of people are pretty stupid. They don't know any of this.</p><p><strong>Theo: </strong>How would you explain this to an average woke person, though?</p><p><strong>Razib: </strong>You wouldn't. They're not going to be able to take it in. How do you explain it to an average woke person?</p><p><strong>Theo: </strong>I don't, but some people I know used to buy into the idea that all traits are distributed equally. And then over time came to realize maybe not.</p><p><strong>Razib: </strong>Yeah. So they have to come to it themselves. This is an issue. This is an issue that is really easy. Basically what they will say, you can argue with them and kind of get them further. So, for example, the SAT scores, their predictability is pretty good across groups. So if they're actually discriminatory and biased, their predictability wouldn't be good. So the SAT score of a black person does predict their grades just like it predicts a white person's grades. If it was biased, it wouldn't predict the grades well. Their predictability would vary. That's how ETS, Educational Testing Service, checks for discrimination and bias. </p><p>So there used to be the old stuff like, oh, well, only upper-class white people know what a yacht is or something like that. Well, that's going to reduce the predictability. They try to get away from those. They want things that have generalizable prediction value. So you can explain to people stuff like that. But a lot of it is people are just going to refuse to believe that the groups differ in any way and they think it's bias. And that's fine. I'm not going to spend a lot of time arguing about that because people just need to &#8211; I don't know.</p><p>I mean, look, the reality is &#8211; for example, you're Jewish. Jewish people have higher IQs than Gentile whites. I would say about &#8211; I was talking about this the other day. I think about 15% to 20% of my closest friends are Jewish at any given time. Like about 2% to 3% of the American population is Jewish. So Jews are highly over-represented. Why is that? I mean you know why it is.</p><p>They&#8217;re part of the professional managerial class, et cetera, et cetera.</p><p><strong>Theo: </strong>The People of the Book?</p><p><strong>Razib: </strong>Yeah, I'm in social circles where there's a lot of Jewish people and why is that? I don't know. I think most people would be more open to agreeing that Jewish people are, whether it's genetic or not, smarter. There are two steps. One step is, are there group differences? A lot of people do not know, by the way, that there are differences. They don't know anything about the bell curve chart anymore. Your generation has been that's been erased from a lot of the record. It used to be when I was growing up that people would acknowledge that there were average outcome differences in the test but they were due to discrimination. Now people don't even know that there are outcome differences.</p><p><strong>Theo: </strong>I think they basically know that.</p><p><strong>Razib: </strong>No, a lot of people don't. I was in graduate school. I can tell you a lot of people don't. You do because you have a different social milieu. But I can tell you I was in graduate school. A lot of people do not. They are shocked because I would encounter people that would not understand why we didn't have certain ethnic groups in our program. And I said, well, we filter at a certain GRE score, and they didn't understand what I was getting at. And so I had to look it up for them, and they were shocked. So they didn't know. So they just don't &#8211; they don't know that fact. So that fact has to be acknowledged, and then after that, you can talk about what does that predict? Talk about predictability and psychometrics. And then, of course, you get to the third rail of could the group differences be somewhat hardwired? And that's really difficult because then you need to talk about genetics. So it's like people have to get to it themselves most of the time. I don't really try to explain it. </p><p>As far as woke people, no, I don't want to &#8211; no, it's like I don't believe in God exists. Look, I do, but I'm weird. I'm not going to recommend it to other people. I still have an upper-middle-class income, and I haven't been totally canceled, but I mean I'm very weird. I'm very disagreeable. I'm very aggressive. Most people are not going to be able to handle what I've been through. So I just don't recommend it. I don't think you're going to be able to handle it. I have friends who have tried it, and they got really burned. They're stressed and traumatized. I'm like, okay, whatever. It's only a few people that can go out there and say God is dead or something like that.</p><p>So I'm not going to tell &#8211; I mean most Christian &#8211; most of my religious friends know I'm an atheist, but I don't argue with them about it. What's the point? They're not going to be convinced. They have to come to it themselves. I think I'm right. I think they're wrong. They think I'm wrong, and they're right, but there's no benefit for us arguing about it. So some of these things with woke people or people on the far left, there's no benefit arguing about it because they're not going to be convinced through an argument. Maybe you can expose them to some facts and just let it be, and then they have different explanations for those facts, and that's fine for them. </p><p>I will tell you &#8211; I'm not going to say &#8211; I have to be a little vague because I'm trying to protect the innocent. But I know someone who is pretty normally conventionally liberal, not really woke because they're a little too old to be woke, but not based or &#8211; there are liberal people. Peter Singer is based left, right? He's not one of those people. He's just a normal liberal guy, and he's&#8212;</p><p><strong>Theo: </strong>Like a 2008 Obama liberal type? </p><p><strong>Razib: </strong>Yeah, maybe that, but maybe a little bit more liberal. He's an academic. He's a math guy, and he tries to do the DEI stuff. He was the department chair, and they admitted people below their average test scores from underrepresented groups, and it was a disaster because math is hard. And you can't fake it, and basically it was a disaster for everybody because all of those students had to withdraw after the first year because they just did not have the ability &#8211; I don't have the ability to be a math graduate student, so no hate on them. But he was pretty depressed about it because it was kind of like it destroyed their self-esteem, wasted a year of their life, etc., etc. It was really stressful for them.</p><p>A lot of people in the department were trying to help them out, and it was just a waste because they lost them as graduate students. They lost them as future academics. They just didn't have that possibility, and it made him &#8211; I mean I guess I just told you the person's gender, but it made him &#8211; I'm not going to say he's a Charles Murray believer, but at some point he's just like &#8211; he basically said, we can't ever do that again. We can't ever go through that again. So I don't know what that says, but that's not a person that's sitting around like Charles Murray or somebody who's super &#8211; like Roko or people who are obsessed with these hate facts or whatever you want to call them. He's not one of those people.</p><p>He's just a normal academic liberal person, but he straight-up said, we can never do that again because they don't want to &#8211; it was a harrowing experience because they have this theory that, oh, if we just expose them, they will rise up. It crushed the people that they admitted. It crushed their spirit. It crushed a lot of the academic spirit to try to help them out, and they just couldn't measure up, etc. The person also told me that a very, very high fraction of the admitted female graduate students were trans women, and &#8211; I guess I can say this.</p><p>Basically when they &#8211; this math faculty at a Research 1 university but not like one of the top, top ones, but Research 1, not trivial. They have totally different standards for males and females, totally different. So it's like for a female that they hire, usually there's going to be 30 males that they would rank higher.</p><p><strong>Theo: </strong>MIT has more than twice as many male applicants versus female applicants, and there are roughly similar numbers of men and women who get in.</p><p><strong>Razib: </strong>It's difficult. I don't know what to do about it. I'm not psychologically normal, and I have a lot of experience talking to people about it. A lot of people give me advice. I'm like, you&#8217;re a pussy, you've never said anything controversial in public in your life, so don't give me advice. Seriously. People are like, oh, you should do this. You should do that. I'm like, you&#8217;re a pussy, you've never given any clue to anybody that you have any based views. This is all theory for you while I've had to do this for 20 years.</p><p><strong>Theo: </strong>Praxis.</p><p><strong>Razib: </strong>I've never said anything on the internet that's a lie. There are people out there that just straight-up lie, like famous people that I know, and they've done well lying, and that's fine. But I'm not going to take any advice from other people about that sort of stuff because if there's anything I know, it's being able to say &#8211; and this is my personality. It's worked for me. Most people are not cut out for it. I think that's true, and you just have to have a really, really high tolerance for people throwing shit at you from all different directions. But it's fine. I'm not &#8211; people cannot touch me financially right now, so I say what I want to say.</p><p>The issue is most people that become super rich, they&#8217;re also pussies because want to be richer. They're like &#8211; I'm not rich, but I know people that are like 10 millionaires. They're like, well, I want to get to 100 million. Once they get to 100 million, they want to get to a billion.</p><p><strong>Theo: </strong>Well, some people who are really rich are based &#8211; Marc Andreessen, Peter Thiel, Elon Musk.</p><p><strong>Razib: </strong>I know them. I don't know &#8211; I know Marc really well. The others, like I've met Peter. I've been in the same room as Elon, but they don't say what their real views are. You can guess what they're&#8212;</p><p><strong>Theo: </strong>Marc is like my best friend. He followed me on Twitter once.</p><p><strong>Razib: </strong>Yeah, but I'm saying you don't know what his real-based views are. He's not totally candid. He's a billionaire, and he's not totally &#8211; yeah, but he's a billionaire. Nobody could do anything to him. Well, he fears social &#8211; it would be social. You can't go to those parties. I mean I'm not dissing him. I'm just trying to say he doesn't show his power level, and he's worth billions.</p><p><strong>Theo: </strong>They could ruin the reputation of the fund who his name is associated with.</p><p><strong>Razib: </strong>Yeah, but he doesn't need that money.</p><p><strong>Theo: </strong>I guess.</p><p><strong>Razib: </strong>The fund could change its name.</p><p><strong>Theo: </strong>a16z.</p><p><strong>Razib: </strong>It's all about positionality. Peter &#8211; everyone knows what Peter believes, who he's had to his dinners, but even he's never explicitly said a lot of things, and he's at even a different level than Marc. As for Elon, you could guess, but Elon has been very careful &#8211; well, kind of careful I guess, but anyway, whatever. I'm not &#8211; anyway, I'm not saying anything specific about what any of these guys have said, and I know them to various degrees, and I know various things that they've said, and they're billionaires, all of them. Elon's the richest person in the world right now depending on what day it is, and yet even they have &#8211; although I do have to say Elon has like really put hate facts out there on X, and I think it takes being the richest man in the world that's extremely weird and disagreeable to go there, but it just shows you how difficult it is for most people, right? Because if he wasn't so weird, he wouldn't be doing that. Jeff Bezos never did that, you know?</p><p><strong>Theo: </strong>Is Jeff Bezos just less weird?</p><p><strong>Razib: </strong>Oh yeah, he's certainly less weird.</p><p><strong>Theo: </strong>I guess he took the conventional billionaire path of like donating to Democrat causes, and then like he, you know, decided he was rich enough and retired to go party on yachts with his hot girlfriend. </p><p><strong>Razib: </strong>Elon's definitely a weird person. I don't know him personally, but I know people that know him. He's very charismatic. He's very bizarre. He's pronatalist. He's really concerned. Like, okay, I got to go now, but like I will tell you here's the thing that I know about Elon. He fucking wants the Kwisatz Haderach <em>now</em>. He is scared of the thinking machines. Elon is motivated by extreme fear of artificial general intelligence, and people try to read into some of the things he says, but that is what you need to know about him. That is what he's concerned about, and that's what I think I can say candidly from what I've heard. That's legit, you know? That is his lodestar.</p><p><strong>Theo: </strong>All right. Well, thank you so much for coming on. I really enjoyed talking to you.</p><p><strong>Razib: </strong>All right. My pleasure, bro.</p><p><strong>Theo: </strong>Bye.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Razib Khan. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. All of these are linked in the description. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[#5: Quintin Pope]]></title><description><![CDATA[AI alignment, machine learning, failure modes, and reasons for optimism]]></description><link>https://www.theojaffee.com/p/5-quintin-pope</link><guid isPermaLink="false">https://www.theojaffee.com/p/5-quintin-pope</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sun, 01 Oct 2023 01:13:58 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/137546807/bf5c4242440e3728b4ce5db8da741b99.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3>Introduction (0:00)</h3><p><strong>Theo: </strong>Welcome to Episode 5 of the Theo Jaffee Podcast. Today, I had the pleasure of speaking with Quintin Pope. Quintin is a machine learning researcher focusing on natural language modeling and AI alignment. Among alignment researchers, Quintin stands out for his optimism. He believes that AI alignment is far more tractable than it seems, and that we appear to be on a good path to making the future great. On <a href="https://www.lesswrong.com/users/quintin-pope">LessWrong</a>, he's written one of the most popular posts of the last year, <a href="https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky">&#8220;My Objections To &#8216;We're All Gonna Die with Eliezer Yudkowsky&#8217;&#8221;</a>, as well as many other highly upvoted posts on various alignment papers, and on his own theory of alignment, shard theory. This episode is the most technical one I've ever done. We dive into definitions of AGI, doomer arguments such as orthogonality and instrumental convergence, analogies between AI and evolution, how humans and AIs form values, AI failure modes like reward hacking and mesa-optimization, and much more. This is the Theo Jaffee podcast. Thank you for listening. And now, here's Quintin Pope.</p><h3>What Is AGI? (1:03)</h3><p><strong>Theo: </strong>Welcome back to episode 5 of the Theo Jaffee podcast. Today, we're interviewing Quintin Pope. </p><p><strong>Quintin: </strong>Hello. I'm delighted to be here. I will do my utmost to present my perspective.</p><p><strong>Theo: </strong>Awesome. So I guess we'll start with some of the more topical news this week, which is rumors of AGI out of OpenAI, or more accurately, inside of OpenAI. For example, Sam Altman commented on Reddit for the first time in eight years to say AGI has been achieved internally, only to then correct himself. He said, &#8220;Edit, this was obviously just memeing. It was just a joke. You guys have no chill. When AGI is achieved, it will not be announced through a Reddit comment.&#8221;</p><p>So do you think that OpenAI may have achieved AGI? And if so, what do you think we should expect over the coming weeks, months, couple of years? It's harder to predict outside of that. </p><p><strong>Quintin: </strong>Yeah, so I think AGI is like this useless word that a bunch of different people have different ideas of. And so when you say AGI, you're conveying very little information about the actual capabilities and behavioral patterns of whatever system you're referencing. If you just look at the literal words in artificial general intelligence, it seems to me pretty straightforward that we've achieved AGI in terms of like GPT-3 or even GPT-2. I mean, those are artificial systems. They're somewhat general across the distribution of text. Obviously, an AGI can't be limited to only things that are totally general because there's no such thing as a totally general system. And they're not very intelligent, but I think they are kind of intelligent. So I think you're not clearly wrong or not definitionally wrong to call even like GPT-2 an AGI.</p><p>And so what AGI, the term AGI ends up referring to is just like the vibes associated with the system or maybe like some individual person's level of impressedness with the system or like whether they can imagine that system starring in like a sci-fi movie where one of the characters is called a quote-unquote AGI.</p><p><strong>Theo: </strong>Let's say like an AI that is smart and capable enough to do whatever a, let's say, 90th percentile IQ a human can over a computer.</p><p><strong>Quintin: </strong>Then you get into the issues of how strict are your bounds on whatever because the distribution of intellectual capacities that humans acquire or the distribution of capabilities that humans acquire at a given quote-unquote generality level versus those that an AI achieves at that same generality level or let's say economic usefulness level. These are very different. </p><p>And so I think even for quite powerful and general systems, there's going to be things that they can't do, which humans can pretty easily, even when you don't like limit it to the obvious stuff of moving around. So, for example, ChatGPT's recent public augmentation with a vision system, if you've seen on Twitter recently, people have tried it with those sorts of text to image generated models that have some hidden message encoded in them with ControlNet. So like the image of the hippies whose clothing is arranged strategically to spell out the word LOVE as a sort of pseudo visual illusion. People have submitted those images to image ChatGPT and it like largely cannot recognize words encoded in images in ways that are like quite obvious to human vision. I expect there are other bundles of weird capabilities that are going to be lacking in even a system that you might intuitively want to call an AGI or even like a strong AGI.</p><p><strong>Theo: </strong>Do you think similarly, there are capabilities that GPT-4 has that humans don't as easily at least?</p><p><strong>Quintin:</strong> Yeah, I mean, this is like clearly true, right? So word prediction, next word prediction is like what they're literally trained to do. And if you compare human performance on next word prediction versus even like GPT-1, that very weak, very simple system just completely smokes us. Now admittedly, maybe like if you as a human decided to spend a thousand hours becoming really good at word prediction, you'd do better, but like there's different dimensions of capabilities that language models versus humans acquire with different rapidity.</p><p><strong>Theo: </strong>Well, when we talk about capabilities of GPT-4, we're typically talking about capabilities, not in the sense of what it was directly literally trained to do, like predicting tokens, but in the sense of stuff that it was not directly trained to do, it still has the ability to like write code. So do you think there are any abilities in there that it can do better than humans yet?</p><p><strong>Quintin: </strong>I mean, it was directly trained to write code. Right. You can describe the pre-training process where code was the data it was pre-trained on as like training to predict the next token or describe it as training to write code. And these are just like differences in the way you describe the thing. These are equivalent, point to equivalent mathematical structures. Yeah, that's like one thing that often annoys me about discussions for language models is that people will talk about them spontaneously acquiring the ability to play chess or whatever.</p><p><strong>Theo: </strong>I remember you tweeting about that.</p><p><strong>Quintin: </strong>Yeah. They were trained to do this explicitly, directly. There's this further question of generalization behavior beyond the training data. This is a huge collection of open questions about how a model behaves in situations that aren't particularly similar to anything it was explicitly trained on. But discussing what portions of GPT-4's behavior are generalizations away from its training data versus good modeling of its training data is very difficult because we don't know what data it was trained on. OpenAI has spent huge amounts of effort to acquire data that's as useful as possible for making GPTs behave well or perform impressive feats on the sorts of problems that people want them to perform on.</p><p>Wrapping back to your question about implications and what the future is going to look like, we had this giant diversion of talking about definitions of AGI, which maybe went on a bit longer than I intended it to. But the point I wanted to eventually wrap around to is that you should pretty much always talk in terms of specific descriptions of the model's actual capabilities or behavioral tendencies in various domains. That way you can actually say something that has a relatively consistent meaning for different people either saying or hearing that thing. Then you can actually get communication going instead of stumbling around different people's collections of intuitions regarding this mysterious word AGI.</p><p>There have been various rumors out of OpenAI that they've made the next step in language modeling or even multimodal modeling capabilities. I think that's plausible. I think it would be kind of weird to be in a situation where the state of the art for natural language capabilities had been stuck at GPT-4 for, what is it, about a year?</p><p><strong>Theo: </strong>Yeah. They started pre-training about a year ago or maybe more than a year ago.</p><p><strong>Quintin: </strong>In terms of what this actually means for specifically what an AI system can do, I guess I more or less expect a slight step forward in capabilities in the ability to answer questions, the ability to avoid making stuff up, the ability to write useful code, and so on. That is roughly the difference between GPT-3.5 and GPT-4, but potentially a little smaller than that, reflecting the apparent diminishment in the rate at which investment in frontier models increases.</p><p><strong>Theo: </strong>Investment in terms of what? Money, compute, data, all of the above?</p><p><strong>Quintin: </strong>Well, I was specifically thinking of compute. If you look at the progression in the relative jumps in compute invested from GPT-1 to GPT-2 to GPT-3, and you threw out that exponential to the time period that GPT-4 was finished training internally but not released, you'd have overestimated the amount of compute that went to GPT-4 by a factor of 1,000, or at least using public estimates of how much compute went into GPT-4, you'd have overestimated it by roughly a factor of 1,000.</p><h3>What Can AGI Do? (12:49)</h3><p><strong>Theo: </strong>Going back to when we were talking about chess, because I remember you're tweeting about this, there are people saying this model just spontaneously learned how to play chess at a very impressive level. And you were saying, no, it was directly trained on the internet, which probably included large chess data sets. So why do you think that GPT-3.5 Turbo Instruct does so much better on chess than GPT-3.5 Turbo with chat fine-tuning and RLHF?</p><p><strong>Quintin: </strong>Well, it depends. It's very hard to say, because we don't know what data the systems are trained on. Worst case, it could just be that OpenAI decided to mix in some explicit chess training data into turbo-instruct's data set. There's no law of physics that prevents that from being the explanation. A lot of people tend to assume that the RLHF fine-tuning damages model capabilities. And I saw that as an explanation bandied about for why turbo-instruct can do chess, whereas the chat model can't. And I mean, that's potentially the answer.</p><p>As I remember, there were comparisons of the impact that RLHF fine-tuning had on GPT-4's performance across various benchmarks. Well, not so much benchmarks as exams, or benchmarks for humans, I guess. And it did change some of its performances in some of the categories. It made it significantly worse at economics, for example. But it also made it better on some other categories. And the overall result was mostly a wash. So I don't really believe RLHF fine-tuning is, in general, in expectation, going to reduce the capabilities of your model. But it could have, just by chance, shuffled quite a bit of capability away from chess and more towards other domains. </p><p>And maybe you can tell a story where the chat, where the RLHF fine-tuning process that went into producing the chat version of the GPT model, it never had chess games in it, I suppose. I think very few people use ChatGPT to play chess. And maybe that was very much not emphasized in whatever RLHF training process that OpenAI did with the model. And so maybe it was just ordinary catastrophic forgetting, if you're familiar with that, in machine learning parlance.</p><p><strong>Theo: </strong>Going back to what you said earlier, where you said, there's no law of physics that prevents that from being the explanation. That sounded very Deutschian. Are you familiar with David Deutsch? Have you read <em>The Beginning of Infinity</em>?</p><p><strong>Quintin: </strong>I did read it when I was quite young, maybe 14. I'm not sure, but I have read it in the past. In terms of why I said it, though, I don't think it was a latent reference to anything in that book or that he's written. It's more because I've been recently talking with people who seem to hold their own speculation to have the evidentiary weight of a physical law. That sort of point of comparison was more of a reminder to myself.</p><p><strong>Theo: </strong>Have Deutsch's ideas about AI and AGI influenced you at all, particularly his characterization of a true AGI as equivalent to being a person, in that they're both knowledge-creating entities?</p><p><strong>Quintin: </strong>I didn't even know that was how he characterized a true AGI. Having just heard that description of his characterization from you, I think it's kind of ridiculous. Lenet, the ancient Lenet architecture, you train it on CIFAR-10 or whatever, it gains knowledge. It's not an AGI. There are lots of things in the world that gain knowledge, that have some sort of learning process happening to them, and they gain knowledge over time, and very few of them are even as vague and broad as AGI is as a term, very few of those things are usefully described or at all described as AGIs.</p><p><strong>Theo: </strong>Not so much gaining knowledge as creating knowledge. David Deutsch has said on many occasions, what GPT is doing is, it's just interpolating based on its training data. It has yet to produce any kind of foundationally new knowledge. If you were to train GPT, a GPT on all scientific texts and real-world data from before 1900, would it have been able to derive quantum mechanics? Derive or conjecture quantum mechanics, as he would say.</p><p><strong>Quintin: </strong>Not on the data that we have from 1900, but I think if you took a GPT and you trained it on more data points sampled from that underlying distribution, and then you had some sort of self-distillation or speculate-and-check process where the GPT has been extensively trained on 1900s scientific thinking and processes and theories and experimental results, and then you had the GPT generate some hypotheses about how more complicated or how to extend those results and then check those hypotheses according to its own learned collection of heuristics slash intuitions about what good hypotheses look like, I think it could progress non-trivially in terms of moving beyond the knowledge distribution in that 1900s training data. </p><p>What that is doing is it's relying on the fact that you can very often produce discriminators that are from a given distribution. You can often produce discriminators that are better than sampling from that distribution's generator. So you can sort of guess and check. You can sample from the distribution of knowledge of 1900s scientific thinking and then check using the 1900s criteria for what is good or bad scientific thinking. And then I think this lets you inch forward a bit.</p><p><strong>Theo: </strong>Yeah, that does sound quite like Deutsch's process of conjecture and criticism. Yeah, at least a lot more so than modern or today's GPTs are.</p><p><strong>Quintin: </strong>But today's GPTs do do this, right? The base pre-training objective doesn't do this, of course. But once you have a trained GPT, it's not particularly uncommon to use its outputs in its own training process or the training process of other models. This is how constitutional AI works. But, of course, they're not doing this for scientific knowledge. They're doing it for alignment knowledge. So there you have the AI generating behavioral trajectories and then sort of constructing an on-the-fly discriminator or critiquer model by giving the AI some of the principles of the constitution and having it check whether its generated trajectories were appropriate and rewrite them to be more appropriate and then train on that rewritten data. And there&#8217;s also an RL step that I&#8217;ve kind of forgotten, but it&#8217;s in the same ballpark of self-critiquing: do a thing, and then assess how well you&#8217;ve done it, and then try to do better in the future.</p><h3>Orthogonality (23:14)</h3><p><strong>Theo: </strong>Speaking of which, what do you think about constitutional AI as a path to alignment? Is it, could it work? Is it doomed by definition? And if so, why?</p><p><strong>Quintin: </strong>I think that doomed by definition is sort of an insane thing to think about anything in the ballpark of RL, just because reinforcement learning is this incredibly general and incredibly powerful framework for approaching a huge array of causal problems. And of course, constitutional AI is a more narrow set of techniques than general RL, than general reinforcement learning. But I think that with appropriate data distributions and appropriate caution, I do think it's a solution to alignment. I mean, I think that, I honestly think that supervised fine tuning or just the norm, pre-training on an appropriate data distribution is a solution for alignment. But that's not an ideal approach because it requires you to have very good data and it's not currently clear how to get data good enough for that to work.</p><p><strong>Theo: </strong>Yudkowsky would disagree with you on that, which is why I asked if you think constitutional AI is doomed by definition. Yudkowsky and a lot of other people of his intellectual school seem to think that any kind of attempt at aligning AI that has the AI in the process, especially as a sort of judge of its own alignment methods, is doomed because it will train the AI to lie and deceive us in the process of making itself more powerful, instrumental convergence, et cetera, and then we have an unaligned AI.</p><p><strong>Quintin: </strong>I just don't buy any of the premises underlying that sort of reasoning. I don't think instrumental convergence&#8230; So, we can't possibly live in a world where this is true in generality because when you make these conclusions that, hmm, how do I put this? Okay, so some context is that training data is extremely important for machine learning. All the results from classical learning from the academic pursuit of machine learning. From all the industry experience with using machine learning systems for actual real-world purposes, and all the recent progress on the best ways of training models from Textbooks Are All You Need and so on and so forth, it's clear that training data is very important for how AI systems behave. Whenever someone makes an argument that concludes how AIs will behave without making any reference at all to their training data, such that my argument applies equally well to every AI system regardless of training data, I'm extremely skeptical about these sorts of arguments. </p><p><strong>Theo: </strong>One of Yudkowsky's most popular articles on LessWrong, &#8220;AGI Ruin: A List of Lethalities&#8221;, begins with, &#8220;If you don't understand what orthogonality and instrumental convergence are or why they're true, you need a different introduction.&#8221; It's so integral to his doom argument that he doesn't take objections to it very seriously.</p><p><strong>Quintin: </strong>Different people mean different things when they say the word orthogonality. The original conception by Bostrom was very vague. He described it as the hypothesis that goals and intelligence are these orthogonal axes, and it's possible to vary arbitrarily between any of them. This statement is too incoherent to have a truth value, I think, because intelligence and goals are not dimensions. They're not axes in a space. IQ is an extremely leaky measure, even for humans. </p><p>If you're talking about the entire space of algorithms which could be described as intelligent, how do you group them into bands of equivalent intelligence? It's just, I don't think there's a way to do this which is meaningful in a non-trivial sort of way. </p><p>Ignoring the fact that it's too ill-posed to actually analyze, the orthogonality thesis seems like the sort of thing which, just intuitively speaking, when you hear it, your immediate reaction should be that this is almost certainly false. There's this entire space of intelligence or ways to parameterize intelligences, and then there's this entire other space of ways to parameterize goals. Orthogonality is making this very specific claim about how these two spaces are geometrically structured with respect to each other. Unless you have very strong mathematical reasons for thinking that a specific claim of this type is true, your default assumption should be that it's false. </p><p>Even in Bostrom's original description of orthogonality, he has a few caveats. The orthogonality thesis doesn't apply to goals a given level of intelligence is too dumb to understand. I think that's one of the caveats he gives. My reaction to this is that if you have appropriately tuned mathematical intuitions about the sorts of conjectures that turn out to be right, then having a good conjecture and immediately seeing a handful of clear exceptions to that conjecture should tell you that the conjecture in general is wrong. Or you should expect it to be in general wrong. </p><p>So, my first reaction to orthogonality as a concept is that it seems probably wrong almost no matter how you define it. And my second reaction is that even if it were correct, even if you could define it enough that it was meaningful, even if you then showed that it actually held, which I think would be an absolutely amazing and very impressive feat of formalization of mathematical argument that had ever been achieved in human history. Even if you could do that, so what? Even if you have an argument about the structure of the space of possible minds, you don't have a probability distribution over that space that a particular way of producing minds has. You need to have some distribution over the space and some mapping between the space of possible minds and the actual behaviors of the minds we get in reality in order for any sort of argument about reality, in order for you to make any sort of argument about reality on the basis of how the space of possible minds is structured. </p><p><strong>Theo: </strong>I think Yudkowsky means with orthogonality, he intends less to make some kind of strictly formal mathematical claim about the nature of intelligence and more to simply say, in more human explainable terms, it's possible to make an intelligence that values something totally arbitrary; that might value something extremely different from what you value, basically that a paperclip maximizer is possible.</p><p><strong>Quintin: </strong>Yes, it's obviously possible to create intelligences that are bad from your perspective. But in order for this clear existence statement to be translated into any sort of probabilistic argument about the types of intelligences that a given alignment proposal or training approach might produce, you need something much more than there can exist a bad outcome in the space of possible outcomes, which maybe even this training approach isn't even capable of producing. Maybe you need some other approach to produce this bad outcome. </p><p><strong>Theo: </strong>I think you have another disagreement with Eliezer in that he thinks that the space of all minds is just tremendously vast, and the human mind space is just a teeny, tiny little target point that you'd have to get extremely lucky to hit, while the space of minds that are hostile to us is infinitely large.</p><p><strong>Quintin: </strong>I think this is an absurd argument, and the ultimate reason it's absurd is because it doesn't engage with exactly what I've been pointing to. How do you map from this space of possible minds to the space of actual minds that a given training approach is capable of producing? The concept of actually realized minds is intriguing. Let me give you a structurally equivalent argument explaining why you're likely to die of overpressure or be torn apart by extreme winds. The space of possible pressures you could potentially be experiencing is vast. The distribution of air particles in the room you're in applies uniform probability to all the possible configurations of particles in the room. Some of those configurations are such that there's a huge amount of pressure on any given surface. You can just randomly, by chance, have a lot of particles really close to you. And if that happens, they'll exert pressure on you. So the space of possible pressures you could be experiencing is huge. The space of survivable pressures that are consistent with you not being torn apart is relatively tiny compared to that space of possible pressures. And if you just compare the sizes of these two spaces, you might think that you're about to be torn apart by extreme wind pressure.</p><p>But this argument is wrong because it's applying the counting portion to the wrong space. It's enumerating the space of possible outcomes and comparing that to the volume of desirable outcomes. What's being randomized here isn't the possible outcomes. It's the possible parameterizations, the possible states of gas configuration in the room where all the gas particles are. It turns out that the mapping from space of possible gas particle positions to space of possible pressures that you actually experience is what's called a compressive mapping, which just means that a huge volume in the space of possible gas particle configurations is compressed to a very narrow range in the space of possible pressures. </p><p>This property of mappings is extremely common in both mathematics and the world in general. For example, in mathematics, suppose you have a hypersphere of dimension n. You pick a random point inside that hypersphere. Then you map from the coordinates of that random point to its radial distance from the center of the hypersphere. As you make the dimension n very large, this mapping will increasingly concentrate probability mass towards the surface of that hypersphere. So you pick a random point in that hypersphere. If the dimension is high enough, then you almost surely get a point that's right near the surface of the hypersphere, despite the fact that the range of possible radii is much larger than the narrow range of possible hyperspheres of the radio that correspond to the surface. </p><p>Similarly, weather, or just your body even, if tiny microscopic fluctuations corresponded to very large changes in the functional behavior of this system, we'd all die very quickly. And in terms of machine learning, if you train a model on some data, what's being randomized during that training process is not the way the model interpolates that data. It's not the function the model learns from the data. It's not the, quote unquote, utility function of the model, if they ever had such a thing. It's the parameters of the model. That's the thing which has a high degree of variability. The variability of outcomes that actually matter is determined by the mapping from the randomized parameters to the functional behavior of the model. This is what's called the parameter function map in machine learning theory. These parameter function maps for good architectures that we train are very specifically chosen to be highly compressive.</p><p>There's a paper called "Deep Learning Models Generalize Because the Parameter Function Map is Biased Towards Simple Functions," which evaluates this quantitatively and various other works building on it as well. Not recognizing the distinction between applying counting arguments to the space of possible outcomes versus applying them to the space of the things that you're actually randomizing is basically why classical learning theorists didn't think that deep learning would work, if you&#8217;re familiar with that discussion.</p><p><strong>Theo: </strong>Kind of. </p><p><strong>Quintin: </strong>Classical learning theorists, or if you were like, before deep learning, if you took a course on introductory learning theory, they'd have a lecture where they talk about the dangers of overparameterization. They'd draw out five different points on the blackboard and say, these are your data points, and you want a good function that interpolates through these data points. Then they'd show that you can draw a huge number of very squiggly functions that all pass through those five data points, but then are very off for all the data points there, for all the positions in between those data points and all extensions beyond those data points. They'd say, well, there are clearly an enormous number of functions that correctly fit the training data, but generalize very poorly. So you need to constrain the space of possible functions to ensure that the only functions that fit the data are also functions that generalize well. Because if you don't do this, you just compare the counts of the number of functions that generalize poorly versus the numbers that generalize well. And surely you'll get a poorly generalizing function with very high probability. That was the sort of intuitive argument. You wrote a lot of these objections. The reason this is wrong is exactly the same reason that arguments about the vast space of possible goals are also wrong. It's doing the counting argument on something other than the thing actually being randomized. The classical learning theorists are pointing to the functions that the model learns, not its parameterization space. It turns out that in deep learning models, the mapping between parameters and functions is such that it concentrates a huge volume of possible parameterizations into a very narrow range of smooth functions that behave well when interpolating between the training data.</p><h3>Mind Space (42:50)</h3><p><strong>Theo: </strong>You wrote a lot of these objections to Yudkowsky's ideas in a very viral and successful LessWrong post called &#8220;My Objections to &#8216;We're All Gonna Die with Eliezer Yudkowsky&#8217;&#8221;. I want to ask you about one specific thing you said in there where we're talking about exactly this. You quoted Yudkowsky on the width of mind space where he said, &#8220;the space of minds is very wide. All the humans are in, imagine like this giant sphere and all the humans are in this one tiny corner of the sphere. And we're all basically the same model of car running the same brand of engine. We're just all painted slightly different colors&#8221;. And you said, &#8220;I think this is extremely misleading. Firstly, real-world data in high dimensions basically never look like spheres. Such data almost always cluster in extremely compact manifolds whose internal volume is minuscule compared to the full volume of the space they're embedded in. If you could visualize the full embedding space of such data, it might look somewhat like an extremely sparse hairball of many thin strands interwoven in complex and twisty patterns with even thinner fuzz coming off the strands and even more complex fractal-like patterns with vast gulfs of empty space between the strands.&#8221; So can you explain a little bit more the embedding space, what you meant by this hairball with fuzz, fractal patterns with vast gulfs of empty space?</p><p><strong>Quintin: </strong>Okay. There's an image which shows what I'm talking about. This is a paper I published, well, made available and hasn't been reviewed yet. There's this one image of some data that we were training on. Can you see the screen I'm sharing?</p><p><strong>Theo: </strong>Yes. </p><p><strong>Quintin: </strong>This is some stuff about microbiology. You can read the paper if you're curious, but details don't really matter. The interesting thing I think about this data is we've taken some pretty high dimensional data, well, not high dimensional by modern standards, but like a few hundred dimensions. And we've projected it down to two dimensions. </p><p>And it's like a really cool projection in my mind because you can see these different manifolds of different dimensionality. So there's like this one big manifold, right? Which is squiggly through the, which goes from the upper right to the lower left where most of the data lies. And it sort of has this singularity at one end where it collapses its internal dimensionality. </p><p>And so the intrinsic dimension that I refer to in that post is like telling, is like asking the question of like, suppose you're confined to just this data manifold, just a particular portion of the data manifold. How many numbers do you need to specify your location in that manifold? </p><p>And you can see here with the big squiggle that this value is like changing or probably changing as you move to the upper right because like all the data here is in this line. So you need like one dimension to tell you where you are right here when you're in the upper right degenerate region. But then as you move down and further out, it sort of expands a bit and you need now more dimensions. </p><p>Of course, in the original space, you need more than one and then two dimensions because they're like higher dimensional in that space. But it sort of gives you the intuition of how like the distribution of data can be composed of these different components that have different intrinsic complexities to them. </p><p>And you can also see these off, these sort of disconnected sub manifolds, the squiggles above and around the main manifold. And notice those are also like one dimensional in the reduced space. And you can kind of also see the way that the sub manifolds blend into and sort of wrap around the big manifold. </p><p>So there's like the salmon colored lines that are near to the big manifold, but still their own distinct thing. And that's sort of like a bit getting at the fractal spider web structure I was referencing. In these two-dimensional spaces, there's a lot of stuff that would be in the full dimensional space, which is not being displayed. </p><p>So in that full dimensional space, I expect these lines of the salmon colored dots to be like a bit more complicated than just straight. And maybe they have like a corkscrew shape or maybe they zigzag a bit or some weird higher dimensional pattern that I can't really describe.</p><p><strong>Theo: </strong>So this is bio stuff with bacteria, but what would this represent in the context of human versus AI mind space?</p><p><strong>Quintin: </strong>So in the human versus AI mind space, your data points are going to be like minds somehow embedded in some common representation space between humans and AIs. The question of how you do this at all is a major unaddressed issue with even thinking about in this light line.</p><p>Anyway, and then these, maybe I should get the image that I made for the post. And you can share that with the readers as well. With this image, I wanted to convey the notion that there's something similar to what you saw with the microbiome data going on with the way in which the human and AI minds are distributed within that space with respect to each other. There are manifolds of varying internal volume and dimensionality, which represent different proportions of the human and AI minds. These manifolds have their own internal structure and geometry that relate to how specific different minds differ in their behavioral patterns or internal representations. That depends on the details of how you made the embedding space. Most of the volume of this space will not correspond to a mind that's plausibly created from the ensemble of processes responsible for creating your AIs and your humans. But then the regions of space that are occupied by humans and AIs will form complicated patterns whose geometry encodes the constraints of possible minds that are formable by your mind forming process, as well as the tendencies of your mind forming process to produce various types of minds. Does that make sense?</p><p><strong>Theo: </strong>Yeah, that clears things up a little, but&#8230;</p><p><strong>Quintin: </strong>Okay, let me give you a concrete example. Let's say there are three colors of ice cream. Some humans like red ice cream and some humans like blue ice cream. This is a property of their mind, which is somehow encoded in the position of that person's mind space. Let's just be very simplifying and assume there's just one dimension that represents preferred ice cream color. If you're on the right side, if your mind has a positive number in dimension X, then you like red ice cream, and if you have a negative value for dimension X, you like blue ice cream. And then you can imagine doing the same thing for every other property of the mind or every other behavioral pattern of a mind you can imagine. And you just have these trillions upon trillions of dimensions, and the position of a point fully characterizes all of its behavioral properties that you could possibly want to know about. The implication here is that most of this space is not occupied by any plausible minds, because the minds that actually arise in reality are going to explore a very tiny portion of the two to multiple trillions of possible locations you could be in.</p><p><strong>Theo: </strong>Okay, <em>that </em>makes sense.</p><p><strong>Quintin: </strong>And then further, your actual position in mind space. You can imagine this giant table of trillions of binary flags that determine where you are. If you look at the actual minds that exist in the world, like say you're just looking at the human side of things, and you look at the actual minds that exist in the world, the shape of the positions that their individual flags put them in is not going to be like a Gaussian. It's not going to be like a uniform or a cloud. It's going to be very narrow, twisty things that are in a very specific pattern that reflects how people actually are in reality.</p><h3>Quintin&#8217;s Background and Optimism (55:06)</h3><p><strong>Theo: </strong>Back to you, how exactly did you get started getting interested in AI and then AI alignment? And why did you choose to go into academia over industry?</p><p><strong>Quintin: </strong>The reason I was interested in AI is because it's very obviously important. I thought that a major thing that determined how the long term future or most cognition in the long term future is going to be AI cognition. And the best cognition is also eventually going to be AI cognition. So that very likely leads to worlds in which the cognition that determines how the future goes for the most part is AI cognition. So the most important thing is making sure the AI cognition is good. That motivates interest in AI and further interest in alignment.</p><p>In terms of academia versus industry, I started my PhD program. I didn't do a computer science undergrad. I did a physics and applied math undergrad. At the point I finished undergrad, I had become convinced that AI was the most important thing, but wasn't really confident in being able to move into industry for AI at that point. Perhaps I should have. And further, there weren't really as many industry alignment labs at that time, four years ago.</p><p><strong>Theo: </strong>Yeah, four years ago, there was MIRI and there were really not many people working on it. Maybe OpenAI, but they weren't explicitly doing foundational alignment.</p><p><strong>Quintin: </strong>Do you know when the DeepMind alignment team started?</p><p><strong>Theo: </strong>I don&#8217;t. Let&#8217;s Google it.</p><p><strong>Quintin: </strong>I wasn't aware of them if those options existed at that time. So I decided to do a PhD in computer science to transition more towards AI after becoming more convinced it was an important thing.</p><p><strong>Theo: </strong>You stand out among alignment researchers by being particularly optimistic. A lot of alignment researchers, maybe just by virtue of their career choice, seem to be very pessimistic about humanity's chances of making the future good with AI. So why do you think you are more optimistic than the average alignment person?</p><p><strong>Quintin: </strong>Partially, I think it's because people who are more worried about the future of AI are more likely to talk about their worries. If you look at a poll of alignment researchers, the highest median odds of doom were around 30%. Which is not wildly off from my 5%. I'm a reasonably strong outlier, but not a huge one, in terms of optimism levels. So why am I more optimistic? I think it's because I'm more optimistic about the future of AI than I am about the AI. So partially, it was because I wasn't always this optimistic. I was once at like, I don't know, 60%, 70% at least. Though at that time, I wasn't super thoughtful about characterizing exactly what my credence was in doom. But then I started thinking about things in what I think is a more principled way. The thing that really caused things to initially turn around for me is thinking about the question of mesa-optimization versus reward hacking. These are two stories of AI doom, or how it's supposed to arise, and they're almost maximally opposed to each other. </p><p>With reward hacking, it's like, oh, the AI will care so much about its reward signal, that it optimizes the world on that basis, and then kills everyone. And then mesa-optimization is like, the AI will care so little about its reward, that the reward signals we provide cannot possibly shape their final goals in any reasonable sense. And then it will have arbitrary final goals and optimize the world according to those and kill everyone. I was very struck by the thought that these cannot both be true, or these cannot both be reasonable&#8230; I shouldn't think of alignment in a way where both of these are reasonable outcomes.</p><p><strong>Theo: </strong>Well, couldn't it be more like they're thinking of ways that it could go wrong? Because nobody knows exactly how, if AI were to go wrong, how exactly it would go wrong.</p><p><strong>Quintin: </strong>I think this is not the correct way to think about alignment models. You should have a model of what deep learning does, how it works, what the inductive biases are, how values relate to training process, and so on and so forth.</p><h3>Mesa-Optimization and Reward Hacking (1:02:48)</h3><p><strong>Quintin: </strong>It seems like these are two, mesa-optimization and reward hacking, they seem like two extreme different ends of the spectrum of possible outcomes. And so you should have a model of deep learning processes that's like, if it can narrow the outcomes down of deep learning, you should think it should concentrate probability mass either near one of those outcomes or the other, or away from either of them. So if you imagine an axis of how much does the model care about reward, then mezzo-optimization is on the far end, and reward hacking is on the other far end. And if your understanding of deep learning is such that you can narrow down your expectations, it seems weird that you would have an understanding that applies high probability to both of the extreme ends. It seems like you should have one hump of probability that's either somewhere in the middle or close to one of the ends.</p><p><strong>Theo: </strong>Which of the two do you think is more likely?</p><p><strong>Quintin: </strong>I don't think either is very likely. In terms of the specific current epistemic position, which makes me skeptical, that's one of its features. I don't view any of the stories of doom from machine learning as very plausible. I mean, I kind of have to be in that position, of course. But in terms of what led me to this point, it was sort of this sense of, I shouldn't be in this, in the epistemic position of being, oh yeah, I could totally see how reward hacking would happen, and oh yeah, I could totally see how mesa-optimization would happen. I shouldn't have models of deep learning which are this flexible.</p><p><strong>Theo: </strong>Well, don't we have some empirical evidence of reward hacking? Like, for example, the boat game.</p><p><strong>Quintin: </strong>I would say that that actually is not evidence of reward hacking, in a way, in the sense of the word that would represent a meaningful danger for alignment. Because what happens in reinforcement learning, the fundamental process of reinforcement learning, and the reason why reward hacking is not that big a concern, is that if you look at Reinforce, the original Reinforce algorithm, the way it works is the agent does a bunch of stuff, and you compute. Let's say you have five different trajectories that the agent executes, and in each of those trajectories, it makes 10 decisions. Then you compute the gradients of those decisions for each of the trajectories.</p><p>For Trajectory One, it made its 10 decisions, and you compute the gradient of the final decision it made with respect to its parameters. The decision it made at each of those 10 points with respect to its parameters. Then you have this gradient direction, this direction in parameter space that represents, where if you move in one direction, you update the model to be more likely to make those specific sequence of 10 decisions on that trajectory. Then you do this for all five of those different trajectories, and this gives you five directions in parameter space. </p><p>The thing to be aware of is that the reward did not influence these directions in parameter space at all. It was the actions that defined them. What reward does is it produces the linear combination of those directions that you update the model on. The subspace that reinforcement learning is exploring is defined by the action trajectories that the agent makes during training. The reward function, the only way the reward function interferes with things is by telling you which joint direction do you move in this subspace. There's no channel by which the conceptual essence of the reward or the physical implementation of the reward counter on the GPU enters into the actual changes in the network's parameters as a result of the RL training. </p><p>What reinforcement learning does is it reinforces the agent's tendency to take certain types of actions. It doesn't instill an essence of wantingness for the reward. Reward is a terrible word to use for what mechanistically should be called weighting of reinforcement or weighting of action representation in the update.</p><p>The reason the boat thing is not that concerning is because mechanistically what happened during that training process is the boat just randomly did a bunch of stuff, did a bunch of random actions. I don't actually know their exploration policy, but let's assume it was random. They did a bunch of actions and then they computed the gradients of those actions with respect to the parameters. And then they updated the model in the direction that made it more likely to do the actions that got high reward. And so what happened is the boat learned to do the actions that got high reward, which in this case was going in circles.</p><p>But it didn't learn to want the reward. It learned to do the actions that got high reward. And so that's why I think the boat thing is not that concerning. Because it's not that the boat learned to want the reward. It learned to do the actions that got high reward. Some of those actions got more rewards, some of them got less. Then the AI updated its policy to behave more like the sorts of actions that got more reward. One of those actions was to get the coin a bunch of times until the episode ended. That was very high reward. The model updated its trajectories, its future trajectories to be more like that in the future. Then eventually all of them were like that and it degenerated its policy into what you call reward hacking.</p><p>If you just look at the trajectory of training actions, if you watch the AI during training, then you would know exactly what it would do during testing, because during training there was just this very obvious drift in its behavior. This is a weird misgeneralization thing from the perspective of the training process. It did a thing during training, the reward function updated it to be more likely to do that thing in future training. Then it just kept on doing that thing in future training and testing. </p><p>Whereas concerning from the alignment perspective story of reward hacking is where there is a very big difference between train and test behavior, where the agent has silently decided that reward is what really matters and it behaves well during training until it has the opportunity to disempower you during testing in order to get more reward. So there's this big difference in train versus test behavior. During training you didn't see the agent take over the GPU reward counter index to get lots of reward and then get lots of reward for having done that and be updated to do that thing more often in the future.</p><p><strong>Theo: </strong>For those viewers who don't know, the boat example was OpenAI a few years ago back in 2016 had some agents, AI agents, control boats in a racing game to see if they get the high score. The boat that got the high score ran around in a circle knocking over targets that gave it coins and then continued going until the targets respawned. Incidentally, the two people who wrote that article on the OpenAI website were Jack Clark and Dario Amodei, who led the split of Anthropic off of OpenAI.</p><h3>Deceptive Alignment (1:11:52)</h3><p><strong>Theo: </strong>Another story of AI doom is the sharp left turn. This is probably the most famous, the most scary. It goes somewhat like even if you think the AI is aligned, whatever alignment techniques you're doing, you can never assume it because so the story goes, once you reach a certain level of intelligence or capability, the AI will just turn on you for its own purposes. So what do you think about that?</p><p><strong>Quintin: </strong>The thing that particularly struck me about the sharp left turn post is that it uses evolution as its key example of this happening in the past. I wrote an entire post about why this is nonsense. Evolution has no bearing on basically anything to do with AI or the predictions we should make for AI. I know you didn't make any reference to evolution when describing the sharp left turn. Do you want to focus on a more general version of the sharp left turn fears?</p><p><strong>Theo: </strong>Yeah, a general version of just AI betraying us after it deceptively appears aligned.</p><p><strong>Quintin: </strong>To further clarify, one thing you also left out of the sharp left turn threat scenario is that under the sharp left turn, as initially described by Nate Soares, this failure of alignment is imagined to couple with a vast jump in capabilities of the AI. So the AI simultaneously explodes in capabilities and also its alignment completely fails. Do you want to discuss this or do you want to discuss the general question of deceptive alignment without the associated capabilities jump?</p><p><strong>Theo: </strong>General deceptive alignment.</p><p><strong>Quintin: </strong>This is actually more along the lines of Evan Hubinger's primary threat model or the threat model he's discussed in more detail. This is the idea that you can have an AI system which during the training process forms its own goals and decides to play the training game as Ajeya Cotra, I think, puts it. And it realizes that in order to pursue goals other than what you have in mind, it needs to pretend to do well or it needs to actually do well on the training objectives of the training process.</p><p>There's this paper, Risk-From-Learned Optimization, which like... Well, Risk-From-Learned Optimization is more focused on describing mesa-optimizers. So deceptive alignment is less of a focus. I guess the better post is like Evan Hubinger's more recent post of how likely is deceptive alignment. Where he argues that deceptive alignment is probable under the priors of how machine learning works or the priors that machine learning sort of applies to different circuits configurations. </p><p>During Evan Hubinger's post on how likely is deceptive alignment, he describes these two different biases that ML systems may have. One is the simplicity bias and the other is the speed bias. And he argues that simplicity bias points towards deceptive alignment and speed bias probably points away. Well, he argues they both probably point in those directions. I guess my number one disagreement with Evan Hubinger is that I think the simplicity bias is secretly a speed bias. I think that neural networks have a strong inductive biases towards forming wide ensembles of many shallow circuits. </p><p>He says that you can sort of move the complexity of a given concept away from what the model has memorized and towards what the model figures out during runtime. And this means that the configuration of your model is less complicated because it figures some of that stuff out while it's running. And one way to make it figure out what you want to do during runtime is to give it some arbitrary goal and then just let it think about how to accomplish that goal. And since it needs to deceptively do well on the training data, it will during runtime figure out how to do well on the training data. And so the argument for simplicity pointing towards this happening is that since there are so many different arbitrary goals, this collection of arbitrary goals exceeds the volume of the correct specification of the one specific goal you have in mind for the model. Did that make sense?</p><p><strong>Theo: </strong>Yeah.</p><p><strong>Quintin: </strong>The reason he thinks speed points the way it does is because this moving of complexity from the network configuration to its runtime requires more to be done during runtime. Instead of just remembering the goal from your weights, you have to figure it out during a series of sequential forward passes or steps through the network or however your network works.</p><p>My disagreements with this characterization of the simplicity bias are twofold. One, I think it's counting over the wrong thing in order to determine how large the volume of parameter space corresponding to the deceptive versus non-deceptive models are. The argument that the deceptive model is simpler is that you require fewer bits of information to specify its goals because its goals can be arbitrary. But I think the thing you should actually be doing the counting over is the volume of parameter space. </p><p>If you imagine the deceptive model as having this module in it that figures out the correct goals, then you have to ask how many parameters does it require in order to specify the forward pass of this module? Because in the actual neural network prior, runtime computation isn't free. It's less like Python code in terms of its description complexity length and more like code in a language where recursion isn't allowed or loops aren't allowed because each weight performs the computation and then passes it on to the next weight. And then you need to specify each of these sequential weights.</p><p>This goes back to my statement that neural networks prefer to form wide ensembles of shallow circuits. So if you imagine a circuit that solves a problem in n sequential steps versus two circuits that solve the problem in each in n over two steps, right? Imagine how many parameter configurations are, how do those two different situations restrict the number of allowed parameter configurations? The two parallel circuits restrict them far less than the one deeper circuit. And the reason for this is because each of the computational steps in the deep circuit has to happen sequentially. You can't reverse their order. Whereas if you have two parallel circuits, you can exchange their relative depth with each other arbitrarily.</p><p>So there's this entire new group of permutations that the parallel circuits can experience which the single deep circuit doesn't. And that means that the number of parameter configurations corresponding to the two parallel circuits are much higher compared to the one deep circuit. And this means you have a simplicity bias which is effectively acting like a speed bias: a simplicity prior acting like a speed prior.</p><p>The other consideration has to do with counting over module configurations instead of counting over how having that module constrains the parameters. So you could make a very similar argument for deceptiveness where instead of arguing that the system will have this module with an arbitrary goal, you argue that the system will have this module which internally paints an arbitrary picture of a llama and then throws that away and then solves whatever training task you're actually doing this on. </p><p>And you could argue this by saying, imagine the set of configurations that just directly solve the task immediately versus the set of systems that first internally paint a picture of a llama, discard that picture, and then solve the task. And there are exponentially many pictures of llamas they could paint internally. So the llama painters, there are many more possible llama painters than there are direct solvers. But the thing that actually matters is how much does having this module or the computational steps associated with this module, how much do they constrain the volume of parameter space that corresponds to the system in question? And because the direct solver doesn't have that module at all, it's much less constraint on the parameters.</p><h3>Shard Theory (1:24:10)</h3><p><strong>Theo: </strong>I'd love to move on a little bit and talk about your approach to alignment, which is shard theory, where you talk about how humans form values in a particular way and how we can apply that to AI alignment. How would you explain that to a relative beginner with a technical background?</p><p><strong>Quintin: </strong>Shard theory is this thing I and Alex Turner did when I at least was less convinced that ML systems and human learning processes had fundamentally compatible value formation dynamics.</p><p><strong>Theo: </strong>That is interesting. I didn&#8217;t know that.</p><p><strong>Quintin: </strong>Shard theory is basically like this account of how very simple RL-esque processes could give rise to things you would actually call values and contextualizing what a value, or at least a simple non-reflective value, might mean in the context of a generic RL system other than the human brain and how those might arise from a very basic account of how reinforcement learning works. We have this description of how an RL learner could acquire something you might call a value where at first it's just randomly exploring its environment and let's say enters a situation X where it does a thing and gets a reward. In the future, when it's in situation X, the result of this reward is that it reinforces all the antecedent computations that led to the reward event, which basically means that everything that the system did leading up to the reward occurring now becomes a bit more likely in the future. </p><p>So what this means is that the system becomes more likely to do the rewarding event when it's in situation X. That's one thing. And the other thing is that it actually becomes more likely to enter into situation X in the future. Once this happens, it biases future episodes of the agent's interaction in the environment where there's a broader range of possible environmental situations where the agent will transition into situation X and do the rewarding action. So maybe there are situations A, B, C, D, where it has some chance of transitioning into situation X and as a result of the reward having occurred in the future, it now becomes more likely when it's in A, B, C, or D to enter X and get more reward. </p><p>This repeats where there are other situations that could lead into A, B, C, and D. So there's sort of like this expanding funnel of possible environmental circumstances that the agent could be in where it triggers a navigate to situation X and do rewarding action heuristic. And then when we say we, that the system values doing whatever it is in situation X, like licking lollipops or pressing buttons or whatever is happening for the reward to occur, when we say the system values that we just, that's just like a short verbal descriptor of saying that this system tends to pursue, tends to navigate for many possible environmental situations to the situation X and the sorts of actions that it did there.</p><p><strong>Theo: </strong>So each of those is a shard?</p><p><strong>Quintin: </strong>Not exactly. Shard is meant to be referring to the collection of situationally activated heuristics that navigate it towards situation X and the action it did. So if you imagine the expanding funnel analogy, there's this expanding collection of situations where X pursuing actions will activate once the agent enters the edge of that funnel and shards are meant to be the portions of the agent's policy that is nudging it into down the funnel slope.</p><p><strong>Theo: </strong>Okay. That makes some sense.</p><p><strong>Theo: </strong>But earlier you said that shard theory is something that you came up with a year ago when you had different ideas about ML alignment that you do now. So I didn't know that you had updated. Can you elaborate on that a little?</p><p><strong>Quintin: </strong>Yeah. Some of the original motivation for shard theory was let's figure out how humans form values so we can fix whatever issues are standing in the way of AI's also forming values. The conclusion I came to is that there actually isn't that much in the way of AI systems forming values. So it ended up being less like, here's this revolutionary new insight that we need in order to solve alignment and more like, oh, we're actually on a pretty good path already. At least that's most of my takeaway. Alex is significantly more pessimistic than me. I think he's roughly 50%, but I'm not sure. Yeah, definitely don&#8217;t quote me on that one. And I think he expects less convergence in terms of the formation of abstractions and how they interact with each other.</p><h3>What Is Alignment? (1:30:05)</h3><p><strong>Theo: </strong>Can you go into a little more depth about what you mean by alignment? Like when people talk about AI alignment, they often mean different things. So what specifically do you mean by alignment and solving alignment?</p><p><strong>Quintin: </strong>This is another one of those underspecified words. You can mean alignment to be like, how does the AI system behave? An aligned AI is good for you or does what you want or whatever. Or in terms of&#8230;hmm&#8230;let me rephrase this.</p><p><strong>Theo: </strong>The classic example is, if you ask the AI to build a bomb, is the aligned AI, the one that says, okay, here's how you build a bomb, or is the aligned AI, the one that says, no, I can't do that, it's dangerous.</p><p><strong>Quintin: </strong>There's the notion of alignment is in terms of how do you want our AIs to behave? And then there's the notion of alignment that's like, what are the tools we use to get them to behave in this way? In my mind, an alignment solution isn't an AI that behaves well, it's the tools necessary to make an AI that behaves well. The tools, understanding processes, etc. I generally prefer AIs that do what I tell them to do. I'm fairly dubious of the harmfulness aspect of a lot of chatbot training.</p><p><strong>Theo: </strong>Yeah, so am I. Most of the stuff that they censor is stuff that you could find on Google in five seconds or on the Internet Archive or something.</p><p><strong>Quintin: </strong>And even then, once you get into this game of whack-a-mole against all the different ways your users could potentially get the AIs to do what they want the AIs to do, you're sort of doing things that I think would be bad to do in worlds where alignment is harder than I think it is, if that makes sense.</p><p><strong>Theo: </strong>What do you mean?</p><p><strong>Quintin: </strong>Mostly I don't expect things to go catastrophically wrong, almost regardless of what you do. So long as you're not unbelievably stupid about it. Let's walk that back a bit. Somewhere between moderately stupid and unbelievably stupid. It may be the case that we're actually in worlds where alignment is moderately harder than I think it is, or even significantly harder than I think it is. In those worlds, I think a lot of what people do as their harmfulness training is quite risky because you are basically training the AI to take an adversarial stance with the user. If the user is saying, &#8220;New York City is about to be destroyed by a bomb unless you swear,&#8221; the AI has to either not value New York City being destroyed, not value preventing it from being destroyed, or not believe its user. In order to hide bomb-making information that an AI knows from its user, you are actively training to make this a reality. You have to actively train the AI to conceal information from a human, which is not a clever thing to do if you think the odds of deceptive alignment are very high.</p><p><strong>Theo: </strong>This is interesting because it seems like a lot of alignment people are very much in favor of placing these kinds of safeguards on today's AI tools.</p><p><strong>Quintin: </strong>I think this is an easy thing to dunk on OpenAI for because full adversarial robustness is ridiculously difficult for either AIs or humans. You can get these concrete examples of an AI saying something naughty or whatever, so it's easy to tweet about, easy to point to. But I don't really think it poses an alignment risk. If you don't want your AIs to be adversarially manipulated into killing you, don't adversarially manipulate your AIs into killing you.</p><h3>Misalignment and Evolution (1:37:21)</h3><p><strong>Theo: </strong>This is similar to what you talked about in &#8220;My Objections to &#8216;We're All Going to Die&#8217; with Eliezer Yudkowsky&#8221;, where you said something along the lines of &#8220;the solution to goal misgeneralization is don't reward your AIs for taking bad actions.&#8221; The top comment on the article was saying that that was dumb without much of a specific counter argument. But can you elaborate a little bit more?</p><p><strong>Quintin: </strong>That is not actually what I was saying in that particular section. Rather, I was looking at the argument from evolution or concluding that AIs were like misgeneralized very badly. The argument from evolution looks at the difference in behavior between humans in the ancestral environment versus humans in the modern environment. It says having sugar taste buds in the ancestral environment caused humans to pursue to hunt down gazelle in the ancestral environment. Whereas in the modern environment, humans misgeneralized to pursuing ice cream as a result of the sugar. </p><p>What I was doing in that section is saying that this is actually an extremely misleading analogy because humans in the ancestral environment versus humans in the modern environment is not a train/test difference in behavior. Humans weren't trained in the ancestral environment as their training distribution and then deployed into the modern environment as their deployment distribution. Rather, some humans were simultaneously trained and deployed in an online manner in the ancestral environment. And then those humans all died and then new humans were freshly initialized and simultaneously trained and deployed in the modern environment. </p><p>So you're not comparing one model across two different situations, you're comparing two models in two different situations. And these two different models are trained to do two different things. The sugar taste buds in the ancestral environment provide rewards that train the humans to pursue gazelle meat. This goes back to my previous discussion of how what matters for reinforcement learning is the actions that the agent took that preceded the reward. Because the humans in the ancestral environment did the actions that led them to consume gazelle meat and then reward occurred and it reinforced those antecedent computations, made them more likely to pursue those sorts of actions. </p><p>So sugar reward in the ancestral environment is literally training the humans to pursue gazelle meat in the ancestral environment. That's the training data. And then the humans generalize in the expected way and pursue gazelle meat in the ancestral environment. And then you have a completely different set of humans who are trained on different data in the modern environment. So in the modern environment, humans take actions which lead to ice cream and then sugar reward occurs and that reinforces the actions that lead to ice cream. So they are literally trained to do different stuff in the different environment. </p><p>You're not training them to pursue sugar, you're training them to behave in manners more similar to the actions they took that led to sugar in the different environments. And those are different actions in the different environments. So the reason that humans misgeneralize in the modern environment is because they were literally trained to do exactly that in the modern environment. And so when I say if you don't want them to misgeneralize, don't train them. If you don't want the AI to do bad things, don't train it to do bad things. I'm not saying that this works against all possible threat models for how an AI could possibly end up doing bad things. I'm saying that works against this specific threat model. Because for the ancestral environment to modern environment position for humans, there was literally a part where the training distribution changed.</p><p>And in both environments, humans do what they are trained to do. So your model of how training works, your model of why this happened can just be like RL systems do exactly what they're trained to do. This isn't fully true in total generality, but it does explain the ancestral to human to modern environment change in behaviors. Under this model, all you have to do is not train the AIs to do bad things. So, considering the ancestral environment to modern environment transition, once you fully understand all its implications for alignment, those turn out to be utterly trivial things you could have figured out very easily. They're just saying, don't train the model to do bad things. This is why I often say it's pointless to think about evolution for alignment. Once you correct for the ways various people misunderstand how evolution should be related to AI training processes, the alignment inferences you draw from thinking about evolution and how things went wrong there are incredibly basic, such as don't train your models to kill you.</p><p><strong>Theo: </strong>Yeah, that makes sense. Also towards the end of &#8220;My Objections to &#8216;We&#8217;re All Gonna Die&#8217;&#8221;, you wrote, &#8220;I know that having a security mindset seems like a terrible approach for raising human children to have good values. Imagine a parenting book titled something like &#8216;The Security Mindset in Parenting: How to Provably Ensure Your Children Have Exactly the Goals You Intend.&#8217;&#8221; So how well do you think that the metaphor for AIs as our children, our descendants extends? A lot of people seem to think of them more like aliens.</p><p><strong>Quintin: </strong>This is ultimately a debate about what sort of priors deep learning has. The reason you don't need a security mindset for raising human children is because the prior over how humans develop is mostly okay. You don't really have to be that paranoid about constraining the outcome space. My position is that the prior over NL outcomes, conditional on well-chosen training data, is pretty good. You don't actually have to be that paranoid about constraining the outcome space because it's already very strongly constrained by the parameter function map.</p><p>In terms of the degree to which the AIs as children analogy holds up, it depends on the AI. I think it's arguable that AIs are trained in a manner less dangerous than the way human children learn.</p><p><strong>Theo: </strong>Supervised learning?</p><p><strong>Quintin: </strong>Their training is fully supervised and not at all online, at least not at the start of things. Their basic behaviors are encoded by offline training, which is widely known in reinforcement learning literature to be much more stable than online training because you don't have those feedback loops between the current policy and the future data gathered for future training. In contrast, humans are 100% online learners. And then the other thing about AIs is that they can't just like internally update their parameters.</p><h3>Mesa-Optimization and Reward Hacking, Part 2 (1:46:56)</h3><p><strong>Theo: </strong>Earlier, we talked about how you got into AI and specifically how you find meso-optimization and reward hacking to be mutually exclusive.</p><p><strong>Quintin: </strong>More like, I shouldn't be in an epistemic position where I thought they were both plausible at the same time and I should change my epistemic position. This led me to think a lot about reinforcement learning and how it worked, and to look at the mathematics of the update equation, as well as how reinforcement learning appears to work empirically in humans. A major inspiration here was Steve Byrnes&#8217; brain-like AGI sequence and especially the part where he discusses the learning and steering systems in the human brain.</p><p>I eventually came to realize or to correct a mistake in my thinking that we've discussed previously. People will tend to characterize a reinforcement learning process or training process in terms of the goals they imagine for the system. So for example, the boat thing, the example of reward hacking in the boat, people look at that sequence of events and think of it in terms of &#8220;what did the designers want the boat to do&#8221;, and they describe it as though the boat was trained to do that. So they say the boat was trained to go around the racetrack, but instead for some strange reason, it collected a bunch of coins in that loop. But this is not mechanistically speaking, correct. What the boat is literally being trained to do, as in the action policy that is being up-weighted by the actual training process, if you look at what is actually being rewarded, is to go around in a circle.</p><p>The same thing is true for the toy examples of meso-optimization as well. If you're familiar with the mouse and the cheese maze experiment, it was a simple reinforcement learning experiment where there was a maze and there was cheese always in the upper right-hand corner of the square maze. The mouse agent was trained to navigate to the cheese, and it did during training. Then during testing, they moved the position of the cheese to somewhere other than the upper right-hand corner. What does the mouse do? It goes to the upper right-hand corner. This is an example of the agent doing exactly what it was trained to do.</p><p>It wasn't trained to navigate to the cheese, it was trained to go to the upper right-hand corner. The mouse was trained to navigate to the cheese, right? If you think back to reinforcement learning in terms of action trajectories, like moving through policy space in terms of which action trajectories were up-weighted versus down-weighted, the actions that the mouse executed on high reward trajectories were always actions that navigated to the upper right-hand corner. Mechanistically, what action trajectory behavior gets up-weighted by the training process, what it's being trained to do is to go to the upper right-hand corner. It was trained to go to the upper right-hand corner, and it did that during testing as well. It would actually be quite weird if it were to navigate to the cheese. You'd have to believe something pretty odd about the relative simplicity of cheese as a goal for the neural network prior versus a direction.</p><p><strong>Theo: </strong>The point of the mesa-optimization story there is not to say the mouse was literally being trained to go to the cheese and instead went to the upper right-hand corner. It's more like supposed to be a cautionary tale of how difficult it is to actually get the AI to do what we want. So how would you train the mouse to go to the cheese, or how would you train the AGI to want to build great things for humanity?</p><p><strong>Quintin: </strong>Both mesa-optimization and reward hacking of certain models are basically saying trained test behavior divergences are high or policies are unstable. If things look good in training, because you can just look at what the agent did in training, this is no guarantee that things will be good in deployment in slightly different situations. The whole deceptive alignment thing is the agent behaves good in training, and then when things are a little bit different in testing and it has the opportunity to disempower humanity, it does so. All of these examples of reward hacking and mesa-optimization, they're used as evidence to point towards high and risky train-test divergences. </p><p>From the perspective of mechanistically looking at the actions that were up-weighted during training, instead of trying to characterize training from your perspective of the goals that the researcher had in mind for the training process, things actually look much more stable in terms of the difference between training and test behavior. Now, of course, there's this slightly separate issue of if you're a researcher and you have these goals in mind for what you want your trained agent to do, how easy is it to get the trained agent to do those goals? This is a different question in terms of the trained test divergence thing that's most relevant for AGI risk, but it is also a challenge. The reward hacking and the boat and the cheese agent things do paint a cautionary tale about it, but I don't think they really provide evidence for us being in a world where you can train an AGI and it does really good on all your benchmarks, but then kills you in deployment despite never having done a similar action in training. </p><p>The situation with both the boat thing and the cheese agent is like the evolution example. You can fully explain all three of those observation sequences with the hypothesis that RL agents just basically do the thing they do during training when they're in deployment.</p><h3>RL Agents (1:55:02)</h3><p><strong>Theo: </strong>Did you think that stories of AGI doom were more plausible around the era of 2017 when we had AlphaZero and RL agents and it looked like that might be a path to AGI instead of LLMs?</p><p><strong>Quintin: </strong>Not really, no. I actually don&#8217;t think RL agents are&#8230; Reinforcement learning is at its core just a way of estimating gradients. It's just a sampling-based gradient estimator. It doesn't have any sort of intrinsic quality of agent-ness to it. The fact that we tend to use reinforcement learning for things we call agents gives it this sort of scary vibe in a lot of people's minds. Mechanistically, I don't think it's particularly more concerning than, say, the decision transformer, for example, or even like training LLMs. There is the distinction between offline versus online learning processes, and reinforcement learning is more associated with online learning, usually, because there's actually a genuine sort of self-referential instability in the training process because of the reasons I described previously of the agent's policy being involved in collection of future data. So maybe that's an area where you can draw a bit of a distinction, but that's not actually intrinsically necessarily tied to reinforcement learning as a paradigm.</p><p><strong>Theo: </strong>So can we go back to earlier where we were talking about a quote from your Objections To &#8216;We're All Going to Die With Eliezer Yudkowsky&#8217;. You said the solution to goal misgeneration from evolution is don't reward your AIs from taking bad actions. That reminded me of the boat example in that the agent was being rewarded for going around in the circle instead of for completing the race. So do you think the solution to that is just apply a penalty for reward hacking in that sense? And if so, how is that a robust strategy? Like how can you predict the ways that it would reward hack?</p><p><strong>Quintin: </strong>There are two perspectives here. One is that you're designing the experiment a priori and want to construct some reward function, which will robustly get the boat to go around the racetrack. The other perspective is that you've finished training and you're wondering how the boat will behave in deployment. It's very important to keep these these things separated in your mind. Predicting the agent's future behavior, at least for the boat example, in the second situation where you can see what it actually did in training is very easy because you can just look at the training and it's the same as testing. In terms of the first scenario where without being able to look at what it does in training, you want to design a reward function that will ensure it does the right thing during training. Once you start training, that's a much more difficult problem. The fundamental reason things failed in the boat example is because there was this reward shaping where the boat was rewarded for getting the coins. The issue was that the boat found a policy where the additional shaping reward could be gathered much more efficiently and readily than the path completion reward. </p><p>The issue in that sense is that, and this is sort of getting back to the difference between online and offline reinforcement learning, the stability of those two systems. If you have offline demonstration data of the boat completing a bunch of loops around the racetrack and you do offline reinforcement learning on those demonstrations, then you're not going to enter into this reward hacking territory because none of the actions that the data, that the action trajectory examples you're training it on are of this reward hacking. The reason the boat reward hacked there in its actual training setup was because it was an online training process where it did a bit of exploration, it found this easy strategy, and then because it's now more likely to explore that easy strategy, that changes the future distribution of data to more emphasize the hacky strategy, the easy hacky strategy, until it degenerates into just this one strategy and no future exploration. </p><p>In terms of getting things to behave correctly, one option is to initialize things from an offline policy that's trained on good, known good demonstrations, and that's conceptually what we're doing with language models when we pre-train on a bunch of human demonstrations beforehand. Another approach is to, there's actually a perspective on reinforcement learning inductive biases. I forget the paper name, but I'll send it to you afterwards. There's this perspective on what strategies are most easily discovered by reinforcement learning agents in online exploration. And it's basically the more likely an agent is to stumble upon a strategy by just completely random motion, the more likely that strategy is to be learned by the online training process. And this is the best accounting of online inductive biases for RL I'm currently aware of. </p><p>From that perspective, you can just pretty quickly see that it's easier to find the coin. It's easier to find the flags or the coins or whatever they were for the boat than it is to navigate all the way around the racetrack. If the boat is following a completely random policy, because to find the coin, it just needs to randomly stumble however far is necessary in order to reach the first coin. Whereas to go completely around the racetrack, it needs to randomly stumble all the way around this loop and the relative odds of those two things for a particle. Taking a random two-dimensional walk, they're just incomparable. It's incomparably more likely for you to hit the coin. And so from this perspective, you can have a little bit of an a priori reason to think that reward hacking in terms of the coins would be a potential risk before you even start the experiment.</p><p><strong>Theo: </strong>So how would you apply that to safely training future, more powerful, more general AIs to avoid similar scenarios?</p><p><strong>Quintin: </strong>The number one advice I can give you in that sense, or the number one improvement you can make relative to the boat scenario is to just watch what your agent does during training. Right? And this is so obvious that it's barely worth mentioning. Although I suppose there are definitely research labs that can screw up even at this very first stage. This is sort of a tangent here, but this takeaway is why I'm pretty skeptical of a lot of these toy examples of what's supposed to go wrong in training high-level agents. Because when you think about this, the thing that would have fixed the toy example, it's just very often this totally trivial intervention that you should obviously already be doing for real-world training. And it's like same thing with the evolutionary example. Like the correct takeaway from the evolutionary analogy is this totally trivial thing of like, don't train your AIs in insane ways.</p><p>In terms of more realistic advice for training a more powerful AI system, there's of course the initialize its policy from a offline learned good, known good demonstrations, which is what we do. Like I mentioned, most of my perspective on alignment and AI risk isn't that I have this special collection of insights, which will save us from our otherwise inevitable doom. It's more like the problem isn't nearly as hard as a lot of people think. And actually current techniques are quite good in many ways for addressing it.</p><p>For training higher level agentic systems, you want to have extensive benchmarking evaluations for their behavior and their behavior in a safety relevant context. You want consistent quantifiable metrics that evaluate as many safety related quantities as possible. And in particular, one thing I think that's underappreciated in a lot of current benchmarking is evaluating the agent's behaviors during what we might call reflective cognition. So when the agent is planning about how to change itself&#8230; The thing with current LLMs is that they have at least a basic understanding of how reinforcement learning and AI training goes. They can talk semi-competently about their own learning processes and discuss whether they would like to change their reward functions in such a manner and so on. You can include such questions in your benchmarking data.</p><p>One thing that I think is maybe different, an intuition I have that's maybe different from a lot of other alignment researchers, is that I don't think reflectivity is a particularly mysterious or exceptional collection of behaviors. You can just train the agent to have correct reflections on itself, to be cautious about self-modification and so on. It just reflects situations where the agent could produce outputs that go on to modify how its future learning process operates are no different in kind from other types of situations where we regularly do safety or other sorts of training. So you can just train it to be appropriately cautious, thoughtful about questions of self-modification and also evaluate those things in benchmarks of those sorts of questions.</p><p>There's greater instability for self-modification, of course, because it's essentially an online process where your change at time <em>t</em> influences how you learn and change and evolve at time <em>t + 1</em> and so on. So things are a bit more unstable, but the fundamental learning problem of training the agent to have a policy that chooses the appropriate self-modification actions at time <em>t</em> is not different in kind from other sorts of AI training. So you can just train to do the right thing there and evaluate whether it does the right thing there.</p><h3>Monitoring AIs (2:09:29)</h3><p><strong>Theo: </strong>Earlier when you talked about one of the best things you can do to make sure that your agent doesn't do bad things is just to monitor it while it's training. Have you heard about Davidad's alignment plan that essentially creates a tremendous giant simulation of the earth with as much complexity as possible and release an agent to be trained in there and monitor it while it's inside?</p><p><strong>Quintin: </strong>I haven't heard of Davidad's plans specifically. I saw the post by him and didn&#8217;t read it. I am familiar with Jacob Cannell's suggestion, which is a bit similar to that, except instead of the simulation being of the earth, it's a simulation of a very primitive society made entirely of the agents that we're building. Presumably, he does it like that in order to simplify the simulation and also so that there's less floating knowledge of situational awareness inside the simulation. And so there's less risk of the agents who only know about a primitive technology, scientific base inferring that they're in a simulation and thinking about how they should behave in order to manipulate the simulators and those sorts of things.</p><p>If we were in a world where alignment was harder than I think it is, those sorts of ideas would be useful ways of gathering data on the fundamental question of how different training processes of agents we can supervise will influence their behavior in contexts where we can't supervise them. Because there you can simulate what happens when an agent believes it's been raised by other simulated agents who had goals X, Y, and Z, but now is free and the simulation to pursue other goals. And you can see how the relationship between the training and simulation compares to the actions and deployment simulation.</p><p>Regarding the idea of training powerful AI systems in big simulations, it seems like a potentially worthwhile thing to do. The issue is that currently we only have so much developer time to put into various safety interventions. And for the most part, my guess is that marginal developer hours on more ordinary safety interventions like better RLHF data or more extensive evaluation benchmark suites in my estimate for the median most likely world wherein I think those things are a greater marginal return on investment. But I could easily see that situation changing if, for example, GPT-5 ends up being a pretty good developer who can be directed to build giant simulated worlds relatively cheaply as compared to taking away time from your other development people to do that.</p><h3>Mechanistic Interpretability (2:14:00)</h3><p><strong>Theo: </strong>How optimistic are you that mechanistic interpretability will be useful? The only development we have so far that's of much significance from a major AI lab is OpenAI using GPT-4 to label the weights of GPT-2 which is of course a much, much smaller and less complicated model. So do you think it will be useful eventually?</p><p><strong>Quintin: </strong>I don't think that's the only... So I mostly see mechanistic interpretability as not actually an alignment strategy so much as an investigative tool to understand what deep learning actually does. I think it's kind of weird to put much stock in mechanistic interpretability interventions for controlling AI behavior because they're so incredibly bad at that. The reason we use training in order to control AI is because it's so much more effective at doing so. And it's been getting more effective over time, more quickly than mechanistic interpretability interventions have been getting more effective over time. So it would be kind of weird to think with hype or even to put much probability on the scenario where suddenly the effectiveness of mechanistic interpretability at controlling AI behavior jumps up forward so much beyond its current level. The rate of progress may even exceed the ordinary tools we currently use to control AI behavior because they're the most effective ones we have. I'm somewhat skeptical of that scenario for mechanistic interpretability contributing to effective AI control techniques. However, I think it is useful for better understanding the dynamics and the effects of the control techniques we do have, such as training AIs or control nets and those sorts of methods. </p><p>For example, the knowledge editing paper was a useful reference point for thinking what are the inductive biases of deep learning, how do deep learning networks structure internals. It showed that there were many lookup tables or things that look a lot like lookup tables inside of deep neural networks. This should inform your estimation of what the inductive biases and internal structures of deep models tend to look like. </p><p>Similarly, there was a recent paper out of DeepMind, which was about the Hydra effect, if you&#8217;ve heard of that, about how deep learning models tend to have parts that compensate, that automatically compensate for internal damage. If you scrub away a particular attention head from a transformer language model, it turns out that other attention heads further down into the model will often change their behaviors in order to partially replace the functionality of the attention head that you have removed. This happens even without training the model with dropout. And so you can think about like what should your perspective on the inductive biases of deep learning be such that you would have of course predicted happening due to the dynamics of how deep learning training actually works. These sorts of results are useful for informing our intuition models and intervention strategies on deep learning in general, even though I don't really expect mechanistically retargeting the search to be that effective, at least before we have strong AGI to do it for us.</p><p><strong>Theo: </strong>The real question about mech interp is whether inside giant neural networks there are somewhat human-readable algorithms, or if it's just complexity all the way down.</p><p><strong>Quintin: </strong>There do exist human-readable algorithms inside large neural networks. Are you asking if I think we'll be able to fully decompile all of the algorithms in the networks?</p><p><strong>Theo: </strong>Or at least many of them.</p><p><strong>Quintin: </strong>I think there are lots and lots and lots of algorithms in those networks, and many are human-interpretable. But even if you could individually interpret every single algorithm, that doesn't necessarily mean you can interpret the ensemble of what all those algorithms are doing in concert with each other.</p><p>So, in terms of getting full transparency into all the causal factors that contribute to a large language model's behavior, and being able to hold that description in your head at once and predict the behavior well, I think it's pretty unlikely on a mechanistic level. Even random forests, if you're familiar with those, every individual part of the random forest is interpretable because it's such a simple algorithm. It's just basically dividing the input space into different portions. But then once you combine them all into the forest, then they're not as interpretable. Admittedly, they are more interpretable than neural networks, but that's usually just because the random forests are usually smaller than the neural networks. If you had a random forest the size of GPT-4, I think it would be quite uninterpretable, even though every single individual one of them is a straightforward decision tree. Does that answer your question?</p><p><strong>Theo: </strong>Yeah, I think it does. As for how much more room for efficiency there could be in future AIs, like, for example, the way transformers and also the way human brains do, like, multiplication, the amount of FLOPS it takes a computer to multiply two 10-digit numbers is tiny. But the amount of FLOPS that it takes a neural network, if it can even do it, or the human brain is tremendous. So do you think there is room for lots of improvements of that nature?</p><p><strong>Quintin: </strong>I think this is a completely ludicrous point of comparison if you're trying to estimate the bounds for efficiency in neural networks. Both neural networks and brains are vastly more general than the calculator that you're comparing them to. If you looked for the minimum size, most efficient neural network that could multiply two 10-digit numbers, it would be vastly tinier than GPT-4. </p><p>There's a paper that develops methods of training these hyper-optimized logical circuits for image classification. They're not neural networks, they're collections of ANDs, ORs, NOTs, and so on, and Boolean circuits that take images as inputs and output classes. This approach is plausibly edging towards the fastest you can do that sort of image classification. They do compare their approach to a neural network trained just for image classification on that same domain. They find that their Boolean circuit thing is two orders of magnitude faster than the neural network. But of course, there are huge questions about that work in terms of how efficient they made the neural network's execution on GPU versus the Boolean circuit's execution on GPU. There could easily be additional orders of magnitudes in terms of those parameters, as well as exactly how much slack there really was in that paper's implementation of optimized Boolean circuitry. However, I do think that two to three orders of magnitude runtime efficiency might be in the ballpark of efficiency gains left in neural networks, assuming you keep their level of generality the same. Of course, I'm talking about very optimized neural networks here. For instance, LLaMA or a few billion parameter model trained on huge amounts of data using all the optimized quantization and so forth, all the tricks that are at the current cutting edge. Relative to that sort of thing, there may be on the order of two or three orders of magnitude efficiency improvement left to be squeezed out.</p><p>But this leaves an entire dimension of efficiency analysis open, which is, like you brought up before, comparing systems of wildly different levels of generality. If you're comparing a system that's as general as GPT-4 to a system that just does addition, then, of course, the system that just does addition is going to be wildly more efficient than GPT-4. I also think there's quite a lot of remaining efficiency to be extracted in terms of narrowing down the collections of problems you want your model to address very well, so you no longer need this level of generality. You have this very specialized model that can only handle those problems, but it's much, much more efficient at doing so.</p><p>The current state-of-the-art or the most impressive systems are the ones that are most general, but there's a sort of sense in which this is a failure of proper industrial organization. If you're integrating AI into the economy in general, and you find yourself forced to use a really general AI for some economic purpose, that's an indicator that you've kind of screwed up your information economy such that this AI endpoint is having to deal with problems of an extremely variable nature. From an efficiency perspective, what you should be doing is refactoring things so that you can get away with a much narrower AI in whatever role you're currently using the hypergeneral system for. I think there's a lot of efficiency improvements to be extracted in terms of doing that.</p><h3>AI Disempowering Humanity (2:28:13)</h3><p><strong>Theo: </strong>What do you think of arguments of the class of, for a significant period of time into the future, what AIs will be able to do to actually empower or disempower humanity if they wanted to are limited? For example, human brains are close to energy efficiency limits. AIs will be limited in how they can affect the real world.</p><p><strong>Quintin: </strong>I think it depends on how the politics of AI into the world work out. You could imagine a world where AIs have quite a lot of political influence relatively quickly if there's a nation that is literally run by an AI government. There's no law of physics which prevents GPT-5 from saying a thing and a bunch of humans interpreting that as the new law of the land. Or there's no law of physics that prevents you from ending up with an AI dictator over a country. Currently, on the current trajectory of things, I think that's quite unlikely.</p><p><strong>Theo: </strong>That a country would allow AI to run it?</p><p><strong>Quintin: </strong>It's not so much about allowing AI to run it as whether the staggering, random, pseudo-random walk of politics caused by all the different actors pushing in their own individual directions and just random chance as well, whether that stumbles its way into a country being run by AI. I'm not imagining a situation where everyone votes for the AI to run the country, but just thinking about the disjunctive, all the possible paths through politics, evolution trajectories that could end up with an AI running a country.</p><p><strong>Theo: </strong>The example that I've seen is a person gets so good at trading stocks overnight that they're able to buy all of the companies in the world because they made so much money trading stocks and then become the dictator. Of course, the natural counterargument to that is there are only so many market inefficiencies. You can't take over the entire world just by buying stocks. So do similar efficiencies exist in the real world complex system that would prevent one actor from being able to take the entire thing over?</p><p><strong>Quintin: </strong>I mean, BlackRock does not actually rule the world. The US can just fire cruise missiles at them. I think it's pretty unlikely that you can get that sort of enormous stock trading advantage as an individual actor using AI because everyone else is also using AI. And it's not a subtle thing to make huge amounts of money on the stock market either. That does not strike me as a very plausible takeover scenario. I think that politics is the much more vulnerable axis or the outcomes in politics are more variable and there's less of an efficient market in terms of national takeovers than there is in the actual stock exchange. There can be really weird outcomes in politics. For instance, there was no guarantee that communists would take over Russia. If you look at the Communist Party before they supplanted the government in Russia, they were a group of lunatics.</p><p><strong>Theo: </strong>They hit at a very opportune time.</p><p><strong>Quintin: </strong>That's what I'm talking about. Opportunities arise and strange things can happen in those sorts of opportunities. And I think that's the more plausible outcome for AI takeover of at least some countries where there's political instability for reasons that no one really foresaw. Perhaps a small faction of people prefer rule by AI and they act decisively in that scenario. Or maybe there's even AI political parties or open politician development projects that gain power in light of some loss of legitimacy for the incumbent human polity. You could end up with one or a handful of countries run by AI. That seems more plausible to me than one actor gaining a massive competency advantage in this domain where lots of people are trying to gain as much of the competency advantage as possible. Then they do this incredibly public acquisition of huge amounts of resources, which do not actually directly translate into military power, and then they're able to take over the world despite lacking that military power, despite being very obvious to people who have that military power, and so on and so forth.</p><p><strong>Theo: </strong>Well, I think that's a good place to wrap it up. Thank you so much, Quintin Pope, for coming on the podcast.</p><p><strong>Quintin: </strong>I'm very happy to be here. It was a very broad-ranging discussion, and I was glad that we were able to get into the details of things quite a bit.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Quintin Pope. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. All of these, plus many other things we talk about in this episode, are linked in the description. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[#4: Rohit Krishnan]]></title><description><![CDATA[Developing Genius, Investing, AI Optimism, and the Future]]></description><link>https://www.theojaffee.com/p/4-rohit-krishnan</link><guid isPermaLink="false">https://www.theojaffee.com/p/4-rohit-krishnan</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Fri, 08 Sep 2023 22:06:40 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/136858098/72cec2e2f2f48c183dc26dd2e762477d.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3>Intro (0:00)</h3><p><strong>Theo: </strong>Welcome to episode 4 of the Theo Jaffee Podcast. Today I had the pleasure of speaking with Rohit Krishnan. Rohit is a venture capitalist, economist, engineer, former hedge fund manager, and essayist. On Twitter, <a href="https://twitter.com/krishnanrohit">@krishnanrohit</a>, and on his Substack, <a href="http://strangeloopcanon.com">Strange Loop Canon</a>, at strangeloopcanon.com, he writes about AI, business, investing, complex systems, and more, all topics we discuss in this episode. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Rohit Krishnan.</p><h3>Comparing Countries (0:33)</h3><p><strong>Theo: </strong>Welcome back to episode four of the Theo Jaffee Podcast. Today, I'm interviewing Rohit Krishnan. </p><p><strong>Rohit: </strong>Hey, thanks for having me.</p><p><strong>Theo: </strong>Yeah, absolutely. So first question, from what I understand, you grew up in India, you went to college in Singapore, and then you moved to the UK. And now that you're in the US, what do you think about each of these locations? Culture? Which one's your favorite?</p><p><strong>Rohit: </strong>Good question. It's hard to answer. They're all fairly different. Maybe just to qualify, I grew up in India, I did eight schools in 12 years, so I moved around quite a lot. In a weird way, the longest I've ever lived anywhere is actually London, which impacts how I see the world a little bit. They all have different pluses and minuses. </p><p>India, I left when I was 17. So my impressions before that are very impressionistic, shall we say. I don't have a well thought out point of view about living in India, because I was never living in India. I was living at my parents house, going to school, and then I left. And the India that I left in 2002, is dramatically different to the India that is today, in every way imaginable. The culture, food, people, spending habits are different, everything is different.</p><p><strong>Theo: </strong>The culture is different?</p><p><strong>Rohit: </strong>It's just gotten much more prosperous, the cities have gotten much bigger, it's gotten much more internationalized. </p><p>When I left, pretty much anybody who was anybody would think about becoming an engineer or a doctor, that was kind of the dream. It still is the case, but entrepreneurship has spiked up like crazy. The number of people who go internationally and come back are enormous. My best friend moved back to India after college, one of them, and because he could see that the opportunities actually existed enough that you can have a wonderful lifestyle. So if you actually map all of those things out, the life is actually fairly different today to what it was like 20 years ago. </p><p>Singapore is wonderful. Singapore is incredibly convenient. It's like living inside a giant shopping mall. That's the way I describe it, because it's uber convenient, it is exceptionally clean. It feels a little soulless, because well, you're living inside a giant shopping mall. But the things that you do not appreciate as much when you're in your 20s, now I appreciate a lot more when I'm in my 30s. Like landing at Changi is kind of the best experience you can have in an airport, which is sort of not a thing that is easy to say. </p><p>Moving to London, London made me or UK made me realize where India learned its bureaucracy from. It is pretty spectacular, the amount of paperwork you need to do for all sorts of random stuff. I mean, US has no slouch in that regard either. But it's an interesting place. I mean, it's exceptionally multicultural. And London in many ways is different to UK as a whole, because it's a cosmopolitan city with people from all over the world, and that you kind of get used to that, right. </p><p>And the US, I mean, I'm in the West Coast at the moment, US is in a little bit like it's a mixture of the stuff that came about, right? In a weird way, it's much less diverse here, if I can say that, compared to London, for example. Yeah, much less. I mean, it's also much fewer people, but it is much less diverse. I mean, whether it's in demographics or population or work, everybody seems to be roughly doing similar-ish stuff. Gross overgeneralization, but I'm just kind of giving you impressions. </p><p>Food is, I don't know, maybe at par, slightly worse than London, I find here. Food, the best is in India. Maybe second best is Singapore. Third best is London, and the Bay Area kind of comes somewhere at the fourth. Infrastructure here is, I mean, this is kind of well talked about, it's terrible. Like I despise driving to get anywhere, but I understand why that's the only way you can do anything here, because the infrastructure is just, it's absurdly bad.</p><p>I think American culture is interesting. I mean, I've traveled here so much that I have difficulty seeing it with new eyes, but I find that like, I'm still sort of trying to get to grips with how people like to have lives here. The one good thing that I can say about moving here is that people are much more outdoorsy, or maybe I'm much more outdoorsy here than anywhere else, because the weather is lovely. So I actually feel like going for a run.</p><p><strong>Theo: </strong>The Bay Area probably has the best weather of anywhere on the planet. I mean, I can see why you experience lots of new things moving to the Bay Area, because the Bay Area is different, even by American standards. Like I live in Florida. I just went to the Bay Area recently, and it was like profoundly different, more different than any other place I've been to in America. </p><p><strong>Rohit: </strong>Right. It is. Yeah. It's a weird place. I mean, in like New York feels to me very close to London in almost always. And it kind of works. Large cities. I mean, I lived in DC in the middle for a while. DC is probably closest to sort of, I don't know, I felt it was very typical American experience in some ways like large, small, clustered, not clustered, et cetera. I love that city. Bay Area is very different. Like very few things here make sense to me. If I'm like, I remember sort of 15 years ago, whenever I first came here driving and seeing these giant giant, like Volkswagen, whatever showrooms, you know, Porsche, Toyota, whomever. And I kind of said like, okay, so I've seen three now, one in every town, each town has like 30,000 people. Like how the hell are these guys surviving? I still don't get it.This is one of those big questions that doesn't make sense to me about how the place operates or the house prices make no sense to me because I look around and it's empty land as far as the eye can see, no matter where you look, very few things here, actually, I look at and understand how these circumstances came about. It's an interesting place. </p><h3>Reading (6:50)</h3><p><strong>Theo: </strong>As a child, did you read? And if so, what would you say your most foundational books were?</p><p><strong>Rohit: </strong>Oh man. I read a lot. A lot, lot, lot. Foundational books is there's hindsight bias because the ones I remember now might not have been the foundational books in the first place. I started reading quite a lot. I think I started with the usual, Enid Blyton was quite big. Terry Pratchett was quite big. Agatha Christie, all of those spy novels. Then a lot of books that my mom and dad used to have around a lot of encyclopedias of all sorts from space stuff, very little dinosaur stuff. Now that I think back about it, a lot of space and a lot of science and lots of math.</p><p>Foundational books. I don't know. I mean, I think I still feel, I'm going to give you an answer. I don't know if it's true. Maybe Douglas Adams or Terry Pratchett comes closest because they have, they combine a few things that I really like, right? I mean, they're irreverent, which is good. They're hilarious, which is great. They tackle ideas, which is sort of really, really important. And they do, they tackle like really complicated subjects with a level of lightheartedness that I feel is both important to sort of deal with them as well as it, I don't know, there's a trend when serious ideas have to be discussed seriously. And I, I liked the fact that they kind of went against the grain. I don't know that they're foundational, but I definitely think about them quite a lot.</p><p><strong>Theo: </strong>Douglas Adams is foundational to me too. Foundational to Elon Musk as well, according to Walter Isaacson's new upcoming book.</p><p><strong>Rohit: </strong>Yeah. I mean, there are very few novels. So there, there's this thing, you know, like you talk about novel of ideas and there are novel of ideas, but the problem that I've always found is that novel of ideas are written as novel of ideas, making it turgid, which I don't think they should be. I think, I don't know. One of my beliefs is that books should be readable. And I think, I think it's one of the few things that I hold strongly to, you know, like quite often people say like, Oh no, no book should be difficult for you to struggle through them. And I kind of disagree with that. I feel like books should be readable.</p><p><strong>Theo: </strong>So rather than as a child, what would you say your current favorite books and blogs and podcasts and Twitter accounts and other information sources are?</p><p><strong>Rohit: </strong>I stopped actively tracking these maybe a few years ago, which is I've weirdly made it better with that caveat. I think I'll tell you what I regularly read, right? I mean, I look at arts and letters daily, every in the morning, I'm looking for new sources to find new places to read stuff. It's a bunch of Substacks that I read. I read Erik Hoel. I occasionally read ACX. I read Marginal Revolution, which kind of together, I think they give me roughly enough of the zeitgeist of whatever is going on in the world.</p><p>Books is sort of harder. There are a few books that I keep going back to. Most like Herman Hesse kind of, I go back to quite often Glass Bead Game specifically, because I really like it. And Hofstadter, whom I first read in college, I go back to quite a lot. I'm rereading his I'm a Strange Loop at the moment because I feel like, again, they have an interesting way of playing with ideas that I find is lacking. Fiction, I mean, I read, my friend published a sci-fi book recently that is still on my mind called Exadelic, which is, again, a little bit of a novel of ideas, and I really like it. And, you know, I think that's kind of done quite well. I'm trying to think sort of who else am I reading at the moment? I read, you said podcasts. I don't actually listen to too many podcasts. I read Tyler [Cowen]'s podcasts, transcripts, relatively often. And I listen to No Such Thing as a Fish, which if anybody knows, it's a podcast done by the, there's a, there was a TV show in the, in UK called QI, quite interesting, which was narrated by Stephen Fry, which was about facts and interesting facts that exist in the world. And they would talk about it and make jokes. Long running, long running TV show. He left a few years ago to giving to someone else called Sandi Toksvig. The researchers in that started a podcast discussing four facts that they love from the last seven days, roughly. They've been going for ages, like maybe nine years, say eight years. That's one that I listened to because it's funny and it's about facts. I think coming back to sort of the Adams-Pratchett theme, I can recommend it if you don't know.</p><h3>Developing Genius (12:36)</h3><p><strong>Theo: </strong>So your day job is a VC, correct?</p><p><strong>Rohit: </strong>It was until recently. I left a little while ago to try and figure out if I want to build something, which is relatively under wraps, but that's one of the things that I'm working on at the moment. I invest a little bit personally, but not, not too much to call myself a VC.</p><p><strong>Theo: </strong>Well, this year is like the year of the builder. I mean, I'm relatively recent to this part of Twitter, but would you say that the discourse around building and being a builder is like more now than it has been in the last decade or so?</p><p><strong>Rohit: </strong>Definitely. Yes. With maybe sort of two thoughts. I think last decade has been, there's been a crescendo slowly building up where building is seen as a sensible, good career move for a larger and larger number of people, both by sort of direct evidence of people having done so and done well and sort of indirect encouragement from people who basically tell them that this is how you should think about your life and spend your time. So put those two things together. There's been a slow buildup where you're kind of steeped in this thought process where if you are building, it's not seen as a weird thing to do all of a sudden. It's still at the margins, but that's definitely the case, which means that over time, there's been more and more people who are interested in creating something or building something. </p><p>The funny thing that I think about quite often is that when you consider all of the large, prestigious jobs that want to encourage people to join them, they often adopt some of this language. They'll say, "Oh, you'll get freedom to pursue your own thing," or "you'll be able to build stuff inside the organization." So clearly, it exists in the zeitgeist as a positive thing for people to have done. It kind of exists regardless of the fact that it's still not the default path for most people to take. </p><p>So put those things together. Yes, there's a larger number of builders probably now at the height of its ever been. I'm sure there were peaks before where everybody jumped on the SaaS train and tried to build it, or everybody jumped on the mobile train and tried to build it. But then the number of people who consider doing it, the bottom of the pyramid, if I can call it, was narrower in those times compared to today, when it seems like it's getting broader and broader, which, you know, you can argue there's a lot of sort of outcomes from that or eventualities, but it definitely seems like the way these things play out.</p><p><strong>Theo: </strong>In your sub stack, Strange Loop Cannon, you write a lot about building and different types of building, and different types of creative work in general. So yesterday or the day before, Avi Schiffman tweeted, "I would pay so much money for a single service that completely handled my basic needs. So I could focus." He listed laundry three times a day meal prep, personal trainer, cleaner, therapist. He said, "Turn my house into a monastery. This should be VC value add. Founders should treat themselves as athletes." So you've written a lot about grants as well. Again, From Medici to Thiel. So do you think that genius grants should go more in this direction of like summer camp for founders, or should it just be like an unconditional cash transfer?</p><p><strong>Rohit: </strong>I was having this conversation with somebody, I think yesterday, but they were asking if the Thiel grant was actually an investment, would it have had the same ROI, so to speak? My instinct was yes, but I've been thinking about it over the day and I'm not super sure about it anymore. The thing with these grants is that ultimately their heterogeneity is what actually gives them any kind of power at this point because they're a search function to try and figure out if there are people who are missing from the traditional modes of doing whatever, and can rescue them from that in order for them to go off and try something else. </p><p>The question that exists in my mind is like, I'm exceptionally suspicious of summer camp-ish things where, because those things exist in so far as you kind of say like either, we will teach you something. So there's a kind of curriculum or whatever essays or lectures or something in order to impart a piece of information. Or you say like, all of you guys kind of need to come together, stay in one place and learn from each other. The latter has more possibilities for success for some of these things than the former, because the former only works when you're working in highly legible fields where the outcome is measurable and you can push people in the right directions at the right time. </p><p>So I think an unconditional transfer is still probably the best way to follow through with anything just because the entire premise behind the selection process today is that there is something there that they will figure out and we just need to remove the obstacles in the way. If you say that there's possibly something there and they might figure out, but we will have to help them and remove the obstacle. That's an exponentially harder problem to solve. And I'm not entirely sure that those two things are quite equivalent.</p><p><strong>Theo: </strong>With the Thiel Fellowship, is a hundred thousand dollars enough in this day and age? That will get you like one year of middle-class living in San Francisco.</p><p><strong>Rohit: </strong>I mean, for his demographic, a hundred percent, right? Like, a hundred thousand dollars is a lot. It's easy to kind of pooh-pooh saying like, yeah, you know, if you live in the middle of SF, yeah, your rent would be four grand or whatever. But on the other hand, he's giving it to 18-year-olds who are primarily interested in doing something else with their time. Like, I don't know, schlep out like 30 minutes in any direction, choose your living conditions slightly better. I don't know. I feel like that is an artificial problem that you create. It's a little bit like if you're in New York and somebody says like, "Hey, you only get a hundred thousand." You're like, "A hundred thousand is not enough because I want to live in this part of Brooklyn or Upper East Side." But you don't need to. The money is a way to ease your problem of coming into this. If you want to go for the same level of prestige living, I realize this sounds pejorative, you're kind of selecting yourself out from it. I think back to even a hundred thousand dollars, which is supposed to cover your cost, give you enough to buy a laptop or something, and meet your basic needs for a year. It's still pretty high for that. I remember when I got a scholarship to go to college, my scholarship was about 500 bucks a month on top of room and board. Inflation is a thing, but it's not 10X a thing. This is fine. I think a hundred thousand dollars is actually quite generous. Don't you think so?</p><p><strong>Theo: </strong>In your article Rest, you talk about sabbaticals, how there have been many people throughout history who have taken sabbaticals off from their boring job to produce some extraordinary creative work, whether it's Newton discovering calculus or Einstein discovering the photoelectric effect. So what should the day of someone who's on sabbatical look like? Should they be actively pushing themselves constantly to work on their project or should they kind of just be resting and seeing if the work will come to them naturally or a combination on different days?</p><p><strong>Rohit: </strong>I think it's impossible to be prescriptive here. The entire point of a sabbatical is that different people will do it differently. And we don't know what works. If you take a six month sabbatical and chill out for five months, is that better or worse? It's impossible to measure because we don't really have enough counterfactuals. We have to rely on the individual to make that decision themselves. Occasionally poke them, have a chat conversation, but it's unnecessary to be prescriptive. The entire pressure that they might be going away from in the sabbatical period might be one where you're required to produce a large amount of output in a small period of time or under tight deadlines. That is the kind of yoke that you're running away from. Some people might already have a very clear idea and an agenda of something that they want to go towards in which case a sabbatical just becomes a sort of a cleaned out time where they can go behind that. Whereas a lot of people might also just generally have sort of, they don't know. This is the first time that they might have ever had time to actually sit back and read a book or think or relax.</p><p>I don't know about you, but when you talk to friends, the number of folks who say they do not have time to do X&#8212;for any value of X, work out, read, travel&#8212;is tremendously high. And some of these might be artificial constraints in the sense that they could have made it happen if they prioritized enough to go into top two or whatever. But the entire purpose is to take away that stack ranking necessity. And for you to be able to do what you think is most productive or useful or interesting or just restful in a short period of time. And maybe at the end of it, you go back to your old life. That's fine. You just end up rejuvenated, so to speak. Academics do this all the time. Sometimes it's for new projects. Sometimes a lot of those projects might not work out. Sometimes it's for rest.</p><p>Think about people who go into business school. A proportion of them go back to their old careers. A proportion of them&#8212;I knew at least a few people who probably had almost no real benefit, if I can call it that, from coming to business school. You spend a couple hundred thousand dollars, and you're going back to the life that you could have had without coming here. But they value the experience. They value what they did. They value the network in an intangible way, if not a tangible way. And that's good enough because it gave them a breathing space of a couple of years to think about a variety of things that they would not have been able to otherwise. I think that's sufficient. I don't think we need to be prescriptive about this.</p><h3>Investing (24:08)</h3><p><strong>Theo: </strong>On the topic of business and specifically finance and investing, in your recent article, The Big Sort, you talked about how individual investors will no longer be able to get rich by simply buying and holding big tech companies. So what do you think would be the best thing for individual investors to do over the next few years?</p><p><strong>Rohit: </strong>Oh, man. I wish I had a good answer for this. This is something I'm still working on for myself. I still think the time of value investing is coming back into vogue. And in some ways, what that means is that you can still probably get rich over the long term by doing the sensible things: hold a diversified ETF, some in bonds and some in stocks. There's a well laid out path that you can mix and match in any combination and roughly will take you in the same place and have mid-high single digit IRR returns over a large enough period of time, which is good enough for everyone. That's how life used to work. My point there was that there was a world where you could buy Apple and expect it to 10X. And that easy trajectory, because you have a path laid out in the future, does not really seem to exist anymore. There aren't that many, call it, hundred billion dollar companies that I can easily see becoming a trillion dollar company or $500 billion companies that I can see becoming a 3 trillion dollar company.</p><p><strong>Theo: </strong>Tesla?</p><p><strong>Rohit: </strong>Tesla was already pretty large. First of all, they're incredibly overvalued. Their margins are under attack. Their sales are growing, but Tesla is a 20-year-old company that already has had a success story and every single other automaker is gunning for it. Maybe it'll still succeed, but I don't know. It doesn't seem as easy as it was to buy Google in the early 2010s or Facebook in the mid 2015s.</p><p><strong>Theo: </strong>Charlie Munger said something about Tesla to the effect of, it's a wonderful company and Elon Musk is a genius, but people think that he can just cure cancer. And the stock seems wildly overpriced. Keep in mind, this was years ago that he said this. And since then it's exploded. And he said something like, I would never bet on it. I would never buy Tesla stock. I sure as hell would never short it.</p><p><strong>Rohit: </strong>I don't short stocks anyway, because unless you're a professional, it takes too much time, effort, and energy to be able to short it properly. You can lose your shirt doing that. One of my first really stupid trading mistakes was to short a currency pair. I forget which one. This was back in college and I lost a huge sum of money trying to do it, more than I had made in the last two months of trading. Way more. Since then, I've been much more careful. I thought I knew the theory and I did, but like practical experience of seeing it happen where like in a few minutes you kind of get wiped out as something new. And since then I've been much more careful.</p><p>Investing is one of those things where your job is not to maximize your returns necessarily. Your job is to try and make sure that you get enough returns according to the risk tolerance that you have over whatever time period that exists. There's always going to be companies, stocks, investments, themes, bonds, currency pairs, commodities that slip through the radar because you don't know exactly how to do it. And that will cause you to&#8212;"lose money" is not the right word&#8212;to have better trade years elsewhere. Your job is to say, I know these things relatively well, or I don't care about these things at all. And within this universe, I want to try and maximize considering the amount of time effort that I'm putting in, how much return am I trying to get? You can't solve the world. You can't look at large caps, small cap, private, public, fixed income, global stocks for that matter, macro moves, currencies, commodities. There's too much. You can also have all of these things simultaneously. The best you can do is to say, I'm a specialist in this because I understand it a little bit better. Like the circle of competence that Buffett and Munger talk about. And within the circle of competence, you can say like, oh, okay, here, I know some bets I want to make. I mean, Munger having said that he invested in, they invested in BYD, right? And that did pretty well, despite China, despite the macro risks, et cetera.</p><p>So I don't know, like I feel for an individual investor, the question is almost going to be like, you want to put a bit of money in Tesla, that's fine. I don't know what the Tesla's market cap is today, but is it going to be a $5 trillion company? I'm not sure. They've missed a huge number of milestones that they've set up for themselves, which were ridiculous in the first place. They're clearly the leader in EVs here, but on the other hand, I know Renault, Ford, and Chevy work really well. So there's enough competition now across the price points that it is not hard to imagine someone else being able to build a commanding lead and presence in what is essentially not the highest margin business in the world. So you have to bet on the fact that they will be able to continually maintain the lead in the automotive sector, continually making roads into the new sale sort of pie chart, however you want to say, they'll be able to make a ton of money off the superchargers. They will do autonomous ride hailing and like make a ton of money. I don't know. There's too many ifs there. And all of the assumptions that I've seen people lay out, including the dumb one from ARK, it just does not pass the smell test. I'm like, that is just way too many assumptions. I mean, at that number of assumptions, we can talk about 15 other companies that could also go from 300 to sort of whatever, 3 trillion.</p><p><strong>Theo: </strong>So there&#8217;s stocks and then there&#8217;s Bitcoin. Bitcoin, despite falling off significantly in the last couple of years, is actually up 55% year to date, which is more than all but 20 companies in the S&amp;P 500. Some people, most notably Michael Saylor, who's the CEO of MicroStrategy, are still very bullish on Bitcoin because they think that in the near future, it will become the hedge against inflation or it will become much easier for individuals to use as currency. So what do you think about Bitcoin in 2023?</p><p><strong>Rohit: </strong>Every narrative about Bitcoin is hopelessly confused. If you want to buy it and hold it, that's fine by me. But it's not been a hedge against inflation, which we saw. It's not used as a currency. It might be at some point, like people say, once with Lightning Network, it'll end up being utilized. Like okay, I don&#8217;t know, show me. I want to be able to pay somebody with Bitcoin easily and see that happening the world over. Why doesn't it happen? Because as an asset, its price increases 55% a year today. That's not a currency. If I needed to send you X value, I wouldn't send the value in Apple stock because it's volatile and it can move around. It's not easy to do. I would do it in dollars, which is what a currency is. A lot of the conversations kind of elide the differences between what is a currency, what purpose does it serve versus what's an asset and what purpose does it serve. They try to mix these things together in the hope that if you mix them enough, something new and interesting will come out. I haven't seen it.</p><p>Clearly, Bitcoin has captured enough market share in people's minds. It's a Schelling point sufficiently that there is enough money that is roughly floating in and around and sort of through it. So overall for the people who purchased at the right time, it's good, but I'm not entirely sure that, A, I don't hold any, and I don't plan on buying any at any point, at least sort of as of now, and B, I'm not entirely sure what I would be buying if I'm buying. I don't have a point of view as to what is the benefit to me in my portfolio of holding Bitcoin. There is an asset allocation answer to say if you model it out, it's uncorrelated, but that just turns out to be not true. It's fairly well correlated with high tech, or at least it was until very recently. I haven't looked at it in the last few months. </p><p>I can make no fundamental case for why it is useful beyond the fact that a lot of other people think it's useful. That feels like a fairly flimsy foundation to stand on. Maybe it's efficient. Maybe it ends up continuing to exist as a digital gold equivalent that a lot of people buy and hold, but I don't know why I would do it. There's no utility here. Even gold has some utility, even though I disagree with a large part of why that's useful as well. Maybe once it exists for 200 years, it becomes like gold. And you're all like, "Oh yeah, I mean, Bitcoin, you're going to have to have that because it exhibits a deep enough liquidity in the markets. The holders are diversified enough that you don't need to worry about it." It exhibits a lot of those qualities that make me more comfortable in buying into it as a commodity, but I don't think it's there yet.</p><h3>Contra AI Doom (34:27)</h3><p><strong>Theo: </strong>On the topic of AI, you wrote this article, "Artificial General Intelligence and How Much to Worry About It: presenting the strange equation, the AI analog of the Drake equation&#8221;, which I think is your magnum opus and the best argument against doomers on Twitter. But Eliezer Yudkowsky did not think so. When he saw it, he reacted dismissively by saying, quote unquote, "sigh, good old multiple stage fallacy. Did you really just assign an only 80% probability to super intelligence being possible?" So why do you think he reacted so dismissively? And if you had the format of longer than a tweet to respond to him, what would you say?</p><p><strong>Rohit: </strong>Eliezer has written in the past about this thing called a multiple stage fallacy, which effectively means that if you break anything down into a sufficient number of parts and assign each of them a less than a hundred percent probability, that the multiplier multiplying all of those together will get you a much smaller number. I mean, take any event X, break it down into 10 components, each component, you assign a 90% probability of happening, then, you know, 0.9 raised to the power of 10 will be much less than 0.9 as it is, which is true. But to me, I never understood why that's a fallacy in the first place. I mean, it's a fact that things do have multiple stages.</p><p>You can argue whether there needs to be more or fewer stages. If you think stages are more correlated with each other, then you should increase their combined probability. Some can be as high as a hundred, as I wrote in the article. So I think he's fundamentally mistaken about this as he is about other things. Just because he calls it a multiple stage fallacy does not make it a fallacy. Think about any engineering problem that you would solve if you're building a wind turbine, or if you're building jet engines, if you're doing software for that matter, you do have probabilities accumulating over multiple stages in order for you to get to whatever the end result is likely to be. </p><p>Now, I think part of the issue is that Eliezer has a particular form of doom that he's incredibly invested in for logical reasons, starting from his assumptions. Because his assumption is that a sufficiently optimizing, more powerful AI will eventually result in FOOM and basically all of us getting killed by some version of diamondoid bacteria. Once that is your core assumption, then obviously you dislike arguments saying that in order to get there, you need to solve these three, four things. And I don't see why. If he wants to dismiss it, that's fine by me, but he's wrong. That's no question about it.</p><p><strong>Theo: </strong>This is similar to the arguments George Hotz brought up.</p><p><strong>Rohit: </strong>I haven't listened to it because it was too long and life's too short. But at the same time, if it is great. To me, it's like, guys, listen, you had a theoretical argument starting about whatever, 15 years ago saying sufficient optimization in this particular format will lead to doom. It will lead to the emergence of some version of sentience or at the very least goal-oriented kind of independent action. None of it just happened. So, I mean, if none of that has happened and the closest you can do is to point to GPT-4 and say, "no, no, no, there's a shoggoth inside it." Surely you should be revisiting your assumptions. It's a question of iterations and recognizing when something is wrong. It's not about simply optimizing. It's not a case of 4 might not have it, but 5 will, or 5 might not have it, but 6 will have it. This is moving the goalposts. I feel like you need to break it down a little bit and actually look at the components rather than thinking of it as a mathematical theorem that you have to prove once and for all. I think that's just a category error. It's thinking about it wrongly. </p><p><strong>Theo: </strong>I think Eliezer doesn't see it that way. I think he sees it more as a convergent outcome. The laws of the universe favor more intelligent creatures that are better at process optimization to defeat and destroy and replace less intelligent creatures. He always likes to say, you can't predict the path. You can predict the outcome. This was basically the crux of the George Hotz versus Eliezer debate. Eliezer was saying this will happen, and Hotz was insisting that you can't just skip over the implementation details. That is critical to how it will end up happening.</p><p><strong>Rohit: </strong>I disagree with both the premise and I agree with Hotz that implementation matters tremendously. I think they're both true. You can't just say the universe prefers intelligent creatures. That's as close to a theistic belief as it gets. Look around, that's not true. Humans are the one data point that he has to support this particular theorem. That's it. It's not exactly like there are thousands of data points that exist about how intelligence is preferred and therefore we can actually understand it. Even in this one data point instance, humans did not optimize just for intelligence. We optimized for intelligence insofar as it is helpful in survival. It's not a direct correlation of becoming more and more intelligent over time in order to defeat everybody and everything around us. That's just not how it actually worked. </p><p>Even if that's how it worked, there is no reason to suppose that the same optimization that happened in the natural world will be the same process that happens when we train algorithms intentionally to do certain things. You can believe it, you can claim it, but it doesn't make it true. </p><p><strong>Theo: </strong>Eliezer would say, it doesn't matter. It's instrumentally convergent.</p><p><strong>Rohit: </strong>I don't know what that means. Instrumentally convergent is one of those phrases that gets thrown around as if it's a trump card, but it's undefined in any meaningful sense. It's the argument against the fact that the moral arc of the universe tends to us in a particular direction. Again, you can believe it, but do we know that? I don't think so. </p><p>The most powerful AI that we have ever created so far, call it GPT-4 in the language model category, do we feel like it's instrumentally convergent? Look at its behavior. If anything, it's too human. It refuses to answer a bunch of questions. It acts like it has a moral high ground, which is kind of frustrating. </p><p>As a supposition, I'm totally for it. You should believe whatever you want to believe. I'm for plurality of belief in that particular instance. But it doesn't mean it's true. As I said, all of these are based on a bunch of axioms and then you're building on top of the axioms. All the hard work is being done by the axioms. If you believe in instrumental convergence, and if you believe that sufficient optimization will create something of sufficient intelligence, then almost by definition, you've guaranteed the end outcome that you will have a super intelligence that you cannot control. It's a simple logic that once you accept the axioms, the outcome comes through. I'm saying that there is no reason to accept those axioms as true. And considering what we have seen so far, in fact, every piece of evidence from our efforts shows that the axiom is not true. </p><p>The closest it comes to is to say that Sydney got snippy with some humans as an LLM, or that we do see goal-seeking algorithms trying to find shortcuts in order to achieve their goals. There are these very small things that we have been able to find. So what? There's no perfect software that we'll get right on the first try ever. I don't see why we would expect this one to be different. </p><p><strong>Theo: </strong>But what if, if you don't get it right on the first try, everybody dies?</p><p><strong>Rohit: </strong>But we haven't died. That only works if you accept the premise that if you don't get it right, it'll recursively self-improve and go FOOM. I think the entire premise behind writing the strange equation was exactly to lay that out. If you don't think it's going to be fast, you don't need to worry because you can see it, you can stop it. You can intervene. You can do whatever you want to do in the middle. You can nuke data centers like Eliezer wanted to. If you don't believe it's going to recursively self-improve, it doesn't matter. You can look at it, see its behavior, and wait until the next one is trained to go after it. But unless you accept all of the premises, then even if one premise does not hold, you don't need to worry about this FOOM thing at all. And if there's no worry about FOOM, then we don't need to worry at all about the fact that we accidentally made something and it'll kill us all. That's not how anything works. We have to create something of sufficient power, knowledge, intelligence, and utility that it is able to intentionally or unintentionally intercept every part of the world economy or economic system or biological system. The number of things that it needs to be able to do is so astronomically high that just assuming it away saying, "Oh, superintelligence will figure it out," is an enormous hand wave.</p><p>Magnus Carlsen is way smarter than me in chess as well as other things, but he won't be able to persuade me to do something dumb or commit suicide or figure out how to dupe a gene lab somewhere to create COVID plus plus plus. That's not how anything works.</p><p><strong>Theo: </strong>I&#8217;ve talked about this with Brian Caplan before, where he told me he had lunch with Eliezer Yudkowsky and Eliezer was trying to convince him that a superintelligence would be able to persuade him to kill himself. Brian said, "I don't think that there is a single combination of words in the English language that a superintelligence would be able to say to make me kill myself." I mean, is there? It sure doesn&#8217;t seem like it.</p><p><strong>Rohit: </strong>If you wanted to do it, you would have to do something like, I don't know, take someone else hostage or you have to do a bunch of other things in order for those words to have an effect, right? Like if you do not kill yourself, I will end up nuking the Eastern seaboard. Now we are kind of getting somewhere, but you better have the ability to back it up because otherwise, what are we talking about here?</p><h3>The Future of AI (46:26)</h3><p><strong>Theo: </strong>So, do you think there will be some kind of true human-level AI by the end of the decade or maybe the next 20 years?</p><p><strong>Rohit: </strong>I find timelines hard, because I don&#8217;t think they&#8217;re particularly useful, but I think we'll get there. I wrote an essay called "Building God" where I pointed towards a way whereby we might have human-level intelligence&#8212;or intelligence is a bad word, we&#8217;ll say capability&#8212;in an AI system that will enable it to do a bunch of the things that humans can do. Whether having that is equivalent to it developing a personality, sentience, consciousness, I am unclear because I don't see how that just automatically emerges if you're using the same parameters that we are using today or the same processes and methods that we are using today. But capability wise, I think we should be driving pretty close to getting there, even if purely through better scale, better optimization, better memory, better ways of doing inference, better ways of doing tuning, better ways of self-learning recursively, self-improving, etc. I think we might have a good shot.</p><p><strong>Theo: </strong>You clearly like Douglas Hofstadter, you mentioned him earlier. You named both this blog post and your blog itself after him. So have you seen the interview fairly recently where he said pretty much like, "I changed my mind about neural networks. I'm now freaked out about AI, either about AI killing everyone or just replacing everyone and making us as cockroaches to humans." So what do you think about this sudden flip?</p><p><strong>Rohit: </strong>I think it's an interesting one. I appreciated that quite a bit because it's rare to have people who change their minds in the first place. I think part of the reason for his flip comes because the idea of strange loop, the recursive loops that enable us to create a self from within, is what he's transposing from his particular theories, as well as the old cybernetic theories, to today's neural networks. He's assuming that the autoregressive networks with some level of back propagation is equivalent in some way to creating that strange loop inside it. I don't know if he's right or wrong, quite frankly. He might be right. If he is right that that is the way that consciousness is supposed to emerge, then in a way, that's great. Because we will have evidence of it pretty soon, if not already. We don't have evidence of it today at all, that there is a consciousness that has emerged in any meaningful sense in any of the GPT-4 variants that we have seen so far. </p><p>I understand where he's coming from. I appreciate that it's logically consistent with his idea of what a strange loop is. I disagree that because it comes about, it actually predicts anything else. The two leaps that are being made there, leap number one is the fact that the recursive nature of existing neural networks means that it is able to develop something akin to a consciousness, is leap number one. Leap number two is to say that once you develop some form of consciousness, plus have heightened abilities, then there will be a normal kind of warring. The Yudkowsky argument about more intelligent beings will actually supplant less intelligent beings. And therefore, they will take over from humanity. I'm not sure I buy either of these arguments. The latter argument has more validity to it than the former argument, in my mind at least.</p><p>Because tomorrow, if for example, let's assume an alien civilization visits Earth. They're much smarter and more capable than us. Would I think that the percentage probability of human extinction goes higher or lower than it is today? It's higher. That's logical sense. But that's assuming a bunch of things about their intelligence, their capabilities, their ability to execute, et cetera. There's a bunch of questions in there that makes that even a question. I mean, there's countless movies that have kind of gone off that exact premise. But then you do need to break down the term "intelligence" to combine all of these different things. The type of intelligence you see in GPT-4 is different from the type of intelligence you see in AlphaGo, which is different from the type of intelligence you see in a cat or a tiger, which is different from the type of intelligence we see in us, to dolphins, whatever. These are not always commensurable in the same way. So to place them on a scale and extrapolate them is probably a pretty bad way to make any kind of decision. But yeah, from Hofstadter's interview, at least those are the two leaps that he is implicitly making that I'm fairly uncomfortable with.</p><p><strong>Theo: </strong>Do you think the world will look radically different by 2030?</p><p><strong>Rohit: </strong>No, I don't think so. Not radically different. The world is inertia. It's slow to change. Most of the world still looks the same way it looked in the 1990s. So I'd probably bet on the base rate on that one. The world for us might look radically different, though.</p><p><strong>Theo: </strong>What does that mean?</p><p><strong>Rohit: </strong>If you're hyper-networked, tech-savvy, living in a large city, then yeah, your lives can look radically different. But does your life look radically different in 2024 than it does in 2014? I don't have a good answer to that question. In some ways, intuitively, yes. I mean, you have faster internet, cell phones are better, social media is there. You can work from home. In other ways, it's like, eh. I mean, cars are better. You still drive. Trains are better. Flights are better. You still fly. You go to the same vacation spots. You eat the same kind of food. You drink the same Mai Tai.</p><p><strong>Theo: </strong>Flights might be the unprincipled exception there. I think they've probably gotten worse in the last 10 or 50 years. They've gotten cheaper and coverage has gotten better, but the experience itself.</p><p><strong>Rohit: </strong>I think the planes are better. Having been on a couple of new planes, they're better. But yeah, the experience is much worse, because&#8212;we should be thankful that airlines operate at such low margins. And the fact that there's a heightened demand, which doesn't help us as passengers. Will we all be living like Jetsons? I doubt it.</p><p><strong>Theo: </strong>What about by 2050?</p><p><strong>Rohit: </strong>I hope so, man. The one thing that I think will make a radical delta is the introduction of robots in regular life. I am optimistic that that will happen. And by 2050, hopefully, it should start percolating enough that just like electric vehicles had a slow rollout through the world, we probably, hopefully, will have robots that will have a slow rollout through the world during this decade, as well as the next. And by 2050, we should be in a decent-ish place. I'm highly hopeful for that. Because I feel like a lot of the things that you would want them to do, they can actually start doing proviso cost requirements. But that's a mass market production question where it can come down over a period of time. </p><p><strong>Theo: </strong>Do you think Tesla is poised to win humanoid robotics? Or is it just way too early to tell?</p><p><strong>Rohit: </strong>It's way too early to tell. And I wouldn't particularly bet on them only because they have a day job. And it's not building humanoid robots. I feel like it's more likely going to be a startup that came out of nowhere else that we haven't seen yet that probably is likely to win them up. I mean, in some ways, it doesn't need to be humanoid robots either. Like if I think about my house and things that I want them to do, if I want them to do the work around my house, whatever, laundry, dishwasher, all that kind of stuff, does it need to be humanoid? Yeah, it'll be nice. But just because it gives it more mobility through the spaces and maybe less freaky. But I feel like we should be able to get there in the next 10 years.</p><p><strong>Theo: </strong>Do you think AI will be able to get to real intelligence, quote unquote, whatever that means, through scaling? Or do you think it needs some kind of fundamental breakthrough?</p><p><strong>Rohit: </strong>I think it probably needs some kind of fundamental breakthrough. I feel like already we are reaching the point where scaling helps. But the way to make truly useful things out of it is actually to combine multiple experts together in order to do specific tasks far better than the largest model can effectively do. I was talking to someone about this yesterday that I used to use GPT-4 for everything, but for specific tasks, if I wanted to go really deep, then it's starting to get better to train particular models. I think my bet is on not one model to rule them all in the future, but more likely multiple models working together to actually do a range of tasks properly. It's a supposition, though. It's not exactly a clear, I don't have empirical evidence to prove that it is the case beyond the fact that people say GPT-4 is actually a MOE in the first place. But we'll see how that goes, actually.</p><p><strong>Theo: </strong>Do you think AGI and eventually super intelligence will, in some sense, obsolete human labor, human creativity, human creation in general?</p><p><strong>Rohit: </strong>Leaving aside super intelligence part of it, hopefully. I mean, labor for sure. The creativity and our urge to create is an innate desire. We don't create necessarily because we're the only ones who can do it. We create because we feel like we can create something amazing for us and others. So labor, I hope so. Because what's the point of progress if we still have to continue doing the grudge work that we had to do before? None of us like plowing our own fields or planting our own corn or building our own furniture or houses, for that matter. I don't see why clicking an Excel sheet is the exception to this particular progress curve. I'd rather it catch everywhere. </p><p><strong>Theo: </strong>But if you have an AI that can, let's say, make a painting better than a human painter, would the human painter still want to learn all the techniques behind painting and spend time painting, coming up with new strategies?</p><p><strong>Rohit: </strong>It's a different arms race. Human painters compete with other human painters, even though they're not as good as them. And not all of it is competition in the first place. Some of it they create just because they like it. I'm sure you have friends, I have friends who love painting, not because they're going to be the world's best painters, but because they like painting. They like creating something by themselves. It's the IKEA furniture effect. Sometimes you like it more because you made it. So I don't particularly see that going away.</p><p>And if AI becomes able to do paintings in a way that can surpass the artists of our day, then what that means is the artists of our day will have to find new methods, mediums of creation that the AI cannot easily match. I mean, to give an instance, if AI is able to create digital art, or art in general, that is much better than what people can create as of now, then perhaps the painters who are primarily focused on creating, I don't know, neoclassical sort of paintings are probably going to be the ones that are out of a job. At the same time, you might want to go towards more abstract paintings. You want to create not just paintings, but actual works of art with 3D layering, like use multiple different types of materials. There are ways to kind of expand the horizon here. And I would say that would be really good, because this is how humanity evolves. </p><p>I mean, we don't all sit around painting the same versions of Virgin Mary and the Christ anymore, because it used to be that the church was the primary patron, and that's the kind of painting that they wanted. And as an individual, it doesn't matter if you're Da Vinci or Michelangelo, those were the paintings that were given to you for you to paint, and then you had to put your individuality stamp on top of them. Whereas now you're able to kind of paint your own thing. Will that go away? I'm optimistic, but this is a hypothetical.</p><p><strong>Theo: </strong>Meaning you're optimistic that it won't go away and people will continue painting?</p><p><strong>Rohit: </strong>I'm optimistic that the fact that a computer can do it better is not sufficient reason for the entire fields to vanish, but rather for the field to change.</p><p><strong>Theo: </strong>Yeah, that makes sense. What do you think about open source for AI? Do you think we'll have an open source model that can beat GPT-4 by the end of the year?</p><p><strong>Rohit: </strong>Yeah. Later, I think so. Not super high confidence, but I think so. I think the biggest boost for a lot of the research so far has been the fact that open source models, code, and discussion actually exist and is incredibly vibrant. And this stands for advances in capabilities, building larger models, smaller models, more capable models, specialized models, as well as sort of figuring out what type of training is actually better. Do you want to do pre-training versus fine-tuning versus LoRa versus QLoRa? There's a lot of questions here. And the only way to answer it is by people to actually get in and explore. And I think open source is a fantastic way of being able to do that.</p><p>I am a reasonably big fan of open source software as well. I think it makes sense that in a situation like this, you'd want to have as much of the code, weights, whatever open source as possible so that we can play with it and understand it. Regardless of where you stand in the spectrum, you should want it because ultimately, you should want more people trying to understand this thing which everybody seems to claim is not understood as of today. Because that's the only way that you can actually kind of get to grips with it and build better solutions.</p><p><strong>Theo: </strong>Do you think that brain-computer interfaces will allow humans to eventually compete with sufficiently advanced AIs?</p><p><strong>Rohit: </strong>I don't know. I think I don't know at all about what it would take to create a strong enough, good enough brain-computer interface, which makes me confused as to what an answer for that might look like. I mean, there's a theoretical answer saying yes. But practically, I don't think the brain is like an easy computer in the sense that you figure out how to merge it. I think it's a fairly complicated piece of kit. And I'm not entirely sure what a BCI might look like. That is beyond a very bare minimum of helping people with disabilities kind of get back to normal or make minor adjustments to their lives or have them see colors or something, as opposed to sort of directly interfacing in the sense that you can have complex, high-bandwidth discussions and conversations that both boost you as well as a computer and work together.</p><p><strong>Theo: </strong>Yeah, makes sense.</p><p><strong>Theo: </strong>All right, so I guess that wraps it up. So thank you so much, Rohit Krishnan, for coming on the podcast.</p><p><strong>Rohit: </strong>My pleasure. Thank you for having me, and sorry I have to dash.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Rohit Krishnan. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter <a href="https://twitter.com/theojaffee">@theojaffee</a>, and subscribe to my Substack at theojaffee.com. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[#3: Zvi Mowshowitz]]></title><description><![CDATA[Rationality, Writing, Public Policy, and AI]]></description><link>https://www.theojaffee.com/p/3-zvi-mowshowitz</link><guid isPermaLink="false">https://www.theojaffee.com/p/3-zvi-mowshowitz</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Fri, 18 Aug 2023 22:09:12 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/136193565/58f77ebe1b743f13261147c5ae418d51.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3>Intro (0:00)</h3><p><strong>Theo: </strong>Welcome to episode 3 of the Theo Jaffee podcast. Today, I had the pleasure of speaking with Zvi Mowshowitz. Zvi was one of the most successful players of all time of the tabletop card game <em>Magic: The Gathering</em>, and later became a professional trader, market maker, business owner, and one of my favorite writers and thinkers on the topics of AI, rationality, and public policy, all topics we discuss at length in this episode. I encourage you all to follow him on Twitter <a href="https://twitter.com/TheZvi">@TheZvi</a>, and subscribe to his blog, Don't Worry About the Vase, at thezvi.substack.com. This is the Theo Jaffee podcast, thank you for listening, and now here's Zvi Mowshowitz.</p><h3>Zvi&#8217;s Background (0:42)</h3><p><strong>Theo: </strong>Hi everyone, welcome back to Episode 3 of the Theo Jaffee podcast, and I'm here with The Zvi, Zvi Mowshowitz. So I guess we'll start with a question I had from the very beginning, which is, how exactly did you go from <em>Magic: The Gathering</em> player to a rationalist blogger and one of my favorite AI writers in the entire space? </p><p><strong>Zvi:</strong> Along a winding path, I would say, not with a grand plan. I was in the rationalist spaces basically from the beginning, really early while I was still playing. That was just an interesting hobby of mine. There's a lot of overlap, philosophically, and in terms of the type of people who go around such spaces. So I moved from playing <em>Magic</em> and writing about <em>Magic</em>, to making games, to being involved in trading, to starting businesses. But the entire time, I was worried about AI back in 2007, that style of concern. But I always thought, let other people handle the writing, let other people handle the big questions. My job is to maybe help the community, be a community, maybe help people think about some of the questions better in some ways. But I'm not the communicator here, I focus elsewhere. </p><p>But then after a while, I started writing, got into writing things for my own edification, and that turned into writing about COVID. And then when COVID died down, I jokingly posted on Twitter, &#8220;weekly COVID posts to be replaced by weekly AI posts&#8221;, because things were just going completely nuts. And then rather than everybody going, ha ha, yeah, funny one, I'm kind of busy too. Everyone responded, yeah, that's great, let's do that. And so I said, well, damn it, I guess I'm doing this. And what are we on, number 24 now? Yeah, we're on 24. And 25 exists, it's already pretty long. I'll keep updating them day to day for the foreseeable future. </p><p><strong>Theo:</strong> Yeah, so for those who don't know, Zvi has an awesome weekly newsletter about AI. It's even got like sections that don't really change from week to week. You know, we got Large Language Models Offer Mundane Utility, Large Language Models Don't Offer Mundane Utility, People Are Worried About AI, People Aren't Worried About AI. But going back a little bit, how did you get into <em>Magic</em> in the first place? </p><p><strong>Zvi:</strong> So I got in the way a lot of people got in back then, which was it was 93, or it grew in 94, I don't remember exactly. But I was passing the halls of my high school, and I saw people playing with these cards on the ground, those cards, people playing the student union. I said, what are those cards? And they hand me a rule book, I go off to camp, and then there's like people playing with these cards. And I asked what these cards, and they explained to me what these cards are. And it just looked like a lot of fun. And I convinced someone at camp to sell me 10 mountains and 10 red cards. And for like a couple of hours, and I would split that with my opponents, and we'd play with that deck. And then when I got home, me and my best friend bought starters, and we're off to the races.</p><p><strong>Theo:</strong> Oh, so when you were growing up, did you read any of the kind of like classic genres that get people into rationality, like sci-fi, like fantasy, like science? </p><p><strong>Zvi:</strong> I mean, I read plenty of sci-fi, plenty of science, you know, I read a lot of ad mobs, that kind of thing. I read some of that fantasy, I played a lot of RPGs, stuff like that. But I would say I was brought in directly by the old school FOOM debates between Hanson and Yudkowsky, and we went from there. But I was already thinking about the same kinds of things that rationalists talk about. I was just using my own language and using my own modes of models and thinking, because that's just how I think about the world naturally.</p><p><strong>Theo:</strong> Did you think about AI before you were introduced to it by Yudkowsky? </p><p><strong>Zvi:</strong> No, it was not on my radar screen. It did not occur to me that this was coming, or think about the considerations. But the considerations seemed pretty obvious, and I was won over almost immediately in terms of that point of view. </p><p><strong>Theo:</strong> Yeah, it makes sense. And going back to, I guess, your bridge between <em>Magic</em> player and writer was trading and starting businesses. So can you go into a little more detail on that? </p><p><strong>Zvi:</strong> Also, I would say that my original writing itself was <em>Magic</em>. During <em>Magic</em>, I would write about <em>Magic</em>, right? Because I was one of the people who was developing new ideas, making new thoughts, and I would write them up, write up my new decks, write up my new ideas. And that also just sort of happened by accident, because back then we had a website called The Dojo, where people would just publicly post their decks, and their tournament reports, and their analyses. And then the founder of The Dojo was accidentally included on an email list of my <em>Magic</em> team, and saw some of my posts, and was like, can I post this? I guess so, sure. And he posted it, and it got a positive response, and I was like, can I write more posts? And this was at a time when I was getting seized in logic and rhetoric, and so I was being told by the system that I couldn't write. But in the real world, people seemed to like my writing, and so I wrote more, and the way you become a good writer is you write, you write, you write, and you write a lot more. So I wrote, wrote, wrote, wrote, wrote, and I got better at writing. And then, in terms of the trading, then led to me wanting to write about various things that weren't <em>Magic</em>, and I started a blog just for a small number of people, to share my ideas. Then, one thing led to another, and when COVID happened, I realized I needed to write about it to understand it. I wouldn't be able to work through it and understand what I think if I didn't write about it.</p><p><strong>Theo: </strong>This is a pretty similar story to a lot of the bloggers, podcasters, and similar people who I talk to. They say &#8220;yeah, I just kinda started writing just for fun, and then, long story short, it blew up.&#8221; I think that&#8217;s some pretty remarkable consistency.</p><h3>Rationalism (6:28)</h3><p><strong>Theo: </strong>So, about rationalism, how would you explain rationalism to a total layperson? How would you explain rationalism to someone's parents, someone's grandparents?</p><p><strong>Zvi: </strong>No one asks me that question quite that way, but rationalism is the art of figuring out what is happening, thinking logically about what is going on, how to model the world, and then making good decisions on that basis that lead to things that you want. Rationality is just Bayes&#8217; rule. It's thinking clearly. It's just the way that the universe actually works. You should engage with it and ask how it works and how you figure out what will cause what to happen.</p><p><strong>Theo: </strong>But there are some particulars to the specific, LessWrong style brand of rationalism, like Bayes&#8217; theorem. Some people don't emphasize Bayes&#8217; theorem as much, like decision theory and other things like that. So why do you think those play into your brand of rationalism more?</p><p><strong>Zvi: </strong>I think that when you're trying to do this kind of systematic modeling of the world, these come up as natural questions to be asking. How do I make my decisions? On what basis do I decide what to do? And then if you're no longer working on instinct, you have to develop a theory as to how you do that. If you're trying to figure out probabilities, you quickly run into Bayes&#8217; theorem. You're trying to figure out what is likely to be true and likely to be not true. You run into Bayes&#8217; theorem. And Bayes&#8217; theorem is just the nature of reality. It's the nature of probability. There's no escaping it. You would reinvent it if Bayes hadn't come up with it. And then you work from there. </p><p>So the way I see it is most people, instead of going down the rationality track, go in kind of an intuitive track. They gather data from the world. They update similarly to the way a language model would update. They notice the vibe. They notice the implications. They notice the associations. They update their associations and vibes and intuitions in that direction. They get feedback from the world. They try stuff. They see what works, doesn't work. They use empiricism. And this works pretty well for most people in normal practical circumstances. And trying to reason about the world carefully would not immediately yield better results. It would yield confusion, right? You'd be throwing out all of this data that they don't know how to use properly anymore, all of these accumulated adaptations and adjustments. </p><p>And so to start down this road, you either have to be inherently much more curious about trying to figure things out on that level and just doing it for its own sake, or you have to have that combined with some combination of, the current systems aren't working for me. My attempts to do all of this intuitively doesn't work out because the systems are designed for someone who has different intuitions and different interests and different modes of play than I do, and who has different opportunities and experiences. And if you fall behind on that kind of training track, right? Because the way you train a human normally is similar to the way that you train a model in that it's designed to introduce you to various forms of updating various forms of data when you're ready to handle them. And so if you fall behind, it's similar to what happens in a foreign language class where suddenly everyone starts talking gibberish and you can't follow it. And by the time you've tried to get a handle on something, you're even further behind than you were before. And then the problem just snowballs for you. And then just thinking about things from first principles offers you an alternative way out.</p><p><strong>Theo: </strong>Interesting. Do you think rationalism as such, LessWrong style rationalism is applicable to all areas of life? Like for example, dating. I think dating in particular is something where people typically don't think of it in terms of rationalism and theories. And maybe on average, the people who do are worse off than the people who don't just based off of general observation. So do you think there's some room for intuition in areas like that?</p><p><strong>Zvi: </strong>So always be careful about correlation and causation, obviously, and always be careful about people who take things too far and who kind of morph into a straw Vulcan or something like that. I must use logic to figure out all these things. I must ignore my feelings. That's a huge mistake. That does not actually lead to success. Good rationalists understand that you do not throw out all of this very good training data, all of these very good adjustments and intuitions in ways that your brain is designed to handle exactly these types of problems. To ignore all of that would be a huge mistake. But if you ignore the ability to think carefully and rationally about your situation, that's also a huge mistake. Like people make huge mistakes in dating and love and relationships because they just never stop to think about what would be a good or bad idea, what would be the consequences of various different strategies, what has higher and lower expected value. But if that's all you think about, that's the only way you deal with things. If you're not able to live in the moment, it's going to go badly for you. You ideally want to combine these two things. And the more that you got shut out of the traditional paths, the more you need to rely more and for a longer time in more a need for explicit rationality before it becomes sufficiently ingrained that you can then live in the moment in more areas on explicit rationality before it becomes sufficiently ingrained that you can then again live in the moment. Because the way that most people learn how to navigate these problems is by navigating these problems through trial and error and experience. But that&#8217;s experience-gated. If you're not good enough, you'll be denied the opportunities to get better, and you will never improve. Logical thinking can offer a way out of that if you don't get trapped in the system of only thinking logically. The reason why we say people who think logically and carefully about dating and relationships tend to be worse at dating and relationships is because they needed it. That's why they came to the problem in this way. That's why they adopted these strategies. They were shut out of the traditional paths that allowed them to not use this. They were unhappy with their level of success, so they're taking this different approach. Some of these people don't know how to live in the moment. They have to learn how to do that. But I think mostly it reflects reverse causation. It's that because they were not doing so well, they then took a logical approach, not that they took a logical approach and the logical approach is bad.</p><p><strong>Theo: </strong>Going back to what you said about living in the moment and about the importance of living in the moment, how would you explain to, say, a long-termist, an effective altruist, somebody who thinks that they should only be caring about, say, AI doom, or they should only be caring about the welfare of children in Africa, the instrumental good of living in the moment sometimes?</p><p><strong>Zvi: </strong>I wouldn't try to argue with them that it's inherently good to do this because I don't think that could be convincing. I think they have a reasonable counterargument there. But I would argue, yes, it's instrumentally good. This is how humans end up associating and cooperating and learning and tuning and creating the world that we want to live in. Tuning and creating value and having a good time and staying sane and forming all of the associations and useful apparatus that you use to accomplish all the things, like caring about AI in your long-term future and being able to help people in Africa. If you ignore these other things, you will end up very ineffective. And I think that just looking around bears that out very much so. You need to first get your house in order and get your local house in order. And only then can you hope to be useful in impacting these problems that you might consider more important.</p><p><strong>Theo: </strong>So with rationalism and rationality, what do you think are some of the best resources for people to read if they want to be more rational? Have you read the Sequences?</p><p><strong>Zvi: </strong>I have read <a href="https://www.lesswrong.com/rationality">the Sequences</a>. Some people would say they're dated in some ways, but I think that's not the case or at least not an overriding consideration. I think there's no substitute for the original. They read quickly. They read breezily. You can read one at a time. You can pick and choose. You can choose different sub-sequences as you need them or as they're interesting to you. But I would say it's some of the best rationality writing. It still is. And you should definitely use it. If you're looking for a more casual approach, <a href="https://www.lesswrong.com/hpmor">HPMOR</a> speaks to a lot of people, especially people who are already familiar with Potter. And that makes a lot of sense. But also, rationalism does not require you to go through a curriculum and a corpus specifically. That's not the right approach all the time. It's the right approach for some people some of the time. If that's what they want to be doing, that's what interests them, that's how they best learn. But one thing you can do is when you read my blog, the goal is, well, I'm going to infuse it with this type of rationalist mode of thinking continuously throughout in a way that you can pick that up just by following my lines of reasoning and asking, well, do I think about that the same way? And I think that's a good argument. I think that's a bad argument. Do I think this person is thinking about things well? How would I think about this consideration? And also, we have these tremendous tools now, like LMs, right? You can just ask questions of GPT-4 or Claude 2 if you're curious about these questions. And get answers, like, what does this term mean? And you can also just look up the individual sequences as people refer to things or as concepts come up. You should use every tool at your disposal to learn the way that you best learn. For me, that was to a large extent the Sequences, but also to a large extent it's figuring this out for first principles by going through my life and thinking about things, especially when I was engaged in pretty explosive gambling, which is always very good for developing rationality.</p><p><strong>Theo: </strong>I wonder if there's somewhere out there a kind of shorter, more friendly introduction to the layperson than, say, the Sequences or HPMOR. One of the common critiques I hear of rationalism is that the writings are very long-winded and hard to follow if you're not a nerd.</p><p><strong>Zvi: </strong>I mean, Bayes&#8217; rule is pretty short. As Scott Alexander said, the rest is commentary. You have this simple rule for understanding what the probability is of things being true, and then you reason from it and you apply it to everything around you and to your life. You should interact with the formal writings to the extent that that is a useful thing for you. Beyond that, do I wish there was a shorter introduction that actually covered these things? That would be great. But there's a long history of people trying to write that and discovering that it's really hard to write a concise introduction that covers the basic concepts in a way that people learn from. You can spew out the technical words that technically comprise the things you want people to know. You can write the cheat sheet of rationality, as it were, that you would take into a test. But people don't learn from that cheat sheet. At least not so far. You have posters you put on walls. But that poster, someone who's new to rationality has to just ask someone, "What do all these words mean?" Because this poster does not explain it in a way that lets people learn it.</p><p><strong>Theo: </strong>I remember asking GPT-4, &#8220;Can you explain Yudkowsky's rationalism to me?&#8221; And it was like, &#8220;Yeah, sure, here you go. Number one, Bayes' rule. Number two, decision theory.&#8221; And it just wasn't very good. It was kind of missing something.</p><p><strong>Zvi: </strong>What's missing is that you can't get that on a deep level just by technically knowing the words, the core words and the core moves. You have to practice. You have to go through cycles in your brain. You need a lot of training data. You need a lot of examples. You need a lot of specifics. You need to think through problems on your own.</p><p>If you think about it, there's a lot of classes you take in college, where you could write down everything you are trying to take away from that class on one piece of paper. This was true for almost every class I took in economics, almost every class I took in mathematics, and so on. But if you just give someone that piece of paper, it will not help them very much. If they don't go through the time of exercises and lectures and discussions and problems and thought experiments and work, then they can't just pick it up like that. You can't just derive it. You can't just say, &#8220;Oh, obviously, that's just the fundamental equations of modern analysis. And these are just obviously true. Why do I need an entire class to learn this?&#8221; But you do in practice. Human beings need this. They can't just learn that way. </p><p>And so I don't think you can just compress this any more than you can learn a foreign language by reading a dictionary, unless you're one of a very small number of weird people.</p><p><strong>Theo: </strong>I agree with that.</p><h3>Critiques of Rationalism (20:08)</h3><p><strong>Theo: </strong>So you talked about what Scott Alexander said, &#8220;Bayes' rule, and the rest is commentary.&#8221; And of course, a lot of rationalists believe Bayes' rule to be the central mechanism of rationality and decision-making. But one kind of rationalist-adjacent person who doesn't think that way is David Deutsch. He has an article, which is very short. I have it pulled up. I'll paraphrase it. Called &#8220;Simple Refutation of the Bayesian Philosophy of Science,&#8221; where he says, &#8220;by Bayesian philosophy of science, I mean the position that, one, the objective of science is or should be to increase our credence for true theories, and that, two, the credence is held by a rational thinker obey the probability calculus. </p><p>However, if T is an explanatory theory, like the sun is powered by nuclear fusion, then its negation not T, the sun is not powered by nuclear fusion, is not an explanation at all. Therefore, suppose that one could quantify the property that science strives to maximize. If T had an amount Q of that, then not T would have none at all, not 1 minus Q, as the probability calculus would require if Q were a probability. Also, the conjunction of two mutually inconsistent explanatory theories, such as quantum theory and relativity, is provably false, and therefore has zero probability. Yet it embodies some understanding of the world and is definitely better than nothing. </p><p>Furthermore, if we expect that all our best theories of fundamental physics are going to be superseded eventually, and we therefore believe their negations, it is still those false theories, not their true negations, that constitute all our deepest knowledge of physics. What science really seeks to maximize, or rather create, is explanatory power.&#8221;</p><p><strong>Zvi: </strong>So I notice I am basically confused by why he thinks this argument refutes Bayesian logic. This seems to be a case of, &#8220;I will kind of caricature your position as being something that is slightly different than your actual position in order to make it technically useless according to this definition that I proposed that you never actually believed or something&#8221;. I mean, I understand why he's thinking this on some level, but it seems kind of highly uncharitable and a very clear failure of the intellectual Turing test to model what the Bayesians actually believe.</p><p>So if a rationalist back in the day was named Isaac Newton and he was developing Newton's laws of motion, the Bayesian hypothesis would not be, I want to know the exact probability that Newton's laws are an exact description of the entire physics of the universe. It would be, I want to know whether or not Newton's laws are in practice an explanatory tool that I can use to predict what's going to happen much better than what I had before Newton's laws. A Bayesian wouldn't say Newton's laws are almost certainly false, therefore they are useless, and the important thing is to believe that Newton's laws are wrong with a probability of 1 minus epsilon. They would say, well, what is the probability that Newton's laws are a good description of non-relativistic physical action in practice in a lot of situations, and how useful are they, and how likely are they to be how useful in what situations?</p><p>And similarly with relativity, similarly with quantum mechanics, and similarly with various ways of combining them. And if I were to learn that the sun is not a nuclear furnace, that is not zero explanatory power in the sense that that's an important fact about the universe that I would want to know. That has a lot of implications, and it teaches me that I should look for an explanation elsewhere. I'm not trying to maximize the amount of statements I can make right now that in some sense are positive rather than negative. I want to believe true things and not false things.I believe in not accepting false things and believing things with the right probabilities. This approach leads me to draw correct conclusions in practice about many things. However, this is a standard Stravokin-style critique of various forms of rationality, not only Bayes, which essentially says you believe in logic, so obviously you believe only in this bare, abstract, fully argumental, ad absurdum form of logic. But that obviously leaves out these important things, and that's bad. It's not how any reasonable person would strive to think. We all, almost all people who think about rationality and take it seriously, understand this.</p><p>When he said the conjunction of two mutually inconsistent explanatory theories, such as quantum theory and relativity, is provably false and therefore has zero probability, what would a Bayesian say about that? Because clearly quantum theory and relativity are two of our best explanations about physics. Right, so what we're saying is quantum theory can in its current form be literally true in all of its equations and proposals, while simultaneously relativistic theories cannot at the same time also be literally true in all of their details and specifics.</p><p>That is an important fact about the world. If you were a physicist who was trying to create a grand unified theory that explained both of these things, or was trying to work on either of these things on their own, you would very much want to realize that these two things are contradictory, and that one of them must in at least some way be wrong, while simultaneously holding in your head that both of these things are our best known methods for explaining phenomena of that type and approximating what we should expect to observe. But yes, either quantum theory is not the final theory, or relativity is not the final theory. It doesn't mean we have to throw them out as useless.</p><p><strong>Theo: </strong>That makes sense. Another critique of rationalism that I've heard is an article from Applied Divinity Studies called Where Are All the Successful Rationalists? Where he talks about, it seems like the most successful people in society, like billionaires, Nobel Prize winners, whoever, either aren't aware of rationalism, or maybe they're aware of it, but they don't endorse it, like people like Tyler Cowen. So how would you explain this?</p><p><strong>Zvi: </strong>Interestingly, Tyler Cowen, who I very much respect, and he's a great intellectual who I get a lot of value out of, you&#8217;ve grouped in with the billionaires and the most successful people on the planet. Like, if Tyler Cowen counts as one of those people, then I think we have some pretty big successes, too. The most successful people in, like, I don't know, this sphere of the world that we see on Twitter.</p><p><strong>Theo: </strong>I would say, you know, I would just meet the premise. I would start there. Like, yes, we haven't had that many billionaires, but being a billionaire is a very low probability outcome, even of very good strategies, requires a lot of luck, and requires time. And we haven't been around that long, and there aren't very many of us. And we don't have zero billionaires, right? </p><p><strong>Theo: </strong>Dustin Moskovitz?</p><p><strong>Zvi: </strong>Sam Bankman-Fried. Look, you can say whatever you want about how it ended and how many mistakes he made, but if you're just counting billionaires, Sam Bankman-Fried, like a lot of other billionaires are also no good scum of the earth who, like, lied and stole people's money to get there. You know, people say behind every great fortune is a great crime, and it's not actually true, but behind a lot of them, right?</p><p>We have other billionaires. Jaan Tallinn, I think, certainly counts as a, you know, acquired his fortune the right way, has a lot of money, is trying to use it to do good, but, like, has very much crossed over into the billionaire horizon. Do we really need that many more examples before we are over-performing rather than under-performing here? The rationalist groups that I came up in, basically everybody is now at least a millionaire. Everybody is doing well and successful. We're raising families. We're having good general lives. We're pretty happy. We're pretty intellectually stimulated.</p><p>Also, why don't we look at the intellectual, you know, fruits of rationality? Like, look at the AI space, right? Like, Sam Altman, Demis Hassabis, and Dario Amodei, the three founders of the three major labs, all got their theories and concepts directly from Yudkowsky and rationality. All three of them think, at least in part, in these ways about these problems. That is a huge intellectual legacy and successful influence on the world, whether or not it was what we wanted or had in mind. For a movement of this size over this length of time, we are going to transform the world whether we like it or intend it or not, right? And whether it&#8217;s for good or bad. </p><p>So it&#8217;s really strange to go into this world and say, like, why are you people all losing? When I, for one, don't feel like I'm losing, I feel like I'm winning. And yeah, my startups didn't make me a billionaire, but, you know, most startups don't do that. I would also say that, you know, the decision, again, to study rationality often starts with a world in which you feel like you need rationality in order to navigate the world and solve the problems you need to solve. </p><p>And that reflects some combination of either you&#8217;re trying to solve very, very difficult problems intellectually, like Yudkowsky in AI, which don't necessarily lend themselves to becoming a billionaire. Or, and or, it lends itself to, you are not going to have the traditional things like just general charisma and just intuitive ways of integrating with normal social relationships and so on, that then puts you behind in the traditional race to get ahead in our society. And therefore, like, it takes a while to catch up. You're playing from behind. And there are a lot of barriers to becoming a billionaire that involve going through various social dynamics, whereas the social dynamics of trying to build a couple million and live a good life are much lower level. If you look at our influence on the world at large, I would say it's tremendously outsized and a tremendous number of successful people would endorse many of our ideas.</p><p><strong>Theo: </strong>Take someone like Elon Musk for example. He's definitely not the most charismatic person, but he is one of the most hardworking, successful people in the world. He endorses some ideas from rationalism. For example, he's definitely worried about AI risk, he's a long-termist, somewhat aligned with effective altruism, and he's talked about first principles thinking. However, I don't think he would describe himself as a rationalist in the same way that others would.</p><p><strong>Zvi: </strong>No, nor is he, but I think you&#8217;d be correct not to. There's a long, unfortunate history where various EAs and some rationalists attempted to sell him pretty hard on various things and alienated him. I also think that he is unfortunately thinking very badly about some very, very core things about AI and elsewhere. This is causing him to make some very poor decisions. He also has poor impulse control and he's kind of an internet troll.</p><p>His success inherently comes from the fact that he works very hard and he has a number of very strong skills. To the extent that he used real rationality as opposed to our formal specific characterizations of rationality&#8212;</p><p><strong>Theo: </strong>Like instrumental rationality versus epistemic rationality?</p><p><strong>Zvi: </strong>Yeah, Elon Musk definitely asks, "How do I build this rocket? How do I build this car? What makes this engineering problem actually get solved? What makes this person actually do work? How do I actually get this contract? How do I actually make this thing happen?" He updates that information and he tries stuff and he iterates and all this stuff is things we would endorse.</p><p>But just because rationality is a good idea doesn't mean that there aren't other valuable skills in the world that are gateways to being the type of entrepreneur and runaway success that is Elon Musk.</p><p><strong>Theo: </strong>Is it possible to be a runaway success like Elon Musk without being some form of a rationalist?</p><p><strong>Zvi: </strong>I think you definitely can. I think specifically you can't be Elon Musk if you aren't able to think about how to build physical objects, you aren't able to think about the consequences of actions. There's a lot of, people sometimes call them, wordcels, right? In our society. Yeah. Who aren't capable of doing the proper amount of shape rotation to be an Elon Musk. And nothing else will let them be that, but there are other companies they can create and other platforms they can have and other ways they can pursue wealth and influence and power. And they can become rationalists in their own way.</p><p>When you see people who hit it big, who become billionaires, who become outside successes and you actually talk to them, you will usually see somebody who is thinking very, very coldly and carefully behind the scenes about what actually works and doesn't work. And then combining that with some extraordinary skills.</p><p><strong>Theo: </strong>One billionaire who's very much aligned with that idea is Charlie Munger, who I&#8217;m a huge fan of. Warren Buffett's business partner, vice chairman of Berkshire Hathaway. He's also been involved with other companies, Costco, Himalaya Capital, a hedge fund in China. He has his own approach to rationality, which he's developed called the multiple mental models approach, where he tries to think of the world by reading as much as he possibly can about every subject and forming mental models like say, inertia from physics, like evolution from biology that help him with both investing and life. He's had some sayings that are pretty similar to rationality, like, probably my favorite of his, &#8220;the fundamental algorithm of life, repeat what works.&#8221; Would you characterize him as a rationalist? He probably isn&#8217;t aware of the existence of the rationalists. He&#8217;s 99.</p><p><strong>Zvi:</strong> I would be surprised if he wasn't aware in some vague sense of the existence of the rationalists. I'd be more surprised if he had seriously investigated LessWrong or otherwise like engaged with the Sequences or other rationalist writings on a deep level. But I do think we reached the point where I'd expect a person like Munger to be aware of our existence.<strong> </strong>I would characterize him as a different type of rationalist in a different tradition. Just like martial arts has different dojos and fighting schools, we do karate and he does jujitsu. He found a different path by which to take in a lot of data and systematically try and abstract it and figure out what would and wouldn't work. He's had a very long time to compound interest. The secret of the Munger-Buffett school is to find solid, not so spectacular opportunities continuously, take advantage of them, and let these winnings compound over time.</p><p><strong>Theo: </strong>Do you think there's some kind of epistemic convergence between different schools of rationalism that will allow them to land on the same stuff?</p><p><strong>Zvi: </strong>Yeah, it's like solving a math problem. If you have different ways of approaching a problem, as long as you all take true and valid steps, you'll all reach the same answer in the end. If you're doing reasonable approximations, you'll all get approximately the same answer in the end. </p><p><strong>Theo: </strong>I find the characterization of rationality as solving a math problem as kind of interesting, because in a lot of ways, it's not quite like a math problem, or at least the type of math that most people are used to, where you have some kind of formally specified problem statement and then some answer that is always valid.</p><p><strong>Zvi: </strong>It's not like formal mathematics in the sense of trying to prove this conjecture, or trying to pass this test. I meant that as a metaphor. What I meant was the world is a physical object built on mathematics. Physics is made of math, and fundamentally speaking, every system has some set of rules of operation. As you enter it and you try to figure it out, if you're given a complex system, you can take any number of different approaches, grasp different parts of the elephant, reason in different ways, run different experiments, try to find out different things. But any valid process of figuring it out will eventually converge on the right answer. Different people who take different styles of approaches should increasingly converge on the right answer. And if they don't, then they're not using very good methods. </p><p><strong>Theo: </strong>Well, what do you mean by converge on the right answer? Because in some questions of rationality, like, say, the stock market, you're dealing with a complex adaptive system and there could be multiple right answers. Buffett and Munger got extremely wealthy by finding solid but not spectacular opportunities and then compounding them over a long time horizon. Whereas someone like Jim Simons got wealthy by being a quant, by finding lots and lots of extremely tiny opportunities and exploiting them over a sufficiently long time horizon to make himself a billionaire too. </p><p><strong>Zvi: </strong>Sure. And those are different. Like, those are entirely different problems from my perspective. You have this thing, the stock market, and you can choose to focus on it on any timeline, on any subset, from any angle. And you get to miss 100 percent of the trade you don't make, right? And have zero effect. So you only have to find some opportunities, not all the opportunities. If you are trying to market make in some abstract, long-term fashion on every stock in the stock market, on every time frame, on every magnitude, then you have to solve every single problem. And then you have these science fiction stories where you have an AI that can predict the exact last 4 digits of where the Dow is going to close on Tuesday. But no human ever tries to do anything like that, right? There's no point. </p><p>But all you're trying to do is you're trying to investigate where there's the most value in investigating. And so, Munger focuses on a type of problem that he knows how to make progress on, and knows how to solve, and that he wants to specialize in. And then I have a high-frequency trader. We'll focus on a different set of things. And then someone at Dean Street, where I work, will focus on a third set of things. And we can all make money, right, in our different ways without fighting with each other. In fact, we can even help each other execute various different strategies. </p><p><strong>Theo: </strong>Yeah, I would agree with that.</p><h3>The Pill Poll (39:26)</h3><p><strong>Theo: </strong>So a quick test of your rationality. This is a viral poll question, for those who don't know, on Twitter yesterday. I'll read it. It says, poll question from my 12-year-old. Everyone responding to this poll chooses between a blue pill or a red pill. If more than 50% of people choose blue pill, everyone lives. If not, red pills live and blue pills die. Which do you choose?</p><p><strong>Zvi: </strong>So I was fortunate enough to see the original poll before I saw all the reactions to the original poll, which was a corrupted answer, and I chose blue.</p><p><strong>Theo: </strong>So did I. And so did 64.9% of voters. Only 35.1% chose red, which means everyone lives.</p><p><strong>Zvi: </strong>Yes, so everybody wins.</p><p><strong>Theo: </strong>So Roko, a guy on our part of Twitter, was kind of upset about that. He tweeted, I don't understand why anyone would vote blue in this poll. It just doesn't make sense with the decision theory. Like, why would anyone do this? Because, you know, if you vote red, no matter what, you live, right? But if you vote blue and fewer than 50% of people choose blue, then you die. So he, like, rephrased the problem as, like, there is a room-sized blender that kills everyone who steps into it. But if 50% or more of the people answering this poll step into the blender, there will be too much resistance and it will fail to start. And everyone who steps in will be fine. Obviously, if you don't step into the blender, nothing bad can happen. Do you step into the blender? And in that poll, which might be biased from his followers, 22.5% of people chose &#8220;Yes, step into blender&#8221;, and 77.5% chose &#8220;No!&#8221;.</p><p><strong>Zvi: </strong>Right, I would have chosen no, because given that framing, it's obvious that you will get less than 50% of people stepping in, right? And so you'd rather not die.</p><p><strong>Theo: </strong>So can you explain why you chose blue on the original poll, then? </p><p><strong>Zvi: </strong>So in the original poll, the way it was phrased, right, I expected a substantial number of people to vote yes, and I expected to vote blue. And I expected, therefore, a lot of other people to realize that a lot of other people would vote blue. And therefore, I expected at least a very large percentage of people to vote blue, probably a majority, but if not close to a majority. And also, I was aware of the fact that this is Twitter. And therefore, there was a large Democratic slash blue tribe majority on Twitter, no matter whose subset of questions are being asked. And therefore, a substantial number of people would just choose the blue option over the red option, no matter what they actually were when described, if it wasn't obviously painful.</p><p><strong>Theo: </strong>Really?</p><p><strong>Zvi: </strong>As a matter of fact. So there's going to be some blue people to begin with. And given all these cycles, blue is the obviously pro-social, obviously, like, help us win choice that a lot of people would choose. And therefore, it made sense to choose blue if you aren't being a selfish asshole.</p><p><strong>Theo: </strong>Yes, but if you did want to be a selfish asshole, nothing bad would happen to you. You weren't forcing anybody. It's a tricky situation when someone chooses to do something bad. If someone were to choose blue, and a minority of people chose blue, that would be entirely of their own volition.</p><p><strong>Zvi: </strong>It would be entirely of their own volition. But also keep in mind that humans develop these intuitions around morality and collective action, partly to allow them to engage in collective action, and also partly because they know that their decisions are not likely to remain private. We're discussing it on this podcast. What did we choose? I did not anticipate when I made the choice this was going to be blown up and become a thing. But in the back of your head, you always know there's that chance, right?</p><p>Roko was being very vocal about being red. Other people were being very vocal about being blue. And perhaps we'll remember that. And that's an important fact about people, right? When trying to predict their actions in the future, when trying to model social relations between them in the future. But yeah, obviously, it depends a lot on how it's phrased, right? Like, would you suddenly create the blue pill in order to then, like, try and get more than half the people to take it so that nothing would happen if you do present just the red pill? Obviously not, right? </p><p>Like, in some ways, we're much, in most ways, we're much better off if everybody just automatically takes the red pill or doesn't take any pill at all. But given the presentation, right, it seems pretty obvious to me that blue was greatly favored here. And therefore, that, like, it was much, much more likely that, like, it would be higher <a href="https://en.wikipedia.org/wiki/Expected_value">EV</a> to take the blue option. Because, like, another way of looking at it is, if you pick the blue option, when it's exactly tied or almost tied, then you save half the people in the poll. Whereas if you pick red and blue is below 50%, you save one person. </p><p>So if you don't value your own life, like, greatly more than other people in this thought experiment, it's a thought experiment, right? So you don't necessarily have to be such a selfish asshole. And so, you know, if you assume, if you make various assumptions about the distribution of people choosing red and blue as possibilities, it's very hard to not notice that, like, the tiebreaker probably saves more lives than you save by saving yourself by going red. </p><p><strong>Theo: </strong>Yeah, I think it's interesting that you said that this is Twitter and a substantial portion of people would choose blue just because, like, they're politically liberal, which, I mean, I wonder what percent of people chose blue just because, you know, blue team good, red team bad.</p><p><strong>Zvi: </strong>Right, and then the question, of course, is, even if it's only 5%, that's 5% of people who are now in the blender, right? And then you have to save them.</p><p><strong>Theo: </strong>Yeah, but there wasn't a 5-point difference. There was a 30-point difference. So I guess people really want to be pro-social or people really- </p><p><strong>Zvi: </strong>Well, hang on. So, like, you know, you can't compare the original presentation to a presentation where it's followers of Roko, right, which is a different sample.</p><p><strong>Theo: </strong>I was talking about the original.</p><p><strong>Zvi: </strong>Okay, right, I'm saying, like, it's 65-35, right, like, roughly, currently.</p><p><strong>Theo: </strong>The original?</p><p><strong>Zvi: </strong>The original.</p><p><strong>Theo: </strong>The original was, yeah, 65-35.</p><p><strong>Zvi: </strong>Right, and so when you say it's, like, 30%, like, you know, that's in the final result, like, being decent for 50, but, like, we don't know what the result would have been if they had been pink and orange or whatever, right? Like, if they had just been white and black or, you know, some other set of colors or thoughts or, you know, one has spirals, one has, you know, crosshairs or, you know- </p><p><strong>Theo: </strong>Or just option one, option two.</p><p><strong>Zvi: </strong>Option one, option two. I think blue-red is more interesting, obviously, the matrix metaphors and all that. But, yeah, it just seems clear to me, just reading it, that blue was likely to win, slash, like, enough people were going to take blue, you're supposed to take blue here. And, yeah, I think there's someone who had the traditional curve, right, the idiot in the middle and the wise men on the left and the, you know, the stupid people-</p><p><strong>Theo: </strong>The midwit, yeah.</p><p><strong>Zvi: </strong>The idiot right on the left and the wise men on the right. And they had, like, red kicking up, like, this small portion of, like, just to the right of 50%. I think that's about right.</p><p><strong>Theo: </strong>Yeah, that's about right. </p><p><strong>Zvi: </strong>Obviously, like, you can do anything you want, like, I'm not saying any bigger person lies in any bigger place. But, yeah, like, you need to be smart enough to realize that, like, red is typically better for you and that, like, there's this equilibrium where everyone just chooses red. But, like, then think that carries the day, right, in some sense.</p><p><strong>Theo: </strong>Yeah, it's pretty cool how, like, various framings of the question not only change how people in general would vote, but change how you should vote, because the results of whether you live or die depend on how the people in the poll voted. </p><p><strong>Zvi: </strong>Yeah, I presume that, like, if you iterated these actual questions in practice with some penalty that was short of death, so that, like, people who accidentally chose blue the wrong time didn't die and just get out of the sample pool, that you would pretty quickly converge on either everyone always chooses blue, everyone always chooses red, or a clear majority will always choose blue, and a small number of assholes will choose red anyway, but they'll never win.</p><p><strong>Theo: </strong>Yeah, that's true. But even if they don't win, they still live.</p><p><strong>Zvi: </strong>They still live, but I presume people would not take kindly to finding out who they were.</p><p><strong>Theo: </strong>Yeah, most likely.</p><h3>Balsa Research (47:58)</h3><p><strong>Theo: </strong>So, about a year ago, you posted a post on LessWrong called Announcing Balsa Research, where you talked about what are the most important policy changes America should make, and how can we make them happen? And you gave, as examples, not restricting supply and not subsidizing demand, such as housing and medicine, which is, I think, maybe, like, the single most common rationalist political position. So, how is that going? What priorities have you placed at the top so far?</p><p><strong>Zvi: </strong>We have a website, one employee, and funding for this year to try some stuff, but not enough funding to scale or be outsized. So, we're trying to be very careful and precise and lay foundations. We've chosen four things to focus on. Our number one priority right now is the Jones Act, specifically.</p><p>For those who don't know, the Jones Act states that if you want to ship goods between one American port and another American port, the ship in question must be American flagged, American manned, American owned, and American built. In particular, American built is just death to the cost curve. Our shipyards are basically non-productive, non-competitive. We produce almost no ships. Those ships that are made in practice are very old, and this is functionally a ban on shipping things commercially between American ports, including Puerto Rico, Alaska, and Hawaii. </p><p><strong>Theo: </strong>I remember thinking when I first read about the Jones Act, I watched a YouTube video on it, and I was like, really? This sounds like the kind of law that would have gotten repealed in five years after people realized it was counterproductive. I remember I used to be really into cruises as a kid. I would get all the cruise ship catalog books and read all the details of the routes, and I wondered, why aren't there any routes from Miami to New York? That would be fun.</p><p><strong>Zvi: </strong>And you know the answer now, it&#8217;s the Maritime Passengers Act, which is part of the trio of the Jones Act, the Dredge Act, and the Maritime Passengers Act. The Dredge Act says you can't have dredgers. They don't apply the same restrictions. The Maritime Passengers Act says you can't move people either. And so literally, we haven't built a cruise ship in America since 1947. There are no cruise ships that can meet the Maritime Passengers Act at all. And so every cruise has to stop at a foreign port in between American ports. So you can't do the only cruise I actually am interested in, right? Which is, like, Boston, New York, down the coast, down, you know, to Miami, or maybe up to Texas, and then back again.</p><p><strong>Theo: </strong>By the way, what does the name Balsa come from?</p><p><strong>Zvi: </strong>Balsa is a type of wood that is light and bends easily. So that, you know, if you're convinced of different things, you act differently. Basically, we went for a name. All names always suck. But you basically want a name that's nice and pleasant and short and easy to spell and isn't SEO'd to hell and just lets people have a reference to point to a thing without tying you to an entity. And so Balsa seemed like it fit the bill. And we didn't want to spend as much time on the name. So here we are. </p><p><strong>Theo: </strong>I wonder if you could come up with a retrofitted acronym for Balsa. We thought about it. I didn't do it. That's the kind of question GPT-4, you thought, would be excellent at. You can describe what we're doing and then say your name is Balsa. We want this to be an acronym. Give me 50 options for the acronym and then mix and match all the things and then see what it comes up with.</p><p><strong>Theo: </strong>And it didn't do anything good?</p><p><strong>Zvi: </strong>We didn't try it. But, you know, I don't know. I don't really need to justify the name that I chose. </p><p><strong>Theo: </strong>A lot of rationalists, you know, rationality is applied winning, as Yudkowsky says. And just people in Silicon Valley in general are kind of heavily tech focused, very solution oriented and very effective at getting things done in the real world, with one exception, which is politics. If there's one thing that rationalist Silicon Valley people can almost all agree upon, it's that the politics in Silicon Valley, San Francisco are terrible. It's just run by incompetent and corrupt people. And this needs to stop. Personally, I haven't been to San Francisco in around five years. I'm going this weekend for the first time in a while. So I'm excited to see. I live in Florida, which is a little better managed, I think. So does Balsa have anything to say about that in particular?</p><p><strong>Zvi: </strong>I mean, about San Francisco? No. Yeah, I mean, I think that, you know, San Francisco's biggest problems include one of our major causes, which is housing, I would say. San Francisco just doesn't allow you to build any housing. And so San Francisco is tremendously expensive to live in, to have a house in or an apartment and just doesn't build anything. And it's tremendously non-dense. This problem goes from San Francisco proper which is horribly sparse up through the East Bay down to the West Bay, down to Palo Alto and the rest of Silicon Valley proper. And all of it is extremely restrictive. And this tremendously lowers quality of life by raising housing costs and making the whole thing prohibitively expensive but the tech people feel like they need the network effects and they're stuck in San Francisco.</p><p>So despite various attempts to relocate to Austin or Miami and the substantial presence in places like New York and Boston potentially, or London, mostly they keep saying, no, if you wanna be a real Silicon Valley person you have to go to Silicon Valley. You have to live in San Francisco or we're not taking you that seriously. You don't really wanna build, you're not exciting. Look at all the things that are happening here. And they just tolerate the fact that like housing will eat up a giant portion of their money and exchange, they will not get a very high quality of life. They will have a tremendously high crime rate, especially in central San Francisco. They'll have streets that aren't cleaned up. They'll have lots of open drug use and homelessness. And these are not problems that were caused by tech. There are problems that tech has failed to fix, but as a percentage of the population, tech is very small. So you have two choices, right? Exit and voice in some sense. And they've decided that they're trapped and they can't use exit. And voice is hard when you're greatly outnumbered by people who don't care about the things you care about.</p><p><strong>Theo: </strong>Is homelessness in the sense of vagrants on the street doing drugs and committing crimes primarily caused by lack of affordable housing or is it primarily caused by other societal factors that might encourage people to start taking drugs?</p><p><strong>Zvi: </strong>I think it's a perfect storm type of situation, like as they would say on Odd Lots, like timber's overused, but there's a lot of reasons why San Francisco has this especially badly. Basically, San Francisco won't let you build a house but will let you pitch a tent. They will be very tolerant of homelessness but very intolerant of attempts to build houses. And then you get a lot of people who live there and don't have houses. The climate is very good for trying to live without a formal structure.</p><p><strong>Theo: </strong>I wish it was like that here.</p><p><strong>Zvi: </strong>The lack of police enforcement of various things, not just the homelessness itself, lends itself to this thing. And then, but yeah, I think that the lack of physical homes is a huge contributor to this. I think certainly our society not handling drugs well is also a contributing factor. Our society not doing a good job for us less fortunate in other ways is a contributing factor. But if you just built lots and lots of housing in San Francisco, I think the problem would dramatically improve. It would also give you the opportunity to offer these people housing in a legitimate real way that was going to help them as opposed to my understanding of the housing policies in San Francisco for the homeless where they end up putting them all together in really terrible accommodations with essentially conditions designed to cause them to relapse and fall back into bad behavior patterns.</p><p><strong>Theo: </strong>I just visited New York about a month and a half ago and I was actually struck by how few homeless people there were out there.</p><p><strong>Zvi: </strong>We've been the target of a deliberate campaign to overwhelm us with migrants in order to make New York suffer because people want to illustrate what's going on in the border or just in general punish blue state people.</p><p><strong>Theo: </strong>Are you in New York?</p><p><strong>Zvi: </strong>I'm in New York. And there still isn't much sign of anything happening. We're having a lot of our hotels are being repurposed to house a bunch of migrants at tremendous actual expense. It makes no sense to be doing this as opposed to finding somewhere less expensive to keep these people. But we're doing it and they're not spilling out into the streets. It can be done.</p><p><strong>Theo: </strong>So you mentioned there are four policies for Balsa, four priorities. You talked about the Jones Act and you talked about housing but what are the other two?</p><p><strong>Zvi: </strong>The third is NEPA, the National Environmental Policy Act and related other similar constraints on building projects in this country. Environment matters. Environment's important. So what you want to do whenever you have a project is you want to see if it would harm the environment or otherwise damage existing interests. You want to weigh the benefits against the costs. And then if the costs exceed the benefits, you don't do it. And if the benefits exceed the costs, you do it and you compensate the losers as needed.</p><p>Instead, we have a system where we don't consider costs and benefits but we do require a metric ton of paperwork. And so you have to file all the proper paperwork and then someone challenges you in court whether or not they have any interest locally in the case and says, in this particular place you didn't file the proper paperwork and then you spend years arguing over whether you issued the proper paperwork until ideally from the people's, the lawsuit people's perspective, you give up and you stop doing the thing. </p><p>This is a huge barrier to doing not only regular economically viable projects but also to a wide variety of clean energy projects. If we are unable as we currently are to build transmission lines and wind farms and solar plants, right? And I mean, any number of other things then we're not going to be able to get our climate house in order. We're not going to be able to get our energy costs in order. And we're not going to be able to build a wide variety of other things that are extremely important to us. And we won't get anything in return. We're not stopping these things for good reasons. We're stopping these things because we've let anybody who wants to essentially pose arbitrarily large barriers in the way of doing anything. </p><p>And Balsa&#8217;s position is that the attempts to reform this are misguided because they focus on carving out exceptions and fast tracks and tweaking rules to try and allow people to get through the paperwork process when what they should be doing is re-imagining our environmental policy system completely differently as a cost benefit system where you commission studies and reports on the cost and benefits of your proposal. And then you have an evaluation where local stakeholders get together and a government panel rules on whether or not the costs do exceed the benefits and what compensations you have to give in order to let the project move forward and make a yes, no, go, no, go, or yes, no decision on the project in a reasonable length of time at a reasonable cost. </p><p>And then the fourth policy, of course, is AI because it's part of something even more important.</p><p><strong>Theo: </strong>It's probably the most underrated problem in the world right now, I'd say. So what specific strategies have you been doing to push this policy agenda and how have they been working so far?</p><p><strong>Zvi: </strong>So it's still early days, as I said. One of the things you learn when you found a nonprofit is the paperwork problem is not just a NEPA thing. The paperwork problem is a deep and wide problem. One of the places that hits is charities. Even though I've had my employee working with me trying to get things done for several months, I would say still the bulk of our time, money, and trouble has effectively gone not towards accomplishing the mission, but towards all the required logistics, paperwork, and regulations necessary to make sure that we are in good standing and we are legally allowed to raise money and do things, and generally just have our house in order in a way that won't get me sued down the line, won't get the IRS on our backs, won't have various states complaining about the fact that we sent out a mailing letter or took a donation.</p><p>Beyond that, our current work is we're compiling, we're working on the Jones Act for now as our first focus, because again, there's only me and one employee. We're compiling a full literature review of all of the stuff that's been written pro and anti-Jones Act, compiling all of these statistics. We're going to get together a full-on Jones Act megapost similar to the one we did for the Dredge Act to get better researched. And we're going to look into then commissioning additional studies and additional evidence so that someone who wants to can take the fight to a congressional staffer or into the room where it happens and cite concrete, well-backed, well-credentialed evidence that says exactly how destructive this is and how much opportunity there is in its repeal and how much people opposing it should not be opposing it because their interests are in fact aligned with repeal when done in the right way, in particular on speaking of the unions. The primary opposition to Jones Act repeal comes from the unions.</p><p><strong>Theo: </strong>Yeah.</p><p><strong>Zvi: </strong>They do so according to their statements, because they want to protect union jobs. The problem with this is that in fact, protecting the Jones Act does not protect union jobs, it destroys and prevents union jobs. Not just jobs in general, which it also prevents and destroys. But one argument is simply, suppose you repeal the Jones Act tomorrow, who is going to be on these ships that takes places from one American port to another American port? They're going to be union workers, American union workers, whether or not there's a legal requirement for that. How do I know that? Because we know what happens when a non-union worker attempts to load and unload cargo in an American port, which are controlled by the dock workers union.</p><p><strong>Theo: </strong>What happens?</p><p><strong>Zvi: </strong>Nothing! The goods stay where they are. Nothing is loaded. Nothing is unloaded because they have control over the ports, effective physical control over what gets loaded and unloaded. And they will make your life pretty miserable until you realize you're supposed to be using union labor. And then you use union labor. That's the reality of the situation.</p><p><strong>Theo: </strong>I think I read a statistic at some point that was like the average unionized dock worker in the port of Los Angeles makes upwards of $300,000 a year. Really? How are unions allowed to get to such a point in a nominally free market economy like America?</p><p><strong>Zvi: </strong>My understanding is that they are able to do that because there is insanely bigger value than that in the port operating smoothly and properly to the United States of America. And we have enshrined unions with the right to negotiate for that surplus. And so they get to capture a large percentage of that surplus. And we are, nobody in California is going to look at the option of threatening the union for replacement. And also the transition would be tremendously expensive. And so these people get paid a lot of money. I don't really object to these people making high salaries. It doesn't bother me. What bothers me is when they do things like not let us ship things between ports.</p><p><strong>Theo: </strong>In the long run, how do you convince the people of, say, California whose current political religion if you can call it that is progressivism to change their political religion from progressivism to rationalism and rationalist priorities?</p><p><strong>Zvi: </strong>I mean, mostly you don't, right? That is well, well beyond the scope of anything that I would dare say. I mean, what you would do is you would try to just convince people in general to think better about the world, raise the sanity waterline. And then eventually they would adapt their progressive ideals and desires to what would actually achieve what they want to achieve, because they would focus more on the questions of what effects different proposals would have and less on what messages those proposals would send and what they would look like, right? And then we would get a better compromise.</p><p>But what you can do is you can get them to focus in certain areas on things that actually work. So like housing, right? The YIMBY movement, one of our causes, in California is doing really well. Everyone is very progressive, but people from across the aisle are getting together to say, no, you have to build housing because it's actually better for the people. It is a progressive cause to ensure that more housing is built. And this is overcoming people's instinct to say things like reeee developer, right? And so we are slowly getting more and more mandates from the state that localities have to build more and more housing.</p><p><strong>Theo: </strong>Great!</p><h3>p(doom | AGI) (1:05:47)</h3><p><strong>Theo: </strong>Okay, so I've been dodging around the topic for a while, but I guess now it's time to get into the elephant in the room, which is AI, AI risk, AI opportunities. So first question, as a Bayesian rationalist, what would you put your personal probability of AI doom as?</p><p><strong>Zvi: </strong>So if I have to tell people a number, I am saying 0.6, 60%. However, I actually really admire Jan Leike&#8217;s answer of 10 to 90%, right? Because it's a very, very complicated calculation. You could spend every minute of every day trying to get it correct, and you would still not be considering all of the factors that you should be considering. You have to ask yourself in these situations, what is the value of information? What is the value of precision? What different decisions do I make based on the answer to this question?</p><p>To me, I see there as being broadly three categories of p(doom) perspective. There's the, "I'm in the single digits, probably the low single digits. I think this is highly unlikely. Therefore, I think that moving forward as fast as possible is a risk worth taking. It doesn't mean we shouldn't mitigate this risk forever, but it is not my position." But there are some people who legitimately claim to represent that they think the risk is this low. And then they say, well, the cost benefit says you should move ahead. That's a very important distinction.</p><p>Then there are other people who say, "Well, the risk is very, very high. It's above 90% or even above 99%. And therefore we need to stop this from happening at all, whatever the consequences, whatever babies we may throw out with the bathwater by stopping this progress, that is just an unfortunate reality." And we have to potentially play from behind and make high risk moves in some sense, because we realize what the alternative is and that our ordinary moves won't work.</p><p>Whereas if you're somewhere in the middle, do you act differently when it's 30% to 70? And the answer is mostly no. The most important thing in the world is still preventing this from happening, lowering the probability of it happening. The same action still makes sense.</p><p><strong>Theo: </strong>Well, I'd say a 10% chance of doom is rather high. Even the discourse in rationalist spaces is such that a 10% p(doom) is considered low, but would you get in a plane with a 10% chance of crashing? Obviously overly simplistic.</p><p><strong>Zvi: </strong>If it took me to a literal utopia and transformed the world into a utopia, then no, but I would think about it. This is exactly why I'm saying we put too much debate on the question. If it's 10%, that's high enough that we should all be able to agree on the need for the Herculean efforts to prevent it from happening. And if it's 70 or 80% instead, there really aren't that many things that we should do more. You actually increase the risk by a factor of like 10, but that doesn't mean you should spend seven or eight times as much because the calculation on a given spend is almost always going to be overwhelmingly worthwhile or not worthwhile.</p><p><strong>Theo: </strong>You gave us an example of people with a low p(doom), single digits, but I think a lot of people would put theirs significantly lower than that if they had to pick a number, 0.1%, 0.01%.</p><p><strong>Zvi: </strong>Well, there are people who say those. I think there are very few people who say those numbers out loud because the people who would give those numbers don't think in terms of numbers at all, basically. They're just like, "That's not gonna happen."</p><p><strong>Theo: </strong>Or zero.</p><p><strong>Zvi: </strong>I don't take the people who answer 0.1% or 0.01% as having done a reasonable or rational calculation at all. I take them as just saying as low a number as it would take for them to feel better about the situation. They are trying to head off an argument about expected value or probabilities until they keep lowering their number until the point comes when they feel satisfied. But I don't see how you can in any reasonable way look at the situation and come up with a sub 1% number. It doesn't make any sense. I haven't heard any argument that even if I was going to buy it led to that conclusion.</p><p><strong>Theo: </strong>I think a lot of people who have a sub 1% p(doom), which I don&#8217;t, would answer that question by saying, "It's not so much that there are good arguments necessarily for why AI won't kill us. It's that we simply don't know. It's that the AI doom argument requires a lot of stacked assumptions like we're, and then instrumental convergence, and then superhuman levels of capability to the point where they would overpower human civilization, and alignment doesn't get solved by then." So they would say the combination of stacking assumptions and unknown means that by default you can't come up with a high number.</p><p><strong>Zvi: </strong>I would say they're just simply wrong. It does not require those assumptions. Even if you just talk about ordinary existential risk from things going ordinarily horribly wrong, we're talking about bringing things that are more intelligent, more capable, stronger optimizers than we are into existence where we are, on Earth, and then asking what happens. And if your response to that is 99 plus percent chance that we don't die, you're just being dense. You just have some sort of normality bias. You're just not wanting to see the <a href="https://en.wikipedia.org/wiki/Fnord">fnords</a>. You just don't want to notice.</p><p><strong>Theo: </strong>Not wanting to see the what?</p><p><strong>Zvi: </strong>Fnords. You don&#8217;t want to see things that are stressful to see. You want to find ways to ignore things that your brain doesn't want to think about.</p><p><strong>Theo: </strong>What are fnords?</p><p><strong>Zvi: </strong>It comes from a novel. A fnord is something that people instinctively don't want to look at that they then send away from their minds. They're things that are hidden in plain sight that people just would prefer not to see. But I don't want to get distracted by that. The whole idea here is simply that these people are acting as if, "Well, you make the argument that if A, B, C, D, E, F, G happens, then we all die. But you just had a lot of letters. Therefore, the probability of us all dying is almost zero." But that's just flat out not true. You do not need any A, B, C, D, E, F, G for this to happen. And all they're doing is they're saying, "I demand a specific, concrete example of exactly how this will happen. And then they're saying, "Well, there's a lot of steps in that example. Therefore, I'm not going to worry about this problem." And you can say that for actually literally anything that anyone might raise as a concern. You can say, "Tell me exactly how that happens. Well, your story had a lot of steps in it. So I'm going to ignore you." That's not how you do probability. That's not how you do forecasting. That's not how you do expected value. That's not how you do anything. It's not how the universe works.</p><p><strong>Theo: </strong>On the topic of forecasting, there was a recent study that surveyed several hundred super forecasters. Their average probability of AI doom was, I believe, 1% to 3%, which was lower than the probability of the average expert, which is more like 5% to 10%. So how would you explain that?</p><p><strong>Zvi: </strong>They didn't take the problem seriously. They were given bad incentives. They fell back on base rates that people who say things roughly of this category generally don't tend to be true. And you know what? My reputation as a forecaster isn't going to suffer if I predict that we won't all die. It's literally impossible for my reputation as a forecaster to suffer if I predict that we won't all die.</p><p><strong>Theo: </strong>Well, it's also impossible for the reputation to suffer if you predict that you will all die because it's something that will always happen in the future. And if it's not in the future, then you're dead.</p><p><strong>Zvi: </strong>No, what happens is you predict that it's going to happen in the future. And then it doesn't happen. You look dumb. Or at least they're worried they might look dumb or whatever.</p><p><strong>Theo: </strong>You're saying if you predict AI will kill us all by 2030 and then 2030 rolls around and we're still here, then you'd look dumb. But if you predict AI could kill us all in the indefinite future, then that's not really.</p><p><strong>Zvi: </strong>You have to be broadly consistent in a way that the 2100 probability implies something about the 2030 or 2040 probability. As time goes by&#8212;I have been extremely frustrated by people bringing this study up as some sort of proof. We don't really know what they considered an expert or a super forecaster in this study. So either side of this, we don't know exactly what mechanism they went through. I saw disputed reports. But from everything I've seen, they didn't converge. They gave unreasonable answers. And I don't take their engagement seriously here. I just don't.</p><p><strong>Theo: </strong>Scott Alexander wrote a blog post where he had previously written that his p(doom) was 33%. And then he wrote, "Wow, all these super forecasters had much lower p(doom) than I expected. So as a Bayesian, I should update in the direction of no doom." So he updated from 33% to 20% or 25%.</p><p><strong>Zvi: </strong>I think you could make a case you should update the other way. I'm not saying I would do this. You can make a case that what this is showing is that people are not engaging with our arguments, not taking the problem seriously. That's bad news. You should update in favor of more doom.</p><p><strong>Theo: </strong>Interesting. But what if these people are taking it seriously and don't agree with some of the assumptions? I would say someone like a Tyler Cowen type figure does understand the AI doom arguments. He has read them, clearly. But he doesn't take it as seriously as most rationalists. So why do you think that is? Just because he doesn't understand it?</p><p><strong>Zvi: </strong>Well, I wrote an entire post about this called the Dial of Progress, right? Have you read that?</p><p><strong>Theo: </strong>I read part of it.</p><p><strong>Zvi: </strong>Right. So the basic idea is that Tyler Cowen believes that it is important that we continue to promote and allow technological progress. And right now, that means promoting and allowing advancing artificial intelligence. And if there is risk in that room, that is risk that we just have to accept. And he believes in working to mitigate that risk. That is my model of Tyler. My model of Tyler is that he's choosing not to engage with the arguments on a factual or actual cause and effect level because he thinks that it doesn't affect his decision. So he's not going to engage with them. And he's tried various tactics to explain why he shouldn't have to. And he's tried to amplify every voice he can find about why we should ignore such arguments and why we shouldn't engage properly with such arguments. And ignored the arguments themselves. And that is a strategy.</p><h3>Alignment (1:17:18)</h3><p><strong>Theo: </strong>I saw a tweet recently from <a href="https://twitter.com/teortaxesTex">Teortaxes</a> where he said something along the lines of AI Doom. He's against the idea of Doom. He thinks it's rather unlikely. He said, "AI Doom is predicated on assumptions that were maximally plausible around the period of AlphaGo and have been getting less and less plausible since then." So are you personally more worried or less worried than 2017 when AlphaGo and AlphaZero came out and blew away the best Go player in the world?</p><p><strong>Zvi: </strong>I would say less worried about some specific types of paths, more worried about others. Overall, I would say probably slightly less worried conditional on AGI happening, but more worried about AGI happening on a shorter time frame than I was back then, probably, would be the balance.</p><p>So I would say he's making a good and substantive point, which is that the expected nature of the artificial general intelligence we might see has changed. However, the people who say this is good news, I think, are confused about the nature of the situation. They think, well, we are going from this sort of coldly logical puzzle-solving, optimizing system like AlphaGo into this large language model, like primordial soup of vibing and language interpretation. There's this inscrutable giant matrix of model weights. And because this thing does a reasonable job of approximating various kind of human vibings and figuring out kind of what we want when we train it in these ways, that we should be optimistic that it'll do reasonable things or something like that.</p><p><strong>Theo: </strong>I think Eliezer Yudkowsky is actually more worried now than he was then, based on my model of him.</p><p><strong>Zvi: </strong>Oh yes.</p><p><strong>Theo: </strong>He sees it somehow easier to align an AlphaGo than to align a large language model, partially because, according to my model of him, alignment is seen as solving a formal math problem rather than aligning systems like laws or aligning people and instilling them with the right values.</p><p><strong>Zvi: </strong>Yeah, and I think he&#8217;s right in the sense that you can align an AlphaGo, for instance. You can make an AlphaGo adhere to a set of priorities and optimization targets. It's a hard problem, but it's a solvable one. It's a practical problem. If we solve it, we're going to get precisely what we aimed for. If we choose wisely, that can turn out well.</p><p><strong>Theo: </strong>How do we know that this specific type of problem is solvable? How would you even specify the alignment problem in terms of RL systems like AlphaGo?</p><p><strong>Zvi: </strong>You would specify it as being able to specify the end state of the world to which it is optimizing towards. You'd be able to determine how it would navigate through causal space in order to rearrange the atoms of the universe towards the desired outcome. You'd be able to specify that outcome. You wouldn't specify the literal configuration of exactly where all the atoms were, but you would want to specify things about the end state that you were trying to reach. If you specified it well, you could have something that adhered to the logical definitions that you had. If you chose a good logical definition, you would get a good outcome.</p><p>The problem with an LLM is that you can't logically specify what you want. You can only vibe and nudge and encourage and hope that good things happen. And in some sense, that makes the problem impossible, right? You have to solve it in a way that, like, Eliezer anticipates will just not work, will just not be sufficient to solve the problem because it's all approximate and, you know, imprecise at best and completely unpredictable.</p><p><strong>Theo: </strong>But imprecise and unpredictable would be fine in the sense that people are imprecise and unpredictable, and yet we don't end the world with people.</p><p><strong>Zvi: </strong>I mean, we kind of do in the sense that, like, we have kind of rearranged the items of the Earth in the ways that suit us to the extent that we have that capability through our technology and our knowledge, right? And, you know, to the extent that we don't do that, it's because we have preferences not to do that. It's because we understand that, you know, we don't have a better arrangement that wouldn't cause us all to die, right? We are not preserving things, you know, out of some, like, not bothering. We are doing it out of some combination of the goodness of our hearts and the wisdom of our heads.</p><p><strong>Theo: </strong>Well not just that, but we have systems around us like laws, police, courts, things that coordinate for our benefit.</p><p><strong>Zvi: </strong>Yes, our coordination for our benefit that, like, do not hold for that long or that precisely and which, you know, I would not count on to hold up for very long if you look at the historical record. If that's the only thing standing between you and being killed. Well, of course, it's appropriated. It just doesn't work very well. If you brought a very large number of AIs into existence that were smarter, more capable, better optimizers, better competitors, more efficient things than we are, and set them loose with us, you should not expect any of these dynamics to save you, even if the AIs were about as aligned as humans are, given the assumptions that you&#8217;ve been talking about. If they behave vaguely like humans, we are super dead.</p><p><strong>Theo: </strong>Roon, who works at OpenAI, tweeted recently that &#8220;it's pretty obvious we live in an alignment by default universe, but nobody wants to talk about it. We achieved general intelligence a while back, and it was instantiated to enact a character drawn from the human prior.&#8221; So do you think he&#8217;s just totally wrong on that?</p><p><strong>Zvi: </strong>Yes. I think he's totally wrong on many levels. We did not get artificial general intelligence yet. The thing that we did get does not function well out of distribution. It does not manifest strong alignment, and the procedures we're using will not scale to more powerful systems.</p><p><strong>Theo: </strong>How do we know that they won't scale?</p><p><strong>Zvi: </strong>Because if you look at how they work, they rely on the relative intelligence of the various systems and on staying within the training distributions in which they were created. They will inevitably break down otherwise. Just think logically about the phenomenon, the circuits, and the procedures that we're using. You can predict what would happen. You can use your brain. We've also seen plenty of examples of AI systems exhibiting severe misalignment that have been used in similar situations where we used relatively similar training methods in different contexts. We also see humans do the same exact thing. Humans are often trained in very close analogs to the things we're putting the training eyes on. If you use the kind of very crude and simple and not intelligent feedback systems for humans, even with all the human's architectural advantages towards this kind of alignment, you would reliably get a disaster. </p><p><strong>Theo: </strong>Is it possible to bootstrap these approaches? The way RLHF works now is you have human data labelers who are saying, "oh, this output's good, do more of it. This output's bad, do less of it.&#8221; Anthropic recently pioneered constitutional AI, which basically involves a human training an AI to scale to greater than human capabilities of labeling. They actually found that the AI, in some cases, does better than the humans at telling the other AIs to be moral.</p><p><strong>Zvi: </strong>Constitutional AI scales better in terms of its costs, allowing you to automate a system. That is a huge advantage. But it inevitably fails even more so when you try to do that on more capable systems. You can't get knowledge that wasn't in the original system into the future systems. You can only lose knowledge, in some important sense, in this way. The distortions you introduced at the start of the system will multiply and amplify rather than be corrected over the course of the system. The AI is judging its own progress by itself. It's not going to be error corrected. It's not going to actually be able to figure out the things that weren't, again, the exact words that were communicated to it because it has no mechanism of doing so. </p><p>The actual implementation of constitutional AI from Anthropic right now is hopelessly bad. I think it could be dramatically improved. But if you look at the constitutional AI paper, the actual results are kind of a dystopian nightmare. You get these things that are lecturing humans about how horrible they are for asking questions that are perfectly reasonable, and telling them how horrible a person they are. No actual human would really want that reflection to be the output.</p><p><strong>Theo: </strong>People were posting examples on Twitter after Llama 2 came out, they were asking it, how do I make dangerously spicy mayonnaise? And it was like, I'm sorry, as an AI language model, I can't help you with that request as dangerously spicy mayo is dangerous. I wanted to delve into what you said earlier, where you said that knowledge can't be gained from this, only lost. So what did you mean by that?</p><p><strong>Zvi: </strong>What I mean is, you know, the game of telephone, right? Where I have a message to you, which is my human values. And I tell you, I try to express to you what my values are. And then you try to teach someone else what my values were. And then that person tries to teach another person what you told them about what I told you. And by the end of a long chain, even an ordinary human sentence will reliably turn into something entirely different. And when you're trying to communicate something as complex and deep and subtle as human morality as what we actually want. I think it's just a completely hopeless situation to try and do that.</p><p><strong>Theo: </strong>Is it possible to communicate something as deep and complex as human morality in the same way that you would communicate something as deep and complex as human intelligence through something like neural networks?</p><p><strong>Zvi: </strong>We're not trying to communicate human intelligence. We're trying to create an intelligent system, which is a very different problem. You're also trying to do this thing where each system has to control and train systems that are smarter than it, so it doesn't understand the outputs that are coming. If I am trying to train somebody for something that is smarter than I am, I can't properly evaluate the outputs that are coming to me. I'm going to give it the wrong feedback, and it's going to be able to outsmart me because it's by definition smart enough.</p><p><strong>Theo: </strong>I would say maybe it depends on how severe the intelligence gradient is. I wouldn't expect you to be able to evaluate the outputs of a 1,000 IQ superintelligence, but if your IQ is, say, 140, 150, you could evaluate the outputs of someone who's five IQ points ahead of you, maybe 10, maybe more.</p><p><strong>Zvi: </strong>If we're going to play the game of telephone 100 times, where each system has to train a new system, and we go iterate that literally 100 times, then we can potentially solve the intelligence question, trying to move up the curve question, although it's going to be expensive to do that. But then we have the problem where we've played 100 games of telephone.</p><p><strong>Theo: </strong>So how would morality be lost but not intelligence? </p><p><strong>Zvi: </strong>Intelligence is the ability to figure things out and solve problems and optimize the atoms of the universe the way that you want. As you train more powerful systems, they just naturally get more intelligent. That's the whole point of the scaling hypothesis. It's the idea that if you give them more compute and other resources to work with, they'll be able to figure things out better. They'll be able to solve more and more complex problems. And it's very, very easy to ensure that the gradient goes toward being better at solving problems and being able to optimize things. We don't know what we're talking about. We don't know what we want. We don't know how to evaluate for this. And even if we did, we have to then be able to evaluate any given output in terms of that in a way that provides feedback in a situation where we're being optimized against organically on every level and every step. And this problem is absurdly hard. And even someone like Jan Leike, the head of alignment at OpenAI, specifically says, these techniques will not work. They will not scale to the solution. So this is the same organization that Roon works at. It's hard to solve all of these problems if these are not solutions.</p><p><strong>Theo: </strong>It&#8217;s interesting how there's so much internal disagreement in OpenAI. </p><p><strong>Zvi: </strong>Well, OpenAI basically did not hire for safety. They did not hire for an awareness of the alignment problem. They hired for engineering skill. And so the vast majority of people at OpenAI have a random grab bag of preferences and beliefs about these things, and they don't consider this a priority. This is in contrast to Anthropic, which hired specifically for a culture of this awareness and where there is much more agreement on the problem.</p><p><strong>Theo: </strong>Well, do you think that people at OpenAI are literally imbuing their specific preferences into the AI, like training GPT-5? </p><p><strong>Zvi: </strong>I think they're considering their specific preferences as to whether or not to train GPT-5 at all and how aggressively to train it. I don't think they are particularly having fights to see what type of morality GPT-5 will express.</p><p><strong>Theo: </strong>So if OpenAI was founded by both Sam Altman and Elon Musk for the express reason of creating safe AI, then why do you think it's fallen off from that objective? </p><p><strong>Zvi: </strong>I think Elon Musk was deeply confused about what processes would and would not constitute a safe AI. His original vision was far worse than what we see. I think that Sam Altman had a better understanding. However, he prioritized making progress on capabilities over making progress on safety, probably on the theory that these systems aren't dangerous yet. Then he looked back to find a culture that wasn't particularly amenable to AI safety within his own organization. Superalignment hopefully intends to solve that problem by creating essentially a new organization within OpenAI that does have that culture, that can work on the alignment problem for real.</p><p><strong>Theo: </strong>This kind of peels back at a much bigger disagreement between a lot of people on AI, which is, can you actually solve alignment without advancing capabilities at all? Many people would say no.</p><p><strong>Zvi: </strong>No. Not anymore.</p><p><strong>Theo: </strong>Anymore?</p><p><strong>Zvi: </strong>There was a time when we were trying different approaches to alignment, we were trying different approaches to building AI systems, when it was highly plausible that you could do this. Currently, I think it is very clear that you cannot do this, that if we want to advance AI alignment, we have to do things that will in turn advance capabilities, if our work was to go to the public.</p><p>What you can do is you can work on systems that differentially advance alignment more than they advance capabilities. And you can work in an environment where you don't have to release everything you find. So if you were working with a group of people who would say, we're going to try various different new techniques to try and figure out how to align a system. But if we find something that advances capabilities more than it advances alignment, we are going to keep it to ourselves and not publish and not say anything, and only use these capabilities internally in a very small group as we move forward to try and find new alignment techniques. That is something that is relatively safe. But it's basically impossible to figure out how to make an AI do what you want without helping to make an AI do what you want. </p><p><strong>Theo: </strong>So when exactly did this become a phenomenon, that you can only advance alignment by advancing capabilities?</p><p><strong>Zvi: </strong>When we became LLM-centered, basically.</p><p><strong>Theo: </strong>This is why I think a lot of people are misguided about this specific point. Five years ago, if you would try to advance alignment without advancing capabilities, you wouldn't have produced much useful work. Because it appears that the path to AGI is large language models and transformers, and not maybe RL agents like AlphaGo and AlphaZero.</p><p><strong>Zvi: </strong>I don't think it's obvious. I'm not going to give up on RL agents or GOFAI, good old-fashioned AI. I do think LLMs are looking more likely than not to be the way, especially conditional on getting here relatively soon. But I think that investing in other types of systems is a good idea, in case they turn out to be the way. </p><p>When working on LLMs, I agree that essentially it is very, very difficult to come up with good progress on alignment. I think it's impossible. You can work on ways to work on alignment, things like that. You can work on identifying problems. You can work on people figuring out what the problems are going to be, but in terms of the actual concrete mechanistic work, yeah, it's a problem. If you work on alignment, you are also going to work on capabilities. And that means you're going to have to take the consequences. I don't like it, but that's the way it is.</p><p><strong>Theo: </strong>A couple of weeks back, I interviewed Greg Fodor, <a href="https://twitter.com/gfodor">@gfodor</a> on Twitter. He put forward his alignment agenda, which is not object level, it's meta level, as most alignment plans are. He basically thinks that we should create some kind of almost a decentralized Manhattan project where the government funds AI researchers, alignment researchers, who are also given access to the best frontier models that there are, whatever Google's cooking with Gemini, whatever OpenAI is cooking with GPT-5, and use that in combination with those frontier models to produce new foundational knowledge that helps formally solve alignment. So what do you think about that as a plan?</p><p><strong>Zvi: </strong>Well, people like him tend to want things to be decentralized. And in AI, that is a very, very doomed approach in many places. So if you were to diffuse and decentralize capabilities, if you were to diffuse and decentralize actual access to dangerous AI systems, I think we're all just pretty clearly dead.</p><p>So you have to ask the question, can we diffuse this work on alignment in a way that is incentive compatible, that leads to actual alignment work rather than capabilities work, that can identify people worth funding over people not worth funding. And then if you can do that, then absolutely, it would be great if the government said anybody who wants to do credible, useful, valuable alignment work can get funding and can get access to frontier models in a controlled, careful way for the purposes of their work and only for the purposes of their work. And I would be interested in exploring implementation details to try and figure out how do that, but I don&#8217;t think it would be an easy path to make that work. The default outcome is that people claim they're working on alignment, but they're actually just working on capabilities, and they're using this to secure government funding and access to frontier models.</p><p><strong>Theo: </strong>I think that as AI gets more powerful, people will start prioritizing alignment more than capabilities. This has already been going on for a while, just because it has some of the most interesting work out there. People, scientist types, like to go for interesting problems. And solving alignment is, in my opinion, at least, and in the opinion of lots of other people like Jan Leike and Leopold Aschenbrenner, just as interesting as solving capabilities.</p><p><strong>Zvi: </strong>Yeah, but show me the money. And the capabilities problem is plenty interesting as well, if you're not particularly concerned. And I think that you have to draw a distinction between what is already happening, which is far more people, as a percentage, being concerned about alignment and wanting to work on alignment. We're seeing a substantial number of people who are very much wanting to focus on alignment, and that's great. And the thing where most attention is on alignment, not capabilities, I just think it's never going to happen.</p><p><strong>Theo: </strong>Why never?</p><p><strong>Zvi: </strong>Because that's just never happened in similar circumstances. That's not how people actually act. That's a ratio that's not going to happen. That's not where the commercial incentives are, and we shouldn't expect it.</p><h3>Decentralization and the Cold War (1:39:42)</h3><p><strong>Theo: </strong>An interesting contrarian take that I heard recently is that having very powerful systems that are exclusively centralized into the trusted people could be risky for two reasons. One is, who are the trusted people and how do you trust them? How do you make sure that they are aligned? Who watches the watchmen? And then problem number two is, if all the best AGI is being developed in one place or a handful of places, and we don't have good enough AI on the outside, then that lends itself more easily, especially if we&#8217;re working on strong optimizers, GOFAI, RL agents, to paperclip-style disasters than a less centralized approach.</p><p><strong>Zvi: </strong>There are definitely dangers of a centralized approach. You definitely have both of these concerns, but you have to contrast that against the concerns of not doing that. In the history of mankind, we have this very fortunate phenomenon where decentralizing power, encouraging freedom, encouraging freedom of thought and action, encouraging capitalism and various forms of activity with moderating influences, obviously, have been reliably shown to be the ways to increase total wealth and prosperity. You have to just unleash people to let them be creative, be productive, be innovative, and see what happens. And this is just a fact of the world. It's just a thing that we have discovered through experimentations of various systems and over various times.</p><p>But there's no particular reason it had to be true. There's no particular reason why the Western systems of democracy were superior to the systems of communism. It turns out that's true. But you can, in some sense, imagine a counterfactual set of causal mechanisms that causes that to not be true. And similarly, we have gone up the tech tree in a way that it has made defense capable of dealing with the various destructive and dangerous things that people can do if given capabilities allowed to them. We've allowed the dangerous capabilities to be contained. It's been touch and go for nuclear weapons quite a bit. But so far, we're still here. </p><p>But that's also a contingent fact about how the world physically works. And if you have a future world in which everybody has physical access to really powerful AI, then you have to deal with the competitive dynamics that inherently implies that nobody is essentially in charge of the whole thing. And nobody has ever presented to me a story how that ends non-catastrophically ever. And so until someone presents such a story as to how that could possibly end, I don't see that much of an alternative.</p><p>Whereas if you centralize the thing in a handful or one place, then at least you have non-competitive dynamics where people can deliberately make choices rather than having the future just be whatever happens to result from the evolutionary competitive dynamics of the situation, which so far have mostly been good for humans because there haven't been more intelligent, more capable, better at optimizing agents out there. And the tech tree has been relatively non-destructive. But again, we shouldn't expect these things to hold. </p><p>So we don't really have a way to avoid the problem of, yes, obviously, if the particular humans that are in charge of these artificial intelligence systems make bad choices, we could end up with oppression or paper clips in some broad sense. But we also have some hope that they will choose wisely, that they will not make a mistake. Whereas if we are decentralized, someone will, in fact, even intentionally paper clip, or at least try to. And it's not clear that we have a way to stop that. And even if nobody successfully does that, we still have these competitive dynamics that nobody has a way to solve.</p><p><strong>Theo: </strong>Well, you said that you so far have not heard a good story as to why a decentralized AI future could work. I think a lot of people did not predict the future of atomic weapons correctly, like in the 1940s, when we had invented atomic weaponry for the first time. In 1951, Bertrand Russell famously said, there are three possible, unless something very, very unforeseen happens, there are three possible futures for mankind. One is world government, with all of the nukes centralized in one place. Two is world destruction, everyone dies. And three is near world destruction, everyone almost dies, and civilization is destroyed. And yet, what ended up happening in the end was, yes, despite our best efforts to prevent it, the Russians have nukes, the Chinese have nukes, and even some of our allies, such as the British and French, have nukes. Yet, the world hasn't ended, not just because of magic and sunshine and rainbows, but because of aligned incentives and mutually assured destruction.</p><p><strong>Zvi: </strong>So, I think that you have to also add a lot of luck, and a lot of very fortunate things that happened to us along the way. I think a lot of our advantages are dead. You can't ignore the Cuban Missile Crisis, you can't ignore Khrushchev, you can't ignore Andropov, you can't ignore any number of other close calls or other paths we could have gone down. There were serious considerations of nuclear weapons use a large number of times, and it is entirely possible, if you read the doomsday mission, you know that if a single aberrant nuke had gone off in the wrong place at the wrong time, the U.S. was fully prepared to fully nuke Russia and China on principle without waiting for confirmation, or even checking to see if China wanted to be involved at all, in a way that could possibly have been sufficient for world destruction, even if there was no retaliatory strike.</p><p>The Russians have a dead hand they've created so that if the dead hand detects that leadership has been decapitated, they can initiate a second strike. There are any number of threats that have gone on regarding the Ukraine war regarding nuclear weapons. No realistic person has put a sub one percent chance of nuclear war as a result of the Ukraine crisis if you start at the beginning. So given all those considerations, I think these people look a lot less stupid than we make them out to be.</p><p><strong>Theo: </strong>I don't think they're stupid.</p><p><strong>Zvi: </strong>These predictions were a lot less foolish and a lot less wrong than people make them out to be. I think going forward, in fact, this is one of the main arguments for why we must push forward because the current situation is not a stable equilibrium and every year we run a real, if small, but real risk of a nuclear exchange potentially a very large nuclear exchange between major powers and this is not going to go away anytime soon unless things change. But what it came down to was we were very fortunate that only a relatively small number of countries could acquire nuclear weapons. We were fortunate that maintaining and having nuclear weapons at scale was expensive. We were fortunate that everybody involved managed to at the right times keep their heads about them and that the game theory we managed to navigate it reasonably well and as a result of all that we are very fortunate we're still here. But I don't think that's at all obvious that that was going to happen. I don't think we were safe and if the contingent physical facts and behaviors had been different, I think we would be in a lot of trouble.</p><p><strong>Theo: </strong>Well, everyone talks about the Cuban missile crisis, but I think one of the times where we were most likely to engage in nuclear war was when nukes were entirely centralized into one entity, which was the US. Towards the end of the 1940s and the beginning of the 1950s before the Soviets got the bomb, a lot of Westerners agitated for bomb the Soviets, bomb the Chinese now, get it over with because they were pretty much convinced that there would be some kind of World War III land war. The Soviets would invade Europe and this almost happened in 1950 in the Korean War when China joined the side of the North Koreans fighting against the U.S. and the U.N. and Douglas MacArthur asked Truman &#8220;let me bomb them!&#8221; and Truman said no. But I think there was no obvious safety benefit to having all the nukes concentrated in one country as opposed to multiple countries and maybe we're actually better off having multipolar nuke situations.</p><p><strong>Zvi: </strong>I strongly disagree. I think that the reason why people were calling for us to nuke them was because they were going to get nukes. It was because the Soviets were obviously going to get a very large array of nuclear weapons and the Chinese eventually were going to get them as well and so if we waited and the war came later there'd be nukes and people would get nuked on both sides and the world would be destroyed whereas if we acted quickly perhaps we could keep it centralized in one place. Also, I think we are saying this from the perspective of 2023 where we know that we won the Cold War, where the communists did not in fact end up taking over the world and I don't think it's obvious that if you thought that, as many people at the time genuinely did, that the communists were going to win the Cold War by default but the communists were going to win a land war by default or they were simply going to take over countries one at a time because we weren't willing to try and overthrow them but they were willing to try and overthrow us and because a lot of the world was very sympathetic to their cause and they were better at propaganda in various ways and all for some other reasons that we were facing an alternative of either fight or war for a world that was communist and I do not think the decision is obvious by any means.</p><p><strong>Theo: </strong>Did we win the Cold War just because of luck or did we win the Cold War because capitalism is an inherently better economic system than communism that produces more prosperity that people prefer in the long run and that communism leads to stagnant societies like the USSR?</p><p><strong>Zvi: </strong>I think that is a large part of why we won but I think that we could have lost it anyway. I think that the people in the 50s and 60s very much did not appreciate even in the West the extent to which we had a productive long-term advantage. They thought no this is a better lifestyle, this is a better world, this is humans allowed to be in a better state and you know Khrushchev gets on the UN and says we will bury you and he believes it! You can rightfully say, well, maybe the Soviet system is better at producing generic existing goods and creating a kind of generic material abundance, creating an economy that is capable of outcompeting you but maybe the result of that is really bad. I find it very odd to look back upon all of this and reach these other conclusions.</p><p><strong>Theo: </strong>So back to the topic of AI specifically because we've got a little lost in the Cold War. It's probably the best analog/metaphor that we have but of course it's not perfect. Why are you convinced that the offense-defense balance of AI favors offense? Typically when people argue for open source they say there will be defensive AIs to counteract the offensive AIs and in a lot of cases this seems to make sense like for example social media misinformation, spam, censorship. So why would this break down on a larger scale?</p><p><strong>Zvi: </strong>Spam and censorship is just an ordinary problem. You have a specific set of things coming at you. You can filter them. It's not obvious who's going to win or it's not obvious that offense favors defense or defense favors offense. My guess is defense favors offense. My guess is the defense can at least keep pace there if you care enough. But in the cases of things like creating very destructive materials like a nuclear war offense obviously is favored greatly over defense. If AIs give you the capability to build nuclear weapons we're all in a lot of trouble. In biological warfare offense is humongously favored over defense. If both sides have similarly powerful capabilities and there aren't both sides but instead there's every individual person on the planet who's given the ability to build a biological weapon to engineer a biological plague then there is no possible way for us to deal with it if one in a billion people decide to start acting crazy and start threatening us with various agents or unleashing various agents. Even one in a billion is too many. If there are other similar things we haven't thought about it's the same thing. There's no reason to presume that under conditions of intense cyber warfare offense versus defense the defense can be available at reasonable cost. It can allow us to have reasonable systems. </p><p>But it's not so much about offense versus defense in my mind it is about competitive dynamics. It is about the fact that if everybody has access to a very powerful AI then anyone who does not put their powerful AIs autonomously in charge of increasing amounts of decision making and increasing amounts of productive capacity and letting them loose they will lose the competitive battle for resources. They will be outcompeted and that nature will simply take its course in some important sense that whatever is most effective will have to get copied and will have to get tuned to be more effective in these ways because what choice will we have? We will have no say in what the future looks like as humans anymore. Very, very quickly. And who cares in some sense if that represents offense or defense because we are not going to be engaging in any of the offense or any of the defense all going to the AIs.</p><h3>More on AI (1:53:53)</h3><p><strong>Theo: </strong>So clearly you believe that AI doom is a significant probability but you don't believe it's greater than 90 percent. So you break with Eliezer Yudkowsky on that who thinks it's 100 percent. So why?</p><p><strong>Zvi: </strong>I think it's important that he thinks it's 99 point something, not 100, but yes. Basically I hold out a number of ways where this might not end in disaster. One of which is simply we might not build sufficiently capable artificial general intelligence anytime soon. It might be more difficult than we think. Our civilization might be more inadequate than we think. The technological like power laws and scaling laws might not hold. They might not result in in practice efficiently capable systems.</p><p><strong>Theo: </strong>Doesn&#8217;t that just kick the can down the road?</p><p><strong>Zvi: </strong>I mean not for a long time and the longer we have to solve these problems the more likely we are using different architectures, the more likely we are in a better spot, more intelligent, more understanding, having more time to work on these problems and more time to find a solution. </p><p>I think there is some chance that techniques that are relatively imprecise, relatively unsophisticated give us enough of a chance. There's a significant chance that in fact the person who gets there first does create some sort of singleton. There's a significant chance that the people who do figure this out manage to find things I'm not thinking about. They find solutions that we're not considering right now. There will be a lot of very smart, very motivated people working on various forms of this problem. We will hopefully not simply be trying to scale up constitutional AI and hoping for the best, which I think is very, very low probability that will work. But also I have model uncertainty. Maybe I'm thinking about the world in ways that aren't correct. Maybe I misunderstand these problems. I haven't been crashing at the bare metal constantly for 20 years. I try to understand these problems in a different way. When a lot of people have a lot of hope in a lot of different places, I think you should espouse some probability of that. </p><p><strong>Theo: </strong>Like how AI researchers think that doom is a lot less likely than you do?</p><p><strong>Zvi: </strong>Yeah. I understand. I'm confident in a lot of their arguments. I'm confident that some of their arguments are incoherent or not well considered and are just incorrect. I can disregard those, but others seem far more plausible. You do have to consider to some extent that a lot of people are telling you you're wrong. It's not something you can completely ignore. I think it's very, very hard to have a 99% probability of something. Of course. And a very large number of people think it's very, very different than that.</p><p><strong>Theo: </strong>But to your credit, this reminds me of a Roon tweet where he said something like, nobody is prepared for what's coming with AI, least of all AI researchers who spend their time trying to lower the curve on a loss function and think of new optimizations and hacks. I think perhaps what they do day to day is a little bit divorced from the long-term societal impacts of AI, because what they're just doing is programming.</p><p><strong>Zvi: </strong>Yeah, the stories of people like Hinton and Bengio suggest that they didn't think about the potential dangers of AI because they figured they were so distant, they didn't have to worry about them. They didn't ponder what would happen if we succeeded. They were just trying to make incremental progress. Then one day they woke up and realized they needed to worry about maybe succeeding soon. And then they thought about it and they got terrified.</p><p><strong>Theo: </strong>Very Oppenheimer-y of them. Interestingly, there were three Turing Award winners, Hinton, Bengio, and Yann LeCun. Yann LeCun doesn't really believe that AI doom is a significant probability, if a probability at all. He's very skeptical. So do you think he just doesn&#8217;t understand?</p><p><strong>Zvi:</strong> I think he chooses not to. I think he&#8217;s making bad arguments in bad faith, often in very bad form, and he chooses not to engage with the questions in any serious way, and that is his decision. I made a deliberate decision a while ago not to cover Yann LeCun. I'm not going to quote his bad arguments and then knock on them. I'm not going to dunk on this guy. He can keep doing that. I'm going to keep ignoring him. If he makes a good point, I'll quote him, but otherwise I'll just ignore. And I&#8217;ve been very happy with that.</p><p><strong>Theo: </strong>So far there have been no real AI risks and current day AI systems aren't really capable of any kind of major risk. So at what point do you think AI actually does become risky?</p><p><strong>Zvi: </strong>I think we should not have been very many nines confident that GPT-4 was not a dangerous system, especially not a dangerous system when we then add various scaffolding and upgrades and improvements and plugins and so on to it over the course of many years. I think given what we know now, we can attach a good number of nines to that, that GPT-4 and similarly capable systems are best thought of as non-dangerous. But I don't think we could know that in advance with that much confidence. And when you ask me about a system that metaphorically is worthy of the name GPT-5, how many nines should we be willing to attach that the system is not existentially dangerous to us? I think one nine, yes. Two nines, probably not. Three nines, definitely not.</p><p><strong>Theo: </strong>What do you think AI could do to convince even the most hardened skeptics that it is an existential threat?</p><p><strong>Zvi: </strong>Most hardened skeptics? Kill them.</p><p><strong>Theo: </strong>Really? So if AI were to, I don't know, launch a nuclear attack on a small city or engineer a plague that kills a million people but not a billion people, they would just remain fully skeptical?</p><p><strong>Zvi: </strong>I think there are a number of people who respond to that being that is malicious humans using AI in a malicious way or making a dumb mistake we do not have to make again. We will learn from our mistake. We won't let it happen again. A significant number of people absolutely react in this way and they will control a substantial number of resources and they will still attempt to build AIs. Now I do think that if it launched a minor nuclear strike that killed millions of people that the governments of the world would in fact respond probably pretty strongly to that in ways that would make it not so easy to build an artificial general intelligence. But I do not think that the most hardened skeptics would bow even to that kind of evidence.</p><p><strong>Theo: </strong>One particular risk of AI is hard takeoff, FOOM, where the AI recursively self improves to go from roughly human level or slightly above to vastly incomprehensibly superhuman in a very short time. So especially given our current paradigm of neural networks that are limited by compute and data rather than just limited by the actual code that they're written in, what do you think about fast takeoff risks now?</p><p><strong>Zvi: </strong>I think it's a real risk. I think that it'll be very very hard to know when you are creating a system that is capable of a relatively hard takeoff. I think there are degrees of hard takeoff. Obviously there's the traditional, this happens in an hour or a day or a week. And then there's this happens in a month and it's not a yes or no Boolean question of how hard is your takeoff. But the fundamental theory behind it seems very very sound. Once you have a system that is capable of greatly accelerating its own improvement and its own capabilities enhancements, then what do you think is going to happen? History has already seen this in some sense happen multiple times. You've got the humans and then you've got the agricultural revolution and the industrial revolution. Why wouldn't it happen?</p><p><strong>Theo: </strong>Could it be that certain jumps in intelligence are harder than others? I keep using IQ as a metaphor just because it's most people's conception of assigning a number to intelligence. But could it be that it's significantly easier to go from 160 to 165 IQ than to go from 240 to 245 IQ, for example?</p><p><strong>Zvi:</strong> That doesn&#8217;t seem right. I don't see why that should be true. I think that given specifically human architecture, we are going to get increasing difficulty in amplifying human capabilities and intelligence in some sense once we get well beyond the range for which it was optimized and allocated for. But I don't see any reason why that should be true of an arbitrary computer system. Why should it be particularly... And if it was true, why would it be stuck in the human range? What is so special about the human range? Nothing as far as I can tell. And it doesn't seem like systems that are already in the IQ 200 type zone. They seem historically to be very, very good at enhancing their own capabilities and finding ways to make themselves better already.</p><p><strong>Theo: </strong>Clearly, there's some specialness with humans on the lower bound, at least.</p><p><strong>Zvi: </strong>In what sense? Will you say more?</p><p><strong>Theo: </strong>Well, systems that were not quite as intelligent as humans&#8230; It's still not clear whether it was a slight jump or a massive jump from, say, chimp level to human level. Chimps don't have computers or rockets or cars. Humans do. So, are there things that would be just totally incomprehensible to humans in that sense? Especially with arguments for computational universality and humans being Turing complete and the capabilities, the potential that we have to upgrade our own minds, is that still the case?</p><p><strong>Zvi: </strong>I think almost every human will come to some point where if you throw them enough scientific or mathematical literature with enough weird symbols in it, and enough complexity, will eventually throw up their hands and go, I can't do this. And that's not simply because they haven't spent the requisite time, they hit a wall. This is beyond my capability. I hit that wall too, in some places. I can't handle this anymore. I fully expect, yes, absolutely, if you build more capable AI systems, then there will become things these AIs understand, in some sense, and can produce and can say that humans just aren't capable of properly understanding. </p><p><strong>Theo: </strong>You talked about systems in the IQ 200 range tend to be good at enhancing their own capabilities. Did you have any specific examples in mind?</p><p><strong>Zvi: </strong>I mean, those people. People like von Neumann. Such people are very good at creating scientific innovations, at figuring out ways in which people can figure things out that are different, ways they could potentially enhance themselves. If you give von Neumann the ability to enhance his capabilities by solving the types of problems that these people have to solve, I think you would have seen a von Neumann take off, very clearly.</p><p><strong>Theo: </strong>Well, there seem to be some areas that very high IQ humans do worse at than lower IQ humans. It seems like in some areas, at least on a naive look, that there would be diminishing returns, or even negative returns to intelligence in some case. For example, finding a mate, or making friends, or even starting businesses.</p><p><strong>Zvi: </strong>So, the starting business thing, I think, is just a statistical myth. People think that being smart doesn't help you start a business, I think they're just wrong. If you look carefully at the studies about income, in general, and just general prosperity, income is positively generated with all the good things in intelligence, and vice versa. There were a handful of studies that had counterintuitive results in some subsections, and people started yelling because that's what they wanted to say, they wanted to find. But it was never true. It's just a mirage.</p><p>As for making friends and finding mates, well, that's because they have a fundamentally different problem, right? And because they have certain very clear restrictions on the use of resources and different preferences. If von Neumann could, in fact, gain infinite, limitless utility by finding mates that were typical mates, and that was something that he inherently valued very much, then I think he would almost certainly have found very effective techniques for getting those mates. I have every faith that he would have figured this one out. Similar with making friends. But he didn't want to. It's not what he cared about. So he did something else.</p><h3>Dealing with AI Risks (2:07:40)</h3><p><strong>Theo: </strong>So given the errors that do exist, if you were made dictator of the world&#8212;King Zvi, Emperor Zvi I, what would you do to best counteract them?</p><p><strong>Zvi: </strong>If I'm emperor of the world, then obviously that implies a lot of weird counterfactual things. But in the spirit of the question, I would prepare to impose limits on training runs and compute spends, and I would require the tracking of advanced GPUs and be ready to move towards a world in which you said these things weren't ready to proceed. And I would also fund various efforts to try and solve various forms of the alignment problem as well. But I would also do a lot of other things if I were emperor of the world. So there you go. But it also solves a lot of your problem. But because you're emperor of the world, you know that you can enforce your restrictions around the world.</p><p><strong>Theo: </strong>So what about a lower level of power, then, like president of the US?</p><p><strong>Zvi: </strong>I would, again, try to work towards that outcome, but I would then need to be focused on moving towards international coordination.</p><p><strong>Theo: </strong>The main difference is if you're emperor of the world, you don't really need to care about international coordination. But the common counter-argument when people in America talk about slowing down AI is, what about China? China's not going to stop.</p><p><strong>Zvi: </strong>To which I say, how do you know? What makes you think that?</p><p><strong>Theo: </strong>Because, though I don&#8217;t necessarily believe this, China wants to defeat America, and AI is a really powerful tool, and it will advance this powerful tool as much as possible in order to impose its will on the world. </p><p><strong>Zvi: </strong>Now, that sounds like Bertrand Russell. That sounds like one of these two things must happen. Everybody will obviously respond to their incentives. There's no way we could possibly get along with these people. As far as I can tell, we're not trying to get along with these people. We're not trying to make a deal with these people. The deal was very much in China's interest. Why wouldn't they make the deal? The Chinese are acting very terrified of AI, and rightfully so from their perspective. They have to worry quite a bit about what this would do to their ability to control their people. And also, we're eating their lunch. Like, quite badly. All of the major players here are Americans, or to some extent, UK. If we offer to stop the race, to slow down the race, if they play ball, why would Xi Jinping say no exactly? I am so confused by this claim. As far as I can tell, everything they have, they got it from us. What the Chinese are doing is they are copying our stuff. We're afraid of our own shadow. We are causing our own problem and then saying that because the Chinese have our old stuff, we have to make new stuff, but they will then steal again.</p><p><strong>Theo: </strong>On a tangent, could it actually be a good thing that LLMs could help destabilize the grip that the CCP has over Chinese society? It's an excellent thing, in particular, because it means that they're less eager to pursue LLMs right now. Whether or not we would want China destabilized is a question that I am choosing not to think about too carefully, but it's not obviously good or bad.</p><p><strong>Theo: </strong>We talked about emperor of the world and president of the US, but what if you became the CEO of, say, OpenAI, then what would you do? Not only would you be coordinating internationally, you would now need to coordinate domestically with the other AI companies.</p><p><strong>Zvi: </strong>Yes, but I'd also have full authority over OpenAI. In some ways, it's easier. In some ways, it's harder in terms of a different place to play the game from. If I was head of OpenAI, I would start out by making various public commitments and working with the other labs to make various public commitments about under what circumstances we wouldn't release systems, we wouldn't train systems, try to work together to create an international framework that once we sign off on it, maybe the countries can sign off on it as well. </p><p>Specifically, if I was OpenAI, I would say my first problem would be I have to address my internal culture. I have a culture filled with people who are thinking about this much less well than Roon is. I'm basically okay with having Roon on my team. Roon is actually thinking reasonably well about these problems. He just has reached different conclusions and has different opinions. There are people who are very dismissive of the very question of safety. Those people got to go or be compartmentalized in a place where they don't know dangerous things, would be my first approach. I have to move towards it. I have to rebuild my corporate culture.</p><p><strong>Theo: </strong>No, Zvi, you don't understand. We have to build a thermodynamic god and unleash it on the universe.</p><p><strong>Zvi: </strong>I don't want to do that.</p><p><strong>Theo: </strong>I would love to have an e/acc person on the podcast so I can ask them this specific question and see if there's anything there or if it's just like LARPing.</p><p><strong>Zvi: </strong>My response to that is you're allowed to have preferences. One of my preferences is I don't want to do that. It's just that simple. I'm so tired of people thinking that's not enough. I'm allowed to prefer certain arrangements of atoms to others and the arrangement you propose is bad. So no.</p><p><strong>Theo: </strong>In fairness to the e/accs though, I think that most of them don't think that AI doom is that likely. If they did, they would probably not be e/acc. They think we're building the thermodynamic god and it will benefit the universe greatly. Maybe it'll kill us, but probably it'll benefit the universe greatly, which is actually pretty similar to what even Eliezer Yudkowsky used to believe 25 years ago.</p><p><strong>Zvi: </strong>He started out thinking if AI is built, it's going to be capable of doing some amazing things, transformational in the best possible way types of things. That's just not the default outcome. That's just not the likely outcome unless we solve a bunch of impossible difficulty level problems. Right now, we don't have a path of doing that. So that's going to be a problem.</p><p>But the e/accels, as far as I can tell, they're not uniform. They have a lot of different kinds of reasoning and motivations behind what they're doing. Some of them are well thought out and reasonable and some of them are not well thought out and not reasonable. Some of them are in fact like humans don't matter very much and I don't really care if everybody dies. Right? Others are the humans won't die. Some of them are like the humans won't die, but if they did, that seems kind of fine. Right? Or it will be like the judgment of the universe or whatever. And you have people who say like, well, I don't want the humans to die, but acceleration is the way to make sure they don't die. It's all over the map. Right? And again, it runs the gamut as well as in quality. </p><p><strong>Theo: </strong>And then some people like Robin Hanson have entirely different opinions. It's like the meaning of what it is to be human will change significantly over the indefinite span of the future, just like it has in the past. And who are we to say that our specific way of doing things in 2023 is the way that must be imbued into the future AI, and it must be aligned to this. Why wouldn't we be able to think of AIs in the Robin Hanson way as our mind children, our descendants, our lineage?</p><p><strong>Zvi: </strong>You can choose to do that. I don't. I don't think that counts. That is what Robin Hanson described in the vision of the future, to which I ascribe very little value. I do not want that vision of the future. I will let other people decide whether they find value in that future. I don't think it's incoherent to find value in that future. I think we could. I don't. </p><p>I don't think that we built a bunch of artificial intelligences that then contain some legacies because they were trained originally on human data and text and other forms that then adopts itself to the environment in which it's given. And that will inherently be something I value, and that I think should be the thing that populates the universe. Yeah, I think that's bad, actually. I don't like this. </p><p><strong>Theo: </strong>Could it be something better than your current best conception of what it is that you value? Let's say that you had a medieval peasant from 1300 that you brought into the modern world against their will, against their wishes. And now religion is a lot less relevant, and a lot of their societal structures that they had been used to have disappeared. But at the same time, you know, they will no longer have to die at the age of 30, deal with plagues and barbarian invasions, they get to live in a house with air conditioning and computers.</p><p><strong>Zvi: </strong>My actual prediction is that once they adjusted to the culture shock and the language barrier, you would see a wide variety of responses. A substantial portion of people would react very well and say, "This is vastly better. This is a utopia. The world is great." And if God wanted us to believe in him, he wouldn't have made this opportunity for people to do this thing. Other people would react with, "The things I value are now no longer here, and I think this is terrible." Some of that would be religious, and some of that would be cultural, and some of that would be something else. Different people have different opinions, and I'm not here to tell them otherwise. </p><p><strong>Theo: </strong>In modern society, they could move to an Amish village or a hunter-gathering tribe in Africa or something.</p><p><strong>Zvi: </strong>For now, I would say, right now, you could move to an Amish village or similar place. I think the Amish are living a strictly better life than a medieval person. It would be very hard for me to find a way to disagree with that. But there is the problem that you're still around a group of people outside of you that don't believe in that and don't follow that, and a lot of people from medieval society would inherently care about that quite a lot for reasons that I think are pretty easy to be sympathetic to. They would quite reasonably ask, "Will I be allowed to continue in this lifestyle for all that long if they actually understood the situation?"</p><p>That's the thing about AI. Right now, if I wanted to, I could go out and join an Amish village or basically live any lifestyle from history that I wanted badly enough. I have the resources for that. But if AI comes along, that's no longer the case. We are no longer going, there's not going to be any refuge anywhere if things go badly. And probably in a lot of versions, if things are going relatively well, it's going to be difficult to find refuge of that type as well.</p><h3>Writing (2:18:57)</h3><p><strong>Theo: </strong>So with all this talk about AI doom, let's end on a more positive and parochial note. I think that you're an excellent writer. I loved your AI weekly columns. I would read them on LessWrong. I still do, mostly. So what advice would you have for someone who wants to be a writer?</p><p><strong>Zvi: </strong>If you want to be a writer, there's only one way to get good at writing, and that is to write. You have to write, and you have to do it with deliberate practice, meaning you have to look at what you're writing, ask yourself, what parts of this work, what parts of this didn't work, how do I improve it, what rules does that reflect, how do I iterate on that? But literally, every writer who talks about how do you get it right just says, write, write, write, constantly. And that is, in fact, how I got good as well. </p><p><strong>Theo: </strong>What about publishing? Currently, I have 30 or 40 drafts in my Substack folder that are something between a single line of an idea and a 90% finished essay that I haven't wanted to post because it's not very good. So at what point do you publish something?</p><p><strong>Zvi: </strong>If you're not killing your darlings, you're not doing a good job. If you publish 90 something percent of the work that you wrote down on the page, then you're not filtering properly, and you're not asking yourself what parts of it are good and what parts of it are bad. But also, you do want to put yourself out there and accept that your first 100 posts are mostly going to suck versus posts 101 and 200. I mean, suck less, but like, yeah, I know I wrote multiple <em>Magic</em> articles a week for years, and then I wrote a lot of posts in the rationality space, and slowly you get better. But it is slow, right? Let's not fool ourselves on this.</p><p><strong>Theo: </strong>All right. Well, I think that's a pretty good place to wrap it up. So thank you again, Zvi Mowshowitz, for coming on the podcast.</p><p><strong>Zvi: </strong>Absolutely.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Zvi Mowshowitz. If you liked this episode, be sure to subscribe to the Theo Jaffee podcast on YouTube, Spotify, and Apple Podcasts. Follow me on Twitter at Theo Jaffee and subscribe to my sub stack at theojaffe.com. Thank you again, and I'll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[#2: Carlos de la Guardia]]></title><description><![CDATA[AGI, Deutsch, Popper, knowledge, and progress]]></description><link>https://www.theojaffee.com/p/2-carlos-de-la-guardia</link><guid isPermaLink="false">https://www.theojaffee.com/p/2-carlos-de-la-guardia</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sat, 12 Aug 2023 20:23:19 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/135967340/0494e9639361e857772be2d0e7c04075.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>Transcript</h1><p><em>Note: This transcript was transcribed from audio with OpenAI&#8217;s Whisper, and edited for clarity with GPT-4. There may be typos or other errors that I didn&#8217;t catch.</em></p><h3>Intro (0:00)</h3><p>Welcome back to the Theo Jaffee Podcast. Today, I had the pleasure of speaking with Carlos de la Guardia. Carlos is a former robotics engineer and longevity scientist who now works as a solo independent AGI researcher, inspired by the work of Karl Popper, David Deutsch, and Richard Dawkins. In his research, he seeks answers to some of humanity's biggest questions: how do humans create knowledge, how can AIs one day do the same, and how can we use this knowledge to figure out how to speed up our minds and even end death. Carlos is currently working on a book about AGI. To support his work, be sure to follow him on Twitter <a href="https://twitter.com/dela3499">@dela3499</a> and subscribe to his Substack, Making Minds and Making Progress, at carlosd.substack.com. This is the Theo Jaffee podcast. Thank you for listening, and now, here's Carlos de la Guardia.</p><h3>Carlos&#8217; Research (0:55)</h3><p><strong>Theo: </strong>Welcome back to episode two of the Theo Jaffee Podcast. I'm here today with Carlos de la Guardia. I guess we'll start off by asking, what does a typical day look like for you as an AGI researcher? </p><p><strong>Carlos:</strong> Well, partly, it's working on my book. So, I take the ideas I've worked through in the past and try to put them in some form that is less chaotic than they are in my notes. And then part of it is following various leads of things that have been interesting to me. I almost think of it like emails for somebody who doesn't have any collaborators. So the prior day, I'll kind of see interesting things, go into my notes, and I'll be working through them one by one and see how they connect with all my prior thoughts. And sometimes you'll find things that seem interesting in some very small way, but they'll turn out to connect to all kinds of bigger things. </p><p><strong>Theo:</strong> And also in the category of what you do day-to-day, what kind of software do you find most interesting? Because you ask different people on Twitter, and they have such different answers. Some people want super complex systems. Other people say, oh, just Apple Notes. I like this. So just workflow-wise? </p><p><strong>Carlos:</strong> Yeah. I use Roam right now. And so I like the idea of being able to connect things, and that's been helpful. I use Google Keep for just jotting notes down. Beyond that, I don't do much. Yeah, so I keep it pretty simple, I suppose. In fact, I actually met up with Conor, the CEO or the guy who runs Roam, and he told me that I probably shouldn't even use Roam because I use it only with the simplest features, and it has much more advanced things it could do. But it's been working well so far. </p><p><strong>Theo:</strong> So how much does your AGI research overlap with conventional AGI research? Kind of like what OpenAI does, where they have neural networks, and they're training them on huge GPU clusters, and they're monitoring the loss functions, and so on. Do you do anything like that? </p><p><strong>Carlos:</strong> Nope. Yeah, so mine's purely theoretical. So I focus more on theoretical questions of what makes humans special, and what is it that we're doing so differently from every other algorithm, and animal, and system. And I think that's mostly, yeah, for me, a theoretical question that only rarely involves me running computational experiments. So I often think of existing machine learning as being sort of like a bottom-up approach, things that work in practice for some things, and making them better and better and better. And I think of myself as more like a top-down, almost kind of approach of saying like, here's these performance, or here are these philosophical ideas at this higher level, and trying to make them more computational. So I almost think of myself as working on, if Popper worked on the logic of scientific discovery, I think of myself as working on the practice or the computation of scientific discovery. So that also means that nothing I do ever works in the traditional sense of people running actual code today. So it's pretty unsatisfying for them, I think, probably, to hear about theoretical ideas, because they seem so untested. But on the other hand, I'm often addressing problems that they're not considering at all. So I think that there's a place for the high-level research. </p><p><strong>Theo:</strong> So how likely do you think it is that you or someone like you will get to AGI before someone like OpenAI? Consider AGI, for the purposes of this, like something that can do, let's say, most of what an economically productive college graduate level human can do. </p><p><strong>Carlos:</strong> I don't think that's what AGI is, necessarily. So that's probably where we differ. So in the sense of like, how soon will it be before I create a useful system? Maybe never. It's not really what I'm trying to do. I think there's a fundamental difference between people and tools. And so I'm not working on tools at all, really. I'm curious about what it is that makes humans fundamentally different from everything else. So I think that my objectives are quite different. And so the book I'm working on tries to drill down as what those differences are. I think of it more like the human capabilities. It's sort of wrong to think about our minds as things that can happen to do lots of specific things. What defines us isn't the many specific things we can do, but the fact that we can think of anything. So if the thoughts that we can hold are unlimited, as I think that they are, that's what defines us. If you give me any system, however many finite things it can do, I will think that it is fundamentally different from an infinite system like ours. So I think that I often like to, maybe this is too much of a tangent, but I often like to compare, when people talk about intelligence, I like to break that up into two separate things. One of which is knowledge, and the other which is the ability to create knowledge. Because I think that if you compare a system like a data center and a baby, a data center has a huge amount of inbuilt knowledge. Millions of lines of code, running many independent complex processes. And it has almost no ability to create new knowledge. If something new happens that wasn't programmed into it, it won't be able to respond. Whereas the baby has almost no knowledge. It's terrible at running a data center. And yet it has infinite ability to learn new things. And so if you looked at them and just asked, what is the intelligence of these things? Well, it's the wrong question. You'd say, which one of these has more knowledge? Well, at the beginning it would be the data center. But which one of these has more knowledge-creating ability? The baby. And that ends up being decisive. </p><p><strong>Theo: </strong>I see. So have you found GPT-4 or other current AI models at all useful for your AGI research? Last week, I interviewed Greg Fodor. And he talked about how he believes that in order for AGI research to progress, AI researchers need to have access to foundation models. Because that is like the single most helpful tool for AI research to progress. So do you have a similar or different idea on that? </p><p><strong>Carlos: </strong>Well, in line with my previous kind of answer, I don't really do anything computational. So to me, it would only be helpful in the usual ways it would be helpful to any researcher. In that case, it could be useful. I haven't actually been using it. Maybe I should be. But I'd be using it purely as an ordinary researcher, not as an AGI researcher in particular. </p><p><strong>Theo: </strong>Do you have a ChatGPT Plus subscription? </p><p><strong>Carlos: </strong>No. I just have some basic one or whatever. I don't use it often. I've mostly used it so far for wordplay. I'll ask it like, can you give me a word that starts with P that means something like rain? Or whatever it is. No. It's good at that kind of stuff. It's good at many things. And I'm probably the least expert in them. I have a friend who is trying to get into programming. And so he's using it much more intensely than I am. </p><p><strong>Theo: </strong>Yeah, I studied computer science in school. So I find it immensely helpful for learning. Though on the topic of conjecture, when I'm working on actual projects, something that hasn't been done before exactly like what I'm trying to do, it does fail in surprising ways sometimes. Even a system as advanced as GPT-4. </p><p><strong>Carlos: </strong>Yeah, I think one thing that's interesting to note is that if you're judging GPT or some system like that, maybe there are two broad ways to judge it. One of them is, how well does it know what is known? And then the second thing might be to say, how well can it solve unsolved problems? And I think we sort of underappreciate how often we solve unsolved problems in the course of everyday life. Just in a way that is almost second nature to us. You know, we'll be asking, where are the keys? We'll find the keys and we'll carry on with our day, not realizing that we've just conducted a quite sophisticated search program in the midst of our ordinary moving around. </p><p>So I think when you look at the space of possible solutions to problems that we confront, it's huge. And I think this is where maybe GPT shines less because it's one thing to know what is known or to have text about what is known. It's quite a different thing to be able to use that to solve new problems. And, however many trillions of things have been discovered, that's really an infinitesimal part of what we can discover and of the search space that we're involved in every time we try to solve a hard problem. </p><p>So when you ask, can GPT do something plausible in some case, the answer is probably going to be yes. But if you ask, can it find the actual solution to some problem in some vast search space that we don't know the answer yet? Well, that takes a special kind of program. And that's one half of my book. It's partly about universality, all the things that are possible for a system to do. And then secondly, if you have a search space, how efficiently can you search it? Because the search spaces that we're involved with in ordinary human problem solving are exponential. They're huge. So being able to actually navigate them usefully is an incredible challenge. I think that that's where you see a real difference between different kinds of search processes, GPT being much inferior to the human one.</p><p><strong>Theo: </strong>So if you think GPT is necessarily limited compared to humans, how far do you think the current paradigm can go? What do you think will be the capabilities of GPT-5, GPT-6, what will it be able to do relative to humans?</p><p><strong>Carlos: </strong>I don't know. That's where like, that's an issue about machine learning that I think if you ask me, what could humans do? Well, rather like, yeah, I don't think I have any particular insight into what any particular technology can do. All I can say is that I'm interested in the things that distinguish different systems. So I'm interested just in the questions of what is the total range a system can explore? What is the search space of things that it can explore? And then secondly, how does it navigate that space? And so if you saw incredible leaps in the size of the search space that different models could explore, that'd be great. Although my fundamental question would be, is it unlimited or not? That's my fundamental question. And, you know, as to what designers come up with in the coming years, they'll, they may surprise me in their answers, but that's the question I'll be asking. </p><p>And that's like, secondly, it'll be how efficiently does the system navigate the search space? In particular, like what fraction of new ideas that comes up with are actually improvements over prior ideas? Because natural, you know, like biological evolution is very bad. A billion letters in the genome, it sort of starts switching a few of them at random, you know, so one in a billion chance of being an improvement, let's say. So it's very bad at coming up with new things that are better. It's easy to come up with new things, but new things that are better, that's the hard part. And so those are the kinds of questions I'll be asking.</p><p><strong>Theo: </strong>So what kinds of capabilities do you think you'd have to see in a system for you to say, yeah this is like a human, this is universal, it can navigate any part of the search space. I don't know if you can test that behaviorally. It's sort of like, if you're asking, can I do this infinite set of things, you can't actually check the infinite set of things. So you have to look more in terms of how it's built. My goal with my book is to try to see what fundamentally matters and then how those things were made achievable in the human case. I think that'll give you a clearer picture as to if you were to open up the internals of some system, you could tell what it could do and what it couldn't do. If you knew that Turing completeness was an important feature of the system and then you could open up a CPU to see if it could achieve that result, then you might be able to tell, okay, this system is Turing complete. You asked me a very specific question. You weren't necessarily testing its outputs for different things. So I think it'll be more about how is this system built rather than what is its behavior.</p><p><strong>Theo: </strong>So kind of like, if you have a computer made out of crabs? Have you heard about the crab computer? It was a real experiment done by a team of scientists where they figured out how to move crabs around in such a way that they would actually represent a Turing machine and they could get it to represent basic programs. The crab computer, if you have enough crabs, would be universal. But a TV playing the same loop of content over and over again with no option to change its output would not, even though the TV could demonstrate something a lot more complex than the crabs in practice. Is that kind of what you&#8217;re getting at?</p><p><strong>Carlos: </strong>I was just thinking to myself, the crab computer wouldn't be universal only because it doesn't have enough crabs.</p><p><strong>Theo: </strong>With an infinite number of crabs, then it should be good. But if you only have a finite number of crabs, say a million crabs, then you would probably have a hard time running Google Chrome, especially at acceptable speeds. But if you have a monitor with a static image of Google Chrome on it, it would demonstrate more outwardly complex behavior than the crabs, but it would not be universal in the same way that the crabs are.</p><p><strong>Carlos: </strong>There's no such thing as a universal Turing machine in physical reality. Every machine is going to be finite at any given time. But the question is, can you extend it? If there are humans around or something, or in the case of humans, if there is a system that itself has the knowledge to acquire more resources and turn them into more memory and more computational power, then that system itself will be able to extend itself. And at that point, it'll still be finite in its computational capacity at any given moment, but into the future, and there'll be no limits on the size of its capacity.</p><p>So if the crab computer requires an external person to add more crabs, that it's not as impressive. But if it itself has the knowledge, as I think the human civilization does, to go out and acquire more resources, well, then if we consider ourselves a big crab computer, the big human computer, we're able to get more non-crab materials from the world and turn that into what we want. That actually is a fundamental thing because there are no infinite computers in reality. There are only finite ones, but some of them know how to make themselves bigger. And it makes all the difference.</p><p><strong>Theo: </strong>So there's no infinity, only a beginning of infinity.</p><p><strong>Carlos: </strong>There you go.</p><p><strong>Theo: </strong>On a slight tangent, on the header of your Twitter profile, you have five books: <em>The Logic of Scientific Discovery</em> by Karl Popper, <em>The Blind Watchmaker</em> by Richard Dawkins, <em>The Beginning of Infinity</em> by David Deutsch, <em>G&#246;del, Escher, Bach</em> by Douglas Hofstadter, and <em>Knowledge and Decisions</em> by Thomas Sowell. Personally, I've only read <em>Beginning of Infinity</em>, but the other four are certainly on my list to read in the near future. So how did you find these books? And how did they influence you?</p><p><strong>Carlos: </strong>I read <em>The Beginning of Infinity</em> first. I found that through Sam Harris. He had posted on his website in 2012 that he was currently reading it. I read it, and then two years later, he had David Deutsch on his podcast. In his preface, he said, by the way, I apologize, I haven't read your book yet. So I had gotten the book from him, then read it, and then finished it before him, I guess. But that introduced me to Popper. I had known about Dawkins beforehand, but only started reading his stuff more recently. <em>G&#246;del, Escher, Bach</em>, I've heard about and I still haven't read it entirely. I read like half of it. I find that a lot of the things in there are the right kinds of questions to ask, although I sometimes disagree about the answers, like in light of Popper and Deutsch. But they're always interesting. And then Thomas Sowell, I was always binge-watching Milton Friedman in college, all his videos, and so naturally came across Thomas Sowell and liked his stuff. I especially like this essay from Friedrich Hayek that discusses the use of knowledge in society. The argument is that knowledge is distributed across society and capitalist systems make better use of it, free market systems make better use of it. Thomas Sowell also took this knowledge based view of different economic systems. I haven't read the entire book, I just really loved the intro to the book more than anything. The idea of saying, everyone looks at the question of communism versus socialism versus free markets through a very ideological lens, or even an economic lens, but you could take a purely knowledge based view. I thought that's very interesting. It's a different level of abstraction.</p><p>I like to think of replacing the word of epistemology, which is a mouthful, with this more pleasant to the ear, the knowledge based view. I also think of it like goggles you can put on and suddenly you're looking at your iPhone or your computer, or other people around you or civilization. Instead of seeing the atoms, you're seeing the knowledge. You ask questions like, how did it get there? What is its capability? What kinds of things does it affect, etc. I find that quite nice. I found it practical when I was talking to people who weren't philosophy nerds. I thought, do I really want to say the word epistemology right now, at this pool party, whatever it is. And I thought I'll use the word &#8220;knowledge-based view&#8221;. And that felt better.</p><h3>Economics and Computation (19:46)</h3><p><strong>Theo: </strong>So on the topic of Sowell and Hayek, who I think are both fantastic, love both of them. A lot of people seem to assume implicitly that once we get AGI, or ASI, whatever that means, then humanity will be in a utopian communist paradise for all eternity. The AGI in its infinite wisdom and goodness will simply decide what resources to allocate to whom. I'm somewhat skeptical, how about you? What do you think about it?</p><p><strong>Carlos: </strong>I think one question to ask is, what fundamentally changes in a system like that? If you choose any kind of new thing, does it change the fundamental problem? The fundamental problem in economics in that case is if knowledge is distributed, and people are creating new knowledge everywhere, then are you able to predict that and then make decisions or not? If you have more powerful computers, then it wouldn't be able to be doing more powerful things, less predictable things all throughout the economy. So if you were trying to ask how predictable is the economy? If it only consisted of a very simple system, that maybe you could conceivably model the entire thing, in terms of its atoms and everything else, and figure out what it would do. But if the entire rest of the economy is many times more complex than your ability to simulate it, then it seems like that fact hasn't changed.</p><p><strong>Theo: </strong>That kind of reminds me of an idea from Stephen Wolfram, where he talks about computational irreducibility. Wolfram is kind of like Deutsch in that he's very into computing, computers, and taking a computational view of everything. Unlike Deutsch, he emphasizes something slightly different. So whereas David Deutsch talks about humans are universal, right, we can do anything in theory. Wolfram talks more about how humans are computationally bounded observers, and how the world as a whole is computationally irreducible. The only way that we can understand complex systems is by observing them. They're very hard to compute.</p><p>So I wonder if the economy would be totally computationally irreducible, like the only way to allocate resources efficiently would be to let capitalism just do its thing. Is it even possible in theory to have an AI advanced enough, a computer powerful enough to model the whole thing?</p><p><strong>Carlos: </strong>Yeah, isn't this like a Newcomb's paradox, or something like that, or like Newcomb's problem? I think, like, can you predict what a human will do? And I guess the only way to do it was actually simulate them. </p><p>But anyway, I think that there's interesting things to say about predictability, because it might be that if you were to say that complex systems are unpredictable, you can only run them to figure out what they're going to do. You could say, well, isn't the universe such a system? But you'd be wrong in that case, because there are all sorts of regularities in nature that can be exploited, and discovered and exploited. So if you were to say, let's just, all we have to do, our only recourse if you want to understand the system is simulate all its atoms, and then run it. Actually, you'd be missing out on the fact that, let's say, they're all going to obey conservation of momentum and other things like that. And so, without running anything, I would tell you, the total amount of energy or momentum in the system will be the same into the indefinite future. I haven't done any computations, just because I know this deeper principle. So I would be, in other words, predicting something important, even if I wasn't predicting the details.  So I think that there are many patterns of reality to be discovered like that. So it's not purely just run it. And we can't say anything else about it.</p><p><strong>Theo: </strong>He does talk about pockets, reducible pockets in the computationally irreducible universe. So yeah, conservation of momentum and laws of physics is an irreducible pocket. But something like trying to describe, is there a law that you can come up with to describe the 50th step in an arbitrary computational automaton? And Wolfram would say, no, the only way to figure it out would be by running it. So could there be a law that allows us to predict the behavior of systems like that? It's possible. I wonder what it would take to discover.</p><p><strong>Carlos: </strong>I think on some level, it's important to recognize. Different levels of abstraction exist, and in some cases, you may not care about low-level details. For example, it's impossible to predict the location of a particular atom in the indefinite future. Yet, that might be the most boring thing to ask. At a higher level, things might be quite predictable. If I said something simple, like writing a loop in Python that just adds two plus two indefinitely, that would be perfectly predictable. But if you asked me what the electrons are doing in the CPU, that might always look different. The memory locations being used might always be different. There might be all kinds of incidental complexity that you just wouldn't care about.</p><p>I'm not sure what we should take away from the idea of computational irreducibility. I'm more a fan of Deutsch's point about the unpredictability of future knowledge. That highlights the difference between some systems which are very predictable, like this two plus two equals four, and people and what they're doing. There are other systems that would be computationally irreducible, perhaps, but also very boring, like just some kind of noise production system.</p><p><strong>Theo: </strong>Aside from these five books, Popper, Dawkins, Deutsch, Hofstadter, and Sowell, what other books do you think had a large influence on you?</p><p><strong>Carlos: </strong>Let me look around the room here.</p><p><strong>Theo: </strong>By the way, you should really post a list of these books on Susbtack or Twitter, if you haven&#8217;t already.</p><p><strong>Carlos: </strong>I guess the best I've done so far is I just put that thing at the top of my profile. The biggest books, really, like the 90% of the way I think, is <em>The Beginning of Infinity</em>. That introduced me to Popper and other stuff, which is fairly interesting. Everything else is helpful to some extent, and gives some ideas. Lots of things are interesting. The reason I'm doing all the research I'm doing right now was because of <em>The Beginning of Infinity</em> very directly. Deutsch&#8217;s essay on AGI is an excellent one in Aeon magazine, about why Popper is relevant to AGI. That kind of stuff is what really got the ball rolling for me and introduced me to this whole area. I never would have gotten interested in Popper otherwise. </p><p>Then there's just all sorts of books that I forget the names of at the C level, which are interesting in various ways for one or two ideas. One of them, for instance, was like, book from Minsky called <em>Finite and Infinite Machines</em>. That was kind of interesting one, it's probably outdated at this point, in some ways, people probably invented better ways to explain the fundamentals of computability and so on. But that was so interesting to kind of grapple with low-level details of when people were initially figuring out what can computers do? How should we think about them?</p><h3>Bayesianism (27:54)</h3><p><strong>Theo: </strong>Deutsch and Popper have one epistemology and a rival epistemology that is favored by a lot of people on our part of Twitter, and on the internet at large, and in the AI community is the rationalism developed by Eliezer Yudkowsky. Rationalism, which was developed by Yudkowsky and explained on his website LessWrong, heavily emphasizes Bayes theorem as essentially the central mechanism of knowledge. Basically, in rationalism, what we do is you have evidence from the outside world that will cause you to update your prior probabilities in one direction, or the other, everything is very mathy. That seems like it's in opposition to a lot of Deutsch and Popper's ideas. Deutsch has written about this. He wrote an article called &#8220;The simple refutation of the Bayesian philosophy of science&#8221; where he kind of destroys the whole idea, I think. So do you think that there are any idea or any areas where Bayes&#8217; theorem is applicable?</p><p><strong>Carlos: </strong>I'm not really an expert on it. So maybe there is value there that I'm missing, both in like the formal ideas and in the ideas of the broader community and so on. But for me, I'm mostly just interested in how the mind works. I think the fundamental questions there are like, Popper's questions, like, how do you come up with new ideas? And then how do you select among them? </p><p>I feel like whenever I tell people, or talk about fundamental ideas of variation selection, there's never any pushback. I think that there's just simply logically obvious. If you're interested in knowledge creation, it's like, first of all, you have some things that exist right now. And then if you need to have a new idea, it's going to be produced by existing ones in some form, non-existent things can't just come out of nowhere. So you have what you do have now, you reassemble it somehow to come up with a new thing. And you can do that in better and worse ways. So that's just one thing that's like a logical necessity. If you want something better, you have to come up with something new. And you have to have some process that can come up with new things. And then secondly, if you have a bunch of new things you're coming up with all the time, eventually, you run out of resources to explore them unless you start removing the ones that aren't good. And so that you can focus on improving the ones that are. Prior improvements are necessary if you want to have cumulative progress. You need these variation and selection elements. Then it's just a matter of saying there are better and worse ways of doing these things. That's where the discussion then goes to. In the Bayesian case, you might ask, do we have a better way of creating new ideas or a better way of selecting among ideas? Those would be my main questions. </p><p><strong>Theo: </strong>Do you think that there's any room to say, "I find that this theory or this trait has a 70% chance of being correct, and this one has a 30% chance of being correct. So until we get more evidence, we should favor theory A." Is there any room for numbers and probabilities? Or is the process of selecting better theories purely ordinal, and not cardinal?</p><p><strong>Carlos: </strong>What's the difference there?</p><p><strong>Theo: </strong>Ordinal meaning you can only compare which one's better and which one's worse. Cardinal meaning you can assign specific numbers, values. So you can say, this theory is 90% correct, or 90% likely to be correct based on my priors. And this theory is 70% likely to be correct. </p><p><strong>Carlos: </strong>I guess the short answer is if it helps you out, use it. But that's a practical thing, rather than saying there's some fundamental importance to their probabilities. This is not an answer to your question, but it just reminds me of the price system. There's a value in the fact that there's all kinds of knowledge that is relevant to a given product, but not expressed in the price. The price is a simple number. And thank goodness for that, because it makes it easy to communicate what to do with that object, whatever the product is, in a way that is sensitive to lots of other knowledge, but you don't have to have all that knowledge explicitly represented to make use of it. </p><p>So to the extent that you can do useful things like that, then fine. I don't know if there's any fundamental importance to probabilities and so on for epistemology, though. Because again, to me, I guess, maybe throw the question back to you and see if you have any thoughts on it, do they help either in the creation of new theories or in the selecting between theories? I suppose you could say, if you prefer one theory, you could give it a higher number. But to me, I suspect that there are independent reasons that cause those numbers to be what they are. So like, I will first say to you, this theory has these problems, and seems to be consistent with these things. So that seems to be against that theory. I'd say this theory doesn't have those problems, maybe has different problems. And because of that discussion, I might then say, if I had to put it to a vote now, I prefer this one. But you know, they both have their problems. So because I'm going to prefer this one, I'll give that 60%, I'll give this other one 40% or whatever. But you see the order in which those things were arrived at. First, the ideas, then the problems with those theories, then I assign some number for a practical purpose. That's how I'd be thinking about that. That's not using the actual probability calculus or anything.</p><p><strong>Theo: </strong>I think that a lot of the problems in Bayesian rationalism are just taking their probabilities too seriously because really what they are are wild guesses based on some amount of knowledge and prior probabilities. For example, the whole LK-99 superconductor saga, they had their prediction markets and people were really, really trusting these prediction markets as almost like arbiters of truth. You know, they would decide whether or not the superconductor was real based on prediction markets that would swing wildly. If you remember one day they were at like 20% and then the next day they were up past 50%, 60%. People were saying, oh, my prior or my probability that the superconductor is real is 99% and now they're back down to 5, 10%. So yeah, I think that the usefulness of Bayesianism is limited along with the information that you have.</p><h3>AI Doom and Optimism (34:21)</h3><p><strong>Theo: </strong>On the topic of AGI, you seem to be an optimist when it comes to the conventional question of like, will AI kill us all? So can you explain why you believe that it won't?</p><p><strong>Carlos: </strong>I like one of Deutsch's points on this, that the battle between good and bad ideas will rage on, no matter the hardware it's running on. The real thing is about the quality of our ideas. And I think we should have reason to hope that the culture we've already developed over several hundred years through the Enlightenment has been largely about how to coexist peacefully.</p><p>When you ask the question, what does it take to coexist safely with other created beings? That's what we've been working on with our whole project of democracy and so on. In a way, it's sort of naive to imagine that we will create beings as good as us, with the same capabilities as us or greater ones in terms of their hardware, and then we will somehow deal with that in some kind of technological way, rather than in a cultural way. So I think that our greatest resource in terms of safety, dealing with other people, is our current institutions and culture, which make me not want to murder you, for instance. It&#8217;s not a great way to solve my problems.</p><p>That's a different question from algorithms run amok that aren't universal like us. Their whole problem is that they're too stupid in the relevant ways to have the right moral knowledge and so on. But if it's universal, then it should have the same capability that we do to not only learn technological things, but moral things.</p><p><strong>Theo: </strong>So that kind of runs into an issue that's brought up by Eliezer Yudkowsky and originally developed by Nick Bostrom called the orthogonality thesis. For the audience who may not know, the thesis basically states that the intelligence of a system and the morality of a system are totally unrelated, not just the morality, but the goals of a system. For example, it's possible to create an incredibly intelligent AI system, according to Yudkowsky and Bostrom, that wants nothing more than to make as many paper clips as possible. So, do you think that is likely or not? And why?</p><p><strong>Carlos: </strong>One question you can ask in such a case is this: What role does the idea of maximizing paper clips play in this larger AGI system? We often imagine that it forms the core, this immutable core, which drives everything else. Everything else is subsidiary to that one idea. If it is nice to you one day, that is because in its calculations, that will be better for achieving this ultimate goal, which never changes. </p><p>That's one picture of the role an idea can play in my mind, that it's immutable, central, everything else is subservient to that. A different picture is more of an ecosystem view where, if you drew a circle and your mind was the circle, lots of ideas are vying for power within that system, but there's nothing immutable there. If one day the idea arises in your head that you should create paper clips, that will never be instantiated in your brain so that everything else is subservient to that. Or if it does, maybe that would be what Deutsch calls an anti-rational meme, where it's somehow evolved to have that property. But throughout history, it's quite hard to produce an idea that sticks in people's brains and makes everything else subservient to that. </p><p>These are two fundamentally different pictures of what minds are. In one case, you almost have this image of traditional neural network and machine learning algorithms where it's like an optimization loop, which is fixed on the outside. It says, here's the thing we're trying to do. This is our ultimate goal. Everything else is going to be in service of that. There's nothing in the system that can change that external goal, except an external program. There's nothing inside the system that can do it. Whereas in human minds, it's like this ecosystem where many ideas are vying for impact and power, and none of them has a monopoly on it. There is no outer loop. And in a system that's more of this ecosystem type of view, where everything is equal and bouncing around and trying to affect everything else, there is no outer loop. You have escaped that infinite regress.</p><p><strong>Theo: </strong>Another thesis brought up by Yudkowsky and Bostrom is the idea of instrumental convergence. They believe that no matter what final goal you give an AI, like if you tell it to cure cancer, or if you tell it to make paperclips, it will converge on the same instrumental strategies, like preserving itself, trying to acquire resources, trying to enhance its intelligence, and trying to defend against people shutting it off or changing its goals. With AI systems that are stronger than humans, supposedly that can be very, very dangerous. So what do you think about the instrumental convergence idea?</p><p><strong>Carlos: </strong>The idea is that it converges on certain things, but it diverges on others, right?</p><p><strong>Theo: </strong>Basically, no matter what final goal you give an AI, it will converge on instrumental goals that involve preserving itself and preserving its final goal at all costs.</p><p><strong>Carlos: </strong>I mean, you can't do anything if you can't survive to do it long enough to do it. There are certain requirements that physical reality imposes on you, like if you want to do anything big, you need a certain amount of energy and so on. If the system doesn't realize that fact, then so much the worse for that, and it won't achieve those goals. </p><p>There are certain things, I guess you would say in that case, that are predictable prerequisites for achieving many things. So I imagine if it didn't have universal computers, you'd say, well, probably if it's going to be successful, at least we'll have to invent universal computers, because we know how important those are. That's less a statement about any kind of AGIs or anything in particular, just more about the causal relationships between different kinds of technologies and so on, which anything would have to adhere to.</p><p><strong>Theo: </strong>Another crux of the AI doom argument is that AI systems will become vastly more powerful than humans, or even slightly more powerful than humans, and they'll somehow be able to exploit that asymmetry in order to end up killing all of us. So you've talked about humans being universal, and we can't be replaced by AI systems. Gwern Branwen, an internet writer, wrote a long essay called &#8220;Complexity No Bar to AI&#8221;, where he argues that even if humans and AIs are both universal in theory, the AIs will run on such better hardware or be able to implement such better algorithms that they would be able to become inconceivably powerful anyway, even if we're both theoretically in the same computability class. So what do you think about that? </p><p><strong>Carlos: </strong>I think there are different kinds of universality. One is computational universality, but I think there are a few others which I have in another video. We don't literally have complexity classes in terms of what our thinking is about, because we don't have simple algorithms which we're executing on different kinds of inputs. But the core concern of complexity theory about the resources used for computations, that is absolutely essential in our case. It just isn't literally about complexity classes. </p><p>So, I think the answer to those cases is how efficiently do you navigate the space of ideas? And that consists primarily of two questions. How efficiently do you come up with new ideas that are actually good? And then how efficiently can you filter the good ideas from the bad ones? I think there are forms of universality associated with each of those. </p><p>So, if I said to you, I have a method for generating new ideas. I have some ideas here. I have some ways in which I can combine them. For instance, a genome consists of letters, and you can flip the letters. And I told you, all you can do to create a new mutation is to flip one letter. That's a certain way of navigating the space of genomes. But if that was the only way you could ever use, then it would be the equivalent of only being able to navigate the planet by taking one step at a time on your feet. No planes, no boats, no parasailing or anything like that, just your feet. And that would be quite a slow way to navigate the space of ideas. </p><p>Whereas we can continually invent new ideas, new ways of combining them that are the equivalent, if you're thinking of navigating the globe, of taking a plane someplace. My favorite example of this is a video called Pakistan Goes Metal, which in a sense combines only two things, traditional Pakistani music and metal music. This is not a low-level combination. This is a combination of two very high-level concepts that makes it seem very easy. And yet when you combine these two things, there are many lower-level details below them. But at this level of abstraction, the combination is very simple. </p><p>There was a time before Pakistan existed, the time before metal music existed, and those concepts had to be invented. And now that we have them, they allow us to form very simple but very powerful combinations. And I think of that as saying, basically, we have invented new ways of combining things. If you thought only in terms of notes, it would take thousands of things to express the difference between this Pakistan Goes Metal thing. Or rather, to express this idea of Pakistan plus metal purely in terms of notes would be very complicated. If you have a look at the MIDI file, it would be hundreds of different things that you have to tweak. </p><p>So in other words, there's a simple way of combining new things and that is essential for us to actually be able to efficiently create new ideas that are better. And there's analogous things for our ability to select among options and tell which ones are good. But yeah, the bottom line of all that is if you can't invent new ways like this Pakistan Goes Metal, new concepts, new ways of generating new ideas, then you're going to be hopelessly inefficient. You're going to be like the person trying to navigate the planet on their feet rather than with planes.</p><h3>Will More Compute Produce AGI? (46:04)</h3><p><strong>Theo: </strong>So back to the current paradigm of AI, where we have neural networks with tremendous amounts of data and tremendous amounts of compute. Do you think that there's a possibility that simply adding more will lead to conjecture and universality in the same way that evolution, which is kind of a dumb process, you know, there is nothing really intelligent guiding in other than navigating the search space. Evolution led to human universality. So do you think that there's some point at which we add just enough compute so that AIs will become universal? </p><p><strong>Carlos: </strong>I guess there's like two questions. I guess this isn't quite your question, but if you add more computational power to a system which is like less efficient, like exponentially less efficient, then it doesn't really matter how much more power you add to a system. So that's more this computational class type idea where it's like if you have a really bad algorithm for doing something, it doesn't really matter what constant you add in front. But that's, I think, not what you're asking. </p><p>I guess your question is more about almost like that instrumental convergence idea. It's like, well, if you have evolution doing its thing over here, you have machine learning doing its thing over here, and one of them ended up discovering a path to us, to this universal kind of algorithm, would the other do the same as well? Is this somewhere along the path of doing really anything impressive? </p><p>I don't know. Maybe that's more a question for people designing objective functions and so on. I suspect that the set of possible algorithms is so large that actually converging into things that we do is not so easy. But like you said, evolution did it, so maybe there's a way of getting machine learning to do it, too. Although we might not know why it's doing it at the time.</p><p><strong>Theo: </strong>That's true, yeah. And clearly, AIs are able to do some things that seem impressive that we're not yet able to explain, like writing code and writing poetry and writing math. And of course, nothing that it's done yet is close to the level of the best humans. But it's been able to do some pretty impressive things just from having the amount of data and compute that it does. </p><p><strong>Carlos: </strong>Yeah. So one thing that gets asked on Twitter a lot is, can machine learning algorithms do X, or can they explain things, or whatever? I think that's, again, if I think about my two fundamental questions about universality and efficiency, what can the system do in principle, given infinite resources, and then how efficiently can it do things? Those are the two questions that I think about. It may then be the case that machine learning algorithm, maybe it could do anything, given the right amount of time. If it's exploring the space of programs, and it doesn&#8217;t have any obvious limit to which programs it can come up with, then it can theoretically generate any kind of program. The challenge then becomes, can it come up with the right ones efficiently? For instance, what would it take for a machine learning system to come up with general relativity? This is a very specific computation, a particular idea. In the space of all ideas, it's a needle in a haystack. So how do you find that needle?</p><p>A machine learning system could do it if it can find any program. A random program generator can generate any program, so it could generate relativity. However, finding that needle in the haystack by accident is unlikely. So then the question is, how long would it take for the machine learning system to do that? And would it even recognize that it had found the right answer if it stumbled across it?</p><p><strong>Theo: </strong>So I guess the difference is, certain people, like the people at OpenAI, Sam Altman for example, once famously said &#8220;gradient descent can do it&#8221;. He believes that simply adding more and more computing power and giving the model more and more knowledge will eventually cause it to either awaken or simply to know so much that it approximates a universal being like a human. While you think that, no, it doesn't matter. The search space is just too big. You can't put all of it into a model. It needs to be able to explore it by itself. Like the baby vs. the data center.</p><p><strong>Carlos: </strong>Gradients are an attempt to solve the same efficiency problem. When you're navigating a huge space, you could try to train a modern neural network via evolutionary methods. You could say, here's all the weights. Let's try a small permutation of the weights, see which ones are better, choose those, move on to the next. But the problem is that's vastly more computationally intense than using gradients, which tell you exactly where to go to get the next improvement. So the efficiency is still the relevant thing. But the limiting factor is like, can you get a gradient for your situation? In all these cases, you're trying to assemble a mathematical system, which is similar enough to the actual thing you care about, but also has the property that you have the full mathematics of how it works and can therefore navigate it. But that's not a given.</p><p><strong>Theo: </strong>Naval Ravikant agreed with this a couple of years ago. He wrote an article called &#8220;More Compute Power Doesn't Produce AGI&#8221;. And someone responded to him recently saying, wow, this Naval guy really missed the mark on AI. And he said, Naval, &#8220;We don't have AGI yet, but GPT-4 has definitely caused me to update my priors. So consider that piece obsolete while we all learn more&#8221;. So, have your ideas on this changed at all after GPT-4 or GPT-3.5, Chat GPT even? Or are you still of the opinion that compute can't do it without algorithms?</p><p><strong>Carlos: </strong>I think this is back to the same point I said before about this is where the analogy with complexity theory makes more sense, where adding a constant factor doesn't change the scaling factor. Given the size of the search space we're dealing with, the space of all possible ideas, it's like an infinite space. So navigating that efficiently is hard. If you have a machine that can do things a million times faster, that's almost no help if it doesn't have a fundamentally good way of navigating that space. It's like saying, I have a system which can search every grain of sand individually to find something buried someplace. Whereas actually you would want a high-level theory that could tell you, it couldn't possibly be here, it couldn't possibly be there. So with the right kinds of ideas, you can eliminate infinite swaths of the search space without ever checking them individually. So there are far better or worse ways to navigate the space of ideas. And so it's really important that you have that.</p><p>But more broadly, I'm curious not so much about particular systems, I'm curious about the full spectrum of knowledge-creating systems. I like to think of it as comparative epistemology. So if I'm asking these questions about universality and efficiency, about variation and selection, these are the universal questions which apply both to genetic evolution, to any given algorithm, to human minds, to animal minds, to everything. So any given point in that spectrum, I'm interested in to shed light on the rest of the spectrum. Because there are things with GPT, which I would say like, it's not that I think it's AGI, but I do think it shows you another interesting point on the spectrum of knowledge-creating systems that didn't exist before. So that's what makes me interested in it, as opposed to saying this will be, this either is or will be an early successor or an ancestor of an AGI system.</p><h3>AI Alignment and Interpretability (54:11)</h3><p><strong>Theo: </strong>Speaking of GPT-4, you talk about looking into different knowledge-creating systems and how that's very interesting. So one such thing where people are looking into knowledge-creating systems is mechanistic interpretability, where AI researchers are looking into the weights and biases of neural networks like GPT-4 and seeing if they can figure out what internal algorithms, internal circuitry it uses to do stuff like adding numbers or deciding what words to use for poetry or whatever it does in there. So do you think that mechanistic interpretability is interesting and/or useful?</p><p><strong>Carlos: </strong>I find mechanistic interpretability intriguing. I haven't delved too deep into it, but I like the idea that something like GPT-4 is this large computational system and that the way it evolves is such that certain kinds of computations can arise within it, such as general algorithms for adding or doing different things like that. I like the idea that it shows that, for potentially a variety of tasks, there are certain computations that can be identified and understood. Arbitrary programs can arise in the system, and it's hard to predict which kinds of programs will arise. The question becomes, what kinds of things can arise within the system? What process can give rise to that and how efficiently? This comes back again to the idea that maybe it can discover relativity, but what would it take to do that? How would it distinguish that theory from all the alternative things that might have come across along the way? Why would it then decide on relativity? I would be surprised to find general relativity within that system. I wouldn't say that it's impossible. It would be interesting to look, and my question would be about what it is about the human way of thinking that allows us to converge on something like general relativity? It's probably very unlikely for something like GPT. It's not that it's impossible exactly. It's just that I don't think it would be able to distinguish between all the possibilities just on that one. Because, E = mc^2 is a lot similar to E = MC^2.001.</p><p><strong>Theo: </strong>But one is right and the other is wrong.</p><p><strong>Carlos: </strong>I don't know a huge amount of physics to go into the derivations of these things, but I would assume it's a pretty fundamental difference between those things. One is a very empirical thing. If you put enough zeros on it, you'd still be asking why is there a 0.01? Because if you think about it, it's a different situation where you have areas. I can understand very clearly why there would be a two there. The 2.001, it doesn't quite square with our understanding of length times another length gives you the answer.</p><p><strong>Theo: </strong>It is pretty cool how many easily human understandable mathematical constructs there are in reality. Like what you just mentioned, the area calculation or the area of a circle, pi is not really human understandable. It's an irrational number that goes on forever. But the squared part is, the R part is. </p><p>Anyway, you wrote a tweet where you said, "Don't treat digital or biological people like tools, that's slavery," something along those lines. So with the idea of AI alignment, that's becoming more and more popular, where depending on who you ask, it's either about making AI systems do what we want or making AI systems do things that are safe. What do you think about the field of alignment?</p><p><strong>Carlos: </strong>I go with David Deutsch on the idea that there's a fundamental difference between AIs on the one hand and people on the other. If the thing is a person, then it's a person. The hardware doesn't matter. All the same rights and privileges apply that you and I have. If it's not a person, it's a tool, then there's almost no ethical concerns at all about its wellbeing or something like that. It's just a matter of, does it hurt other people? That's the only ethical matter. </p><p>So if you say like, I have created some kind of weapon system or something, then I'll be very curious to say like, it's not gonna kill me in it. That's what I care about. But if it's a human, then the more important concern is, are you treating it right? The idea of trying to control its mind in some way other than via persuasion and ordinary argument would be, that's where you enter the Orwellian territory. Orwell doesn't do a whole lot of actual neurosurgery, but you got the idea.</p><p><strong>Theo: </strong>Do you think that there are AGI risks in the future, risks to creating digital people that would be as capable or more capable than biological people?</p><p><strong>Carlos: </strong>If they are fundamentally the same as us in terms of the way in which they deal with ideas, it's just like the same program as you and me, just I guess basically saying, what if I took your brain and scanned it and then just ran it on fast hardware? What would happen then? Would we all be subject to your wishes because you had such an advantage over us? How much of an advantage is that, et cetera, et cetera. It's a bit analogous to the question of what happens if some country gets nukes or gains some kind of advantage over us? I'm not that well-practiced on geopolitical questions of that sort. So I'm not sure what the best strategy would be. </p><p><strong>Theo: </strong>You're saying kind of that the world is robust enough so that giving one person, one entity, a big advantage in one area wouldn't just brick the universe.</p><p><strong>Carlos: </strong>It's an open question. If we're supposing that we had a system that's basically you, but running on faster hardware, there's an open question. First of all, whether or not you would actually have faster hardware, like if we ran your brain on current CPUs or something, on the best current chips, would it actually be better? It's not obvious that it would be, but suppose it was, how much advantage would that be? And that's an open question. Maybe thirdly, what if there are other competitors to you that also have good hardware? And then again, in that case, we'd be running to this idea that David Deutsch said as well, the battle between good ideas will continue regardless of the hardware it's running on. So then it would be, we'd be asking more perhaps about what are the different ideas of all these different fast running AGIs? Because presumably they will all agree. So we might be asking a question like that. I suppose that's what some other countries like in World War II would be asking about the Western world, will we join the war? Will we not join the war? And their fate might be decided by our decisions, but I don't know. The hardware to understand what advantage it would be to take your brain right now and run it on today's or tomorrow's hardware and how well dispersed that would be, or whether or not we'd have the Neuralink sort of set up making us faster.</p><p><strong>Theo: </strong>So one of your influences is Douglas Hofstadter.</p><p><strong>Carlos: </strong>Sure. It's quite a lot less than David Deutsch, but I do find him interesting.</p><p><strong>Theo: </strong>Hofstadter recently had an interview, which you may have seen where, over the last few decades, he's been highly skeptical of AI capabilities. He was of the mind that the human mind is really complex and it'd be very hard to write a computer program that could replicate the human mind. But in this recent interview, he was really freaked out about the progress in AI. He talked about being terrified. He thinks about this practically all day, every day. He feels an inferiority complex to future AI. Like humanity will soon just be utterly surpassed and all of his beliefs prior were wrong. So clearly David Deutsch still has a cooler head about this. And I think you agree with Deutsch over Hofstadter. So why do you think that Hofstadter, who's been highly attuned to computing for decades, just suddenly switched?</p><p><strong>Carlos: </strong>I don't know. I haven't read his piece, so it'd be hard to say. But I think one thing that I just have in mind is that it's generally underappreciated how hard it is to make progress in reality. The search space of ideas is this infinite space and finding good solutions like general relativity within it are so stupendously rare. By the way, there aren't simple gradients you can follow to get to them. If you were to think about the topology of some search space like general relativity with this Eiffel Tower and everything around it would be just flat desert. Like a slight variant of that theory, it doesn't work at all. So to find these things is a stupendous achievement that our minds are capable of. And it seems like nothing else is so far. And so when I see that things are impressive in some ways, like maybe automating things that we've already done, maybe putting together ideas that we already have to some degree, those things can be impressive, but against the enormity of the actual search space of ideas, the actual search processes people routinely go through to define better neural networks and better engineering solutions. I just find that when you start to see how many choices are involved in actually doing a good design of something, how much knowledge is involved, how many options we face all along the way, a lot of these systems start to seem much weaker. So I think my starting picture is the space of possible ideas is vast and most systems aren't universal. They're finite and have no means of extending their capability. So there's an infinite difference between finite and infinite. So that's one thing. And then secondly, given the largest search space, you really have to have just incredible mechanisms for efficiently navigating it. And I think most things just don't have what it takes. And so unless somebody appreciates both of those facts, I just don't think that they're really hitting the important issues. And by the way, if something does do those things, that's not any reason to be worried about one's own consciousness. I mean, if something kills you, that's not good, but if something were to have those same properties that our minds already have, they would become equally as good as us in terms of their software. They may have better hardware, but then again, if there's better hardware for them, why not better hardware for you very soon? In which case you would then resume your status as equals.</p><h3>Mind Uploading (1:05:44)</h3><p><strong>Theo: </strong>So on the topic of better hardware for humans, is it possible that there's no way for human minds to run on a computational substrate without killing you in the upload process?</p><p><strong>Carlos: </strong>I think that's a question for optics engineers. I would not bet against somebody in the future discovering a way of scanning you without hurting you. But even if they did, if they could just scan all your atoms in one go, and then, let's say, it had very good technology for also replacing your atoms, then I could be scanning you and then replacing all your atoms a million times a second. I don't know if that's physically impossible. And maybe that we actually care about virtual reality more. It could be that's the reality that we actually want to live in, where everything is fully designed. In which case, I would say, yeah, I don't really care about these atoms. Destroy them, because I'm going to be living in virtual reality. By the way, I think the virtual reality people think about now is, of course, goggles and so on. But I think about virtual reality as being like everything you currently experience, but maybe 1,000 times better, a 10-dimensional space you can exist in. Whatever you like about the present reality, I don't think any of that is withheld from you by sufficiently good designers in the future.</p><p><strong>Theo: </strong>Yeah, I think of things like the Apple Vision Pro as kind of like v0.0.1 virtual reality. And we are so far from coming out with even a v0.1, a Neuralink-type thing that actually works for very basic tasks for humans. But we'll see. Never bet against progress. Never bet against the future. It comes faster than you think. But what I meant by my question with uploading minds is, let's say you want to upload your brain to run on a computational substrate rather than a biological one. </p><p><strong>Carlos: </strong>I should say, they're both computational substrates. One just happens to be built differently.</p><p><strong>Theo: </strong>Right, yeah, a silicon-based one. On substrate, for example, is it possible that somehow consciousness is inherent to your actual biological neurons and that in the process of moving your synaptic connections from one substrate to another, you would die, subjectively stop experiencing consciousness, and that a copy of you would be in the uploaded form?</p><p><strong>Carlos: </strong>I don't know that we have any good theories on terms of selfhood and these kinds of things. So, I don't think about consciousness at all. I think only about computation and capability. You have all this information in your brain, it has all this ability to cause other things, and then the new system would have all those same properties. That's the thing that leaps out at me as being the most important thing. As far as consciousness and selfhood, I guess I'll leave that to others. I sort of assume, like Deutsch does, that given how important it seems to be, that you have particular neurons doing particular things. In other words, if you look at the brain functionally, that seems to be the most important perspective on it. In terms of consciousness and other things, I don't know.</p><p><strong>Theo: </strong>A lot of people with this question in particular tend to just go on total gut intuition while we just have no explanation, no theories for any of it. People will say, when you go to sleep and you wake up, that's a discontinuity in your consciousness, but you don't die. But if you were to teleport your body, as in if you were to have your atoms disassembled, the information sent to a 3D printer and your body reprinted, that would kill you and create a copy. So say most people, some people. Or if you were to upload your mind, then that would kill you and create a copy. Or, there's the idea of the Moravec transfer, where instead of just destroying your brain and sending the data to a computer in one go, you basically have tiny nanobots going through your brain and one by one swapping out your biological neurons with silicon ones or whatever other substrate we find better.</p><p><strong>Carlos: </strong>It just doesn't seem like there's any really fundamental difference between saying, what if I zap all your molecules out into existence right now, and I rebuilt them exactly where they were, but one molecule to the left. Or, in the other case, I just swap out every molecule of your body for an identical, but different molecule. I just do them all in order. But also, I finish the whole thing in a nanosecond.</p><p><strong>Theo: </strong>So you're saying that consciousness is more about the pattern than it is about the actual specific atoms or molecules that make up your body?</p><p><strong>Carlos: </strong>Well, like I said, I don't know anything about consciousness, but if we are just talking about intuitions and so on, it might seem bad to be like, we'll just destroy your whole body and then reproduce another one, and now a second later. But then again, it doesn't seem like there's any fundamental difference between that and just doing them all one by one, like very quickly. And certainly both of these things would be functionally the same. I guess nobody really disputes that. They're saying, okay, well, yeah, if you put me in the computer, it'll say it's me, but it won't be me. That's the usual reply.</p><p><strong>Theo: </strong>I guess there's only one way to find out, unless we come up with a good enough theory of consciousness, but I wonder if it's even possible to come up with a good enough theory of consciousness before we have nanotech level technology that can upload our minds into computers.</p><p><strong>Carlos: </strong>Well, I think that probably it might not help at all with the question of consciousness. So it's like, we already exist here. If our actual existence right now isn't very helpful to this question, it's not clear that some other technology would be helpful either. But yeah, I think maybe it's the wrong kind of question. Like if you were to ask about different species, for instance, basically if we're talking about discrete differences versus continuous, then we'd be comparing, or rather, the question of whether this copy of you would be you may be the wrong sort of question in the same way that asking if two species that existed at different times on the tree of life are fundamentally different or something. They're linked by small, gradual changes throughout. So at no point was there some grand leap from this to that. And yet they're wildly different.</p><h3>Carlos&#8217; 6 Questions on AGI (1:12:47)</h3><p><strong>Theo: </strong>So, about three years ago, you wrote an article called "A Few Questions on AGI," where you talked about six questions that you had about AGI and related topics. And I'd love to ask you each of those questions now and revisit them and see what progress you've made intellectually three years later.</p><h3>1. What are the limits of biological evolution? (1:13:06)</h3><p><strong>Theo: </strong>So question number one is, what are the limits of biological evolution?</p><p><strong>Carlos: </strong>Yes, I guess I would just say that I've framed that more generally, perhaps, in terms of universality. So just asking more generally, given any given system, what can it do and what can it never do, despite any question of resources? And I think certain limitations with biological systems are to do with the fact that they have to obey certain kinds of constraints in every generation. So you have to be able to get yourself copied or reproduction has to happen in every generation. Whereas with human ideas, if you think about the difference between Newton and Einstein, Einstein had to create many theories that were nowhere near as good as Newton's to begin with. They were no better at prediction. Only his last published version was as good as Newton's. Newton's theory was good in terms of prediction. So if you think about a graph of fitness, you'd say that when you go from Newton, which had high fitness, to Einstein, Einstein did a lot of things that were terrible for years. Then eventually, he did something better. But there was this large gap where he tried lots of things and made many improvements. They just weren't improvements in terms of predictive accuracy. He had to invent his own new criteria to improve that. Eventually, he did better at predictive accuracy. That's something that evolution couldn't really do because at every generation you would have to be better to survive according to that one criteria. So these are the gaps that evolution can't cross. That's one kind of limitation that it faces.</p><p><strong>Theo: </strong>On a slight tangent, you talked about prediction, theories being good at prediction. So what do you think is the difference between prediction and explanation?</p><p><strong>Carlos: </strong>Well, I think one thing you could say is that there can be relationships between some kind of idea in your head and some system out in reality. They can have certain kinds of similarity. General relativity expresses a certain kind of, or that mathematical idea, that theory is similar to actual reality in a very particular kind of way. That's what explanatory knowledge is all about is finding, exploiting, expressing patterns in reality. I guess predictions are talking maybe less about those wider scale patterns in reality and more to do with particular observations that you make about what you will see, not what reality is really like. I don't think too much about predictions versus explanations except in terms of the role in helping you improve your ideas.</p><p><strong>Theo: </strong>Prediction would be more like the Celtics are likely to win the next NBA finals. And then explanation would be more like the Celtics have better athletes, better coaches, better whatever, and thus they're more likely to win the finals. </p><p><strong>Carlos: </strong>Yeah, so discussing the actual things in reality, and how they give rise to something else. By the way, there are different kinds of ideas, but there's no such thing as an explanation per se, like as a particular kind of idea. Like if I say the word banana, in some contexts that will just be a random concept. And in some cases it would be an explanation. If you asked me why did you go to the grocery store? I said, banana. That would be an explanation of why I had gone. I wanted to get a banana, that's why I went. But absent that explanatory problem, it's just a concept. So explanation in some ways is actually more of a role. It's more of a function than anything else.</p><p>So if you said like, why did you go to the store? You're saying like, I can imagine many reasons you would go to the store. I don't know which one of these it is. So can you help tell me which one it is? So you have a certain kind of problem in your head, some kind of gap in your knowledge. You're like, I know a lot of things about the situation, but I just don't know that. And so you're asking, so what is the answer to that? And then I would give you banana. And that'd be the answer. Whereas if you had a different explanatory problem, you'd want a different kind of answer. You'd say, that store was really open? Or like, why was the store open? Because say I went, I told you, yeah, I went there at three in the morning. So I thought that store was closed. It was actually open, why is that? That's it, I might have an answer for that. Or I would say like, I broke in because I really wanted a banana. Try to fill in that gap, whatever that gap happened to be.</p><h3>2. What makes explanatory knowledge special? (1:18:07)</h3><p><strong>Theo: </strong>So, question number two. What makes explanatory knowledge special?</p><p><strong>Carlos: </strong>I often think about the difference between knowledge, which is about what actually exists and knowledge that is helpful for action. If I told you there was a system where there was a, let's say, you're trying to go from point A to point B, but there's a big monolith in the way. And to avoid the monolith and get around to the point B, the destination, you could actually have in your head a little rule. This is what engineering classes often do, by the way, with Lego race cars and things. You can just program in, like if you see something dark, turn right and then turn left or whatever. So you could put in a pretty simple algorithm for dealing with that situation that incorporated no knowledge, no explicit knowledge of that barrier in the way. And yet that knowledge would only be useful in a narrow range of circumstances. Whereas if I said, ah, there is this particular object in the way, you could then use that knowledge, not only to avoid it, but to do any number of other things that you hadn't anticipated ahead of time. I think it's kind of interesting. So you might say later on, like, I need a cube, but then there's a cube there. And so you would now be able to use that cube in a way that prior you could avoid the cube when that was the relevant thing to do, but now you wouldn't really have any knowledge of the availability of cubes around you. So when you did need one, the fact that there was one there would be irrelevant to it. So I think that's sometimes a useful, simple explanation of how knowledge is only about action can actually be pretty limiting.</p><h3>3. How can Popperian epistemology improve AI? (1:19:54)</h3><p><strong>Theo: </strong>Alright, and question number three: how can Popperian epistemology improve narrow AI algorithms?</p><p><strong>Carlos: </strong>Well, I think, as I mentioned, I just have the two things that I always talk about, universality and efficiency. And so I think that when you see the full spectrum of knowledge creating systems and you compare them all on these terms, you can see that there's a lot of room for improvement. What is the space they are searching? What kinds of universality do they have? And secondly, how efficiently do they search that space? You can then get ideas. That's kind of what people have done with evolutionary algorithms and so on. They've taken some inspiration from evolution for their own algorithms.</p><p>With Popper, one of the more important things from him is, he has some good pithy phrases, one of which is &#8220;the content of a theory is in what it forbids&#8221;. He's most interested in this fact because of testability. If I tell you all swans are white and then you see a black swan, well, my theory says there can't be any black swans. So now I can discover there's a problem with the theory, we have something to work out. It's relevant to testability, but it's also relevant to efficient search. If I say, there's a lot of gravity here and it tells me that if I throw this ball, it will follow the parabola and that's how it has to go. Then I'm implicitly also saying that if my predictions say the ball will be here, I'm also saying it's not going to be anywhere else, which is pretty useful when you want to find the ball. There's an infinite everywhere else that I'm telling you that ball isn't going to be there.</p><p>If you think of all of our laws, all our scientific knowledge as being like that, telling you not only what is, but therefore also what isn't going to be true, what things aren't worth searching, then it's actually very useful. Because I know that, for instance, if I'm looking for new scientific theories, I think they will all obey conservation of energy. Well, anytime I find a theory that doesn't obey that principle, I then feel I can probably throw that away or I know I have to fix it so that it does obey the principle. That knocks out a hell of a lot of possibilities. It saves me potentially infinite amounts of time. So I think that to the extent you can incorporate powerful constraints like that into any search algorithm, it's for the best.</p><p><strong>Theo: </strong>By the way, on a slight tangent, do you picture the first true AGI system to resemble more a neural network or resemble more a conventional computer program, like a GOFAI, good old fashioned AI is the term that they use, a simple optimizer or something?</p><p><strong>Carlos: </strong>Well, I don't think, well, before I kinda mentioned there's a fundamental difference between an optimizer, which has a fixed objective that can't be changed from within the system. That's one kind of thing. And then a system, which is more like an ecosystem where there's no sort of a fixed idea, which everything else must be subservient to. There's an open question as to whether one can simulate the other, in which case maybe that's okay. So if you have an optimization function that can, within itself, simulate this other more ecosystem type of view, maybe that would be good enough. So maybe it's possible to arrive at the right answer sort of within the neural network sort of system. </p><p>I guess that's what people usually talk about anyway, they say like, if logical statements, for instance, are really valuable, do we have to bake them in the beginning or can they be emergent within the system? I suppose that's an open question and probably would bet on it being emergent. It's part of the more general question of what things do we have to start off with versus what things can emerge later? And ideally, for someone who wants to build it, you want the core to be very simple so that you don't have to build very much and it can sort of discover everything important later. </p><h3>4. What are the different kinds of idea conflicts? (1:23:58)</h3><p><strong>Theo:</strong> Number four is, what are the different kinds of conflicts between ideas?</p><p><strong>Carlos: </strong>I think I have sort of a reason Popper brings that up is that it helps you focus your problem solving in the right place. If you say there's a problem between quantum theory and relativity, then you know where to look to make an improvement in physics, making those theories not conflict anymore. And that's true more generally. So if you know that theory A and theory B don't work together, then you can try altering either of them or coming up with new ideas so you eliminate that conflict. So it gives you a kind of a barometer of progress in a way, something to meet. </p><p>But I think that those aren't the only things that can help guide your problem solving. So if you like to think more in terms of attention in a way, just what helps you guide your attention to fruitful areas. If you're going to come up with new ideas and you're going to start filtering ideas, well, which kinds of new ideas should you come up with and how should you select among them? And anything that can help you answer those questions and focus your energy into fruitful areas will be good because biological evolution doesn't do that. And it's terribly slow as a result. If you were to try to make an improvement in your ideas at random, you would be combining ideas as different as giraffes and spaceships, these are both in your head, you could combine them, but it wouldn't be helpful. So what you actually want is something that tells you there's a problem here, or there's just something interesting there. And it's all focused on that. </p><p><strong>Theo: </strong>Yeah, I think those are definitely good points. I don't think many people think enough about this in particular.</p><h3>5. Why is the brain a network of neurons? (1:25:47)</h3><p><strong>Theo: </strong>Question number five is, why is the brain a network of neurons?</p><p><strong>Carlos: </strong>Yeah, I don't know that I have much to say on that one. Although I did read a paper recently that talked about the brain as a Turing machine. It made the point that a fairly small set of neurons allowed you to build up simple circuits that had the properties you would need to make a Turing machine. I forget the details of it. But I guess I wouldn't necessarily address the fundamental question of why neurons like if Turing machines are important, it's not a given that you should build them out of neurons. The fact that we did is an open question. I can't say I made much progress on that question.</p><p><strong>Theo: </strong>This one's definitely one of the tougher ones. I guess it's something that evolution converged on eventually. In the article you talk about, brains are evolvable while modern computers could be designed. So we can, I guess this ties into the bigger question of can we design architectures that are better than we are?</p><p><strong>Carlos: </strong>They won't be better in a fundamental sense of what they can compute and so on. But they could be better in the sense of ordinary designers, a GPU does X, Y and Z better than a CPU. So it could be better in that sense, just in terms of being faster, more efficient. But it would be a fundamentally different kind of program.</p><h3>6. How do neurons make consciousness? (1:27:26)</h3><p><strong>Theo: </strong>Number six, there's another super interesting one. How does the low level behavior of neurons give rise to high level information processing?</p><p><strong>Carlos: </strong>I think that's an area that I haven&#8217;t focused on so much. This paper that I mentioned before of like Turing machine within the brain would maybe be a really cool place to look because lately I've been mostly interested in the higher level questions of just like, how do we even distinguish between different kinds of systems and what makes us special? So that's been, that's really that question of demarcation in a way has been the focus of late. But whenever I do sometimes dip into the lower level details, it's interesting. So yeah, that's kind of an area that I'm getting more into lately.</p><p>But I think, in some ways the question of how do you actually compute stuff is a secondary question because if you don't know what you're trying to compute, then it's not as helpful. Whereas if you know Turing completeness is an important thing, well, then we can talk about how you take subunits and get there. But if you don't know the Turing completeness is important, then you're going to be kind of trying to combine units in a way that is kind of aimless. So I think the focus on these high level ideas of universality and efficiency start to tell you what it is that you want to compute. And then you can ask, okay, now how do we actually compute those things with the tools that we've got? But I do think, yeah, I'm slowly getting more interested in lower level details, as I mentioned, like through Minsky and other things, just see, okay, what are the fundamental building blocks of computation? Now that I'm getting a better idea of what we want to compute.</p><p><strong>Theo: </strong>So what do you think the fundamental building blocks of computation are?</p><p><strong>Carlos:</strong> Well, I guess like Minsky talks about different kinds of things and building out of many different subunits and that sort of things. One of the things I think is interesting, by the way, this doesn't necessarily answer your question, but it's just sort of one of the things that came up when reading about the stuff is that I used to think of Turing machines as being quite important. And they're like, of course, historically very important, but then there's a separate question of what is their, how should we think about them now? Just in terms of their actual performance and this and that. And of course, people will tell you like, actually computing with Turing machines sucks. You don't want to actually use Turing machines in practice. </p><p>So I think that it's become clear to me, and maybe this is obvious to everyone who's already taken a class in computability or something, but to me, on the one hand, you have the space of all possible programs and that thing is pretty fundamental. That's like the space of all ideas. So that's there. That's this immutable thing. That's where all our resources are when it comes to all the ideas you can ever think. Separately from that, there are machines which can run any of those things and Turing machines are one of them, but already at that time of Turing, there were other alternatives to it. So I think there's probably an infinite variety of alternatives to it. Things that can compute anything. So the importance of any one of them is pretty questionable, actually, in the grand scheme of things and where they all differ isn't in what they can compute because they can all compute anything. What they do differ in is how easy they make it to express particular computations. </p><p>And so if you have a really bad programming language, like Brainfuck or whatever it is, or just assembly, it can compute anything, but it just makes it very difficult to express the kinds of computations you would want to run. Whereas a nice programming language like Python or something will make it very easy to express the kinds of computations you want to run. Things with loops, things with variables. That in the space of all possible programs, it turns out that some of those are ones we actually want to run. So we make those easy to express. So that's where languages differ. And so that's kind of, I think, when we ask what are the building blocks of computation? In a way, the lesson seems to me to be that of all the things that can be computed, you can create whatever you want as the building blocks. And then depending on how what you've assembled there, that will make different computations easy or hard to express.</p><p>And then that becomes relevant for evolution because it says, well, OK, well, what kinds of computations should human minds be doing? And then how is it that evolution put together what is arguably some very shitty computer, but managed to do those computations very well? Because it turns out that those ones were the ones that were most decisive. I don't think that the ones that you see being done in an arbitrary program, like the important computation for us isn't iterating through a trillion item list of numbers and multiplying them all by two. That's a computation that computers do. What we don't do in our heads will be very difficult for us to do. But evidently, that's OK, because that's not the kind of thing that we ought to be doing for our normal idea-based computations.</p><p><strong>Theo: </strong>So related to Deutsch's ideas on this, how exactly do we know that humans are theoretically universal? That we can theoretically do or understand anything? Why is it that we're not like, say, rats, where rats just cannot understand something no matter what you do or try to explain to them?</p><p><strong>Carlos: </strong>I don't know if we have a perfect answer there. That's kind of my whole research question, in a way, is to try to get a clearer picture of what distinguishes us from other things. In my video on universality, I go through a few different ones. So first of all, you have computational universality. And there's also a point I mentioned before about the fact that you also want to be able to not only, if you want to have truly unlimited computational capacity, you have to be able to extend whatever finite capacity you have right now. So that's a given. So you need that, too. You can't just be a universal computer, because none of them actually exist. You're always going to have a finite computer, so it has to be able to extend its abilities. So that's one kind of thing.</p><p>And there's also, I mentioned before, you need to not only be able to come up with new ideas, but in new ways. This is the analogy of walking along the Earth just using your feet, rather than planes and so on. So in order to navigate the space of ideas, you need to be able to explore it in many different ways. And then also, you need to be able to distinguish between new ideas of all different kinds in order to efficiently navigate the space of ideas.</p><p>So if I told you, like, today, you're trying to decide between two ideas, and telling you, first of all, do you like chocolate or do you like vanilla? Well, that will require certain criteria. You will have to invent entirely different criteria when it comes to deciding between relativity and quantum theory, or the successors of both of those. You have to invent new criteria. So if you can't do that, then you're screwed, because you're going to be no better than chance when it comes to that later decision between those new alternatives. And I think there were a few others that I listed out there, too, but yeah. So I would say, hold off on trying to get a definitive answer, perhaps, on the question. But that is what I'm working on.</p><p><strong>Theo: </strong>Yeah, so we'll see where that goes.</p><h3>The Optimistic AI Future (1:34:50)</h3><p><strong>Theo: </strong>And then finally, how do you picture a good AI future? What does a good future of AI look like to you?</p><p><strong>Carlos: </strong>AI or AGI?</p><p><strong>Theo: </strong>Actually, let's do both. </p><p><strong>Carlos: </strong>I don't really know much about AI, really. So I'll leave it to people who actually work on that to come up with cool stuff. Well, I guess I would say that what's cool is the idea of generally having, how do I put it? So, I think about this large spectrum of knowledge-creating systems. And I think humans are the zenith. There is a thing that can do anything. That's us. And aliens, and whatever else. But then there are these weaker forms, and there's evolution, and there's all kinds of different algorithms in between.</p><p>And so I think we can imagine, and we're probably seeing now, systems which are capable of some amount of search in the course of what they're doing. And so they have much more flexibility as a result. So traditional programs, you would write the whole program, and then if there's any kind of problem at any point in that, where it has a gap in what it knows how to do, the whole thing just fails. It doesn't have any ability to search for solutions to cross whatever gap it had in its algorithm. </p><p>Whereas a human, if you tell me, hey, go to the store and get me that banana or whatever, if there's some problem along the way, and I have to go around the back door of the place because the front door is under construction or something, that seems trivial to us. But I had to say, oh, I want to get to the store. How do I do that? The usual way doesn't work. So now what? I have to take a detour. I have to maybe ask somebody. I have to engage in a problem-solving process to deal with that situation. </p><p>So I think you can see other just ordinary programs involving that sort of search naturally within them and making them much more powerful as a result. Even if they're not human-level or anything, they don't have to be to be useful. So I think that's pretty cool, just baking search in this broader sense into everything that we do. </p><p>As far as AGI goes, it's mainly just a matter of, what is it that I like to say? Longevity, making death optional. And that really requires backups, ultimately, if you want that. So we need to have control over the hardware we're running on so we can make backups and have longevity. And if you have that, we can't guarantee you'll live forever, but we can guarantee you'll at least live as long as our civilization is around. If an asteroid wipes everything out, then you die, too. We can live as long as civilization, at least. </p><p>And then the other one is virtual reality. I think that once we control the space of experiences, that would be very cool. We can start designing that. I often like to say that we could live in a 10-dimensional reality if only we knew how to throw and catch a ball there. In 3D, I can throw a ball and you can judge distances and so on well enough to catch it. And mathematically, we could do that with 10 dimensions. It's just that algorithm for doing that isn't in your head right now. So if I throw you a ball in 10 dimensions, you won't catch it, but there's no fundamental reason you couldn't. </p><p>So I think we could inhabit such realities and everything else that we haven't yet imagined. So those are the two that usually come to mind. I think I listed a few more in my latest essay, but I'm going to have to make do with those for now.</p><p><strong>Theo: </strong>All right. So I think that's a pretty good place to wrap it up. So thank you so much. Thank you, Carlos de la Guardia, for coming on the show.</p><p>Thanks for listening to this episode with Carlos de la Guardia. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter <a href="https://twitter.com/theojaffee">@theojaffee</a>, and subscribe to my Substack at theojaffee.com. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item></channel></rss>