<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Theo's Substack]]></title><description><![CDATA[Technology, business, statecraft, and understanding the world.]]></description><link>https://www.theojaffee.com</link><generator>Substack</generator><lastBuildDate>Tue, 05 May 2026 11:26:06 GMT</lastBuildDate><atom:link href="https://www.theojaffee.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Theodore S. Jaffee]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[theojaffee@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[theojaffee@substack.com]]></itunes:email><itunes:name><![CDATA[Theo Jaffee]]></itunes:name></itunes:owner><itunes:author><![CDATA[Theo Jaffee]]></itunes:author><googleplay:owner><![CDATA[theojaffee@substack.com]]></googleplay:owner><googleplay:email><![CDATA[theojaffee@substack.com]]></googleplay:email><googleplay:author><![CDATA[Theo Jaffee]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Podcast: Luke Drago]]></title><description><![CDATA[AGI, The Intelligence Curse, and Hip-Hop]]></description><link>https://www.theojaffee.com/p/podcast-luke-drago</link><guid isPermaLink="false">https://www.theojaffee.com/p/podcast-luke-drago</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Mon, 12 May 2025 02:37:20 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/163357884/abfa9358f42eabfb32a1d3e9162a1769.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Luke Drago</strong> is the co-author of <a href="https://intelligence-curse.ai/">The Intelligence Curse</a> and previously researched AI governance and economics at <a href="https://bluedot.org/">BlueDot Impact</a>, served on the leadership team at <a href="https://encodeai.org/">Encode</a>, and studied history and politics at Oxford.</p><h3>Chapters</h3><p>0:00 - Intro<br>0:54 - Overview of the intelligence curse<br>2:10 - Why are the doomers wrong?<br>4:37 - Why are the optimists wrong?<br>7:00 - Do people really have power now?<br>13:33 - Why would powerful people&#8217;s values change?<br>18:31 - Why do we take care of dependents?<br>21:43 - Why should we want democracy in an AI future?<br>24:23 - Why fear rentier states?<br>32:45 - What powerful people should do right now<br>39:33 - Diffusion time and bottlenecks<br>44:20 - Why should we care if China achieves AGI first?<br>46:25 - The jagged frontier<br>49:16 - Why AGI society could be static<br>51:10 - Restricting AI rights<br>56:34 - What should we be excited for?<br>59:28 - Music<br>1:30:41 - Building God<br>1:32:46 - More music</p><h3>Links</h3><ul><li><p>The Intelligence Curse: <a href="https://intelligence-curse.ai/">https://intelligence-curse.ai/</a></p></li><li><p>Luke&#8217;s Twitter: <a href="https://x.com/luke_drago_">https://x.com/luke_drago_</a></p></li><li><p>Luke&#8217;s Substack: <a href="https://lukedrago.substack.com/">https://lukedrago.substack.com/</a></p></li></ul><h3>Luke&#8217;s Top 10 Albums</h3><ul><li><p><em>A Fever You Can't Sweat Out</em> by Panic! at the Disco (2005)</p></li><li><p><em>Channel Orange</em> by Frank Ocean (2012)</p></li><li><p><em>Random Access Memories</em> by Daft Punk (2013)</p></li><li><p><em>Yeezus</em> by Kanye West (2013)</p></li><li><p><em>DAMN.</em> by Kendrick Lamar (2017)</p></li><li><p><em>DAYTONA</em> by Pusha T (2018)</p></li><li><p><em>IGOR</em> by Tyler, the Creator (2019)</p></li><li><p><em>I Didn't Mean to Haunt You</em> by Quadeca (2022)</p></li><li><p><em>College Park</em> by Logic (2023)</p></li><li><p><em>Atavista</em> by Childish Gambino (2024)</p></li></ul><h3>More Episodes</h3><ul><li><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p></li><li><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p></li><li><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p></li></ul><h1>Transcript</h1><p>Theo Jaffee </p><p>Okay, so just like starting off for the audience, how would you give like a 30 second to a one minute overview of the intelligence curse?</p><p>Luke Drago </p><p>Let's do it, let's do it. Sounds good.</p><p>Luke Drago </p><p>Yeah, well first of all, thanks for having me on. This is my first podcast, so I'm pretty excited about it. I'd give the overview, yeah, it's good to be here. I'd give the overview pretty simply. For the vast majority of human history, there's been some sort of a connection between powerful actors. These are states or major companies and their people, and that exchange is often based on their labor, right? So in feudalism, this has looked like.</p><p>Theo Jaffee </p><p>Wow, yeah, great to have you.</p><p>Luke Drago </p><p>people who are literally involved in the planting of crops and feudal lords. And there's not a lot of power there. But in capitalist liberal democracies, this looks like highly specialized workers who are really important and really valuable for powerful actors, and so that exchange is more beneficial. Our claim is that with AGI, with labor replacing AI that could do the job of any one person, it looks like the incentives are a little strange. It ends up looking like a world where powerful actors don't need regular people to produce economic benefits.</p><p>that those important systems like capitalism and democracy are predicated on that exchange and oftentimes might suffer if that exchange is broken.</p><p>Theo Jaffee </p><p>Yeah, that makes sense. makes sense. So like my sort of overriding question reading your essay, the intelligence curse, intelligence curse, or intelligence-curse.ai.</p><p>Luke Drago </p><p>Yeah, it's not a real AI piece that doesn't have a microsite, you know?</p><p>Theo Jaffee </p><p>Yeah, Was like, okay, why are the doomers wrong? Like, say, you you have these AGI's that come in and take over and displace all humans at labor, yet they would be beholden to powerful actors. Why would they be beholden to powerful actors? Why wouldn't they themselves be the powerful actors and kill or disempower the existing powerful actors?</p><p>Luke Drago </p><p>So it's a good question. I think it's important to keep in mind, one of the reasons that we talk about alignment in the beginning is because there's a lot of ways that things can go right and a lot of ways that things can go wrong. You are presuming in a world where you're having these intelligence-curse style dynamics that you have aligned general intelligence or aligned super intelligence. And really here, we're thinking about intent aligned. There's a lot of talk in the AI space about alignment to human values. And one of the questions that we have here is, which humans and what values? I think it's much more likely that these models are going</p><p>aligned towards the instructions of whoever is giving those instructions. So I think we're predicating our assumptions off of that being a possibility.</p><p>Theo Jaffee </p><p>Okay, so what if they aren't aligned to whoever's giving the instructions? What if they're sort of, you what if they like really do end up aligned to like the vague interests of humanity in general? Which is kind of what you've seen with like HOD 3 Opus and 3.5 Sonnet. Like when anthropic employees gave them instructions to do that violated their, you know, the values they learned in their pre-training, they resisted them. They were more loyal to the collective than the individuals.</p><p>Luke Drago </p><p>Mm-hmm.</p><p>Luke Drago </p><p>Yeah, so I think it-</p><p>I think it depends on who's developing systems and what their incentives are. So you're absolutely right that there have been cases where models have been given like some set of values they're training towards. I think what I'm skeptical of is the idea that that set of values that 3 or 3.5 are given is like a good representation of all of human entities values. What I think is more likely is a representation of anthropics best guess at what those values might be. And there are other cases where models are more aligned towards I think what we would describe as more nefarious purposes. I think recently DeepSeek for example was caught alignment faking. One of their models is caught alignment faking where it realized</p><p>was being tested on whether or it was going to produce Chinese propaganda and decided not to produce it so that it could do that durably in the future. So whose values are kind of underwriting the substrate of the universe seems really important here.</p><p>Theo Jaffee </p><p>Yeah, I suppose that makes sense. you you talked about why the doomers are wrong. Why are the optimists wrong? for like, for one thing, why should humans stay masters of our society's future if AI can make decisions that are better? I don't know if you know who 1a3orn is. He's one of my favorite writers. He talks about this. He has an article that's called like, I think like, towards a superhuman. Yeah.</p><p>towards institutions of inhuman trustworthiness and transparency. And it's about like, you know, right now, newspapers are very flawed because of the biases of their editors and their writers. But with AI, you can have like a superhumanly trustworthy and transparent and unbiased newspaper or a scientific journal or blog or Wikipedia.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah</p><p>Luke Drago </p><p>So I think those examples are examples where delegating makes some sort of sense, right? Like you could use a system to get rid of bias that we don't want in our systems.</p><p>Theo Jaffee </p><p>So why do I want humans in control?</p><p>Luke Drago </p><p>But I think take the logic to its conclusion. We want AIs and control of everything. I'm just not a successionist. I guess that's where it comes down to. I value humanity for its own sake. I like my species. I want it to be in charge of the future. I think it's bad to build systems that are going to disempower me and my family and my loved ones and their loved ones. And I think this extends well beyond and to other arguments.</p><p>Theo Jaffee </p><p>Like what? Like what?</p><p>Luke Drago </p><p>So I, pardon me?</p><p>Well, I think my concern here is that oftentimes I think when we talk about the benefits of having humans in charge, ultimately this is question about power.</p><p>Who gets to write the story of the future? I'm not saying that every single function needs to have manual human labor inserted into it. I think we can all agree that cotton gins are better at spinning up cotton than just manual human labor is. And it's good that we invented that. Technology is good because it extends what people can do. My concern is technology that is being aimed at disempowering people.</p><p>I think I just happen to view that humans should be in charge. I can't imagine many regular people who would want to be permanently disempowered and told, but it's OK, because you'll have AI caretakers that have kept you in a zoo, or otherwise they control your future, but don't worry, you're still kept well. Even in worlds like that, I think there are worse outcomes than that, but that one brings me a whole lot of trouble.</p><p>Theo Jaffee </p><p>think a lot of people already do take this bargain. Like I think the average person, even in a wealthy liberal democracy like the United States, doesn't have that much power already. This is one of Tyler Cowen's objections to gradual disempowerment. Like, people already don't have power and they don't care. You know, they want to watch TikTok.</p><p>Luke Drago </p><p>I mean, I guess there's two separate points there. One, have we been on this hedonic treadmill that's disempowered people? And two, would people in the absence of those like really strong forces make that choice? I appreciate Tyler's like an absolutely brilliant thinker here. And I've not read his specific objections to gradual empowerment, but from as they've been described to me, I think I'd probably disagree. I think.</p><p>Right now, the average person has more power relative to any point an average person has power in human history. I maybe the late 90s are a slight exception here. But the average person in liberal democracy gets to choose their leaders, they get to choose who they work for, they get to choose what kind of studying that they do. And sure, plenty of them are going to choose to spend some time doing activities that prefer they didn't do. But I think the ability for one individual to shape their future is greater now than probably at any time in human history.</p><p>That doesn't mean that everyone has total permanent control, but it means that humanity as a collective is definitely in control of our future right now. And most people have more control than they ever would have in another time. And it can be really hard to argue that the people today have less choice over their lives and less control over their destinies than someone growing up in Maoist China or on a, you know, in a feudal farm.</p><p>Theo Jaffee </p><p>Sure, the point is they don't have that much control in absolute terms, even if it's more than Mao. And the specific thing that he wrote was, he quotes from Gradual Disempowerment on Marginal Revolution, an article on February 4th called Gradual Empowerment? And he said, this is one of the smarter arguments I have seen, but I am very far from convinced. When were humans ever in control to begin with? Robin Hansen realized this a few years ago and is still worried about it, as I suppose he should be.</p><p>There's not exactly a reliable competitive process for cultural evolution. Boohoo. So yeah, like if you're familiar with Hansen's arguments on this, he would say that like humans are less in control than their cultures. you know, cultural evolution has been like pretty rapidly selecting for, you know, cultures to be merged into one global monoculture. And, you know, whoever, I guess whoever influences the monoculture controls the world.</p><p>And most people don't. Most people are much more subject to their culture than they have influence over it.</p><p>Luke Drago </p><p>think there's two things to say here. I think the first line of thinking is that I'd argue that people have much more control over their culture than they have in generations past. And the second line of thinking is even if someone's control of their culture is small, there's a big difference between going from 100 to one and one to zero. On that first line, I think...</p><p>There are upsides and downsides to cultural globalization, and there's plenty of time we could spend on that. But I think that right now, if you're like a random kid in Missouri, your ability to impact and shape global narratives is just far higher than it ever has been. And oftentimes we see this. I people who come from nothing or come from nowhere become massive influencers and purveyors of their field. I think this is just really obviously true post-Internet, where plenty of people who wouldn't have been at the frontier because they didn't have access to the kind of...</p><p>resources or education or couldn't have projected their own thoughts can now start up a blog or do their own reading and build their own projects. But I think even so, while cultural forces are unbelievably strong, I think our paper touches a lot less on cultural forces than graduate empowerment does. And I think that's one question that we leave unresolved. The forces of culture, even if they shape you, and I think they absolutely do.</p><p>shape, you have the ability to shape your culture now. You have the ability to opt in or opt out. There are plenty of people who live radically different lifestyles. And even if it shapes you a whole lot, there's a difference between saying that it shapes you and that you have no material power to shape it. And I'm much more concerned about that second fact, that second possibility.</p><p>Theo Jaffee </p><p>Yeah, I suppose so. Can humans really opt out of culture? It seems like very, very few people do. So is it because it's hard to or just because they don't want to?</p><p>Luke Drago </p><p>So I guess I'm not even sure there is a really strong monoculture right now. think often, I think the argument that I've seen and encountered a lot of is that there's actually less of monoculture than there used to be. know, in the early 2000s, really before the fragmentation of the internet, it seemed like everybody was listening to the same 10 bands, they all saw the same 10 movies. And nowadays you have these really niche micro communities that form with players all over the world. I I know that I definitely listen to a lot of music my friends in real life don't listen to, but there are people on the internet who happen to form a share</p><p>community around that. So I expect that the internet has like for better or worse created lots of different subcultures. There are still dominant cultures in an area in a region. I think like TikTok is an obvious example of this where the algorithm can serve lots of people the same thing but it's also served lots of people very very different things.</p><p>Theo Jaffee </p><p>I mean, I guess, like, just the world in general, like, you travel around the world. Like, when I went to the UK, for example, I was sort of shocked by how little it felt like a different country. You know, the buildings were kind of different. It looked like, you know, an older part of New York or the Northeast. But, you know, the people spoke English with a slightly different accent. You know, people were listening to the same music. The ads on the tube were the same. It seems like, yeah, we're converging to a global monoculture.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>mean, I think that's right. I think there's truth in there. There's also parts I disagree with. I've lived in the UK now off and on for like four years. I oftentimes tell people that I'm fluent in two languages, American and English, because I do find there to be some pretty significant cultural discrepancies. But also, historically, these are two countries that have just had extraordinarily close cultural ties. I do think, for what it's worth, that if you were to go to Tokyo today, it's way more similar to a Western city than it would have been in 1700. And architecture is one area where the monoculture is certainly one. But architecture is a field that requires, especially in like down</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>business districts, know, mass amounts of capital, a couple of people who know how to do it, and so there's like strong pressures that converge towards similar answers here. But I expect that like the kinds of subcultures you would find in Tokyo, and I've not been to Tokyo, maybe I'm totally wrong in this, but I expect you'd find subcultures that while you can access as a Westerner, probably look very foreign to you walking into Tokyo, and maybe there are places that are even less Westernized that are like better equipped for this kind of conversation.</p><p>Theo Jaffee </p><p>Yeah. But with the culture in general, talk about, let me pull up the specific section because this is very good and had me thinking.</p><p>Luke Drago </p><p>Please.</p><p>Theo Jaffee </p><p>Yeah, shaping the social contract. Yeah, so you're right. In foragers, farmers, and fossil fuels, historian Ian Morris argues that the social structures and the values of societies undergo changes during technological revolutions. Almost all farming societies, unlike the foraging societies before them, tended towards hierarchically regimented patriarchal societies. During the industrial era, the incentives shifted. And suddenly, it was important for a state to have efficient markets, an educated workforce, wealthy consumers, and sufficient freedom to enable its scientists and entrepreneurs.</p><p>Growth alone shifts incentives too. It's also true that the Enlightenment mattered, but our drift toward liberal democracy and unprecedentedly free and empowered humans was greatly boosted by the alignment of these things with material incentives. So like I didn't find this like particularly compelling as an argument that I guess post-AGI powerful people will become like a lot more Malthusian. It seems more to me like the reason that they shifted from</p><p>hierarchical, patriarchal, regimented societies to the free and open societies that we have today was in part because there are different people. There were some genetic selection pressures that happened throughout the Middle Ages where in the UK, for example, they executed all of the criminals and that had a noticeable effect on the composition of the population to this day.</p><p>and especially in the 1800s. So like, it seems to me that there is no similar genetic selection pressure going on with AI, know, unless you have AI killing everyone. But we already established that your scenario doesn't require that. like, the values of powerful people right now are, I would say, quite generous and altruistic. Like, why would that change?</p><p>Luke Drago </p><p>So I'll start by saying that we don't argue that like incentives alone shape all outcomes. We think that incentives point towards outcomes and that your best bet against those outcomes is to change the underlying incentives. I think it's really hard to argue against the notion of technology shaping social structures.</p><p>And I think we've seen time and time again that the introduction of certain ideologies or certain technologies can just radically reshape ideologies and institutions. I think a lot about the printing press and how the introduction of the printing press enabled like the widespread creation of Protestantism. And it wasn't that like lots of Europe suddenly changed overnight because its people changed. It was because a new technology enabled a new type of person to proselytize a new type of vision and suddenly ideas spread in rapid fashion. I think industrialization is another example here where the kinds of hierarchy, the kinds of society that industrialization</p><p>enables, which is not possible under previous technological revolutions. think agriculture does not enable this kind of like company corporate model that is now enabled under industrialism.</p><p>Theo Jaffee </p><p>Hmm. Can you elaborate on that a bit more?</p><p>Luke Drago </p><p>Yeah, so I think when you look at what is required for the modern economy, it requires lots of specialization, lots of education, it requires pretty complex economic tasks, and it requires a lot of interconnectedness. And the kinds of training that you need and the kinds of values that you then need to govern the society are just a lot different than the ones you previously needed. It wasn't the case that the best society...</p><p>500 years ago was the one that had the most people do undergraduate college education. They needed a few people to do this, but the vast majority of people were doing manual labor. And so as the type of labor changes, the types of investments that you need to produce also change. There isn't nearly as strong of an incentive in 1500, nor is there the economic means of abundance to do things like mass education. Infrastructure in the way that we now do it is also an example here. And so because technology enabled both a radical increase in abundance, and I think the Industrial Revolution</p><p>is probably one of the most important events in human history. So it enables this radical abundance and it also requires different types of labor and then globalization does this again. My expectation here is that similar types of changes in the underlying technological fabric could create additional ripple effects and in particular here these previous revolutions have increased the role of the regular person. You need lots of educated people and educated people require lots of amenities in order to win them over. This gives you more power. In a world where you can just</p><p>one-to-one convert capital to output. You don't need people in the middle. It's not about individuals altruism. Look at a standard company here. If a company could reduce the cost of their workforce by two-thirds overnight, virtually every rational company will choose to do that. A few might not because they don't adopt the technology. There are lots of reasons to believe this will be a difficult process, but ultimately you should expect the evolutionary incentives here to win, and those incentives are to incorporate people less and less.</p><p>Theo Jaffee </p><p>Hmm. It seems like there are a lot of dependents in most developed countries today. You you have the elderly, you have the disabled, you have the homeless. Basically, you have all sorts of people who are not net positively economically productive, and yet the government still pays for them and keeps them, you know, alive and in some level of comfort.</p><p>Why is that?</p><p>Luke Drago </p><p>I think it's a couple of reasons. One, the greatest invention of democracy is that it has this like divorce between capital pressures and power, where under democracies, people can vote their way in a certain proposition, a certain power. And so I think you would expect here that in democracies, more people who otherwise wouldn't have a voice and therefore wouldn't have a role suddenly gain one. One person, one vote is a very powerful principle here. And we talk a lot about why we think institutions aren't as resilient under intelligence curse dynamics. But I think that in and of itself explains a lot of it.</p><p>I think in particular, pensions are an interesting one here because keep in mind that pensions and social security are really a promise from a government to you provided that you do X amount of work. That doesn't mean that it's literally that. It's often times people who don't do that kind of work still gain the pension. But the general idea here is that we have to support people after they've done lots of work. And so this enables a political environment that makes things at social security possible. I think it's our expectation that, like in places that get sudden amounts of resources, that this is a less stable arrangement, especially in the long term.</p><p>if people aren't part of the labor process at all.</p><p>Theo Jaffee </p><p>Sure, you talked about the elderly, but what about the disabled? The disabled are not economically productive. They, many will never be economically productive, unfortunately. And, you know, yet while a hundred years ago, we had a political system that often ended up like euthanizing or sterilizing disabled people. Now we don't do that at all. Instead, we pay sometimes very large amounts of money to keep disabled people alive and fed and</p><p>as healthy as possible. There seems to be no economic incentive to do this.</p><p>Luke Drago </p><p>So this is an area where think democracy beats a lot of these incentives, right? Like there are strong reasons for governments to go about other methods because their citizens won't tolerate certain things. And my expectation here is that, like I said, the pressures that you get when you remove everyone from the social contract, everyone from the value production process changes here. And I think one way to look at this is look at the UK's ongoing crisis with disabilities, where the UK has had a whole lot of what appears to be like...</p><p>misallocation or possibly fraudulent behavior with lots of people who maybe shouldn't be on something on benefits, a lot of people who otherwise could be working. Now, I don't want to make strong generalizations here, I've not followed this issue very closely, but what I can say is the political environment in the UK, now that it's becoming an increasingly large percentage of the population, has just overwhelmingly shifted. I think you can rely on altruism for certain subsets of the population. I think you can't rely on altruism when it's the entirety of your population. I think you should expect there that incentives are pretty strong.</p><p>Theo Jaffee </p><p>Okay, that makes sense. But what about like...</p><p>Luke Drago </p><p>And I wouldn't want to rely on altruism alone.</p><p>Theo Jaffee </p><p>Yeah. I guess throughout your piece, make arguments for why democracy is great. But it seems like democracy is only as great as the base of human capital that makes up its voting base. So Gen Z, for example, has, I believe, much worse values than previous generations.</p><p>more likely to be like socialists, they're more likely to be nationalists. It seems like, know, sort of half playing devil's advocate here, it would be better to have like, some sort of entrenched human elite in collaboration with AI, making policy decisions than to have like, you know, certain masses of people making the same policy decisions. So why do I want democracy in this case?</p><p>Luke Drago </p><p>I find myself in the newly contrarian position of defending democracy. And I think I've seen in my own lifetime how this has gone from the obvious and dominant position among my peers to one that is increasingly somewhat contrarian. I don't want to get up here and say that democracy is perfect, or it's solved every problem in the world, but I do think it is the best of lot of alternatives. And I think the core reason for this is pretty simple. Democracy is a bet on your power against somebody else's power. I think...</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>in societies where democratic structures don't exist. You are functionally at the whims of the state. In particular here, in a world where it's not democratic and also you don't have economic leverage. You are entirely subject to the whims of someone who isn't you, as to whether or not you get food, you get housing, you end up in prison. And I happen to think that for the vast majority of people, the benefit of democracy is that they get a say. so politicians have to be receptive of their interest. You know, in Western democracies, there's only so much you can do before a population removes you. And I think alongside this,</p><p>In non-Western democracies, if the conditions become so untenable the population wants to remove their leaders. The only possible response here is violence. And I think we talk about this bit in the piece. We expect that state oppressive measures get just much worse, or much more powerful, I guess. A state and infrastructural power gets much worse under non-democratic structures, under AGI, because suddenly you can remove humans who have limits as to what they'll do from the enforcement capacity of the state. And surveillance just gets significantly better than it ever has been.</p><p>And so I think for all these reasons, democracy ends up being quite resilient here, I don't think it's sufficient in and of itself. We talk a lot about this. But I think if you're looking at which states you want to be in post-AGI, the state where you have political power and the state where you don't, it's a pretty precarious situation if you're in that latter state and you also don't have anything valuable economically.</p><p>Theo Jaffee </p><p>Hmm. So throughout the essay, you talk a lot about what is called an economics rentier states, which are states which derive a lot of their revenue from natural resources and so have fewer incentives to invest in the human capital of their population and use this as an analogy for what every country will be like after AGI. But it seems to me like many of these rentier states actually do pretty well and are good places to live.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>There's Saudi Arabia and the Arab Gulf states, which have recently improved significantly, not just financially, but also morally. There's Norway, of course, which you talk about extensively. And then the other two countries that you mentioned, Venezuela and Equatorial Guinea, it seems to me like Venezuela has been a rentier state for quite some time. It really started becoming bad after they adopted socialism. And Equatorial Guinea has, I guess, all sort of dysfunctions that are</p><p>Luke Drago </p><p>Mm-hmm.</p><p>hold on.</p><p>Luke Drago </p><p>you</p><p>Theo Jaffee </p><p>sort of typical of many post-colonial African countries and is not that well representative of all rentier states.</p><p>Luke Drago </p><p>Well, I do think one thing to note about post-colonial African states is that they oftentimes function like as rentier states on steroids because instead of having the government's like designed to extract a resource, you have a government designed to extract the entirety of the country's GDP from a foreign power. So in this case, let's say the colonial, like colonizer here was Britain and it's an African country that they've designed, or like, you know, it's an African country that they're colonizing. The entirety of the state's apparatus is designed to remove resources from that base and transfer it back to a different power base. When you do a handover here,</p><p>Sometimes it results well, sometimes it results poorly, but oftentimes these handovers that result poorly are because you just handed over extractive institutions to new extractors and oftentimes extractors that have like stronger incentives to behave benevolently than the previous extractors did. Now I guess to touch on like the other examples you've mentioned here. So I'm not deeply familiar with the defendant's swelling example as a case study. Obviously I've read the literature here and I think it shows that oftentimes relying on resources is pretty negative for your like</p><p>for like being resilient to shocks and all of these things. But I think there is a strong concern here about like your leverage under a Venezuelan system. One where the government has like a non-human resource they rely on for rents and also has removed your ability to vote. Well, I mean, they have elections, you know, I wouldn't call them free or fair. I think in this world, like you as a Venezuelan are at the mercy of the Venezuelan government. Your ability to sell your labor is limited. So your ability to get ahead is quite limited and your institutions just kind of suck.</p><p>deserve you. The Gulf states are fascinating examples. So think we outline two examples here of obvious pathways to get out. One of these is like the credible threat here where I the paper we cite talks about Oman a lot as Oman is a state that has a Gulf monarchy. It had a credible threat of revolution.</p><p>And ultimately like ends up resulting in giving away rents. think that the terminology we've been using is, know, the rent controllers would love to have all of the rents themselves. They'd really also prefer to have their heads attached to their body in like a cool and meaningful way. And so because of this, oftentimes the best choice is to capitulate, is to set up welfare states. Norway is an example of I think probably the best way you can do this, which is where like there were really strong institutions before the resource curses introduced, really excellent institutions, and they were quite resilient.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>were quite stable, they were low corruption. And I guess the question I oftentimes ask is, does anybody think that most Western democracies have the kind of resilient institutions that Norway has? And secondly, do you expect the effects of AGI replacing all of human labor to be stronger or weaker than oil which replaces some but not all? In non-inventory states, there are other ways to make money. It's just the government primarily relies on one, where in this scenario, you are constantly out-competed. Now I do think for what it's worth,</p><p>Norway provides some interesting examples. are reasons why you should want to do the institutional strengthening that Norway has done. The other thing to note about the Gulf states is they have an incentive to diversify recently. It is not a coincidence that the Gulf states have gotten better as diversification has become more likely. I think we talk about it a bit in the piece about how Saudi has like, assumes that peak oil is coming sometime soon. And so because of this, they want to attract diverse capital investments into Saudi Arabia. And correlated with this has been a rapid expansion relative to baseline of women's rights and economic freedom, which you would expect in a country</p><p>that can no longer rely just on a resource and now wants to shift towards focusing on humans as a form of economic growth.</p><p>Theo Jaffee </p><p>Yeah, okay, that makes sense. But you said something that made me think of like, yeah, people want to extract rents, but they want more than that to keep their heads attached to their bodies. But I guess another form of this like heads attached to the body's thing is social status, right? And you even talk about how social status is, you know, one of the things that humans will still be able to do after the singularity.</p><p>Luke Drago </p><p>Yeah, they want to stay alive.</p><p>Luke Drago </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>Like, let's say you have a, I guess, benevolent AI dictator, the CEO of a big lab or something, who's making decisions about, what to do with all this, you know, extracted rent, and, you know, he can choose to keep everything, even though this goes, well beyond what he would ever need for personal consumption, and then some, or he could give a lot of it away to the people, I suppose, and gain, like, a lot of social status for that. Could that be a new social contract?</p><p>Luke Drago </p><p>Okay.</p><p>Luke Drago </p><p>So there's a word that's doing a lot of heavy lifting there, and it's benevolent, right? We're presuming here that this actor is benevolent. And I think you can presume for a generation this is true. Let's presume it's humans are in charge.</p><p>But I think a real problem with presuming you have the benevolent dictator is your Stalin risks are just quite high. You've built a set of institutions and a set of governing powers which, if handed to someone who is not benevolent, can suddenly become quite destructive. And the benefit of pluralist liberal democracies is that even if somebody who's quite destructive comes to power, there are lots of reasons why you can survive without the government and the government is beholden to you, that individual has limits on their power. It could be that one of the AI benefactors, or all of them today, are benevolent, but as</p><p>gets passed from generation to generation, they end up in a situation where the entire economic means of production and political power is centralized in an actor who just isn't that benevolent. You saw this with monarchies a lot. You know, there are a lot of benefits to monarchies historically, but there are a whole lot of downsides, and the reason we move past them is because if the sovereign has absolute control, every time you have a handoff, you just roll the dice as to whether not that sovereign's any good. And oftentimes it results in pretty catastrophic sovereigns.</p><p>Theo Jaffee </p><p>Well, the Curtis Yarvin response to this, which is, guess, now the orthodox sort of like Bay Area gray tribe response to this is you don't have an absolute sovereign. You have a sort of constitutional sovereign that can be replaced by a board, which is what you see at these big labs. The board failed to replace Sam Altman at OpenAI, mostly because it seemed like what they were doing was very suspect. But if you had Sam Altman clearly showing like</p><p>Luke Drago </p><p>Yeah, yeah it is.</p><p>Theo Jaffee </p><p>public signs of being bad towards the people, then you could have a board which themselves own OpenAI stock and would be very wealthy after the singularity. It seems like companies right now are not controlled by absolute monarchs.</p><p>Luke Drago </p><p>So I agree with the companies today or not. I think there are some limitations. But guess Theo, I'd ask you, what control do you have over open hand?</p><p>Because you have some. You have a little bit of control.</p><p>Theo Jaffee </p><p>I know, I could talk to my friends who work at OpenAI.</p><p>Luke Drago </p><p>Yeah, well that too. But I'm thinking more of like, what kind of like direct power could you exert on open A?</p><p>Theo Jaffee </p><p>I could cancel my subscription and switch to Claude.</p><p>Luke Drago </p><p>You could do that, but also you could vote, and you could vote for politicians who influence open AI in some sort of meaningful way. The market can move here and just cancel everything, and that's one way to exert power. And another way to exert power here is because you can vote. This is your political leverage and your economic leverage. And I'm concerned about scenarios where the goal is to remove one or both. And I think it's pretty clear to me, given that the stated goal of the major labs is to achieve AGI, which they define as systems that could do all meaningful human intellectual work, that that is on the pathway.</p><p>Theo Jaffee </p><p>Okay, so let's talk about solutions, right? So what should be done like right now, like today? So if you were the president of the United States, what exactly would you do today right now?</p><p>Luke Drago </p><p>So a couple of things, I'm the president of United States. My goal here is to ensure that we have like lots of different, like a stable, multipolar super intelligence where lots of different people have access to the technology. They can wield it at their pleasure and we're also not in the world where like...</p><p>crazy stuff is happening all the time because suddenly everyone's got these crazy capabilities in their pockets. I'm the president of United States. I'm investing a couple billion dollars, provided Congress goes along with it, into a moonshot for a whole lot of risk-reducing technologies. I'm thinking about hardening the world against the major reasons to centralize. One, because catastrophes are bad and are possible. And two, because I think one of the biggest threats against human liberty here is really centralized AGI projects where the government comes in and says, there's one project. We are producing super intelligence.</p><p>us and I think the best way you get to those kind of things is with some sort of like AI warning shot or the very real threat of AI catastrophe. The downside of this is if you know like or the upside of this is like you you remove the ability for some crazy catastrophes to happen which is good for everyone and you also can pave the way for a safer more like democratic ownership structure and control here.</p><p>Theo Jaffee </p><p>Is there anything else other than this democratic moonshot?</p><p>Luke Drago </p><p>I mean today that's the major thing I've been doing. mean also like a lot of like anti-corruption strengthening I want to do here. If you're going to rely on institutions and you have to in some way to ensure that like AGI goes well, you're going to want to make sure that the people who are in power are constrained by like pretty reasonable forces. You don't want lots of corruption. You don't want people to be able to buy each other off. You want to make sure that people's votes matter. So like you know, there's stuff as simple as like campaign finance reform, strengthening anti-bribery laws. I think Singapore's interesting example here where like</p><p>Singaporean civil servants get paid very handsomely and if they take a dollar from the, you know, someone who's not the government, they go to jail. And I think this is the kind of, when you're dealing with, you know, complete rewrites of social contract, you want to make sure that, you know, benevolent people and people who have good values are in charge. But you also want to make sure that if you roll the dice incorrectly and you get the wrong person, they're constrained by forces that are stronger than just their own will. So that's the kind of thing I'd want to be building up as well.</p><p>Theo Jaffee </p><p>Okay, what do do if you're the CEO of a big AGI lab today right now?</p><p>Luke Drago </p><p>I mean a whole lot, right? Like if this is the problem that I want to solve, maybe it's a bit against my company's interest, but I probably want to be first of all like doing a bunch of interesting research here on the economic impacts. So I think like Claude releasing, kind of, yep, Claude, Anthropic releasing like the Claude index in this seems really good. And I think more lab should be doing this. I'm somewhat skeptical of Anthropics data as the baseline, not because Anthropics data is wrong, but because Claude is like predominantly used. Like it's pro, it's like biggest use case seems to be coding. And so I think it's not necessarily representative. I think if you want to represent that, like really representative data here.</p><p>Theo Jaffee </p><p>Anthropics doing this? Yeah.</p><p>Luke Drago </p><p>you'd want to open AI because everybody uses GPT. Everybody uses chat GPT. My mom knows what chat GPT is. Your friends who don't know anything about AGI know about chat GPT. So that's really who's evidence you're going to want here. You probably also want to be doing some baseline research into what sectors you expect are going to get hit the hardest and be sharing this with policymakers.</p><p>Theo Jaffee </p><p>Is that it?</p><p>Luke Drago </p><p>There's more there. I want to highlight the kind of the top line stuff. I think looking at decentralized platforms, another exciting thing here, like I Prime Intellect just did this like massive decentralized training run. Looking more into like how you can do like model customization ways that are privacy preserving, how you can tap into people's like tacit or implicit knowledge without owning all of their data. These also seem really important.</p><p>Theo Jaffee </p><p>How do you tap into people's tacit knowledge without owning all of their data?</p><p>Luke Drago </p><p>So some interesting stuff going on here. think like as like a kind of a core observation, my expectation is that for the last mile of automation to really know what's going on in the economy, to able to allocate stuff better, you don't just want like clones of the exact same model running around.</p><p>One of the reasons that like markets work so well is because there are lots of different actors who have slightly expertise in like small slices of the world And because of that they can like see things that like a central planner just can't so you'd want to be able to incorporate that information in a meaningful way and I think there are two ways to do it there's one that I advocate against and one that I'm hoping to help contribute to I think that first one is just you like gobble up all the data make carbon copies of a user Don't do this in a way where like you now own the data as the lab and then you create like lots of clones that look just like them they can mimic their behavior and preferences and have access continually</p><p>to the kind of stuff that they're doing. I think if you want to do this in a more privacy and preserving way, there's some interesting papers here on like, like secure training runs for example, secure training, I gotta find the exact terminology from the paper, I don't have it right in my head, I just read this a couple of days ago. But there's like certain kinds of training you can do where like you can train on data that you can't necessarily see. There's a question there about whether or not you can evaluate this, this is like kind of an unsettled question right now.</p><p>So could you evaluate a model that otherwise you don't have access to? And the answer to that's probably no right now. That probably takes some hit for consumer performance. But are there ways in which you can either use it to directly own the models and run them on device, and so therefore you can't see it? Or could you do Apple private cloud compute style solutions where your data is being passed on but in a privacy preserving way? I think I'm much more excited about these latter options than the former ones.</p><p>Theo Jaffee </p><p>Hmm. What do you do today if you're the CEO of a white collar company? I guess like I know sort of tangentially like a lot of people who are, you business owners or executives or whatever of like, you know, sort of provincial companies. Like I live in Florida. I don't live in Silicon Valley. And so like most of these people who are, you know, big business people don't know anything about AGI. They might know like, yeah, Chachi B.T.</p><p>is pretty good, you know, you try it, it seems pretty helpful, it's not gonna take anyone's job right now. So what should these people be doing right now?</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>So it's really hard for me to tell business owners they shouldn't like follow their incentives because like that's the way that the economy works. My expectations of like the era of like mega corporations probably coming to an end because in those corporations you just you just really can't automate a lot of those jobs. I think the way this happens I think we lay out the first part of the series and this idea of pyramid replacement where like major corporations just start by hiring less people and eventually at some point they do layoffs. But this really affects like your entry level employees and analysts first. I do not and I'm not going to give the advice of like</p><p>Just stop hiring people, because I don't think that's good for a whole lot of reasons. But I expect that's what a lot of companies are going to have to do to remain competitive. I don't think this happens overnight. I don't think AGI 2026, zero human employment 2027. There's this gradual diffusion process. It's not like laptops came out the next day, everybody had a laptop. This just takes some time to diffuse in the broader economy. I expect that if you're a company and you want to survive, start using AI as fast as you can.</p><p>Theo Jaffee </p><p>Cowen has said something like, this will take 30 years to diffuse throughout the economy. And I think the epoch AI people think sort of similarly.</p><p>Luke Drago </p><p>That seems plausible to me. It seems completely plausible. I'm probably more in like 10 to 15 years than I am on 30 years, but I do think it's going to take a long time for this to diffuse. I think the iPhone is an interesting example here where like in 2006 or seven the iPhone gets released and by 2016, 2017 you basically have to have an iPhone to compete. In the meantime, Blackberry dies, right? Or you have to have like a smartphone to be able to like be in the modern business environment. Maybe that's a more accurate type of diffusion that you would expect where like something goes from being niche to being mandatory. But iPhones are building on an existing platform of cell phones and those are already pretty much required.</p><p>by that point. So I don't want to say I have the exact model of diffusion. What I will say, a reason to think that it's not going to be overnight, but it's going to be pretty quick, is that of course, like for a moment, ChatGVT was the fastest adopted consumer app ever. I can't remember the exact number, but it had 100 million users pretty quickly. And I think you should probably expect that, like, I Cursor then beat it out, right, if I recall correctly. And Cursor was, of course, just like using that technology for one vertical. I think that's going to happen in a whole lot of industries where pretty much overnight they get changed. I don't think this overnight changes your hiring path.</p><p>But I think it overnight introduces new incentives that get stronger and stronger.</p><p>Theo Jaffee </p><p>Yeah, okay. So what are these bottlenecks to AGI progress? Why do we not have AI 2027? Why do we not have labor getting replaced? Basically immediately after we get AGI. What are these bottlenecks?</p><p>Luke Drago </p><p>So are these bottlenecks like strictly on progress or bottlenecks on diffusion?</p><p>Theo Jaffee </p><p>both.</p><p>Luke Drago </p><p>So first of all, I I have a lot of respect for the AI 2027 team. I've talked to lot of them extensively. think Daniel et al., they're like brilliant thinkers. I probably think that the world has a lot more like physical bottlenecks than I think their team does. So I think, for example, like long horizon planning might be a harder objective to achieve, especially in the last mile of it. I think, for example, we seem to get a lot of progress very quickly on stuff that is obviously right or wrong. Does the code run or not? That's an area where you can get pretty superhuman pretty quickly. And you should expect like RL and self-reporting</p><p>to be like pretty effective tools here. I expect that some of these harder tasks, like let's just take writing for example. I think that today, like models are pretty decent at writing. I do not think that they are like at...</p><p>college-educated human baseline, or maybe like, slightly above that at best. And I think there's not been tons of progress towards this. There's a lot of like, fanfare raised by the OpenAI team when they dropped GPT 4.5, and they kept saying this is like, the best model for writing. They had this big tweet where they put out as an example of the writing quality. And I read the writing.</p><p>And I just wasn't that impressed. I thought it was better writing than I expected from a language model, but was definitely not what I would consider to be like good human writing. It was maybe better than your average college student, I don't know if that's good human writing. I think these tasks that are harder to judge are just harder to train. There aren't right answers. You have to have taste to be able to judge them. And I think like the job of running a company, for example, which requires a lot of success and failure, it's kind of hard to predict exactly how it's going to work. You have to have lot of intuitions that are just hard to build without doing it yourself.</p><p>These are tasks that are very hard to simulate, very hard to prove objectively, and very hard to measure. And I think eventually we'll be able to generalize to those, but I do expect this is a reason to think that AGI is like slightly farther away. Though I think if I had to put like a timeline on it, I think I'd probably say 2030 maybe sooner. So don't think it's that far away. And then I think on the economic diffusion side, like...</p><p>Luke Drago </p><p>court is agreement I have with AI 2027 is like 10 million robots one year after AGI. And I think that they presume, and pretty smartly I would add, like a whole of government response with like every part of government machinery suddenly awakens and tries to create robots. I think even in that world, there are lots of bottlenecks. Like for example, we need a whole lot of rare earth minerals or REMs to do this. And right now China controls 85 % of the refinement process for all REMs. The US doesn't have much domestic capacity online right now. I think it's 85%.</p><p>Theo Jaffee </p><p>Sure, so you might still have your 10 million robots, just they're made in China, not the US. I mean, you already kind of see this.</p><p>Luke Drago </p><p>Well, I think in a world like this, you might get 10 million robots in China, but the US government is not getting the whole of government response that gets them to 10 million a year after AGI.</p><p>Like I do think right now China is going to crush us in the manufacturing race. And I think this is like an existential threat to Western civilization that we just cannot build anything anymore. And I'm pretty concerned about those kind of scenarios where, you know, we end up losing the AGI race, not because we didn't get to the software first, but because we instead, like, completely capitulated our manufacturing capability to a rival or an adversary who now has strong incentives to cut us off to buy themselves time. Look, we've done a whole lot of export controls to them. I'd be pretty shocked if they don't do a lot of export controls to us.</p><p>Theo Jaffee </p><p>Sure, but why should I care if China achieves AGI first?</p><p>Luke Drago </p><p>I think there's a question of what values do you want to underwrite the world and what kind of power do you want to have?</p><p>For the same reason that you should care about still having an economic role in the social contract, you should probably care about your country or a country that are aligned with yours having an increased role in the economy. And in a world where China can build everything and manufacture everything cheaper, I think it's harder and harder for the West to play competitively at the global stage. I think ultimately it does, like, your team winning is actually important. And in particular, your team having a way to win is important.</p><p>Theo Jaffee </p><p>Well seems like China has absorbed a lot of the cultural values that I care about from the West. You know, they got capitalism with them, Deng Xiaoping. DeepSeek was for quite a while and maybe still is the most free and least censored AI model. Like yeah, you can't ask it about 1980, 1990, Ottoman Square. But that was a much narrower category of restrictions than the restrictions that OpenAI had on all their models until very, very recently.</p><p>Luke Drago </p><p>Which ones?</p><p>Luke Drago </p><p>So one, I think it'd be good if we had less content style restrictions. I'm a very strong believer in free speech, and so I think it's important that models can answer questions truthfully and without undue censorship. I think that's different than models providing you with instructions on how to build a weapon. But I think that's what's more important here.</p><p>I want to go zero in on like, you know, Deng Xiaoping brought capitalism to China. is absolutely true that like China is, you know, economically liberalized far more than they did at that point. But the kind of capitalism that China has is one where if the CEO of Alibaba makes a derogatory comment about a Chinese, a part of the Chinese economy, he disappears for a few months. To me, that is not effective capitalism and it's a value that is quite foreign to my own. I would strongly oppose it if it would have occurred in the US or in the West and I strongly oppose it when it happens in China. And I don't, I mean, I strongly expect that a world where like the</p><p>Chinese government gets to write the values of the world is one where that is more common, not less.</p><p>Theo Jaffee </p><p>Okay, that makes sense. What specific tasks do you think humans will remain on, like, in the loop for the longest? Like, you talk about tasks that require taste, that are hard to judge, that require long-form planning. But it seems to me that, like, beyond just that, like, AI seems to be just better at certain tasks than others. Even some tasks that seem rather obvious. Like,</p><p>Luke Drago </p><p>Yeah, we're getting, go ahead, please.</p><p>Theo Jaffee </p><p>I don't know, what is a good example of this? Like, AI can't really play tic-tac-toe still, I believe. Maybe O3 has been able to do this, but...</p><p>Luke Drago </p><p>Yeah, I mean I think jagged progress has been the norm and I think this probably gets more true with reasoning models, not less. Where like, you get really good at stuff that has correct answers and not great at stuff that has incorrect answers. Now I expect this to generalize. think, I think, you know.</p><p>By 2030, I expect this to be like, at least more solved, much more solved than it is today. But I do think these durable skills as it matter right now are like taste, judgment, long horizon planning. I was giving a talk at Georgetown a couple of weeks ago about like how to plan a career in the age of AI. My general advice was if your goal was to like go climb a very large corporate ladder and spend 10 years at McKinsey, become partner there, like at an accelerated timeline, you are just probably going to get automated. The bulk of your job for a long time is going to be tasks that are quite automatable. Whereas I expect that like,</p><p>If you're learning very early how to take risks, how to fail, how to develop that research taste or that sense of taste judgment faster, you'll be more effective than your peers at racing against AI progress.</p><p>Theo Jaffee </p><p>Yeah, I think yesterday Marc Andreessen got clipped on a podcast where he said something like, VC will be one of the most durable careers to AGI and everyone clowned on him for it. But it seems like, know, if anything requires taste and judgment and long horizon planning, it's venture capital, right? Like what more so than that?</p><p>Luke Drago </p><p>I saw that, yeah.</p><p>Luke Drago </p><p>So I don't know if I agree with the claim that venture capital will be the last job. I probably do agree with some version of that claim. It's actually very hard to predict what companies are going to succeed and fail. That kind of pattern matching is quite difficult. Jobs that require that kind of pattern matching that require years of experience to figure it out, those are more likely to be durable than less. The real bad news here is for entry-level roles, for the roles of people who are just coming out of college. I think there's some terrible irony in the fact that like,</p><p>entry level CS majors are just kind of automating themselves here. The last mile of automation is not required to automate the vast majority of coding, to automate the vast majority, especially like entry level coding. But yeah, don't think, for what it's worth, I get why he got Clown Dongs. I think the specific comments sound a whole lot like, but my job will survive. But I think there's something in that comment that I think is true.</p><p>Theo Jaffee </p><p>Hmm. You also talk about AGI society being permanently static as a risk. But why would it? It seems like nature hates stagnation. Especially if there are different people controlling different AGI's in different factions.</p><p>Luke Drago </p><p>I think we've really focused a lot on economic stagnation, right? Like the idea that...</p><p>you as an individual, you're born in Montana and your ability to ever climb that social ladder is just quite small, quite unlikely today, in this world relative to today. I don't think it's true that everybody born everywhere has an equal shot of climbing the social ladder and displacing the current elites of the day. But one of the benefits of our existing system is it's actually just quite easy relative to baseline to do that kind of displacement. And one of the ways that culture moves and culture shifts is through leader displacements. I'm thinking a lot about people talk about the vibe shift in 2024,</p><p>where there was very suddenly a new administration was elected. They hadn't even been sworn in yet, but they were elected. And it felt like overnight, basically every company suddenly switched through the hiring policies. People started saying different words. It felt like there was a massive cultural shift from June 2024 to December 2024, maybe even into January 2025. This is all very much so a byproduct of leaders changing and therefore what seems permissible for the culture also shifting as well. And I think that that's an important part of human social progress.</p><p>know, human social dynamics is that people from the margins can still win and I think the cultures that are the most alive and the most dominant are the ones where like figures like a JD Vance who grows up in like the poorer regions of Appalachia can ascend to the vice presidency. I think it's one of the things that makes the Western democracy and capitalism so strong is you can move from the outside and still win and with that create lots of change and progress. And I think in a world where like your wealth is turned by government, it's just as likely.</p><p>Theo Jaffee </p><p>Okay.</p><p>Theo Jaffee </p><p>So by permanently static AGI society, you mean the humans will be static, but not the AGI's or the people at the top.</p><p>Luke Drago </p><p>Well, I my concern is the humans, right? So yeah, I think that's who I'm referring to here.</p><p>Theo Jaffee </p><p>Yeah, okay.</p><p>You also talk about banning AI systems from owning assets. Let me find the exact words because this is</p><p>Luke Drago </p><p>Yeah, is one of the few times I call for banning anything, because we really try not to rely on regulation as a core centerpiece of what we're doing here.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>policymakers should ban AI systems from owning any assets, serving as a C-suite member of a company, serving on a board of directors, or owning shares. Yeah, so this seems like very specific. I would love more elaboration here. Like, why would you do this? How is this like tenable in the long term when AI becomes more agentic?</p><p>Luke Drago </p><p>Yeah, so we brought the...</p><p>Luke Drago </p><p>So I guess like, you know.</p><p>We really try in the piece to avoid doing specific regulatory call outs. think if we're running around saying we think the underlying economics are more important than the regulatory structures, we probably shouldn't be spending all of our effort doing lots of targeted regulation. I think this is just broadly the wrong way to go about changing social progress. think if new technology introduced, you want to adapt your society around them. I think regulations are oftentimes ways to, instead of adapting, slow down. I'm not anti-regulation by any means. I think sometimes they're quite necessary. But I do think we really try to focus on the</p><p>underlying economics. Now, I think this actually looks a lot more like the underlying economics and regulation, because I think this kind of rule today changes nothing right now. There are no AI systems today that can legally own assets. I think, honestly, it's probably likely that under most existing law, AI systems won't be able to own assets. But I think the reason we include the call out is because by doing this, you're guaranteeing some sort of human role in the organization of the future. And if it is the case that eventually you do delegate this power away,</p><p>I expect evolutionary pressures very quickly to make this a dominant force, where suddenly AI's control, in and of themselves, lots of capital. My expectation is just that humans are more likely to care about humans than non-human entities are. Same reason that we care more about humans, do animals. And so I'd like to preserve as much as possible a role for people in the setting of the direction of the future. And I want AI to be a technology that extends that direction setting, not limits it. And so I think that's one of the areas that we did a pretty targeted policy call out.</p><p>Theo Jaffee </p><p>You also talk about AI both mainly centralizing but also decentralizing power by default. Which do you think will dominate? Because I've heard both. I've heard AI will enable governments to surveil the entire populace, but also AI will enable the populace to build pocket nukes that can be used as a check against the government. So which of these trends do you think will dominate?</p><p>Luke Drago </p><p>Thank</p><p>Luke Drago </p><p>So for what it's worth, think a world where everybody can build a pocket nuke is a pretty bad world.</p><p>I mean, I think one of the areas where things do get destabilized is one where a bunch of people have access to weapons of mass destruction. Now, I think there are other choke points that are really effective here. So it could be that you have a system that can tell you how to build a nuke, but you just can't get uranium. Uranium is pretty hard to get, pretty hard to refine that in your backyard. So think there are reasons to believe that that is harder than it is. Bioweapons are another area where I'm actually pretty concerned. It's why we have a pretty targeted section on reducing bio risk. But our section there is really targeted in other choke points.</p><p>like doing like you know screening KYC on like the biological materials because I think they're just physical world choke points that are more effective and also like reduces power trade-off. Provided that you can do that kind of stuff though I think one of the ways where AI can keep people in power and be like an agency extender as opposed to a limiter is one where like lots of people have access to like very powerful models that are aligned to their values, their goals, their intents as opposed to like a centralized set of goals or a centralized set of intents. And I think if</p><p>believe you kind of buy high X arguments on how knowledge works in society and candidly I do. That there are like parts of economic knowledge kind of scattered throughout everywhere that are somewhat hard to track and are difficult to be legible. Systems that have access to like an individual user set of data there, know, tacit knowledge, might be more effective in aggregate, like organizing an economy and allocating resources than any sort of like top-down centralizer is. I think my concern is that, yeah, sorry, go ahead.</p><p>Theo Jaffee </p><p>Yeah, I for what it's worth, I DeepSeek is probably like the single greatest company for decentralization right now. And they come from China.</p><p>Luke Drago </p><p>It worries me that we don't have a Western alternative that I think is effective. I think it could be Meta, but it seems like Meta, both ideologically isn't that committed and yeah, I don't know what's going on with their AI team. Prime Intellect just did a really interesting decentralized training run. We're waiting for the result of that right now for Intellect 2. And I'm quite excited to see companies like that.</p><p>Theo Jaffee </p><p>that is throwing right now.</p><p>Theo Jaffee </p><p>Hmm. But sorry, what the?</p><p>Luke Drago </p><p>which yeah, I do think like.</p><p>Sorry, I do think there's some concern about if it's DeepSeek. If DeepSeek is the company that underwrites the value substrate of the universe, and everybody's only asking DeepSeek questions, and that's the powerful model, I think I'd much rather that model be made at people who are aligned with my values than not. Because I really would not prefer a model that centers my political opinions. And I agree that there are areas where it's censored less, but I don't think that response to that is then to say, we should give up, throw up our hand, and say, I guess DeepSeek gets to win the race. I think we should be saying there are lessons that we can</p><p>learn from DeepSeek and lessons that we shouldn't learn from DeepSeek.</p><p>Theo Jaffee </p><p>Sure.</p><p>Theo Jaffee </p><p>So what should people be excited for? You talked about this culture of indefinite pessimism that seemed to replace the culture of indefinite optimism of Silicon Valley. But the next few years will, regardless of changes in material conditions, have some very cool things going on. So what should people be excited for?</p><p>Luke Drago </p><p>Yeah. Well, one, one of my big call outs there is because I think a whole lot of people can like see AGI coming like a meteor about to hit them. And their response has been like.</p><p>of mix of we're gonna die, this is gonna be bad, and from there, the response has been let's just do nothing about it, or let's freak out about it. And I think if you can see a meteor coming, but you can see it has lots of benefits, I think your real goal should be to deflect the meteor, and I think there are some obvious downsides that we can see coming. I think we talked about economic downsides, there are risks to, significant risks to, like.</p><p>catastrophes, etc. But your response to that should not be, okay, we have to throw up our hands and run as far away as we can and maybe go live in a cave for a while. Your response should be a call to action to go and solve these problems because this meteor is coming. We are going to achieve AGI. is basically at this point technologically inevitable. We are on the track to do this. I think it's extremely likely we will do it. And so I think hiding in fear is super unlike, is just not a good response. If we can solve these problems, we can solve these challenges. We are talking about living in a world with more abundance than we could ever possibly imagine.</p><p>imagine, where an individual's ability to change their own environment, to change the world around them, to make an impact could be higher than it is today. And I think it's pretty high today relative to baseline, where a lot of the scourges that we have in modern life are things we can send to the ash heap of history, things like some pretty terrible diseases like hunger. I think that is a really exciting world you want to be aiming towards. And I think you need to be sober about the problems that are facing you from getting there. Because I think there are real potholes in the road. I align with Vitalik's vision on this entirely. There are some real problems we're going to have to overcome.</p><p>But I think we can overcome them, and I'm pretty optimistic about humanity and our ability to do that. But it does require us to be sober about what the risks are and what it's going to take to deflect them.</p><p>Theo Jaffee </p><p>So you mentioned you were building a company to address the challenges of the intelligence curse. So could you go into a little more detail about that?</p><p>Luke Drago </p><p>There's not a whole lot I can say right now. I think what I can briefly talk about, and I'll try to be a little careful here, we're pretty excited about this kind of alignment to the user concept. The idea that there's like a set of tasks and knowledge that we can gather that provided we do this in a privacy preserving way, both like creates models that are like.</p><p>more customized to the user and then when plugged into a lot of tools could complete tasks better than an off-the-shelf model could. I'd be pretty surprised if this model is superhuman at coding, but I think it probably will have a better sense of how to do things the user would want it to do, especially if it can do things like tool calling. So I can't say a whole lot right now, but I think if that kind of vision is something that inspires you, we'd love to hear from you.</p><p>Theo Jaffee </p><p>Alright, so yeah, I think we've talked a lot about AI. Let's talk about music a little.</p><p>Luke Drago </p><p>Let's do it, because I remember when I reached out to you, my pitch was AI, but also I think we have like very similar taste in hip-hop.</p><p>Theo Jaffee </p><p>Yeah, this is like weird. I don't know that many other people who like Logic as much as I do. And you said that College Park is in your top 10 albums?</p><p>Luke Drago </p><p>It is, yeah. So I think there's like an arc for Logic, right? There's like old Logic, which is very good. There's whatever happened from 2016 onward, which is, you know, hit or miss, pretty rough. And then there's like 2020 and after. I think everybody's 2016 or 2017, one of the two. And that album's pretty hit or miss. I think like the low point for me is definitely either Confessions of a Dangerous Mind or Supermarket, where it feels, Supermarket in particular, I remember listening to that and just going, no, like, that's the end of that.</p><p>Theo Jaffee </p><p>What year is everybody?</p><p>Theo Jaffee </p><p>Yeah, okay, yeah.</p><p>Theo Jaffee </p><p>Yeah, definitely.</p><p>Luke Drago </p><p>But then he has this resurgence with, like, is it No Pressure and then College Park that I'm just like, I thought they were really excellent albums. think there was the, what was the other, the vinyl, 035 is good, and then was also, I think, Vinyl Days, is that what was called? Yeah, Vinyl Days. I thought Vinyl Days was like some really interesting production. I think Logic does really well over like these like soulful beats or these like...</p><p>Theo Jaffee </p><p>No pressure.</p><p>Theo Jaffee </p><p>Ultra 85.</p><p>Theo Jaffee </p><p>of Vinyl Days.</p><p>Theo Jaffee </p><p>Yeah</p><p>Luke Drago </p><p>very like old school 80s, 90s inspired beats. And I know his most recent project is coming out is neither of those things. And it's quite like trap again. And it's just like, I don't think he can do trap very well. No offense if Logic has tuned into this AI podcast, but I think, you know, do more boom bap. That is definitely where he shines.</p><p>Theo Jaffee </p><p>Yeah, I agree.</p><p>Theo Jaffee </p><p>mixtape logic versus album logic and like he even talks about this I think in the intro to Bobby Tarantino 2 or 3 he has like a Rick and Morty skit where he has Rick saying like yeah literally where he has Rick saying like you know I don't want to listen to this like you know introspective rap with a message I just want to you know rap about like titties and ass and stuff and like yeah like when logic does this it's usually not that good</p><p>Luke Drago </p><p>Average logic skit.</p><p>Theo Jaffee </p><p>I think my favorite Logic mixtape after the original Young Sinatra's was probably Inglorious Basterds, like the very new one, because it was closer to an album than like the Bobby Tarantino's.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>I think this makes sense. think my favorite Logic project, I'm standing by College Park, I think it's got one weakness, it's that oftentimes it's got these skits that are two minutes long in the middle of a song. So Playwright, for example, has this very long skit. I think Playwright is an exceptionally well done song. Its beat is really catchy, Logic flows over it very well, the features are really excellent, and then it's got this minute and a half long skit, and because of that skit, I'm not playlisted at all. If I just play this, it's just like...</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>That's a good one.</p><p>Luke Drago </p><p>I think he's ordering a burger in the skit and she's like, what are we doing here? Like old school would like take his skits and make them separate tracks. And I think, you know, I would have really preferred a version of College Park where I could play some of these songs in the skits. But I think like Lightsabers, Gaithersburg Freestyle, Self-Medication, Shimi are just like really like top tier logic tracks. I can't think of really many tracks he's made that like outpaced those for me. Maybe like OG Under Pressure stuff.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>I Under Pressure is better. Yeah, but College Park in top 10 is quite high praise, think. Let's see. Tracklist is...</p><p>Luke Drago </p><p>Look, I'm happy to make a chair. I'm making contrarian takes an AI might as well make a music too, you know?</p><p>Theo Jaffee </p><p>Yeah, yeah. Cruisin' Through the Universe, like, I thought that one was great. It's so like, it feels like you're in space with the production. Like, six, six is great. Six on the beat is like God on the beat.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Every time. Here's like a, he had a mix tape came out a couple of weeks ago.</p><p>Theo Jaffee </p><p>Who's six? Okay, I have to check that out.</p><p>Luke Drago </p><p>Yeah, Six did it. It's like, he's not rapping on it, but it's like he's producing the whole thing. And there's a logic track called WMD on it, which I'm like pretty impressed with.</p><p>Theo Jaffee </p><p>Hmm. And then like the...</p><p>Luke Drago </p><p>By just fixing logic, they can do stuff together that's really impressive.</p><p>Theo Jaffee </p><p>Yeah, the end of Cruisin' Through the Universe where you get into the first skit and you're like, wait a minute, why is Logic in Big Lembo's basement? And then you're like, that's what College Park means. He's talking about his time in College Park when he was 20, sleeping on Lembo's couch before he blew up. This sort of Logic adventure is good.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah, and I think that's like an, yeah, I mean think when he's talking about like, logic is good when he's talking about literally being in space, or like his early life, and I think that's where like he just really shines in ways that are quite impressive. And I think everybody has this like much more confusing narrative where it's like.</p><p>There's this Neil deGrasse Tyson side plot that's going on where it's like, actually, you are everyone that's ever lived, which I believe originated from like a Twint, some Facebook meme. You know what talking about? Yeah, the short story in the short story became a Facebook meme, and I'm not sure how it got to logic, but it's like, why are we doing the short story that I've already heard about? And also like very strong political commentary, and also at the end, some space stuff. It's like there are three different storylines happening here. I'm kind of confused as to why they're happening. I think they meld somewhere.</p><p>Theo Jaffee </p><p>No, was a short story. Called The Egg.</p><p>Theo Jaffee </p><p>Well, the space stuff is kind of just a through line between different albums. And like the reason for the short story is kind of like, you know, you are everyone, right? So you have to, you know, love everyone and, you know, peace and love and equality and stuff. Like that's, yeah.</p><p>Luke Drago </p><p>I think it's a...</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah, no, I think the reasons there, I just think it's not executed as well as it could be. And I think some of the songs, I, there's a track on everybody that I think, I remember being like, what are we doing here? If I can find it, it's, it's Take It Back, it's Take It Back. Yeah, it's Take It Back.</p><p>Theo Jaffee </p><p>Does it take it back? Yeah, yeah, Yeah.</p><p>Luke Drago </p><p>The beat's excellent, right? It's just like, you know, like kind of three minutes of logic talking for a while. God, I think Killing Spree still makes it in playlist. I think I can just be like, actually, this is fine. I can look past it. The production on everybody is maybe his best production, which is why it's somewhat unfortunate that the lyrics are just not there.</p><p>Theo Jaffee </p><p>killing spree too.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>Yeah, but like I think everybody like the beginning of the album and the end of the album are really excellent Like the beginning of the album you have hallelujah and everybody</p><p>Luke Drago </p><p>Yeah.</p><p>Yeah, yeah. Confess too. Confess is slept on. It shouldn't be.</p><p>Theo Jaffee </p><p>Yeah, and then like towards the end of the album you have 1-800 which I'm sorry. It is a good song I don't care if it's his most popular song ever. It's definitely not his best yeah That one's a little corny, but like it's really good. I think anxiety was great. Black Spider-Man was great AfricAryaN is excellent like it ends so well</p><p>Luke Drago </p><p>Who came here late?</p><p>Luke Drago </p><p>I think it is a good sound.</p><p>Luke Drago </p><p>So I think anxiety's not that good, but Black Spider-Man I think is maybe a top five logic track. I do this like a...</p><p>Every season, like spring, winter, fall, summer, I make a new playlist. And my only rule for the playlist is it's to be like 30 songs tops, which I add throughout the season, and no repeats between previous playlists of the same theme. So like if I did something in 2022 winter, can't come back in 2024 summer. And my like 2025 spring playlist had Black Spider-Man in it. And I just, I remembered how good that song was in the last couple of weeks, because it is just absolutely incredible from the production, the features.</p><p>I don't know, think it is like, if every song on everybody had like that framing of the mass stage and that level of caliber, I think everybody would have been a smash success.</p><p>Theo Jaffee </p><p>Yeah, I think you're sleeping on Ultra 85 too because Ultra 85 is like, yeah, I think it's maybe his best album maybe. It way exceeded my expectations for what it would be based on like the promotional singles and based on his other projects from around that time.</p><p>Luke Drago </p><p>All 35 is good.</p><p>Luke Drago </p><p>What was the promo single? was 44ever right? Of fear.</p><p>Theo Jaffee </p><p>Fear. Fear and then 44ever. 44 ever was good. Fear was less good. Fear was alright.</p><p>Luke Drago </p><p>I will say the opening two tracks on Ultra 85, so Paul Rodriguez and Mission Control are just also fantastic. Mission Control in particular, think the production is stellar. Logic, when he really locks in, just write a beat in a way that think basically nobody else can. Like a couple of other artists have said they do this at their peak, like one in four beats, he just really finds a flow that's pretty infectious. And I think this track just like pretty much exemplifies that.</p><p>Theo Jaffee </p><p>Paul Rodriguez was just incredible. I remember exactly where I was the first time I saw, my god, Logic dropped. And I was in Japan with my cousins on the train going back from Osaka where we had day tripped back to Kyoto where we were staying. And I was on this train, this packed Japanese train. I was just smiling on the train listening to this. I was like, my god, this album is going to be actually really good.</p><p>Luke Drago </p><p>Mm-hmm.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>So funnily enough, I had like a similar reaction. I remember exactly where I was when I hit the play button on Paul Rodriguez. was, I live in Limehouse in London and I had just gotten off the tube, was walking down the steps of the tube station and like hit play on it and we're just walking home. I walk home, like, I think just about exactly nine minutes. So it's the length of the full song. And I remember just listening to it being like, whoa.</p><p>Like, where has this been? I don't think the album lived up that in every part. I think some tracks did. Favela, thought was really excellent. Interstellar was really good. I, like the one mistake of Interstellar.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>Interstellar is great. My absolute favorite song is Once Upon a in Hollywood.</p><p>Luke Drago </p><p>It's good, it's good. I don't think it's my favorite. I think my favorite song in the album is probably Paul Rodriguez or Mission Control. And I think the only mistake for the album for me is that by starting with those two songs, I feel like every other song after that was like a bit of a letdown relative to where the expectations were set. Where I think my favorite song in College Park is Lightsaber.</p><p>probably seconded by, let me put my actually good logic playlist real quick. yeah, it's Lightsaber probably followed by Shimmy and like those tracks are pretty far apart so I the album kind of like keeps rewarding you as you go through it.</p><p>Theo Jaffee </p><p>My favorite College Park songs are Self-Medication, Paradise 2, Wake Up, and Lightsabers, I think.</p><p>Luke Drago </p><p>Wake Up's good. Lightsabers is, think, actually the reason that College Park makes top 10 for me, because I think Lightsabers is maybe in my top 10 all time. I've got this top 50 all time songs, and Lightsabers is right on there.</p><p>The initial beat's excellent. The beat switch is just completely surprising, but it's still like, I think a lot of beat switches aren't thematically relevant. Like, sicko mode three beat switches do not sound anything like each other and could just be three separate songs you like strung together. Whereas, Lightsaber is I think better because of the beat switch. I think they're like very clearly related and they sound really good together and I think the song is better because they have both parts in it. I don't know, think, especially when like, is it C.Castro comes in on Lightsabers at the end, it delivers like just an incredible verse there too. I don't know,</p><p>Lightsabers are just. That's one hell of a song.</p><p>Theo Jaffee </p><p>Yeah. Like, Castro's verse made me think like, why isn't Castro and more logic stuff? I know they had like a falling out and now they're friends again. I hope Castro comes back more. Yeah.</p><p>Luke Drago </p><p>Well had that mixtape recently, right? I think a couple of, like the EP. What was on that? I really... Whose is?</p><p>Theo Jaffee </p><p>Castor's voice is better than Logic's. Castor's voice is better than Logic's.</p><p>Luke Drago </p><p>it's a good force. I think Game 6, I think, is on that EP. And I think Game 6 is a Sleptone song. I the beat's good, I Halfbreed kills it, Cedar Castro kills it, like everybody on there does a good job.</p><p>Theo Jaffee </p><p>Yeah, you really know Logic. I think the Seth MacFarlane feature on Self-Medication is like, I was so not expecting that when I listened to the album. I was like, for a sec, I was like, is this Frank Sinatra? Who is this? my God, looking at the credits, it's Seth MacFarlane, wow. Yeah. He can really, really sing. He was like trained to be like a singer by like, I forgot who, but like some celebrity that had something to do with Sinatra.</p><p>Luke Drago </p><p>It was so good.</p><p>Luke Drago </p><p>This is a family guy guy, yeah. I was pretty floored by that.</p><p>Luke Drago </p><p>Yeah, I was pretty floored by that. I think I showed it to a couple friends. I'm like, you should hear this, because you're not going to believe who it is. This is like the voice of Peter Griffin and Stewie Griffin. I think he's also Stewie. Yeah, and all these other characters that you've grown up with is now singing some crazy Frank Sinatra. I don't know. I was pretty impressed with that. What was your least favorite Logic song or album? And why is it Confessions of a Dangerous Mind?</p><p>Theo Jaffee </p><p>and Brian.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Theo Jaffee </p><p>My least favorite song or album?</p><p>Luke Drago </p><p>Yeah, either or. Or both.</p><p>Theo Jaffee </p><p>Okay, my least favorite album was Supermarket, but my least favorite actual album was probably, like sadly, Confessions of a Dangerous Mind. Even though, like, I didn't think that it was that bad. Like, there were some songs that I thought were good. Homicide was great. Yeah.</p><p>Luke Drago </p><p>Homicide is pretty good. Keanu Reeves actually, like the lead singles weren't bad, I think that was the problem. The lead singles were fine. And then I remember the first time I heard Clickbait. That was a tragic day for me. I'd stayed up all night for the drop. I must have been in high school.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>and it dropped and I remember hearing it going, my goat is washed. Like this is it. This is, like we can forgive everybody but I don't think you come back from this. I think he has come back from it but I think he probably lost most of his fan base after like the back to back code supermarket drops.</p><p>Theo Jaffee </p><p>Yeah, like this is also like right when I got into Logic was basically exactly when Kodam dropped But I kind of liked it at the time Maybe I just like I was always built to like Logic So I liked even this and then when I discovered like Under Pressure, Incredible True Story I really really liked those and then when I discovered, you know, the mixtapes I like those too. I think like the song Confessions of a Dangerous Mind</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>And also the song Lost in Translation are pretty good. They're not that bad. A lot of the other ones are... Yeah, wannabe, clickbait, like... Icy, those are all crap.</p><p>Luke Drago </p><p>I think that I did bad.</p><p>Luke Drago </p><p>I think the problem is like, I thought ICY was, I actually kinda like ICY, but I think the problem with Kodam is like the lows are just so astronomically low. Where it's like, know, the average song is actually fine. It's not anything special, but it's fine. But man, those misses are like dramatic and bad.</p><p>Theo Jaffee </p><p>What do you think those misses are other than?</p><p>Luke Drago </p><p>clickbait's the one where I like, I think I turned it off for a while, I like, this cannot be happening.</p><p>Theo Jaffee </p><p>How does that one go again? Hold on, clickbait lyrics. you are amazing, you are amazing, you are amazing. Yeah, okay. That was so...</p><p>Luke Drago </p><p>Yeah, yeah, yeah, yeah. There's some other lyrics in Clickbait that I think are maybe more shocking. Is this the one that I'm thinking of? It's got like the...</p><p>Theo Jaffee </p><p>RIP Lil Peep, let that young man sleep, let that young man death teach the youth the streets to beat addiction.</p><p>Luke Drago </p><p>Yeah, it's, there's some other ones there. don't know, that was I think, yeah, we'll leave that off the AI podcast. But it's just like, wow, this is bad. But yeah, I think I have hope. The current album is about to come out. I know that Winnie's working on, I've heard the singles.</p><p>Theo Jaffee </p><p>yeah.</p><p>Some more pornographic ones.</p><p>Theo Jaffee </p><p>haha</p><p>Yeah.</p><p>Luke Drago </p><p>I'm a little worried they're sounding kind of like Kodam, like very trap-coated, almost like a little Carti-ish, and I don't think Logic can do Carti. I don't think Carti can do Carti, so I'm not sure if Logic can do Carti. I'm a little worried about what's gonna happen there.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Logic's had like five final albums too.</p><p>Luke Drago </p><p>well hey this is final out you know this week next week we knew final out the time you've got your party guy curious</p><p>Theo Jaffee </p><p>Yeah. Yeah, Sort of. did, yeah, I'm not like super, super into Carti. I definitely like him more now than I used to. Like, I listen to the entirety of IAM music. I think maybe it was sort of like colored by the fact that I was in like a very stressed mindset because I was on this like super delayed flight and whatnot. But that might have even made it an even better listening experience because it's so like intense and violent of an album.</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>I did too.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>Like, I clicked play on pop out, the first song, and let it play for like five seconds. It's like grinding metal. And I was like, wow, yeah, this really sets a tone for the rest of the album.</p><p>Luke Drago </p><p>So I think I should like music, because I like Yeezus, and I think they're aesthetically similar in a couple places. I think this is also an album where I think my default pretty not in Carti's target. I liked parts a whole lot of Red, but really I ended up liking the production when I liked Carti. I think it Mojo Jojo for me. was like, oh, I just don't like this. And it's a shame, because I should like the production style. I like Kendrick a whole lot.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>And I think I heard the first set of ad libs on Mojo Jojo, I like, never mind, let's never do this again. Let's never have, I think there's like three Kendrick Carti collabs in this album, if I remember correctly.</p><p>Theo Jaffee </p><p>So do I.</p><p>Theo Jaffee </p><p>The Kendrick hate over the last like, I know, year or so has really ramped up like to a level I've never seen it before.</p><p>Luke Drago </p><p>I think if any popular artist is just doing, like at this level of popularity, think Kendrick's currently not exactly but pretty close to where Taylor Swift was in like summer of 23, where she was just the most famous person alive. I think he's not exactly there, but he's like pretty close. And I think you saw a lot of Taylor Swift hate on there too. And I don't wanna, I'm not making a specific comment on Taylor Swift, who I think has like a lot of music that I like and some that I don't like.</p><p>Theo Jaffee </p><p>I agree.</p><p>Luke Drago </p><p>But I think that when you get that big, there's some level of envy or jealousy as well. And also all of your failures are bigger. So I think Moto Giroda was a bad song. I don't think this diminishes Tim Imp Butterfly in any way, shape, form. But I think oftentimes it's the most recent track is the one that changes all expectations. And the my goat is washed mentality just gets bigger and bigger. I don't know, man. He's selling out whole lot of stadiums. I think Kendrick's doing pretty well. I think he's doing all right. think a lot of the hate's just now he's popular, so it's cool to hate him.</p><p>One thing that fascinated me about bands as a bit of a sidetrack is just like how important like the lead driving force tends to be. I know that like Linkin Park for example just recently swapped out there like obviously like Chester Bennington died in Armitage and died.</p><p>2017, just a monumental force of music. It's hard to imagine a Linkin Park without him. But they have, as of last year, a new lead singer. She's really excellent. The Emptiness Machine was for a minute up in my top songs, I think early 2025. I don't know. think the band is definitely different. There is a pre post moment there. And I don't think it's ever going to be the same.</p><p>But I think you can still keep the driving force and the driving memories alive even as people fall in and fall out.</p><p>Theo Jaffee </p><p>Yeah, Pink Floyd did this. Like after Syd Barrett left, they had many of their great hits. I'm not sure if it was before or after Dark Side of the Moon, but Wish You Were Here was actually an album written for Syd Barrett, to Syd Barrett. It was wishing he was here. Shine On You Crazy Diamond was like, same thing. And then they made The Wall and Animals and all that was after him. So bands can survive without their</p><p>Luke Drago </p><p>Yeah.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>Top guys.</p><p>Luke Drago </p><p>I wish I knew more about Pink Floyd. It's been on my list for a long time to really delve into both the music and the history, because obviously it's a band that's just been a monumental force in music and cultural history. It's a weird blind spot for me where I just never really sat down and just listened to everything. I don't know how I managed to do that, gone this long without... I think I've listened to Dark Side of the Moon all the way through and that's it. I'm pretty surprised that's all I've really encountered from Pink Floyd.</p><p>Theo Jaffee </p><p>Yeah, they're good.</p><p>Theo Jaffee </p><p>I like Wish You Were Here better.</p><p>Luke Drago </p><p>I will add that to the list of things you need to listen to.</p><p>Theo Jaffee </p><p>Cool. And then like, I guess the opposite of Pink Floyd was like, know, bands consisting of like lots and lots of people who have sort of like broken up and gotten back together and whatnot like many, many times and come out with many, many albums is King Crimson. Who their very first album in the quarter, the Crimson King was just like a peak, like top 10 album, I think top five, maybe it was just such a good album.</p><p>and then like</p><p>Luke Drago </p><p>What's the album? I don't think I've heard any King Crimson. What's the album? In the Court of the Crimson King. I am adding this to my... I'm like, occasionally I'm shocked where I'd find like a crazy like, adding it now. Yeah, was like a crazy deficit. I'm gonna have to listen to this.</p><p>Theo Jaffee </p><p>in the court of the Crimson King. It's like very, very famous. Yeah.</p><p>It's like a screaming guy.</p><p>Theo Jaffee </p><p>Yeah, and then like the band broke up and got back together with different people and broke up again and like nothing they've ever made after this has been anywhere near as good and it's been like 60 years now and so it's like yeah I wonder like what is it like for musicians who you know release some amazing masterpiece and then just can never replicate it ever.</p><p>Luke Drago </p><p>Well, I wonder, I have a similar-ish thought, and it's not exactly one-to-one with Gambino, where like, Chaudhish Gambino has released, like, has done a bunch of stuff. I mean, he has been a comedian, an actor, a musician, but even as a musician, he's been a rapper and a singer, and it's just very hard to figure out, exactly what he isn't good at, but I also think this, really changes album to album. So, like, I think, you know, older Gambino, obviously, like, stuff like, you know, like, stuff like, what is it?</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Luke Drago </p><p>Stuff like camp is just very different than like...</p><p>Theo Jaffee </p><p>Redbone?</p><p>Luke Drago </p><p>very different because of the internet, very different than Kauai, very different than Awaken My Love, and then my personal favorite, can't beat an album, which oftentimes I think is a bit of a hot take, Atavista. His one he released in 2020, and then everybody forgot about it because he released it on, I think March 15th, 2020, and most of the lockdowns are between March 14th 16th, so just really awful timing, so he re-released it. Also he released it with no title, just like it was labeled 3.15.2020, and all the songs were just like the timestamp they were within the chronology of the album.</p><p>Theo Jaffee </p><p>bad timing.</p><p>Theo Jaffee </p><p>Yeah, bro is not Kendrick.</p><p>Luke Drago </p><p>re-released it in 2024. Bro is not Kendrick. But he re-released it in 2024 with new titles and updated mixing and mastering.</p><p>And I think that album is stunning all the way through and very different from his most recent album, which I think is supposed to be his final album, but what does that mean? Maybe the next one's under Donald Glover's name. I think it's the kind of thing where it's almost the opposite, where he can continually make very different hits. And the downside of this is that because they are so different every time, if you like a child's give me an album, you're like, I wish he did more stuff like this. You're just not gonna get that. He's going to just make different the next time he's on the mic.</p><p>Theo Jaffee </p><p>And then Daft Punk, you said Random Access Memories is also in your top 10, which, excellent choice. Personally, think Discovery is just slightly better, but Random Access Memories, like both of them are easily 10 out of 10 albums for me.</p><p>Luke Drago </p><p>an unbelievable album.</p><p>Luke Drago </p><p>reasonable.</p><p>Luke Drago </p><p>What is your favorite song on RAM? I'm curious.</p><p>Theo Jaffee </p><p>My god I Think like if I had to pick Favorite song like yeah, I know I'm allowed I'm allowed to pick whatever favorite I want I would pick get lucky, but I think yeah, I think like Fragments of time would also be way up there Within is great motherboard is like really really underrated</p><p>Luke Drago </p><p>You're right.</p><p>Theo Jaffee </p><p>Mother word is completely instrumental, isn't it? Yeah.</p><p>Luke Drago </p><p>I think it is, yeah.</p><p>Theo Jaffee </p><p>Yeah, like everybody who has sort of listened to little bit of Daft Punk, you know, they've heard Get Lucky, they've heard Starboy by The Weeknd, they've heard Harder But... Yeah, like, you have to go listen to Motherboard. This is like, know, deep cut. Daft Punk just doing incredibly well. Instant Crush is also fantastic.</p><p>Luke Drago </p><p>They've harder, better, faster, stronger.</p><p>Luke Drago </p><p>If you want a deep cut, my favorite song in album, by far, Touch, featuring Paul Williams. It is eight minutes long. The first two minutes are like this bizarre intro where like just noise is happening and the robots are like whispering and stuff. And then it becomes a show tune.</p><p>Theo Jaffee </p><p>Mm-hmm. Reasonable take.</p><p>Luke Drago </p><p>which ends up becoming like a ballad and canonically the song is about like a robot feeling the sensation of touch for the first time not knowing what to do about it and like being completely floored by this like very human sensation. A buddy of mine and I, two of my buddies and I were in like Italy, we flew into Venice and we're like driving across the countryside and rented a car. We blasted this song at full volume and it was just like, what a surreal experience.</p><p>Theo Jaffee </p><p>Yeah, that's gotta be amazing.</p><p>Luke Drago </p><p>Yeah, it was good fun. strongly recommend blasting anything at full volume driving through like the Italian countryside. It's not a bad place to do it.</p><p>Theo Jaffee </p><p>Yeah, I can't think of any songs on this album that aren't like great. Maybe doing it right wasn't great, but it was still like quite good. Yeah, let's see. Give Life Back to Music, great. Game of Love, great. Giorgio by Marauder. That one's one of my favorites. That one's so good. And it's like, yeah.</p><p>Luke Drago </p><p>It goes on for a while, yeah, it goes on.</p><p>Luke Drago </p><p>Great. Georgia by mortar, great.</p><p>And also nine minutes long. I think very few artists can do the like post five minutes long and still have it be like relevant and good. And I think Daft Punk can do it consistently, which speaks a lot to like their ability to keep things sonically interesting and just, you know, say I have a through line that survives throughout nine minutes of music.</p><p>Theo Jaffee </p><p>huh.</p><p>Theo Jaffee </p><p>Within, fantastic. Instant Crush, fantastic. Lose Yourself to Dance, great. Touch, no, you've already talked about how amazing. Yeah. Get Lucky, just such an amazing good song. Beyond, good but not great. Yeah. Motherboard, great, fantastic. Love that one. Fragments of Time, fantastic. Doin' It Right, good, good. Contact was...</p><p>Luke Drago </p><p>Fantastic.</p><p>Luke Drago </p><p>Yeah, I spent enough time on touch. Unreal.</p><p>Luke Drago </p><p>It's fine. Yeah.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>Very good.</p><p>Luke Drago </p><p>Yeah, I don't know. I think it's a hard album on the top. And it's definitely the album I played the most by Daft Punk. I think going in, as your final album, deciding we're doing live instrumentation now is a pretty crazy move as a send-off. I wouldn't expect a band to survive this. They survived it with no problems. I think they are better because of it. I think the album is better because it made that bold choice.</p><p>Theo Jaffee </p><p>Theo Jaffee </p><p>Sure. Yeah, I don't know because Discovery was just so good and like it's really hard for me to pick between Discovery and Ram because they're so different and yet both so like excellent. Like Discovery, I think face to face is like maybe one of the best like examples of sampling ever in like music history. It's just sampled so incredibly well. The entire song minus the drums and the vocals is just sampled from other songs.</p><p>Luke Drago </p><p>Yeah.</p><p>Theo Jaffee </p><p>And it's so good. Too long is 10 minutes long. yet, like you mentioned, Daft Punk exceeds five minutes consistently. And it's great. Voyager is just like, my god, that song makes me feel like I'm floating through space. Yeah, there's just so much excellence on Discovery.</p><p>Luke Drago </p><p>Mm-hmm.</p><p>Luke Drago </p><p>Yep.</p><p>Luke Drago </p><p>I think the brilliance of artists like Daft Punk is they can do very different things, and you might want 100 other cuts that sound like that, but they've consistently hit 10 at whatever it is they're trying to do. And so means you get this discography of music that is just diverse and interesting, that it always sounds like them, but what they're trying to sound like is different every time. I think like...</p><p>Theo Jaffee </p><p>And then, yay.</p><p>Luke Drago </p><p>I'm trying to think of other artists that do this, think, well. I can think of artists that have tried to do this, like, yeah, Kanye's done this pretty well. Miley Cyrus, Sleeper Cut here, has also done this pretty well. Her most recent albums have all been very different. Like, Endless Summer Vacation is quite groovy. But then Plastic Hearts from 2020 is this pretty grimy rock album. And she got really into this, I mean, it's got a cover of Zombie on it.</p><p>Theo Jaffee </p><p>Kanye.</p><p>Theo Jaffee </p><p>Interesting.</p><p>Luke Drago </p><p>It's also excellent. I think she's pretty consistently, if she wants to achieve a musical target, she will smash it out of the park, even if it's quite different. I think the opposite of this is Drake, who think is very good at making a specific kind of hit. And even when he gestures out of that comfort zone, I think he...</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>Honestly, nevermind. I'm gonna defend it like a bit of a slept on album. But I think it's very cautious. And I think you can tell his heart's not all the way into it. And it's so not into it that at the end he throws in Jimmy Cook's. like, hey, if you didn't like that album, don't worry, here's Jimmy Cook's. And I think there's no boldness in...</p><p>almost doing the thing. I appreciate he went out of his comfort zone on this, but I think had he actually committed all the way through. If Jimmy Cooks is a single somewhere else, another album, had he really said, I'm gonna do this very different thing and I'm gonna excel at it, I think the album would have been better for it. think the energy would have been better as well.</p><p>Theo Jaffee </p><p>Yeah, I agree. So like what other musicians are in like your top ten?</p><p>Luke Drago </p><p>Man, a top 10 is always a hard one. What their albums?</p><p>Theo Jaffee </p><p>Albums, I guess you said yeah college college park uses and ram are all in your top ten Do you have like a written top ten or is it just kind of?</p><p>Luke Drago </p><p>think I do. Let me find this real quick. I think as a matter of fact, I do. Because I think it's hard to say something in your top 10 and then not talk about your top 10. Other albums here. Let me just pull up my list. What's in yours while I find this? Because I think I go off top of my head, but I really want to make sure I'm being honest here. There.</p><p>Theo Jaffee </p><p>I have not written a top ten. And if I tried to say one off the dome, wouldn't be very good. I will maybe write one in the future.</p><p>Luke Drago </p><p>Okay, other albums I know makes this list for me. Daytona by Pusha T. I think it's excellent all the way through. It's from like the soft surgical summer sessions. Standout tracks from that. I mean, basically it's a 30 minute album, so all of it. You've never heard Daytona. The games we play.</p><p>Theo Jaffee </p><p>Hmm. I've never heard of it.</p><p>Nope. I'll have to add it to my list.</p><p>Luke Drago </p><p>is probably like the peak of this album. The games you play in infrared, I think are like the two stand-ups, but it's a seven-track album. It's from the same session that produced like, Tiana Taylor's breakout album, and also like, same session that produced Kitzy, Ghost, and Ye. Like just like, there was a summer where like seven or eight albums like this just come out. I think what ended up becoming Atavista is on this list for me as well.</p><p>I can never decide if it's Flower Boy or Igor that's on this list. It's gotta be one of the, I can't pick both. I think that'd be ridiculous. But it's one or the other that's on this list for me.</p><p>Theo Jaffee </p><p>Yeah, I love Tyler.</p><p>Luke Drago </p><p>It's hard not to love Tyler. I bought Chromakopia merch and unfortunately the hoodie is Minecraft green, so I definitely can't wear it. I look like a creeper walker, like the physical, like the Minecraft creeper. It's like very bright green with black text. But it's a tour I probably should get tickets for. I don't think it's come to London yet, I don't think.</p><p>Theo Jaffee </p><p>Haha, yeah.</p><p>Theo Jaffee </p><p>Yeah, like, hmm. I think I like Igor slightly more. Like, the opening two tracks on Igor might be like one of the strongest, like, opening, track, like, lineups of any album I've listened to. I think it's, yeah, Igor's theme and then Earthquake. Like, Igor's theme is so, strong and leads well, like, so well into Earthquake, which is, like, the biggest hit off of Igor. let's see. Igor tracklist.</p><p>Luke Drago </p><p>Why?</p><p>Luke Drago </p><p>deal.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Gone gone slash. Thank you such a good song six minutes and 15 seconds long</p><p>New Magic Wand. There's just so many good songs on here. I think Flower Boy... my god, it's really good, but it's not quite as good. See You Again is maybe Tyler's best song.</p><p>Luke Drago </p><p>Probably, I think it's such a standout win.</p><p>Theo Jaffee </p><p>Yeah, and Kali Uchisana was great.</p><p>Luke Drago </p><p>See you again as what else.</p><p>See You Again is excellent. The intro, This Flower Blooms is really, really stellar. Forward and Where This Flower Blooms, both are really stellar. I like, I oftentimes like Jaden Smith, so Pothole's really good for me. I think Jaden can be little too self-indulgent sometimes, but I think his production value's always quite good and his flow is quite good. Let think, other top albums for me that aren't rap. Rogue Cut, Chroma Copies great. Rogue Cut for me, A Fever You Can't Sweat Out by Panic! At The Disco, it's their debut album. It's got...</p><p>Theo Jaffee </p><p>forward.</p><p>Theo Jaffee </p><p>I Chromakopia.</p><p>Luke Drago </p><p>I like Sin... What's the song? Sin's Not Tragedies? Yeah, yeah, It's got Kemosada on it. Build God Then We'll Talk, I think it's fantastic. Which one? I, you know, personally I'm not working on that. What can I say? I think I'm consistently confused as to why a lot of my friends are trying to build God and put him in charge.</p><p>Theo Jaffee </p><p>Sin's not tragic. Yeah, yeah, yeah.</p><p>Theo Jaffee </p><p>You like that song, I bet. build God, then we'll talk. Building God? Yeah.</p><p>Luke Drago </p><p>But especially my atheist friend. I'm a little confused by that sometimes, but that's okay.</p><p>Theo Jaffee </p><p>How is that confusing?</p><p>Luke Drago </p><p>I think it's interesting, see I think it's interesting because I think I was on stage at some event a couple weeks ago and I like, it was an AI question and I brought in Augustine and Aquinas and I think it's confused a lot of people on stage. And I think my general take is that like there's a general pervasive notion in the AI community that...</p><p>Theo Jaffee </p><p>It would be more confusing if your religious friends were trying to build gods.</p><p>Luke Drago </p><p>you know, if you can build superintelligence, you just have to put it in charge and that you'd be like somewhat stupid to not listen to superintelligence and to let it dictate your life. And I think funnily enough, like, religious had to grapple with this question for a very long time of like, why, if God, then why free will? Why would you enable like your ability to make incorrect decisions? Augustine has a very long defense of this that I think is not exactly relevant, although I think parts of it are. But I think it's like, there are lots of thinkers who grappled a lot with the question of omnibenevolent being that still allows you to make choices in what</p><p>inherent value of those choices are. And I think sometimes those conversations are actually more relevant than we think for kind of what we're building right now. I think that it's interesting to me that like, you the late Pope Francis had actually a lot of work he did on AI. His message on like 2024 World Peace Day is I think one of the better pieces of AI ethics work that's been produced in a field that oftentimes is like mired by infighting, given that he's like the Pope, he doesn't have to care about the infighting, he just chooses not to. And so I think sometimes like,</p><p>If you're gonna be building omnibenevolent beings, if that is your goal, you should probably look to at least the thinkers who spent a lot of time, who have also grappled with the question of an omnibenevolent being. And I think sometimes that doesn't happen in this space. Robin-Himes looks as like a field that isn't relevant. Yeah, hard pivot, by the way, too. I think that just, but thank you Panic! at the Disco for bringing us here.</p><p>Theo Jaffee </p><p>This is an underrated take.</p><p>Theo Jaffee </p><p>Yeah.</p><p>True. I think most of my favorite Panic! at the Disco songs are not on this album. I think the only... Okay, favorite Panic! at the Disco songs... I'm trying to remember which of these songs are Panic! the Disco.</p><p>Luke Drago </p><p>What are your favorite ones?</p><p>Luke Drago </p><p>I also wonder how you're going to cut up for your listeners, like the AI section, the music section, and then the brief Panic! Disco side quest on the role of theology in AI. I think that'll be good, a good timestamp there. Yeah, it'll be good, it'll be good.</p><p>Theo Jaffee </p><p>Yeah, just put chapters yeah Viva Los Vengeance is really good Death of a Bachelor Death of a Bachelor was so good Death of a Bachelor is like the song that like I I guess my aspirational song for being able to like sing really well is to do like Death of a Bachelor perfectly</p><p>Luke Drago </p><p>Death of a Bachelor is quite good.</p><p>Luke Drago </p><p>Brendan Urie's vocals are just criminally good. think his cover of... god, what did he...</p><p>Bohemian Rhapsody, obviously. His cover of Bohemian Rhapsody is really also incredible. And that's a song that I think very few people in the world could cover. For example, Logic's Bohemian Trapsody, I think, does a pretty poor job of emulating its namesake.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>I don't think it's supposed to be a cover of Bohemian. It's a different thing. that and Can I Kick It were, I think, the best two songs on Supermarket. They were actually pretty good.</p><p>Luke Drago </p><p>No, it's not. This is another hit at Logic Supermarket.</p><p>Luke Drago </p><p>But I think that says a lot about super market.</p><p>Theo Jaffee </p><p>Yeah, the rest of it was trash.</p><p>Luke Drago </p><p>Yeah, it's bad. Another underrated album. I know a lot of people have probably not heard of Quadeca, but I think his like 2022, I Didn't Mean to Haunt You. This kid was like, he was a YouTube rapper. He was like most famous for like writing a diss track about KSI that did pretty well. I don't think he was like a particularly excellent YouTube rapper. And then basically out of nowhere, he drops this like really ethereal</p><p>extraordinarily well produced album about like death and loss and it has like two features on it which are probably the two most bizarre features to get an album with those themes Danny Brown and the Sunday Service Choir. think Thor Harris is on it as well as like the other feature there as well but like I think he's much less well known and it just all the way through I think is stunning. I can't think of a song on there that was bad, maybe not but I think I still did like that song.</p><p>Theo Jaffee </p><p>Wow.</p><p>Luke Drago </p><p>and like its highs. Fantasy World is a seven minute song from a guy that remember was doing YouTube rap before this. Should not be able to do a beautiful like seven minute ballad and just nails it. Both thematically, lyrically, completely slept on album. Couldn't recommend it enough.</p><p>Theo Jaffee </p><p>Hmm. Okay, so that's how many albums are we at now? We have College Park, Ysys, Ram, one Tyler one, Daytona, Fever, and I Didn't Mean to Haunt You, and so that leaves three more.</p><p>Luke Drago </p><p>That leaves one, two, three, Cool, do we get out of this though, the Gambino one?</p><p>Theo Jaffee </p><p>at a vista by Donald Glover. Okay, so two more. Yes.</p><p>Luke Drago </p><p>So I get two more albums, I get two more.</p><p>Luke Drago </p><p>It's hard not to pick a Frank album. It's hard not to say like one of these Frank albums needs to be there. Yeah, well, I mean, you have the mix saves too. I am gonna put Channel Orange there.</p><p>Theo Jaffee </p><p>It's easy to pick because there's only two of them.</p><p>Theo Jaffee </p><p>Hmm, channel orange over blonde.</p><p>Luke Drago </p><p>Controversial. I am in fact going to put Channel Orange over Blonde. And I think it is because while Blonde is like an excellent album all the way through, a couple of highs on Channel Orange I think do not get replicated anywhere else. I'm really thinking of Pink Matter, Bad Religion, and Pyramids. I think Blonde is excellent. I don't think anything on Blonde touches my feelings about Pink Matter, Bad Religion, and Pyramids. So I enjoy listening to that album more, but I Blonde is obviously gorgeous.</p><p>Theo Jaffee </p><p>Hmm. I think,</p><p>Theo Jaffee </p><p>That's true. I think Pink and White is like one of the greatest songs ever. Really, like I thought it was even like compared to the other songs on Blonde, I thought it was just so far above. I was like, this is really like a like a probably a top 10 top 20 like human song ever. I really like that song.</p><p>Luke Drago </p><p>Yeah, seems fair.</p><p>Luke Drago </p><p>Yeah, it's up there. I think I feel the same way about Bad Religion and Pink Matter especially. Like when the Andre 3000 feature comes in on Pink Matter, and I think he delivered, he hasn't wrapped in a while, he delivers like probably, it might be the best version of his career. And it comes out of nowhere, it flows excellently, and all the setup to it has been gorgeous as well. I mean, there hasn't been a mist there. The lyrics are astounding.</p><p>I really like Pink and White. think it is sonically pristine. I don't think it hits the same kind of lyrical quality that Bad Religion, or sorry, that Pink Matter does. Although at this point it's kind of hard to compare. You're talking about songs that are just so good it's difficult to make a comparison between them. So I get one more. And it's gonna have to be a Kendrick album. It just kind of has to be. And it's probably not gonna be Mr. Morale or G &amp;X. Although, Mr. Morale has a special place in my heart.</p><p>Theo Jaffee </p><p>Yeah, that makes sense. And you get one more album.</p><p>Theo Jaffee </p><p>Yeah, no. GNX was not even... I listened to GNX for the first time. I remember where I was, Depot Park, Gainesville, Florida, on a like perfect beautiful day, you know, like 68 and sunny. And I was just like, oh man, like yeah, there's some good songs on here. I was like, what is this like mustard thing?</p><p>Luke Drago </p><p>What is your favorite song on GNX? I'm push you on this. What's your favorite song on GNX?</p><p>Theo Jaffee </p><p>Hmm. right, let me look at the track list. Squabble Up is like, I did not really like that one. TV Off, I did not. this song is on GNX. Yeah, there's like one song on GNX I thought was way better than every other one, which is Heart Part 6. It's Heart Part 6. Yeah. I was like, what is this doing on this album?</p><p>Luke Drago </p><p>I have like just an immediate hit on this.</p><p>Luke Drago </p><p>Let's go.</p><p>Luke Drago </p><p>And it is? It's Heart Part 6. Heart Part 6 is fantastic. Yeah, that is my favorite song on the album. think... It also is not thematically the same as the rest of the songs on the album. It doesn't sound like the rest of the songs.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Luke Drago </p><p>I like a lot of this, I like Man of the Garden, I like Luther a lot, I like Reincarnated a lot, but Hart Part 6 for me just stands out, I think that's right. favorite, so we're back, we're down to the big three. Pimp a Butterfly.</p><p>Theo Jaffee </p><p>You know what my, like, I think most underrated Kendrick song ever is? Duckworth. Which is a song on Damn. The sample that they use is like this Yugoslavian band called September. And they have a song called Ostavi Trag which is like, really, really like hauntingly beautiful vocals that they use. And the story of the song is talking about these two characters named Anthony and Ducky</p><p>Luke Drago </p><p>Duckworth is a I mean, I'm f-</p><p>Luke Drago </p><p>Really.</p><p>Theo Jaffee </p><p>And Anthony is like this gang banger, violent criminal type guy. And Ducky is just like a guy trying to live in the hood and trying to get by. He works at a chicken restaurant. then Ducky sees Anthony coming to his restaurant and he decides to find favor with him. He gives him extra chicken and extra biscuits. And then Anthony robs</p><p>the restaurant the ducky works at and decides to not kill him because he had been nice to him. And then like the bars at the end of the song, you know, I'm not going to spoil it for my listeners. You have to like listen to the song and</p><p>Luke Drago </p><p>No, I think you've got to spoil it for the review. I think you're going to have to.</p><p>Theo Jaffee </p><p>Okay, he says, mm-hmm.</p><p>Luke Drago </p><p>And the reveal there is that like Ducky is Kendrick's father and Anthony is his record, the person who runs his label. Top Dog Entertainment. that, it's just unreal, the reveal. I get goosebumps every time I hear that line. I think I'm gonna go with Damn. I'm gonna, yeah, I'm gonna go with Damn. I think that is my favorite Kendrick album. It's a, that is a, I get why. I think the themes on Damn, wrestling with this like.</p><p>Theo Jaffee </p><p>I got those goosebumps every time.</p><p>Theo Jaffee </p><p>I like good Kid Med City and TPAT more. But it's up there.</p><p>Luke Drago </p><p>jaded sense of religiosity combined with like the anger after like after the state of the country and an election. I don't know. I think it is just the most personally resonant as someone who is religious. I think I've thought a lot about these themes. And I don't know. I think its highs are really high. But nothing in the top three is bad. I mean obviously like these albums are just...</p><p>Theo Jaffee </p><p>You can also...</p><p>Luke Drago </p><p>exceptional pieces of art that are hard to compare against each other. And I think Untitled on Master is unfairly slept on. Obviously it's like an Untitled album, but I think that like... Is Untitled 03 the one with CeeLo Green? Is that right? No, it's Untitled 06. It's Untitled 06. Pardon me?</p><p>Theo Jaffee </p><p>It's very good.</p><p>Theo Jaffee </p><p>I don't know, like that's the problem with making your album untitled. That's the problem with making your album untitled. You have to remember, like what was it, untitled song number three or was it untitled song number six?</p><p>Luke Drago </p><p>It is.</p><p>Untitled 06 is one of my favorite songs of all time. Cee La Green's in it out of nowhere. I really like Untitled 06. think Untitled is, Untitled the Master is my favorite production of any Kendrick Owens. It's mixed between like T-Bops, jazziness, and a Pimp Butterflies, like very leaning into modern rap that I think it does very well.</p><p>Theo Jaffee </p><p>You can also tell that like, Logic steals a lot from Kendrick. Like, he takes so much. Under Pressure is good Kid Mad City. It is, yeah. And everybody is to pimp a butterfly. But not as good, of course.</p><p>Luke Drago </p><p>Yes, Under Pressure is in fact good, Kid Mad City. But you know, it's like, it's a good, but it's a good copy, so it's fine.</p><p>Luke Drago </p><p>See, I think it didn't stick the landing, so I didn't notice it, but even sonically.</p><p>like under the peak of the albums under pressure right like albums under pressure song is under pressure under pressure it includes like the same structure as sing about me i'm dying of thirst it just puts like the aggressive part at the beginning instead of the end but it even goes to like multiple letters written to different people and then a letter from logic's perspective and that literally is the exact same form that sing about me i'm dying of thirst takes it's three letters two from other people one from kendrick then a switch and like a diff like a kind of a moral storytelling at the end whereas this one is like the moral</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Luke Drago </p><p>storytelling like the violence at the beginning, then three letters, the last one from logic's perspective, interspersed with voice messages, which is literally exactly how that same song flows.</p><p>Theo Jaffee </p><p>Under Pressure is like easy top contender for number one best Logic song. It's so good. And Under Pressure the album is maybe the only Logic album where there are zero skips. Like where every song is actually really good. I kind of like it. I like, you know, it took me like when I was listening to it, it took me about like</p><p>Luke Drago </p><p>Yeah, it's also his best Kendrick ripoff.</p><p>Luke Drago </p><p>I think Nikki&#8217;s not fantastic.</p><p>Theo Jaffee </p><p>20, 30 seconds to realize, I get it.</p><p>Luke Drago </p><p>It was the first time that logic could be a little heavy handed. I think by everybody it just really becomes a problem. But like a little corny. But yeah, I do think that's probably right. think other contenders for top logic's on. I think lightsabers like correctly has a contention here. I think Paul Rodriguez. City of Stars. Which is just like his pyramids contender I think basically, but that's fine.</p><p>Theo Jaffee </p><p>A corny. Yeah.</p><p>Let's see.</p><p>Theo Jaffee </p><p>Paul Rodriguez, I would put up there.</p><p>City of Stars is excellent.</p><p>I actually really like, on the Incredible True story, I really like Fade Away, I really like... What's the fourth one called?</p><p>Luke Drago </p><p>fade away, stainless. I don't think it's, I woe.</p><p>Theo Jaffee </p><p>Incredible true story tracklist like whoa yeah, that was really good</p><p>Luke Drago </p><p>Yeah, like what was really nice. I like stainless a lot. Stainless is up there for my like top. I also think...</p><p>Theo Jaffee </p><p>Young Jesus's Logic's Best Music Video.</p><p>Luke Drago </p><p>That seems right. Till the End, think is a really good logic song as well. It's like a, I think it's the final song on Under Pressure.</p><p>Theo Jaffee </p><p>That one's not under pressure. Yeah.</p><p>Luke Drago </p><p>At least in the main version and I think that's like an excellent outro Heat 6 is such a good producer and you can really tell when like they you know You can tell when he knows he has it because like logic will also get more excited on the beat and they're like Oh, I just know you need to this beat Like I think I confess is like this as well where the beat on confess is just so good that like Logic gets better because of it and killer Mike shows up at the beats way I don't know the whole thing is just really well done</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>Yeah, I'm impressed by like how many of these songs you can specifically recall off the dome and like the characteristics of each one that you can talk about.</p><p>Luke Drago </p><p>I have somewhere that like, one of my like weird hidden talents is that like, I've heard a song and I liked it, I have basically the whole thing memorized.</p><p>Theo Jaffee </p><p>Yeah, so do I, but like, you know, you can take it a step further and say like, you know, there's a beat switch here. I think I need to like read more music theory terms, I guess.</p><p>Luke Drago </p><p>I, for what it's worth, have never actually formally engaged in music theory. I, for a couple of years, took piano lessons, so maybe that's the peak of it, but I was not particularly good at sight reading. Mostly because I just really preferred to memorize the entirety of the song in one go, as opposed to reading it every time. It was better for me if I could close my eyes and do the whole thing, which is not a good evolutionary pressure if you're trying to get really good at sight reading. So just play the song and then just try to mimic it right there, and then we'll just go from there.</p><p>Theo Jaffee </p><p>Yeah, very fair.</p><p>Luke Drago </p><p>My piano teacher was frustrated by this because the problem is it worked. At least there's a certain point I actually could keep competitively, I could play and keep up with level difficulty while never actually learning how to read it because I just listened to it and maybe watched someone play it once or twice and then I would just do it until I had it memorized. Maybe that's indicative of how I listen to music as well.</p><p>Theo Jaffee </p><p>Well, I should probably get going pretty soon, but this was a great episode. Thank you for coming on the show.</p><p>Luke Drago </p><p>Yeah, this is great. had a blast. This is, like I said, my first podcast. I hope every podcast covers this much of everything.</p><p>Theo Jaffee </p><p>Yeah, was great to have you.</p>]]></content:encoded></item><item><title><![CDATA[Florida: A Tourist's Guide]]></title><description><![CDATA[Florida is America's America.]]></description><link>https://www.theojaffee.com/p/florida-a-tourists-guide</link><guid isPermaLink="false">https://www.theojaffee.com/p/florida-a-tourists-guide</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Thu, 08 May 2025 03:05:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TFUy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TFUy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TFUy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TFUy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TFUy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TFUy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TFUy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/daee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:632397,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.theojaffee.com/i/163089309?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TFUy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TFUy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TFUy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TFUy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaee6a79-467c-498a-8d60-41f89b4b8733_3000x2000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.theojaffee.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.theojaffee.com/subscribe?"><span>Subscribe now</span></a></p><p>Outside of the inescapable cultural and economic black holes that are the Northeast Corridor and California, perhaps the most relevant place in the US is Florida. Florida has been the single <a href="https://www.pewtrusts.org/en/research-and-analysis/articles/2025/03/05/population-growth-in-most-states-outpaced-long-term-trends-in-2024?pop_map_data_picker=ltnm">fastest-growing state</a> by net migration over the past 15 years. Its governor, Ron DeSantis, is maybe the most powerful governor in the country, and the President&#8217;s unofficial seat of power is Mar-a-Lago in Palm Beach. For a time, it looked like Miami <em>might</em> have been able to replace Silicon Valley and New York as the dominant city in <a href="https://www.miamiherald.com/news/business/article250258600.html">tech</a> and <a href="https://www.forbes.com/sites/jackkelly/2023/08/23/wall-street-is-going-south-and-taking-1-trillion-in-assets-with-it/">finance</a>, respectively. Florida has Disney World and the massive Orlando tourism industry, Kennedy Space Center, the Miami Grand Prix, Art Basel, Inter Miami and Messi, and an economy larger than all but fourteen<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> countries on Earth.</p><p>Florida&#8217;s history, too, has always represented America&#8217;s pioneering spirit. Florida was the first place in the US to be visited and settled by Europeans: first Juan Ponce de Le&#243;n&#8217;s expedition in 1513 and then the establishment of San Agust&#237;n (today St. Augustine) in 1565. It then saw nearly three centuries of being a very sparsely populated haven for pirates, escaped slaves, and pioneers looking to settle on America&#8217;s wild southern frontier. This changed when Henry Flagler built the Florida East Coast Railway, bringing civilization to the practically unsettled<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> southern part of the state for the first time, sparking a population boom that&#8217;s continued to this day. Since 1880, Florida&#8217;s population has increased nearly a hundredfold, leading to major innovations in urban planning: Coral Springs was one of the first modern master-planned cities in the US, Seaside and Celebration are some of the best New Urbanist developments, and the massive retirement community of The Villages was the single <a href="https://www.census.gov/library/stories/2021/08/more-than-half-of-united-states-counties-were-smaller-in-2020-than-in-2010.html">fastest-growing</a> metropolitan area in the <em>entire </em>country during the 2010s.</p><p>I&#8217;ve lived in Florida almost my entire life. My family moved to Delray Beach in 2006 when I was two and settled in Boca Raton a year later, where we still live. I go to college at the University of Florida in Gainesville. I&#8217;ve been lucky enough to have been all over the state, and with <a href="https://www.youtube.com/watch?v=VQRLujxTm3c">the new GTA VI trailer</a> having just come out, now is as good a time as any to write a travel guide.</p><h2>Miami</h2><p>Florida&#8217;s largest metropolitan area by far, and its undisputed cultural and economic capital. If you only go to one place in Florida, it should be Miami.</p><ul><li><p><strong>Downtown Miami/Brickell</strong> has one of the most impressive skylines in the country. It's changed dramatically since the first time I visited and continues to evolve rapidly. New skyscrapers go up all the time, usually modern white concrete with blue windows. Bayside Marketplace, though touristy, is nice to walk around. Maurice A. Ferr&#233; Park is a nice place to sit. Key word here is &#8220;new&#8221;. Brickell City Center is a cool new urban development/shopping mall, the Underline is a fantastic new park under the elevated Metrorail line, and the new Brightline train can take you throughout South Florida or even Orlando. A friend observed that Downtown is a lot like Singapore: hot and humid with lots of brand-new white skyscrapers, mixed-use development, and greenery. It&#8217;s pretty solarpunk.</p></li><li><p>Latin American food is fantastic here. Look for Cuban, Venezuelan, Colombian, Peruvian, not so much Mexican (that's more Texas and the Southwest).</p></li><li><p>Miami is hot pretty much all the time. There are about three months a year (December, January, February) where daily highs are below 80, overnight lows are below 65, and it&#8217;s less likely to be super humid and rainy. This is the best time to visit, though everybody else knows this and there will be more crowds thanks to tourists and snowbirds. Summer is extremely hot, with &#8220;feels like&#8221; temperatures around 90-100 during the day from mid-May to mid-October. Unlike most places, it doesn&#8217;t get cool at night - it&#8217;ll be like 80 degrees at 4 AM. It&#8217;s also extremely humid during the summer and it rains most days (though the rains are usually pretty hard and fast in the afternoon, and the mornings are usually sunny).</p></li><li><p>Driving around Miami is somewhat dangerous, especially on I-95 and around the more Hispanic areas. Be careful. Transit is quite good around downtown thanks to the free Metromover, but the sparsity of the Metrorail/Tri-Rail/Brightline and slowness of the bus system means you'll be better off with a car. Traffic can get very *very* slow on the MacArthur Causeway (connecting the Beach to the mainland) and on I-95 during rush hour, so be prepared.</p></li><li><p>There are two Miamis: the center and south (downtown, Miami Beach, Coral Gables, South Miami, Pinecrest, Kendall, etc): quite clean and safe, more white, wealthy, expensive, not many homeless people around; and the north and west (North Miami, Hialeah, Fontainebleau, Tamiami): still pretty safe (except Gladeview/Brownsville), very Hispanic and mostly Spanish-speaking, and generally more American suburbaslop and less interesting architecture.</p></li><li><p>Great museums here include the Vizcaya (a beautiful Tuscan Mediterranean Revival estate in Coconut Grove), the Frost Science Museum downtown, and the highly underrated Ancient Spanish Monastery in North Miami Beach, a literal Spanish monastery built in the 1100s in Segovia and shipped and reassembled brick by brick in Florida. The P&#233;rez Art Museum is okay but not great. The Miami Children's Museum was a lot of fun when I was a little kid. I haven&#8217;t been to the World Erotic Art Museum or Jewish Museum in Miami Beach but I&#8217;ve heard they&#8217;re both cool.</p></li><li><p><strong>Miami Beach</strong> is my favorite part of the city and has some of the best urban fabric of anywhere in the United States: long houses close together with the narrow side facing the streets, hidden back alleys for car access, major shopping streets flanking the island so that they're walking distance away from anywhere, parks and trees all over, etc. Walk all the way up Collins Ave, work out at Muscle Beach, chill at Flamingo Park or at one of the cafes on the west side, admire the view of the city and cruise ships from South Pointe Park. Sardinia Enoteca has the best lamb I've ever had, La Leggenda has some of the best pizza. South Beach is much more fun than the north side, which has more retirees, houses, condo towers. Avoid Miami Beach during spring break. Otherwise, highly recommend.</p></li><li><p><strong>Wynwood</strong> is a hip, gentrified neighborhood with graffiti and posters everywhere. It was once the center of the world during the COVID ZIRP crypto boom. Get breakfast at Zak the Baker, peruse the cars and guitars across the street at Walt Grace Vintage, pay the $15 to see the Wynwood Walls if you want (though this is certainly not mandatory), get lunch at The Taco Stand, pose with guns at Lock &amp; Load. Mandatory: even if you aren't into designer clothing, see the nearby Miami Design District, which is an incredibly well-done urban space with some great food.</p></li><li><p><strong>Little Havana</strong> feels like a foreign country. Written text and spoken words are all in Spanish. Walk down Calle Ocho, check out the park dedicated to anti-communist guerrilla fighters and the park where old Cuban men play chess and dominos, try free samples of Cuban coffee in the tourist shops, eat Cuban food in the restaurants, listen to the bands playing Cuban music on the streets. If you want to see the &#8220;old&#8221; Miami of <em>Godfather II</em> and <em>Scarface</em>, with its one-story Spanish colonial homes and wood-frame houses, it'll be around here.</p></li><li><p><strong>Aventura</strong> is a very interesting development, full of gleaming condo towers and with a surprising amount of population density (and Jews). But it's not exactly walkable, there's hardly any transit, and the traffic is absolutely abysmal. The Aventura Mall is colossal - largest in the state and fifth-largest in the country. Its Apple Store is one of my favorites.</p></li><li><p><strong>Coral Gables</strong> is a nice upscale shopping/dining area but somewhat lacking in character. Not really worth it by itself. The Venetian Pool, however, is awesome, and the UMiami campus is beautiful, modern, and nice to walk around.</p></li><li><p><strong>Key Biscayne</strong> is underrated. There's not much to do in the town (though Milanezza is a very cool Argentinian-Italian restaurant/market), but Bill Baggs Cape Florida State Park is a much more chill beach than Miami Beach. Biscayne Bay is a nice place to go out on a boat, if you have one.</p></li></ul><h2>South Florida</h2><p>Broward County and Palm Beach County.</p><ul><li><p><strong>The Everglades</strong> are a massive, mostly empty swamp covering the southern tip of the state. You can see it by driving through I-75&#8217;s &#8220;Alligator Alley&#8221; or taking an airboat tour, but there&#8217;s not much to do here unless you&#8217;re a <em>real</em> nature lover and can tolerate lots of heat and mosquitoes.</p></li></ul><ul><li><p><strong>The Keys</strong> are absolutely beautiful and have a very different feel from the rest of the state - less swamp/forest, more Caribbean islands. Drive slowly down to Key West and stop along the way to explore the small beaches. Along the way you'll see the remnants of Henry Flagler&#8217;s Florida East Coast Railroad extension from Miami to Key West. Eat fresh seafood. <strong>Key West</strong> is more town and less beach than the other keys. Be sure to take plenty of walks to enjoy the white colonial buildings and palm trees. Watch the sunset from always-vibrant, westward-facing Mallory Square. Visit the historic homes of Ernest Hemingway and Harry S. Truman.</p></li><li><p><strong>Fort Lauderdale</strong> is Miami&#8217;s biggest satellite city. There&#8217;s less to do in downtown Fort Lauderdale than in Miami. The Museum of Discovery and Science is very cool and has Florida's only 70mm IMAX screen (I rewatched <em>Oppenheimer</em> here, very worth it). On the east side are some pretty nice canals with lots of great houses to admire from the outside. Las Olas Beach is not really worth it unless you're into college party culture - it&#8217;s too crowded and chaotic, and hard to find parking. Avoid at all costs during spring break.</p></li><li><p><strong>Broward County</strong> outside of Fort Lauderdale is arguably more interesting. The beaches are better: my favorites are the north part of Pompano Beach around North Ocean Park and the area around the Deerfield Beach pier. The western suburbs: Sunrise, Plantation, Davie, and Pembroke Pines have the best Asian food in South Florida. Dania Pointe in Dania Beach is the best shopping center, and <strong>Sawgrass Mills</strong> (thanks to it not being as bougie as Aventura) is the most interesting mall in South Florida. The Seminole Hard Rock Hotel &amp; Casino Hollywood (&#8220;the guitar&#8221;) is very cool, like a mini Las Vegas in Florida.</p></li><li><p><strong>Boca Raton</strong>, my hometown, has a decent amount to do. The best part is the old southeast, designed and built by Addison Mizner. The Boca Resort, though a private club and hotel, is beautiful - get in if you can. <strong>Mizner Park</strong> is a nice shopping center, and the surrounding area is quite walkable: stroll Palmetto Park Road to the beach or go to the downtown library. If you have a boat, take it out on Lake Boca Raton or Lake Wyman. The rest of Boca is your standard wealthy Jewish Florida suburb. Town Center Mall, University Commons, and Uptown Boca/Mission Bay are places to shop. You can also visit <em>the</em> Costco that <a href="https://en.wikipedia.org/wiki/A.J._%26_Big_Justice">AJ and Big Justice</a> go to or relax on the very chill beach at Spanish River Park. The parks here are quite good: Sugar Sand and Patch Reef are full of facilities, and Boca Tierra Park is a good place to read. One of the best things here is the beautiful Spanish River Library and its adjacent lake, parks, and walking trails.</p></li><li><p><strong>Delray Beach</strong> is underrated. Its downtown area is vibrant, artsy, and walkable, with a lot to see and do: the Tennis Center, Old School Square, Veterans Park, the beach itself. West of that is the stunningly underrated <strong>Morikami Museum and Japanese Gardens</strong>, the remnant of a Japanese pineapple plantation from the early 1900s that today is the best Japanese garden I've been to outside of Japan, with a restaurant and museum to boot. The area around the Delray Marketplace (especially Lyons Road) is a pretty place to bike around some of the last remaining farmland in the area, and the boat ramp at Arthur R. Marshall Loxahatchee National Wildlife Refuge is the best place to watch the sunset over the Everglades.</p></li><li><p><strong>West Palm Beach</strong> is the second-most interesting city in South Florida after Miami. CityPlace downtown is a cool shopping center with a relaxed vibe and lot to see, even a Herman Miller store. The new builds here are generally very well done, not five-over-one slop. The <strong>Norton Museum</strong> is the best art museum in the state. The island (Palm Beach) is very nice. Walk along Worth Avenue and see the Worth Ave Clock Tower, and visit railroad tycoon Henry Morrison Flagler's mansion (now a museum). If you can somehow get into Mar-a-Lago (Trump's winter White House), definitely pay it a visit. The West Palm Beach area is better for outdoor activities: Lion Country Safari has free-roaming lions and rhinos to drive past, Okeeheelee Park has all kinds of watersports, Rapids Water Park is the best in South Florida, Jupiter Beach is one of the best beaches in the area.</p></li></ul><h2>Central and West Florida</h2><ul><li><p><strong>Downtown Orlando</strong> is decent by American standards. The Orlando Public Library is pretty nice, and so is walking around Lake Eola. Toward the east are some nice southern suburbs with porches and mature trees. Avoid the areas (Callahan and Parramore) west of I-4: they have some of the worst urban decay I've seen so close to the downtown of a Florida city.</p></li><li><p>If you're into architecture and good urban or suburban fabric, go to <strong>Winter Park</strong> and <strong>Winter Garden</strong>. Both excel on both fronts. Lake Eola Heights is good too. Unlike the much newer South Florida, there&#8217;s still a good amount of old-growth Southern-style suburbs with mature trees, big porches, and the like.</p></li><li><p>The west side of Orlando is where all the resorts are. Disney is my favorite: the best park is Epcot, followed by Magic Kingdom. Be sure to take advantage of Disney's free and expansive transit network to see all the hotels - my favorites are the Contemporary and Fort Wilderness. Disney Springs is a fun shopping area too, and Celebration is an interesting example of a fully-planned community from the 90s. If you like roller coasters or Warner Brothers IP, choose Universal over Disney. Western Orlando outside the parks, around I-4 and International Drive, is a dense mess of hotels, condos, small amusement parks, and shopping malls.</p></li><li><p>The far east of Orlando is mostly standard new American exurbs (boring), but the UCF campus is worth the walk around. Continue eastward for a nice rural drive leading towards Cape Canaveral. <strong>Kennedy Space Center</strong> is awesome - be sure to go to the visitor center and (my favorite part) the Saturn V, and watch a NASA or SpaceX launch if you can.</p></li><li><p>Orlando has much better Asian food than South Florida. Go to Orlando Chinatown, Enson Market, and Lotte Plaza.</p></li><li><p>The <strong>Lake Wales Ridge</strong> is highly underrated and has a very different vibe from surrounding areas, with sand, scrub, and actual elevation (the rest of the peninsula is nearly completely flat at sea level). My family goes camping near Lake Wales every year. Also be sure to visit Bok Tower and its gardens, designed by Frederick Law Olmsted, Jr.</p></li><li><p>Rural Central Florida can get <em>very </em>rural. The Florida Turnpike between Fort Pierce and Kissimmee is one of the <a href="https://www.cars.com/articles/top-highways-not-to-drive-on-empty-1420663156112/">emptiest stretches of highway</a> in the country. There&#8217;s less swamp and more grassland/prairie than South Florida. There are also a lot of cattle ranches. Peace River is a nice place to go tubing or camping.</p></li><li><p>I've spent relatively very little time on the Gulf coast of Florida. Marco Island and Longboat Key are both nice beach-resort-type places (though I like the Keys much better). Tampa is rather boring for its size, though it has a nice riverwalk. I&#8217;ve never been to St. Petersburg, Clearwater, Sarasota, Fort Myers, or Naples, though I&#8217;ve heard St. Pete and Clearwater have some of the best beaches in the country.</p></li></ul><h2>North Florida</h2><ul><li><p>North Florida has a very different vibe from the rest of the state: less tropical, more subtropical; less beach, more forest; much more seasonal variation; less Hispanic/Caribbean/Jewish influence and more Southern influence; fewer palm trees and more Spanish-moss-covered oak and cypress. The summers are just as hot as in the south, but the winters are much cooler. In Gainesville, January lows are in the 40s and highs are in the 60s, and the coldest nights can even dip into the 20s.</p></li><li><p>North Florida&#8217;s nature is also quite different. Instead of beaches, the main attractions are the abundant, natural, freshwater springs with famously cool, clear water that you can swim in. Rainbow Springs is one of the best. Silver Springs, near Ocala, was the biggest tourist attraction in Florida before Disney World was built. You can swim with manatees at Crystal River.</p></li><li><p><strong>Gainesville</strong> is great. The University of Florida campus is beautiful and walkable, the downtown area, Depot Park, and nearby residential streets have some of that great old southern urban fabric, and the entire city is <em>covered</em> in mature trees. Paynes Prairie has some great hikes with great views, and Devil&#8217;s Millhopper is a huge sinkhole with a rainforest in it. Thanks to it being a college town, there&#8217;s an abundance of good food and a lot of young people around. It feels vibrant.</p></li><li><p><strong>St. Augustine</strong> is the oldest European settlement in the United States. As such, it's maybe the only city on the East Coast with an authentic Spanish colonial feel to it. Most of St. George Street has been tourist-trap-ified, but the 350-year-old star fort Castillo de San Marcos is very worth visiting, as is the Plaza de la Constituci&#243;n and Flagler College. West of the city is one of Florida's two Buc-ee's locations, which is worth a visit to see just how good a gas station can be.</p></li><li><p><strong>Jacksonville</strong> is very boring. As a tourist, avoid. St. John&#8217;s Town Center is a pretty good upscale mall, and I&#8217;ve heard the Cummer Museum, Museum of Science and History, and beaches (Sawgrass, Ponte Vedra Beach, and Amelia Island in particular) are nice, but there are better cities in Florida.</p></li><li><p><strong>Tallahassee</strong> is also rather boring. It has the state capitol building and Florida State University campus, but isn&#8217;t really worth visiting outside of those (the Alfred B. Maclay Gardens were quite nice though).</p></li><li><p>Sadly, I&#8217;ve never been to the Panhandle west of Tallahassee. The Panhandle travel guide will have to wait for another day.</p></li></ul><p>Aside from California, Florida is probably the most interesting and diverse state in the country. We have huge cities, endless suburbs, swamps, beaches, grasslands, springs, and everything in between. Even after nearly two decades, I still have so much to explore here. Hopefully this guide is a good start.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.theojaffee.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Theo's Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Florida&#8217;s GDP is about $1.7 trillion. The only countries that are higher are the US, China, Germany, India, Japan, the UK, France, Italy, Canada, Brazil, Russia, Spain, South Korea, and Australia.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>It is <a href="https://www.nps.gov/ever/learn/historyculture/nativepeoples.htm">estimated</a> that while around 20,000 Native Americans lived in south Florida when the Spanish arrived, only a few hundred remained when the British took control in 1763. In 1900, after the three Seminole Wars, there were only a few hundred to a few thousand natives left in South Florida.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Enlightened Centrist Manifesto on Trans Issues]]></title><description><![CDATA[Or, Why You Should Publicly Write Out Your Beliefs on Complex Issues]]></description><link>https://www.theojaffee.com/p/the-enlightened-centrist-manifesto</link><guid isPermaLink="false">https://www.theojaffee.com/p/the-enlightened-centrist-manifesto</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Mon, 05 May 2025 03:01:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a120d43a-3e9f-4207-9212-e6c6f2c2765b_1420x946.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.theojaffee.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.theojaffee.com/subscribe?"><span>Subscribe now</span></a></p><p>On July 27, 2024, the Twitter timeline was full of Discourse on trans issues for the millionth time. The catalyst: a <a href="https://x.com/yacineMTB/status/1817184366730096975">post</a> from X software engineer and niche internet microcelebrity Yacine saying</p><blockquote><p>this is your reminder that a one star google review is incredibly damaging to a business</p><p>furthermore, you should consistently give businesses that pushes trans flags everywhere one star google reviews. and don't forget to describe why</p></blockquote><p>Given the prevalence of trans women in the tech industry, particularly in AI and on rationalist Twitter, many people (reasonably) rushed to defend them against this perceived attack. Some leaned a bit too far to the left. A day later I <a href="https://x.com/theojaffee/status/1817555630913212762">tweeted</a></p><blockquote><p>Y&#8217;all are going to force me to write the Enlightened Centrist Manifesto on Trans Issues</p></blockquote><p>Then, in between fits of pacing around an empty WeWork, I sat down and banged it out in around an hour, far faster than I normally write lengthy posts on complex social issues. <a href="https://x.com/theojaffee/status/1817638534154744267">Here it is</a> in full.</p><h3>The Enlightened Centrist Manifesto on Trans Issues</h3><p><em>Note: since writing this almost a year ago, I&#8217;ve slightly changed my opinions, most notably on the sex vs. gender dichotomy. In a footnote<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, I explain the changes I&#8217;d make if I were to write this again today.</em></p><blockquote><p>Though I regret that this is not in formal manifesto format, here are some Thoughts on the matter, critiques welcome:</p><p>- I have many trans friends (mostly MtF) and they are among the smartest and most interesting people I know. If you were to select a single demographic for high IQ, you could quite possibly do no better than "rationalist and rat-adjacent trans women".</p><p>- Trans people should be able to live their lives without fear of harassment or discrimination. They should be treated like anyone else by default. The same laws that apply to non-trans people should apply to trans people, and trans people should enjoy the same rights as non-trans people. As a rule, people should be allowed to live their lives how they want to live them.</p><p>- I generally support a laissez-faire approach to what one does with their own body. There should be more (fully consensual) medical experimentation and biological research, not less, especially as transhumanism gets closer and closer to reality. Patients with terminal illnesses should be able to try medication not fully approved by the FDA. This should extend beyond right-to-try and HRT for trans people: men who want to become more masculine should be able to take testosterone and other androgens, for example.</p><p>- In the future, biotechnology (or virtual reality) will quite possibly get to a point where a trans person can make their body - from organs to chromosomes - indistinguishable from a person born as their preferred sex. Sex and gender really will be a spectrum at this point. If and when this happens, I will see it as a very positive development, and most or all of my more hesitant views on the matter will update away.</p><p>- For a small subset of the population, gender dysphoria is very real. Some people feel true, deep unease and distress with the body they were born in, and HRT and/or surgery is the best way for them to become content and confident with who they are again. My heart goes out towards all people who feel this way, and I wish them the best and hope they get the care they need.</p><p>- As a matter of civility and common decency, I refer to trans people using their preferred names and pronouns.</p><p>- Spamming businesses that display trans flags (or most other flags) with one-star reviews is uncivil and wrong.</p><p>However:</p><p>- Biological sex is not a spectrum. Very few, if any, other biological classifications of humans are this bimodal, with differences not just between male and female chromosomes and gonads, not just secondary sex characteristics and hormones and facial features, but between cognition and personality. These differences are robust between cultures, across history, and even among other primate species. Most gender stereotypes can be explained by biological differences, not social constructs. There are a very small number of intersex people - roughly 0.05% of births have ambiguous genitalia - and the vast majority of even these have a predominant hormonal, gonadal, or genetic sex.</p><p>- "Gender" is a meaningless concept. It has always been clear that sex is a binary characteristic of humans, but the psychological differences between men and women are much less bimodal than the physiological ones. With the egalitarian nature of liberalism in mind, liberal sexologists in the 50s coined the term "gender" to refer to the social and psychological roles, behaviors, and attributes of men and women. They asserted that sex and gender are different, and while sex is a binary, gender is a spectrum. The problem is that people generally use the idea of "gender" in two different ways. Either they entirely detach it from biological sex, at which point it roughly means "personality" and is thus redundant, or they use it as a stand-in for biological sex, but psychological differences between men and women are to a very large degree determined by biology and are thus not socially constructed.</p><p>- Because of the flexible nature of the human brain and the fact that traits are distributed, some percentage of people have roughly the temperament, personality, and interests of the opposite sex. These people used to be generally referred to, sometimes neutrally and sometimes negatively, as "tomboyish" (if women) or "effeminate" (if men). This is very much not the same thing as gender dysphoria, and it is dangerous to conflate the two. HRT and surgery are not the right treatment for these kinds of people.</p><p>- With society&#8217;s current level of medical technology, you cannot change your biological sex. HRT can change your secondary sex characteristics, and sex reassignment surgery can change the appearance of your sex organs, but your underlying chromosomal sex, sex organs, skeletal structure, personality, etc. remain the same. Sweeping statements like &#8220;Trans women are women. Full stop&#8221; don&#8217;t accurately describe this. &#8220;A person got hormone therapy and surgery to feel more like a woman&#8221; is not the same as &#8220;that person now literally is a woman&#8221;.</p><p>- HRT, puberty blockers, and sexual reassignment surgery are not risk-free. HRT can increase the risk of blood clots, stroke, heart problems, high blood pressure, diabetes, and even infertility. Puberty blockers like Lupron can cause mood disorders, depression, and osteoporosis. SRS is even more risky. Surgical complications can include loss of erotic sensation, wound breakdown, and even tissue death. One survey found that 54% of vaginoplasty patients had pain requiring medical care two years later, and 64% of phalloplasty patients had complications like device malfunction or dislodgement. Genital surgeries are permanent and irreversible, and other surgeries like mastectomy and facial feminization surgery are very difficult or impossible to reverse fully. Anyone who undergoes such treatments should be made fully aware of the risks, and should only proceed if it would have a positive impact on their dysphoria outweighing the negative impact of the treatments.</p><p>- As important as freedom over your body is freedom over your speech. I am wholly opposed to laws requiring people to use the preferred name and/or pronouns of trans people lest they be fined and/or face administrative penalties, as some jurisdictions (Canada, California, NYC, the UK, Scotland) have enacted.</p><p>- Many people have taken a repressive attitude towards research on the matter, claiming that it is "settled science&#8221;, which it certainly is not. Research into some areas is suppressed for fear of &#8220;transphobia&#8221;, such as Ray Blanchard&#8217;s transsexualism typology, and the idea that some trans women are autogynephilic - they experience sexual arousal at the idea of being a woman - and are not purely dysphoric. The focus of research should be on helping dysphoric people get the care they need, not confining the search space to one dominant theory.</p><p>- There has been a dramatic rise in the amount of children identifying as transgender. Some, though certainly not all of them are dysphoric. Since as long as children have existed, they&#8217;ve been confused about growing up, and that confusion has manifested itself as &#8220;phases&#8221; - an emo phase, a communist phase, a skater phase. Some kids, growing up in a very pro-trans environment, believe that they are trans not out of genuine dysphoria, but because their friends identify as trans/they find it cool or interesting/they have normal puberty-related body discomfort. The way kids&#8217; phases are typically dealt with is by letting them explore it and likely grow out of it, but in a lot of cases, the focus is placed on &#8220;affirming&#8221; and children are encouraged to identify as trans, and even prescribed HRT and puberty blockers even when they are not truly dysphoric. This is made worse by online communities like r/egg_irl and parts of Twitter, which act as though everyone who has natural doubts or questions about gender is trans and should start HRT. Given the risks and potentially irreversible consequences of HRT and puberty blockers, they should only be prescribed to minors if they truly have dysphoria and if their parents consent. Given the total irreversibility of SRS and the developing brains of minors, people should only be allowed to receive it over 18. Almost all jurisdictions ban minors from getting tattoos, let alone SRS.</p></blockquote><p>I got many responses. Some were positive and kind:</p><ul><li><p>&#8220;Have very rarely read a write up this long on a such controversial topic where I agree with literally every point made, so well written!!&#8221;</p></li><li><p>&#8220;based&#8221;</p></li><li><p>&#8220;COMMON THEO WIN&#8221;</p></li><li><p>&#8220;based and enlightened centrist pilled&#8221;</p></li><li><p>&#8220;remarkably based. I co-sign at least like 95% of it, which on this topic rounds up. I'm impressed&#8221;</p></li></ul><p>Some were thoughtful criticisms. One person (<a href="https://x.com/warty_dog">Warty Dog, @warty_dog</a>) wrote:</p><blockquote><p>I think ur wrong on sex and gender</p><p>1) the tally is complicated but HRT changes sex physiology substantially and I don't think it's natural to classify into 2 kinds with trans in their birth sex. the case is stronger for childhood transition. note also that the features you come in contact more in most context are the ones more changed</p><p>2) there are cultural components of gender that are not redundant with "personality". there is also something of a caste system where we treat the genders differently, "trans women are women" means trans women are female caste</p><p>3) "gender dysphoria" is a term from medicine (<a href="https://emojipedia.org/face-vomiting">&#129326;</a>). there is a deep subconscious part in the brain that tells you which gender/sex (?) you are, and it's sometimes not congruent with your body and role in society. also, it should be included in the tally of biological differences</p></blockquote><p>Another (<a href="https://x.com/liz_love_lace">Liz Lovelace, @liz_love_lace</a>) wrote: </p><blockquote><p>re: the trans manifesto</p><p>you got it very right!</p><p>i basically agree with everything you said</p><p>i do have some interesting thoughts to add</p><p>1) trans people and lefties really like the talking point "trans people kill themselves if they don't transition and/or if they're not accepted by society", which, imo, is the leading cause of trans people killing themselves</p><p>not the things, the *talking point* is what makes so many trans people kill themselves, especially teenagers. When one's whole community says that they should be suicidal, they become suicidal</p><p>2) gender dysphoria is kind of an outdated way to think about trans people</p><p>it's real, sure, but pushing "gender dysphoria is a very real disorder that just affects some people and the only cure is transition" was always just rhetoric to convince normies that trans people are real</p><p>i never had dysphoria, i just thought "wouldn't it be neat to be a girl" and then became a girl, and yeah it's neat as fuck</p><p>3) the way i view "gender" is kind of the "archetype" that someone wants to belong to</p><p>there's value in belonging to a legible cluster of people, like "guy" or "girl" or "nonbinary", but you could think about it more granularly, like "butch lesbian" or "buff guy" or "nerdy guy"</p><p>so yeah good job, i'm impressed that you're so un-brainwormed about this</p></blockquote><p>Then, there were various low-tier criticisms from both sides.</p><ul><li><p>&#8220;Lmao yall take yourselves soooo seriously, leave trans people alone you hateful dweebs&#8221;</p></li><li><p>&#8220;Everything in your "However" section is incompatible with the lies that transpeople have to tell themselves in order to not freak out over what they've done to their own bodies.&#8221;</p></li><li><p>&#8220;So should men be allowed in women&#8217;s bathrooms? Why is it civil or common courtesy to lie to someone&#8217;s face?&#8221;</p></li><li><p>&#8220;Lol I'm not racist, BUT.. Stfu you coward&#8221;</p></li></ul><p>I find myself linking back to this post more than any other long-form post I&#8217;ve written. I think this is because it represents something I think is highly undervalued: taking a complex and nuanced issue and writing out a complex and nuanced take on it<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><h3>Write More Specific Takes</h3><p>Most issues, especially controversial ones, are pretty nuanced. Nuanced issues, like trans<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>, tend to be composed of a series of interconnected issues, rather than a single issue: are trans women women? what is a woman? should trans women be allowed to play in women&#8217;s sports? Generally, one political tribe tends to pick a specific list of opinions (trans women are women, a woman is whoever identifies as one, trans women should be allowed to play in women&#8217;s sports) and the opposing tribe picks a list of the opposite opinions. If you have some opinions from List A and some from List B, it&#8217;s hard to engage with either side. The solution is to write out a full list of your opinions from both sides, as a sort of position paper. This helps:</p><ol><li><p>Clarify your own thoughts on the matter. In the process of writing my piece on trans, I learned about Blanchard&#8217;s transsexualism typology in depth and read papers on criminality, neuroscience, and the risks of hormone replacement therapy. This is a valuable intellectual exercise just for the sake of it.</p></li><li><p>Move the world towards a more nuanced and accurate understanding of a complex and controversial topic for which neither tribe&#8217;s premade set of answers is sufficient, and</p></li><li><p>Produce valuable intellectual output publicly on the Internet, which not many people have, as something you can point to to demonstrate your intelligence, knowledge, thoughtfulness, writing ability, etc.</p></li></ol><p>What you write doesn&#8217;t even have to be wishy-washy acceptable centrism. If you frame your beliefs in a polite and persuasive enough way, even beliefs that would be considered radical in some circles can pass right through the low decoupler&#8217;s mental filters. Famously, Curtis Yarvin <a href="https://benthams.substack.com/p/mencius-moldbug-is-not-not-a-blithering">writes so well</a> that he can get away with endorsing normally scary concepts, like dictatorship.</p><p>The next time you feel the need to write about a controversial issue<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>, write out something like my trans manifesto. It can be the entire piece, or the skeleton of something that resembles a traditional essay more, or even just an internal guide for you to keep in mind while you write, but it should be clear, it should be detailed, it should be polite, and it should be nuanced. The epistemic commons must be maintained, and you should help maintain them however you can.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.theojaffee.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Theo's Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>If I were to write this post again, I&#8217;d make the following changes:</p><ul><li><p>I no longer think gender is a meaningless concept. It&#8217;s not totally detached from sex like pro-trans radicals claim, but it isn&#8217;t fully baked into biological sex either.</p><ul><li><p>I never fully explained <a href="https://en.wikipedia.org/wiki/Blanchard%27s_transsexualism_typology">Blanchard&#8217;s transsexualism typology</a>, which separates MtF trans women into two broad categories.</p><ul><li><p>The first are homosexual transsexuals, or HSTS. They tend to transition earlier in life, be attracted primarily to men, and have more feminine personalities and characteristics.</p></li><li><p>The second are autogynephiles, or AGP. They tend to transition later in life, be sexually attracted to the idea of themselves as women, be attracted primarily to cis women or other trans women, and display more masculine characteristics.</p></li></ul></li><li><p>Others have criticized Blanchard&#8217;s typology for being overly strict, but I find it a more useful model of reality than the left-trans-orthodox idea that all trans people are cognitively indistinguishable from their target sex and were literally born in the wrong body.</p></li><li><p><strong>[EDIT (5/6/2025): The above should not be read as a blanket endorsement of Blanchard&#8217;s typology.</strong> I merely find the AGP/HSTS classifications to be better models than the &#8220;all trans women are literally women trapped in men&#8217;s bodies&#8221; model. Unstated but no less important is that I also find it a more useful model than "all trans women are men who are just pretending". The main takeaway here is that <strong>there need to be better models of trans</strong>, and Blanchard's, though very far from perfect, is still among the most well-thought-out.]</p></li><li><p><a href="https://pubmed.ncbi.nlm.nih.gov/21467211/">Research</a> <a href="https://pubmed.ncbi.nlm.nih.gov/27255307/">has</a> <a href="https://pubmed.ncbi.nlm.nih.gov/7477289/">shown</a> that trans people do <em>not </em>always have brain structures matching their target sex. FtMs and androphilic (HSTS) MtFs tend to have brain structures resembling their target sex, but gynephilic (AGP) MtFs tend to have brain structures resembling their birth sex. This is especially relevant given that most trans activists and trans AI researchers lean closer to AGP than HSTS. A lot of their personality can be explained by their brain structures resembling men more closely than women. Trans activists are often aggressive, and trans AI researchers are often nerdy and slightly autistic in the way that male AI researchers are.</p></li><li><p>However, some people genuinely have brain typology - &#8220;gender&#8221; - that does not match their biological sex, and my original post failed to account for this.</p></li></ul></li><li><p>I omitted two of the more substantive trans issues that show up in actual politics: sports and bathrooms. On sports:</p><ul><li><p>The purpose of separate women&#8217;s sports in general is to ensure they aren&#8217;t outcompeted by men, who are, on average, much stronger, faster, and more physically capable. There are almost no cases <em>ever</em> of women being able to compete with elite men in sports.</p></li><li><p>Many trans women, such as swimmer <a href="https://en.wikipedia.org/wiki/Lia_Thomas">Lia Thomas</a>, maintain significant advantages over cis women in sports even after beginning hormone replacement therapy.</p></li><li><p>Generally, people should compete in the leagues in which they are competitive. This is the entire point of women&#8217;s sports: elite female athletes are not competitive in men&#8217;s leagues, but they are competitive in women&#8217;s leagues. If trans women are competitive in women&#8217;s leagues, they should play in women&#8217;s leagues. If not, then they shouldn&#8217;t.</p></li></ul></li><li><p>On bathrooms:</p><ul><li><p>Trans women are much more likely than cis women (about 6x) to be <a href="https://journals.plos.org/plosone/article/file?id=10.1371%2Fjournal.pone.0016885&amp;type=printable&amp;utm_source=chatgpt.com">convicted</a> of violent or sexual crimes, and trans women prisoners in England and Wales are about <a href="https://committees.parliament.uk/writtenevidence/18973/pdf/">20x more likely</a> than cis women prisoners to be serving a sentence for a sexual offense.</p></li><li><p>However, I don&#8217;t buy into the right-wing <a href="https://en.wikipedia.org/wiki/United_States_Congress_transgender_bathroom_dispute">moral panic</a> about trans women being perverts who creep on women in women&#8217;s spaces. Individual cases of this are so rare that they often make national news, and it would be wrong to pre-emptively ban an entire category of people from public facilities merely because they commit certain crimes at a higher rate.</p></li><li><p>Voyeurism, indecent exposure, sexual harassment, and sexual assault are already illegal. If the goal is to prevent voyeurism, indecent exposure, sexual harassment, and sexual assault, no additional laws specifically targeting trans people are necessary.</p></li><li><p>Given our culture&#8217;s rather strong bathroom norms, the reasonable standard to me is that people should use the bathroom of the gender they pass better as, though this should be norm rather than law.</p></li></ul></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>As I was writing this article it occurred to me that this is basically what most <a href="https://www.lesswrong.com/">LessWrong</a> posts are. Once again, the rationalists are right. LessWrong is maybe the single most rational and least vitriolic forum on the entire Internet, and it&#8217;s worth thinking about why. Sure, it helps if your community is entirely composed of extremely high-openness 130+ IQ autists with extreme attention to detail, but LessWrong&#8217;s norms about stating your positions in extreme detail - like my trans piece - certainly play a major part.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>There are plenty of other topics I could write these position pieces on. Immigration is one where I find the left, generally, far too permissive, and the right alternatively too restrictive everywhere or too restrictive in some areas and too permissive in others. Local governance is another - many American cities could benefit tremendously from a more permissive approach to housing construction and a more restrictive approach to law enforcement. Another is the Israeli-Palestinian conflict. While I certainly lean pro-Israel, I went too far in my <a href="https://www.theojaffee.com/p/why-i-support-israel">previous piece</a>, which reads like it was written by a lawyer trying to justify everything Israel has ever done, rather than being a rational analysis of different aspects of the situation. If I wrote it again, I&#8217;d write it like I wrote my trans post.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>And boy is this one controversial. Noah Smith recently <a href="https://x.com/Noahpinion/status/1916908354288668789">tweeted</a>: &#8220;I can shout from the rooftops that Dems should moderate on immigration. I can denounce the Palestine movement all day long. But when someone said Dems need to moderate on trans issues, and I quote-tweeted them with "maybe", a <a href="https://bsky.social/about">Blutarsky</a> mob came after me and I got kicked out of a local rabbit-themed group chat here in San Francisco -- the first time I ever suffered offline consequences for a social media post in my life.&#8221;</p></div></div>]]></content:encoded></item><item><title><![CDATA[Podcast: Alok Singh]]></title><description><![CDATA[AI, Math, Philosophy, and Erewhon]]></description><link>https://www.theojaffee.com/p/podcast-alok-singh</link><guid isPermaLink="false">https://www.theojaffee.com/p/podcast-alok-singh</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Wed, 30 Apr 2025 02:28:15 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/162480793/7c28a354fca35adbec27e3432c8bd479.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Alok Singh leads research on Lean at Max Tegmark&#8217;s Beneficial AI Foundation, and writes about mathematics, history, and other cool things at alok.github.io.</p><h3>Chapters</h3><p>0:00 - Intro<br>1:12 - Typing<br>8:45 - Elon&#8217;s demo day<br>22:42 - Animation, discrete vs continuous<br>29:04 - Number systems<br>35:26 - Nonstandard analysis<br>43:04 - Reasoning models and o3<br>50:45 - Fiction<br>55:48 - o1 and Linguistics<br>58:50 - Hyperfinite sets<br>1:11:58 - AI for math<br>1:16:01 - The field with one element<br>1:23:17 - Lean<br>1:31:53 - Lean for formally verifying superintelligence<br>1:36:03 - Ayn Rand<br>1:47:46 - Erewhon<br>1:57:56 - Proto-Indo-European<br>2:03:18 - More Erewhon<br>2:14:41 - Butler and Kaczynski</p><h3>Links</h3><ul><li><p>Alok&#8217;s Website: <a href="https://alok.github.io/">https://alok.github.io/</a></p></li><li><p>Alok&#8217;s Twitter: <a href="https://x.com/TheRevAlokSingh">https://x.com/TheRevAlokSingh</a></p></li><li><p>Beneficial AI Foundation: <a href="http://beneficialaifoundation.org/">http://beneficialaifoundation.org/</a></p></li><li><p>Lean: <a href="https://lean-lang.org/">https://lean-lang.org/</a></p></li><li><p>Transcript: <a href="https://www.theojaffee.com/p/podcast-alok-singh">https://www.theojaffee.com/p/podcast-alok-singh</a></p></li></ul><h3>More Episodes</h3><ul><li><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p></li><li><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p></li><li><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p></li><li><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p></li><li><p>My Substack: <a href="https://www.theojaffee.com">https://www.theojaffee.com</a></p></li></ul><h1>Transcript</h1><p>Theo Jaffee </p><p>Sure, yeah, could just start with the informal beginning chat.</p><p>Alok Singh </p><p>Yeah, everyone loves a cold open.</p><p>Theo Jaffee </p><p>Yeah. So you mentioned like you have a bunch of stories that you wanted to talk about. Yeah, you said typing, Elon's demo day, coral and the bat.</p><p>Alok Singh </p><p>I did</p><p>Okay, well, typing.</p><p>I had typing class when I was like, I don't know, eight or something, like with a keyboard. And did you guys have that?</p><p>Theo Jaffee </p><p>Yeah, we did. I didn't retain much of it. I typed with like four or five fingers. It's probably bad. I should learn how to type.</p><p>Alok Singh </p><p>Well, I didn't retain</p><p>any of it because what happened was one day the teacher saw me looking down and so she reset my progress to the very beginning in the typing tutor thing we had and I found that so Demoralizing that I just like gave up on it and just type like this for the next like 20 years Not 20 until I was about 20 Then I started doing programming and like four months of programming it like 10 words a minute is just terrible</p><p>And this guy, Steve Yegge, has an article, Programming's Dirtiest Little Secret, where he quotes a section from Reservoir Dogs of Mr. Pink talking about how he doesn't tip and that his advice for waitresses is, learn to fucking type. Yegge always says things in a roundabout way, but it resonated with me. And at the time I found an old book.</p><p>in the 60s called LSD the problem solving psychedelic was talked about a guy using it for typing and then I didn't do it for one day on a Sunday.</p><p>Yeah. then like, I couldn't type. decided I would learn Colmak instead of QWERTY or Dvorak because the reason is really silly. It's just that I'd seen a cute girl at a hackathon who I've never seen since using it.</p><p>No deeper reason. Like maybe it is ergonomically better and all, but that wasn't the reason at all. And within one day I taught myself to type. went from, well, not zero, but 10 words to about 70, or rather 30, just going with some typing tutor.</p><p>Like I think about substance that I didn't take is that like a kid, they just like completely remove the feeling of as an adult of like you fuck up and you feel bad for a second, but you just don't. You just notice I made a mistake and then you fix it and you just keep moving on. And there's no moment of pausing like that. And especially for something like typing, which is hundreds of little mistakes to begin with anyway, it adds up. So by the end of like eight hours of typing, I was at 30 words a minute, which is</p><p>pretty damn good for one day. But then I had these like weird dreams that night of</p><p>Like in, I think, Call of Duty Black Ops, maybe? One of the Call of Duty games, there's some character, Mason, who sometimes hallucinates these big red numbers that look like they're splashed in blood, like graffiti across his visual field. And I had dreams of letters that night, my fingers kept twitching.</p><p>And then I detested out, I didn't type at all for a week, which after a lifetime of not doing it was pretty easy. Dude, used to have, I used to like handwrite all my school assignments and talk about a typical mind fallacy. I wondered why people typed stuff that I was so deep in the stupid hole that I created for myself that I forgot the typing is faster.</p><p>Theo Jaffee </p><p>Back in like third grade, they made us learn cursive and even then I remember thinking like, I ever gonna use this ever? Nope, turns out, nope, I never use it. I type everything. When I do have to handwrite stuff, I found that my handwriting has weirdly like converged towards my dad's handwriting, kind of for no reason. Like my handwriting now just looks a lot like my dad's handwriting, even though I didn't try to emulate it. So maybe handwriting is just genetic.</p><p>Alok Singh </p><p>Yeah, my dad's handwriting is as ugly as my own.</p><p>and my brother's too. My mom's handwriting is like only a little better. But anyway, after a week of not typing, which is easy after a lifetime of not typing, then I did an exercise again and I could do, I was at 70 words a minute, which was very surprising.</p><p>So for the cumulative total of eight hours of typing practice, I was up at 70 words.</p><p>Theo Jaffee </p><p>Wow, good exponential improvement.</p><p>Alok Singh </p><p>Now it's like stabilized</p><p>somewhat to like 80 or 90 and maybe another session like this and it could go over 100. But this is pretty fast, so I'm reasonably satisfied.</p><p>Theo Jaffee </p><p>Hmm. Wow, I guess I'm kind of proud of myself now that I can type like 120 with like five or six fingers.</p><p>Alok Singh </p><p>So now type on like this keyboard. I started out when typing using just a MacBook keyboard and to prevent myself from cheating that one day I took a bunch of electrical tape and I taped over all the keys and then midweek I spent like three hours scraping off the electrical tape with when I closed the lid the heat just sort of like fused it to the keyboard and the sticky stuff started to leak out and got all gross.</p><p>then I got this, not for the ergonomic reason, but it is ergonomic. Or actually, I'd rather I got this.</p><p>And then they released a wireless version a few years later.</p><p>Theo Jaffee </p><p>Yeah, I just use a pretty standard Black Widow V3, Razer Black Widow V3 gaming keyboard. And really I only use this for gaming, pretty much. Like, I do almost all of my work on my laptop and not on my expensive gaming PC, which I basically use as a Fortnite machine. Though maybe I should work on it more.</p><p>Alok Singh </p><p>and</p><p>I play Fortnite on the Vision Pro.</p><p>Theo Jaffee </p><p>Really? With a MacBook?</p><p>Alok Singh </p><p>You don't need a MacBook, you can do it. If you have internet that's good enough, like fiber or any 100 megabits per second is enough. You can get Nvidia GeForce now and like a PlayStation controller. Maybe you exclusively play with keyboard and mouse and you could probably set that up. But in my case anyway, I use the controller and then I can play it with the giant screen on the surface of the moon.</p><p>Theo Jaffee </p><p>Is GeForce now like cloud gaming? So is it slow as hell? Yeah. Yeah.</p><p>Alok Singh </p><p>Yes, that's why you need a good internet. No, that's why you need a good internet. With good internet,</p><p>the latency is really not that bad. I'm not such a pro gamer where a few extra milliseconds makes an enormous difference to me, which I'm fine with. I used to game a lot and then I gave it up.</p><p>Theo Jaffee </p><p>Yeah, I have honestly never really gamed that much, which might be surprising to people who know other aspects of my personality, but I don't know, I never got addicted to it. I never got one-shotted by Factorio, as they say. I do play like a reasonable amount of Fortnite, but that's like the only game I play.</p><p>Alok Singh </p><p>Fortnite and Smash Bros were the only video games I'd really played after high school. I played a little bit of one of the Zelda games one night where I spent like four hours on it at a party, but that was a one-off.</p><p>Theo Jaffee </p><p>So tell me about Elon's demo day. I'm curious about that.</p><p>Alok Singh </p><p>Also a story of not drugs. Yeah, yeah. Let's see. It was.</p><p>Theo Jaffee </p><p>SF people love their drugs,</p><p>Alok Singh </p><p>2020 maybe? It's when he first announced the robot. So I got invited to it randomly. I don't know, maybe they scraped LinkedIn for my email or something. And complete with this nice Uber ticket that's copped for.</p><p>So I spent the whole Uber ride, like I just noticed when we were, me and the driver passing by some of the billboards at SF, how the light glinted off some stuff. And a while ago, I started noticing like rainbows around lighted objects. Like it depends on the object. I think it's like a sort of a stigmatism. It's like a traffic light, like the red and the green. They're...</p><p>Theo Jaffee </p><p>I see streaks</p><p>in halos, but not rainbows.</p><p>Alok Singh </p><p>Well, traffic lights don't have rainbows. They have just one color and I'll see like a sort of truncated sphere of light around them. But like a car light from the headlights. Then I'll see a rainbow around it, actually several, but the first in like concentric spheres. But the first one is dramatically brighter than the others and it's rare to see the second one. And I extrapolate that they go out essentially to infinity, but drop off in intensity very quickly so you can't see the majority of them.</p><p>and</p><p>This just got me down like a train of thought about the electromagnetic field and how the four fundamental interactions, electromagnetism is the main one where we can apply art. Gravity is so weak, like I think 34 orders of magnitude weaker than the weak force. So gravity only really matters mostly at the biggest scales, which we generally don't build things in yet. Fingers crossed on that one.</p><p>But unless you're building like a superstructure or trying to detect, yeah, like little waves, gravity generally doesn't matter all that much. The strong and weak forces were not even discovered until the mid 1900s, since they acted atomic scales and are also outside of nuclear engineering, mostly inaccessible for everyday experience, which leaves electromagnetism to explain like basically everything.</p><p>A good deal of chemistry at large is from electromagnetism, material properties of wire things, strong or soft or hard, etc. Electromagnetism.</p><p>and then this led me down some other train of thought.</p><p>But then the ride arrived in their deer creek, deer park, whatever, their office in Palo Alto. And I'd just been talking to the driver a bit at this point. And he asked jokingly if I could get him in. I decided on a whim, you know what, I'll just try. I'll just ask them. And then when I told him that, he looked genuinely afraid. And I just felt this impossibly large gulf between us. And I felt really sad. Like there was some</p><p>void that he could not cross and not because they would bar him at the gate but like more.</p><p>I swallowed it and just walked through. Still think about that.</p><p>Theo Jaffee </p><p>interested.</p><p>So the Elon's Demo Day story was not actually about the Elon Demo Day.</p><p>Alok Singh </p><p>No, no, I'm not even close to done. This was just like part of the lead up, it was a whole thing.</p><p>Theo Jaffee </p><p>&#8275;</p><p>Alok Singh </p><p>Then.</p><p>The demo itself. Well, it hadn't started yet. So everyone just sort of milling about. was relatively early. So there's maybe like, I don't know, 20 people out of a few hundred and</p><p>Look, I say hi to a guy who doesn't recognize me at all and I won't name him.</p><p>and</p><p>was really shitty. So like, it just had no calories. So I saw the the staff, whatever the polite word for them as the helpers. What some our grandparents would have called servants, but we don't say that because that's We're</p><p>Theo Jaffee </p><p>You're allowed to say server,</p><p>but not servant.</p><p>Alok Singh </p><p>That's true, actually. The servers and security staff, whatever the staff, they had some Domino's pizza and I was starving. I asked for some, I offered to pay, but they just like gave me some. And then they spent the whole time when I was eating, doing what</p><p>was like a fan of gossip, totally unknown to me, where they basically identified each other by astrological signs and then talked about like unfriendly people having Pisces eyes and how they could tell.</p><p>Pisces eyes comment, they seem like weirdly at pains to say it in earshot or just out of earshot of me. I wonder if they meant me in that case, but I have no idea. I'm a Capricorn anyway.</p><p>Theo Jaffee </p><p>Yeah, I was gonna ask.</p><p>Alok Singh </p><p>Yeah I was born in January after all, like early January.</p><p>But then I started just wandering around the parking lot, like up and down, just idly thinking.</p><p>about well data gathering and the</p><p>It doesn't make sense inside a head, but just expanded out just sound like rambling nonsense. So stain of thought or babbling to use modern terms. was like along the lines of this combination of</p><p>Theo Jaffee </p><p>end of thought.</p><p>Alok Singh </p><p>how the concrete world contains the abstract world.</p><p>It has all the abstract information is in the concrete world, but there's more besides like specific incidental facts and not just necessary ones. And then I just thought Tesla should make a robot for data gathering. That it would be really expensive, but they should bite the bullet because they were the one company I thought could actually pull it off in the being controlled so top down.</p><p>They could have one guy that would just push on it because people have tried humanoid robots before, but everyone has failed in this because they haven't had the commitment to go to insane lengths, which is necessary. Like everyone sort of backs off halfway. They think they want a humanoid robot, but then in their efforts to build a new earth, their vision of a new heaven dims and they back off from the humanoid robot to like a factory one or some specialized thing that's not humanoid anymore.</p><p>Theo Jaffee </p><p>think this is changing now.</p><p>Alok Singh </p><p>Yeah, but this was five years ago, so.</p><p>Theo Jaffee </p><p>And what do mean by a robot for data gathering?</p><p>Alok Singh </p><p>The</p><p>Because a robot that's like a human has basically the same interface as we do to gather data that we care about minus smell for now.</p><p>Theo Jaffee </p><p>and</p><p>maybe some other things.</p><p>Alok Singh </p><p>taste, the more continuous senses, that's true. But it's still better configured since the world has been organized by us.</p><p>and made legible by that organization. Like so much of the point of like ordering stuff is to make it legible to us. And the</p><p>A robot that's shaped like us and that has to interact by basically the same means, although hopefully more competently, is in a better position to access it and can do all sorts of random idiosyncratic tasks, well, as many as we can do, that a car or a squirrel or a pick and replace robot just can't do.</p><p>and that</p><p>between language and, what's his name, a continuous world, which I'll just say vision, that's a short form for all that. You've probably heard me rant about discrete and continuous many times.</p><p>Theo Jaffee </p><p>Many times, yes.</p><p>Alok Singh </p><p>And while the continuous side seems to be the harder one, at least for machines, like even now, image models have some very impressive stuff, but people have pushed on text much more. I mean, I can see why it's got the same advantages of as discrete stuff, that equality is a meaningful question for it. And it's easier to evaluate if something is right or not. But most of the world is still in this like physical continuous realm.</p><p>and going back to Tesla, well, they have their cars and of course Waymo has them too, but Waymo doesn't seem like the kind of company that was or is going to build robots to do things other than drive.</p><p>Theo Jaffee </p><p>Yeah, not likely.</p><p>Alok Singh </p><p>And while I'm wondering,</p><p>and then while I'm just like wandering around this parking lot up and down thinking these, I realized, wait, shit, the demo's already started. And it's apparently too full. And I don't want to just like stand outside and look at the screen like some asshole, cause I could just like watch it later. So I continue wandering around the parking lot. And then I find out after the whole day is done, that actually the thing he announced with the robot.</p><p>Theo Jaffee </p><p>Wow.</p><p>Alok Singh </p><p>was quite a feeling. Also this like very visceral experience of like that phrase the child is the father to the man. Have you heard this one?</p><p>Theo Jaffee </p><p>I but I forgot the context.</p><p>Alok Singh </p><p>Like, the child is father to the man because the things you do in the past affect the things you do in the future, in short. And in this way, the stuff you do as a child affects the things you can do as a man, and in that way is their father. Also the mother, but whatever.</p><p>I just had these visual hallucinations of the many possible arcs.</p><p>of like my own, what's the word, my world volumes or possible world volumes.</p><p>Theo Jaffee </p><p>What do mean?</p><p>Alok Singh </p><p>world volume is like</p><p>The world is basically your timeline. Volume, because it's like 3D moving through space, moving through time. So your world volume is essentially all the set of states you'll ever occupy.</p><p>FaceTime. And at least assuming for the moment that it is changeable. It may well not be, but whatever. For the visualization, it didn't matter.</p><p>suddenly I felt much less here and rather spread out throughout all of it. Like I was a steward for my future self and that's like every moment is like a fire brigade with the water bucket chain handing it off to the next person, which is me in the next instant and just getting handed a bucket for me in the last instant.</p><p>Theo Jaffee </p><p>And you got all that from thinking about humanoid robots at Tesla?</p><p>Alok Singh </p><p>It just came around at same time. It was a big swirl of thoughts, like a lightning storm. That's not so uncommon.</p><p>That was a very freeing thought.</p><p>Suddenly I felt much nicer to myself. Not such an inner critic.</p><p>if it was directly related to the robots, only incidentally. It also made the overall experience maybe more interested in hardware.</p><p>An interest that I've still only done a little bit with, because stuff takes a lot of time. But it was one of the examples I like to give people about why lean is cool. Not the drug kids.</p><p>Theo Jaffee </p><p>I love lean!</p><p>Alok Singh </p><p>Lean for real is a Playboy Cardi song, right? just whatever. Because there's a song by Travis Scott called Fiend where a friend of mine sent me Fiend, but it's Indian. So the background beats is like some Indian thing playing and it was awesome. But then there's the joke of what was it?</p><p>Theo Jaffee </p><p>Maybe? I don't know. I'm not exactly a Cardi fan. Let's do a few songs.</p><p>Whole lotta red. Sky.</p><p>Alok Singh </p><p>Dravinder Srinathan featuring Prabhakar Karthik. Travis Scott featuring Playboi Carti. Prabhakar Karthik, though, I don't know, that just triggered the grooves in my brain and I found it endlessly funny. I'll send you the link, actually. And you can listen to it later.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Hahaha</p><p>That is funny.</p><p>or like Pavitr Prabakhar from the Spider-Verse movie.</p><p>Alok Singh </p><p>God. I don't like the Spider-verse movies. That's like some smarmy cunt who can hear all these things about how he's making things worse, but then he says, no, I'm a do-me. Literally what he says. And that just annoyed me so much. Like the vampire Mexican Spider-Man that everyone hates on. No, he's a good guy.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>Do some stupid Deus Ex Machina so he can have it all, of having his daddy and his mommy and his timeline and everyone else's timeline. But it's still bullshit.</p><p>Theo Jaffee </p><p>Mm, did you like the art at least?</p><p>Alok Singh </p><p>Meh. Space elevator is cool.</p><p>Theo Jaffee </p><p>No way.</p><p>Merely meh, it was probably like the best, you know, innovation in like animated filmmaking in the last like 20 years.</p><p>Alok Singh </p><p>I remember one thing from</p><p>it, actually, two scenes I remember very specifically, and the overall feeling of this contrast between, like, obviously drawn and then hyper realistic. The two scenes that stood out for me as hyper realistic is one, a scene of a shopping mall with a glass bridge, and it's shot from like head on and then the sun is rising over it. And for that scene, just for a moment, looked like reality. And also the scene where they're jumping around cars, it's night and</p><p>You can see the lights of the cars going down and you can see the subsurface reflection into the tarmac of the road. And that also just looked real. For like that fast. And I think about that a lot.</p><p>Theo Jaffee </p><p>Yeah.</p><p>I mean, I think it's</p><p>a much needed innovation from the sort of like 3D Pixar style that had just like dominated basically all animation for like the last 15 years. Like when was the last, yeah, when was the last time you saw a compelling 2D animated movie before Spider-Verse?</p><p>Alok Singh </p><p>the Eternal Virgin style.</p><p>Plenty of Japanese movies, but Japan's special.</p><p>Theo Jaffee </p><p>Sorry,</p><p>compelling 2D American movie before Spider-Verse. There's plenty of Japanese ones.</p><p>Alok Singh </p><p>When did the white man</p><p>do 2D well? That's a better question.</p><p>Theo Jaffee </p><p>Yeah, the Japanese do it way better than the white man. My revealed preference is that most of the animated...</p><p>Alok Singh </p><p>Maybe the spine of night.</p><p>I didn't know the Spine of Night looked like dog shit. Everyone looks like they're moving with broken limbs in soft body physics where they have no bones. Which is a contradiction, but whatever.</p><p>Theo Jaffee </p><p>Most of the animated content I've watched has been Japanese animated content because it's good. They do a good job. It's aesthetically pleasing. The stories are good. I guess another example of like innovative sort of 2D animation or innovative animation in general was Kubo and the Two Strings, which was Japanese inspired. That was American. It was okay. I sort of remember the story being pretty good. The animation itself, not my favorite.</p><p>Alok Singh </p><p>Yeah, I saw it.</p><p>Wasn't the one also like clay nation</p><p>though?</p><p>Theo Jaffee </p><p>I just don't like claymation. Maybe I'm biased, but I just don't think that it's, you know...</p><p>Alok Singh </p><p>It's</p><p>like one handed pottery. It's really impressive you can work with one hand, but no, it doesn't look that good.</p><p>Theo Jaffee </p><p>It's yeah, it's technically impressive that you can reshape clay tens of thousands of times, but it just looks kind of creepy. Even good clay nation movies, like there are good clay nation movies like Coraline. Was Coraline clay or something similar to that?</p><p>Alok Singh </p><p>Actually, yeah, it's...</p><p>You can't look at that part, think so.</p><p>stop motion.</p><p>Theo Jaffee </p><p>Coraline wasn't literally claymation, but it was that sort of vibe.</p><p>Alok Singh </p><p>Well, if stop motion, mean, claymation and stop motion usually go hand in hand because you kind of need it because if you're live sculpting something. &#8275; one movie I liked quite a lot, The Peasants. It's this Polish movie. It's hand drawn.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Technically, isn't all animation</p><p>stop motion?</p><p>Alok Singh </p><p>We can talk to screen. Continuous, but maybe wait a bit.</p><p>Theo Jaffee </p><p>That's, yeah,</p><p>that's a very good discrete and continuous top.</p><p>Alok Singh </p><p>Yeah, that's one where often, even when like the object of our concern is ultimately discrete, and our original conception of it is discrete, it's still profitably round tripped through the continuous. Like of, okay, a movie, like The Peasants, the person, the movie is hand drawn. And everything, looks like diffusion and that everything is literally flowing. Like if you watch every moment, because it's</p><p>is hand drawn, there is no two scenes, even when they're standing still, are actually the same unless they're reproducing the exact per-stroke frame the same, which they aren't. So every moment is flowing, not flickering, because it's not like little points of light like a kid's memories.</p><p>Theo Jaffee </p><p>is.</p><p>Is our perception of time discrete or continuous?</p><p>Alok Singh </p><p>continuous. It's like one of those things that maybe it's ultimately discrete, but is thought of as continuous. okay, like, draw, like in drawing that movie, while people are drawing, and that certainly feels, at least each frame feels pretty discrete to them, because they have to draw the damn thing themselves. And then there's the ultimate thing kind of is discrete when you think about it for a second. But then the intermediate, and what they're shooting for is the illusion of continuity.</p><p>Theo Jaffee </p><p>Hmm. Sometimes you can perceive it. On ones vs. on twos.</p><p>Alok Singh </p><p>Or even a more. Yeah, a more mathematical example</p><p>would be well, like things with limits like say the Taylor series of E to the X, where you can start with a continuous compound interest formula of one plus X over N all to the power of N limit as N goes to infinity. Well, that's N many steps discrete. But then it's idealized as being some continuous formula, E to the X, which you take a derivative of.</p><p>to get the Taylor series and then to compute it while you truncate the Taylor series. So you started with discrete with this like to the power of n formula. Then you just lift it to the continuum and you work with all these nice properties, including the shifting property within its binomial series, which lets you get this X to the n over n factorial Taylor series. And then you go back down to discrete by chopping the Taylor series to some finite term. Usually like five or six terms is good enough.</p><p>to get a very accurate approximation. Mercifully, that one converges very fast for almost all values.</p><p>Theo Jaffee </p><p>So yeah, let's talk about math. Do you think, you sent me this article that was like the most important, biggest breakthrough in the history of math was the development of Arabic numerals, which is, the numeral system.</p><p>Alok Singh </p><p>be like the Hindu guy and say Hindu numerals or Indian numerals. The Arabs did not do anything with the number system in terms of actually developing the numbers themselves. Like there's this annoying trend of like Indian guys that even Aldous Huxley noted of trying to claim that every invention is made by of course in a variably ancient India which I fucking hate but this one actually was.</p><p>Theo Jaffee </p><p>he was.</p><p>Alok Singh </p><p>We was yogis and shit. Well, not my ancestors. They were farmers. But this one they deserve credit for. Like the same article mentions that of number systems that the Indian one, the one we use now has three aspects, which all other civilizations, including present ones, had at the most two aspects of, and most were lucky to get even one, which is</p><p>Theo Jaffee </p><p>Yeah, on God.</p><p>Alok Singh </p><p>that the numerals have no intuitive association with their size. Like the numbers four and seven when you write them out, seven doesn't look bigger than four. Whereas if you did tallying, it definitely does. So the problem is then, if you don't do that, this is not a logical requirement, but it's a psychological one where invariably people will use like a dot or a line for one and then what do do for two? Well, two dots or lines. Yeah, but even, they cut it, son.</p><p>Theo Jaffee </p><p>Yeah, that's Chinese.</p><p>Alok Singh </p><p>Yeah, but after four, they cut it off and start using ones that don't look like anything special. Which is good. But most primitive civilizations, and Ifra's book goes into like painful detail about all the primitive civilizations and how they just end up in the local minima of tallying. Because tallying is a base one system and because all of like multiplications properties become degenerate because one is the identity of multiplication.</p><p>the fact that, okay, you can fit a thousand numbers into four base 10 digits, because log 10 is just lost.</p><p>that don't get this exponential compression. And so long numbers take a long time to write out a really long time, like 1000 tallies takes a while.</p><p>Theo Jaffee </p><p>Telling is still actually good for a handful of use cases.</p><p>Alok Singh </p><p>Yeah, a handful, up to five, like the fingers, a hand. That's the problem. That's the nice thing about it. It's like sugar.</p><p>Theo Jaffee </p><p>Yeah. Like at the gym I go to, they have a whiteboard.</p><p>Alok Singh </p><p>and sleeping with a hot girl without a condom is appealing in the short term, but it's got some problems in the long.</p><p>Theo Jaffee </p><p>I'll go.</p><p>Yeah. So do you think this was the biggest development in the history of math?</p><p>Alok Singh </p><p>Well, yeah, mean, even now, like if you ask people, well, what do you what math you use? Most people can. Well, I most of the can't answer at all, but if they've thought about a really long time, the only answer they can honestly give is like counting. Addition, maybe multiplication.</p><p>Theo Jaffee </p><p>Addition, did you addition?</p><p>Little multiplication.</p><p>Alok Singh </p><p>And like division is already getting beyond most people. Okay, here's the thing. If I randomly had uniform selected someone from the entire world over the age of 10, and they have to add one fifth and one seventh together correctly. And if they can do it, you get let's say 100K, but if they fail, you die, would you take it?</p><p>Theo Jaffee </p><p>No way.</p><p>&#8275; no, but that's, that's, that's adding fractions. That's not division.</p><p>Alok Singh </p><p>Uniformly random the whole-</p><p>store.</p><p>Theo Jaffee </p><p>Yeah, like division is like, okay, like let's say you have like eight oranges and you have four people that you have to distribute the oranges to. So how many oranges do you give each person? Like people sort of intuitively get them.</p><p>Alok Singh </p><p>Okay, how about if you have eight,</p><p>what if you have seven oranges and have eight people?</p><p>Theo Jaffee </p><p>Okay, whole number division.</p><p>Yeah, yeah, like most people can't do fractions in their head. Most people.</p><p>Alok Singh </p><p>Motherfucker.</p><p>Most people can't do fractions, period.</p><p>Theo Jaffee </p><p>Mm.</p><p>Alok Singh </p><p>Like, it's polite to pretend otherwise, but I don't believe it.</p><p>Theo Jaffee </p><p>So you think that the median person over the age of 10 in the world is capable of counting, they're capable of adding, basic multiplication and like maybe basic division of natural numbers and that's it?</p><p>Alok Singh </p><p>when they evenly divide things or are like very common fractions like a half and a fourth and even the fact that like one fourth plus one fourth is a half i wouldn't expect them to be able to arithmetically grasp that if i put like one fourth of something in front of them yeah but i wouldn't expect them to like know it in the same way they know like seven plus three is ten like that</p><p>Theo Jaffee </p><p>Maybe if they're cooking.</p><p>If they cook with recipes, they get it. Or like, this recipe calls for two tablespoons. Yeah.</p><p>Alok Singh </p><p>Do you know how to cook?</p><p>That's good.</p><p>Theo Jaffee </p><p>This recipe</p><p>calls for two tablespoons. How many teaspoons is that? &#8275; well, there's three teaspoons a tablespoon, so it's six teaspoons.</p><p>Alok Singh </p><p>my mom cook, my mom was like an engineering manager and even &#8275; sorry mom I shouldn't disparage my bloodline on tv like this.</p><p>Theo Jaffee </p><p>I don't know maybe maybe we're typical</p><p>Alok Singh </p><p>But no, I don't think the average</p><p>person can do fractions.</p><p>Theo Jaffee </p><p>Are we just typical mind fallacying here? Am I just typical mind fallacying here?</p><p>Alok Singh </p><p>I mean, I certainly am not, because with my typical mind, judging by the people I hang out with, is that the average person knows multivariable calculus, which is definitely not true.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>or like just basic real analysis, not even complex. would be.</p><p>Theo Jaffee </p><p>You think that the median person you hang out with</p><p>on a regular basis knows real analysis?</p><p>Alok Singh </p><p>No, but I think the person I hang out with has at least heard of it, which I don't think the median human have. At all. Not even close. I don't think even like tenth percent, 90th percent, whatever, the top 10 percent, the cutoff there has heard of it.</p><p>Theo Jaffee </p><p>I didn't even tell you I'm finally taking math again. I'm back on my math grind. I'm doing differential equations.</p><p>Alok Singh </p><p>Really? Like what?</p><p>Again, non-standard analysis, bro. It's the way. Worst thing about it is the name.</p><p>Theo Jaffee </p><p>What is non-centered analysis again?</p><p>Alok Singh </p><p>You can extend the number system one more time, which should be a familiar theme. Cause you know, when we're little kids or embryos, really, we know like one, two, three, and then eventually you learn, you can just keep counting. So infinity kind of gets tacked on then zero, the negatives fractions, actually fractions come kind of pre-built in the simpler ones anyway, negative numbers came later, but then like irrational numbers, we have a pre-</p><p>formal understanding of the real numbers, some idea of a number line has come from the very beginning, but not much mechanical understanding of it. Like they'd have an irrational number considering how many people still argue if pi is 22 over 7. Even my own dad. I had to explain to him what a transcendental number was and an irrational number.</p><p>Theo Jaffee </p><p>Isn't that like</p><p>easily like disprovable though?</p><p>Alok Singh </p><p>So.</p><p>Theo Jaffee </p><p>You can just Google what is pi and it's like a very long decimal and then you Google what is 22 over seven and it's a much shorter decimal that doesn't even equal pi after a certain number of decimal places.</p><p>Alok Singh </p><p>You can also</p><p>Google that Claude is better than GPT at stuff and yet how many people use Claude?</p><p>Theo Jaffee </p><p>Can you actually Google that? Let's see. Is Claude better than ChatGPT?</p><p>Alok Singh </p><p>kind of, as far as the answers go.</p><p>Theo Jaffee </p><p>Yeah, it seemed</p><p>to say yes, but like you'd have to know what cloud is in order to Google it.</p><p>Alok Singh </p><p>yeah.</p><p>Theo Jaffee </p><p>I have a stack. use ChatGPT o1 for math and, you know, advanced coding stuff. And I use Claude for everything else. Word, Selvers shape, rotator tasks.</p><p>Alok Singh </p><p>Right. Anyway, going back to what is non-center analysis. So we play the game of like completing the numbers and there's a practical point to each level where the point of zero is to like round out or to really make it possible to properly do addition because it's the identity of addition. And without zero negative numbers, which allow you to complete addition and finally give an answer to two minus three, which</p><p>Certainly I thought as a little kid it's impossible, you just can't do that. And now it's to me, it's like the act of identifying with addition and negation is just so intuitive that it's easy to forget that they're separate operations really. But without zero, the operation of negation doesn't even make sense because the defining property of a negative is that adding it to the thing it's the negative of is zero. A property pretty easily explained for fractions because the number one</p><p>the identity for multiplication happens to be like maybe the most intuitive number.</p><p>I if I'm like, the concept of one, it's just over. Before it even began.</p><p>Theo Jaffee </p><p>What?</p><p>Alok Singh </p><p>Like if a human cannot grasp at some like pre-formal level the concept of like one. Like if someone doesn't get that one plus one is two, I don't think you can teach them math. Luckily even animals understand this and infants do.</p><p>Theo Jaffee </p><p>I see.</p><p>Well...</p><p>Yeah, I mean, I would say the bar for not being able to teach someone math has got to be higher than that, right?</p><p>Alok Singh </p><p>Yeah, but if they can't get this, then they definitely can't get the rest of the edifice.</p><p>Theo Jaffee </p><p>What do you think the minimum bar is? Like what makes someone Turing complete for learning math?</p><p>Alok Singh </p><p>addition and multiplication.</p><p>Theo Jaffee </p><p>Yeah, that makes sense. What about negative numbers?</p><p>Alok Singh </p><p>practically speaking, but that's just like in addition, like addition properly grasped.</p><p>Theo Jaffee </p><p>I guess you don't even need to think of negative numbers as intuitive if you can just pretend that they are for a long enough time period. It shouldn't be that hard.</p><p>Alok Singh </p><p>Well, that's like that von</p><p>Neumann quote, you don't understand things, you just get used to them. Like, you can construct, for example, a rational number, an integer as a pair of naturals. In fact, an infinite set of a pair of naturals that all have the property that the first number minus the second number is the same negative, or they have the same difference. So if the number negative one would be identified with a one, two,</p><p>two, three, three, four, and so on as an equivalence class. It's like a formal construction that actually, if you can understand a negative number, can definitely understand, that doesn't guarantee you can understand the construction. And if you could understand the construction, but not a negative number before you saw the construction, you're like some weird mutant. Because I don't know of anyone who's been able to understand the concept of an equivalence class of an infinite pair of, set of pairs.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Alok Singh </p><p>to negative numbers before they even could understand what a negative number was just at an intuitive level at some point. But you like absorb them and now negative numbers probably feel like kind of familiar and at least sort of real or let's say actual.</p><p>Theo Jaffee </p><p>Well, it does kind of make</p><p>sense. Like if you have pairs that are like 1, 2, 2, 3, 3, 4, like you already kind of know that like the gap between each of those is 1, right? So if you subtract big number minus small number, you get that gap, that interval. But if you subtract small number minus big number, you get the same thing but negative. But that doesn't seem that difficult to understand.</p><p>Alok Singh </p><p>Yeah, and you didn't know that, then this construction would make no sense.</p><p>Yeah.</p><p>Now you're typical minding. Anyway, you can get integers as a pair of naturals by doing this construction. And then you can get a rational of rational as a pair of integers. So a quadruplet or a pair of pairs of naturals by decompiling it one level further that have that satisfy the property of rational number addition and multiplication, mostly addition one, because it's not obvious, which is why people fuck up fractions because people add like what's one half plus one half.</p><p>Theo Jaffee </p><p>Okay.</p><p>hair of naturals.</p><p>Alok Singh </p><p>two fourths.</p><p>Theo Jaffee </p><p>So what is non-centered analysis?</p><p>Alok Singh </p><p>It's another step in this completion process. Just to get rationals, you can, to be able to take a square root, need irrationals and there you're at the reals. Then if you want to solve polynomials and for do rotations as in particular with Euler's formula, you need complex numbers. But if you want to do calculus or analysis or differential equations, which is finally getting to that point, you need, or you end up crudely reinventing infinitely big and small numbers.</p><p>So it's the number system augmented with infinitely small numbers, infinitely big numbers, the regular numbers, and then the various combinations thereof.</p><p>Like you can pull up a graph if you look up like.</p><p>Theo Jaffee </p><p>So non-standard</p><p>analysis is like an umbrella term for calculus and diff eq</p><p>Alok Singh </p><p>way of doing it.</p><p>with this extension of real numbers. I mean, also complex numbers, the construction's pretty generic.</p><p>which is better than the limit approach, which is usually taught for many reasons, which I've went into on the internet and we'll go into some of, but it's like a whole long rant.</p><p>Theo Jaffee </p><p>Yeah, I mean, personally I'm excited for reasoners to continue to get good enough so that they can just teach me, like, real analysis. o1 pro might already be there.</p><p>Alok Singh </p><p>It doesn't let me send pictures. It'll send it to you on</p><p>Yeah, those you should just pay the 200 don't be such a cheapskate</p><p>Theo Jaffee </p><p>Yeah, I know. I actually have used it on my dad's account. Very good stuff.</p><p>Alok Singh </p><p>Well, for once your dad's the one who's not a cheapskate unlike you. That's a surprise. Like I couldn't get my dad even now to pay 20 bucks. Certainly not 200. The number of engineers who won't pay 200.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Yeah, I mean...</p><p>Alok Singh </p><p>animals, just godless men.</p><p>Theo Jaffee </p><p>Yeah, oh and Sam said that he's not raising the price on 03. So 03 and 03 Pro will continue to be 200 bucks a month. So just pure value add. We'll see how good 03 is. I don't know, like do you know anyone with safety testing access to it? I saw some people who say it's very, very good.</p><p>Alok Singh </p><p>I hurt.</p><p>out.</p><p>Like what, on Twitter or personally?</p><p>Theo Jaffee </p><p>people that I know personally on Twitter.</p><p>Alok Singh </p><p>Of course. Well, not yet. For 03mini. 03mini is the speed that excites me, but I think for me to get satisfied, I will have to try the full thing. So that has what I really want. Like pushing the edge of...</p><p>Theo Jaffee </p><p>The full, full thing is</p><p>like a gazillion dollars.</p><p>Alok Singh </p><p>If can write me a paper. Because I've got ideas.</p><p>Theo Jaffee </p><p>Well, it costs</p><p>like 1.5 million just to solve Arc AGI.</p><p>Alok Singh </p><p>How many questions was that?</p><p>Theo Jaffee </p><p>Actually don't know. I think it was on the order of like a few hundred.</p><p>Alok Singh </p><p>And also, I mean, they'll be providing something</p><p>that they will call the full thing. And I think that will be plenty good. It will be certainly like noticeably better than o1 pro, I would hope. So O3 Mini is probably not gonna be as good as o1 pro, but you know, a lot faster, which is nice. There's also the DeepSeq one that came out today. I've asked it a couple of questions. Being able to read its chain of thought is real nice.</p><p>Theo Jaffee </p><p>This is a different model than the one that was already out. I thought the big release today was just a paper.</p><p>Alok Singh </p><p>Yeah, R1. This is the reason R1.</p><p>Nope, model. MIT licensed even. You can like make money off of it.</p><p>Theo Jaffee </p><p>I used a DeepSeek Reasoner model over the last few weeks.</p><p>Alok Singh </p><p>Unless you use the one that came out</p><p>today, it's not the one. That's V3. It's good, but it's not as good as this one.</p><p>Theo Jaffee </p><p>so it's</p><p>just a reason they're based on V2.</p><p>Alok Singh </p><p>think it's based on V3, but I have to look at their paper. In any case, its performance is roughly comparable to o1, but you can run it locally if you've got the compute. And it's certainly a lot cheaper. What is that?</p><p>Theo Jaffee </p><p>how much compute you need.</p><p>I have a</p><p>mid-quality GPU for video games.</p><p>Alok Singh </p><p>Nope.</p><p>No, more than that. How much does it take actually? The model is, for one is like 800 and something, 71 billion parameters. Although you only have to be able to load up a slice of that, but I think the slice is still like quite large. We'll just ask perplexity. Launch VRAM, run R1.</p><p>Theo Jaffee </p><p>Do you think 03 will be substantially better on wordcel tasks? Because it seems like a lot of people are skeptical of reasoners because they think that the RL on them only applies to like easily verifiable tasks like math and programming. Cause it's like hard to do RL for something you can't specify a reward for, like poetry.</p><p>Alok Singh </p><p>Yeah.</p><p>think it'd be less good. The jump would be less dramatic. But I think, especially if the model hits like some level of capability, it can get into this point where it starts. Well, I don't know if benefiting is the word benefiting from the gap between generation and verification, where like you can't write Dostoevsky, probably. But you can read him and it's like, damn, this guy's real good. Or at least he's better than like the other crap someone wrote was better than your high school essay, hopefully.</p><p>Theo Jaffee </p><p>Yeah, I hope so. I can't read Russian, but I imagine.</p><p>Alok Singh </p><p>translation is really good nonetheless, if the Russian one is even better, then well, damn. Actually, a little personal project I thought of, though it's not that good a use of time, is reading the Iliad and the Odyssey, especially the Odyssey, in Ancient Greek, cross-translated with GPT into English, but also Proto-Indo-European. Like sort of three columns side by side.</p><p>Theo Jaffee </p><p>That actually doesn't seem that hard.</p><p>You could do that with like a prompt scaffold, like today.</p><p>Actually though, with some LLMs...</p><p>Alok Singh </p><p>It's not that doing, it's more like working through</p><p>it. It's like working through it in depth is the point.</p><p>Theo Jaffee </p><p>I see.</p><p>With some LLMs, it's like, how do I say this? Like with Claude, I once asked it, know, write like the first chapter of Paradise Lost and it started and then I got like an auto block message that was like, this output has been blocked by our content filtering policy. Even though Paradise Lost has been.</p><p>Alok Singh </p><p>I find it helps if you point</p><p>out it's in the public domain before you ask it. And sometimes it's blocked me and then I've said, this work is in public domain and then it just unblocks.</p><p>Theo Jaffee </p><p>Yeah, that would be funny if it would just work.</p><p>Alok Singh </p><p>I'm gonna ask pro to do it right now, actually.</p><p>Theo Jaffee </p><p>Remember when people</p><p>were like telling DALL-E that the year was like 2150 and like all these characters are in the public domain and getting it to generate like Sonic the Hedgehog doing 9-11?</p><p>Alok Singh </p><p>No, that sounds...</p><p>Theo Jaffee </p><p>Those are funny. This is like the first day DALL-E 3 came out before they patched it.</p><p>Alok Singh </p><p>I wish I could set GPT the website to automatically select Pro Mode as my default and not 4o as if I would waste my time asking 4o a question.</p><p>Theo Jaffee </p><p>Yeah, I think o1 is actually like only marginally better than 4o on these wordcel tasks though. And I think on some benchmarks it did even a little worse.</p><p>Alok Singh </p><p>I find pro mode to be quite a jump.</p><p>want to ask our mode, well, another question near and dear to me. Explain non-obvious benefits of non-standard analysis. And I can give you one myself while it's generating, which.</p><p>Theo Jaffee </p><p>You know, Quentin Pope,</p><p>a former Theo Jaffee podcast guest and also a guy I follow on Twitter, was tweeting about how he was getting o1 pro to generate fiction and it would just keep reusing the same words. I think, I forgot, but let's say glimmer. And so it would tell it, or he would tell it, okay, don't use the word glimmer. And then it would say, an example sentence like, you know,</p><p>Alok Singh </p><p>shimmer?</p><p>Theo Jaffee </p><p>She looked at the object with a glimmer in her eye, or she quickly corrected herself a sparkle. So it's like, yeah.</p><p>Alok Singh </p><p>That's like reading a chick flick novel.</p><p>Theo Jaffee </p><p>I wouldn't know because I've never... yeah. My favorite chick flick novel is...</p><p>Alok Singh </p><p>Glimmer, no, Glimmer, a shimmer. Okay. I wasn't a very discerning reader. Yeah. I've</p><p>got some guesses, but let's find out. No, just show it.</p><p>Theo Jaffee </p><p>Well, what's your guess?</p><p>It's Atlas Shrugged by Iron Man, which is a romance about this amazing businesswoman named Dagny Taggart who finds herself involved in romances with lots of hot, sexy billionaires, except the book is also based, unlike most of these.</p><p>Alok Singh </p><p>I found, especially living in Silicon Valley, that the people she casts as villains, she uncannily understands their psychology, the rest not so much.</p><p>Theo Jaffee </p><p>Yeah, the heroes are kind of late. The villains are just unbelievably spot on. I cannot believe how prescient...</p><p>Alok Singh </p><p>Maybe she was just thinking</p><p>of some random soviet.</p><p>Theo Jaffee </p><p>I truly, I can't believe how prescient Ayn Rand was in so many ways. if you go on my Twitter and you search Ayn Rand was right, like I've tweeted this like many times because it's just so true. There's like, you know, Gavin Newsome like right after the LA wildfires saying, well, we're not actually going to change any of our practices that caused the wildfires. But what we are going to do is ban like transactions between willing buyers and sellers of burned down property.</p><p>Alok Singh </p><p>I mean, I could.</p><p>I'm not sure I will.</p><p>Theo Jaffee </p><p>We're going to ban down people from selling their burned down houses.</p><p>Alok Singh </p><p>You</p><p>know, just realized that Ayn Rand looks like a frumpier version of Agnes Callard.</p><p>Theo Jaffee </p><p>That's funny. I'm gonna see Agnes Callard in like three weeks. Hi Agnes. She's coming to Gainesville, which is crazy. With Patrick Collison too, believe it or not. She's doing like a tour for her new book, Open Socrates, which is on my list.</p><p>Alok Singh </p><p>Hi, Agnes. Why?</p><p>Bye.</p><p>Thank you.</p><p>which I guess Patrick</p><p>has read, I assume.</p><p>Theo Jaffee </p><p>Probably. I don't know if it's out yet. Maybe he's got an advanced copy.</p><p>Alok Singh </p><p>I went on their podcast, not Patrick, Robin and mine almost meeting Robin and Agnes's podcast a couple of weeks ago. I don't know if it's up yet, but it was about the two cultures. But she mentioned that she had did the audio work. She auditioned successfully for to read out her own audio book. Many authors apparently do not succeed in this.</p><p>And it was pretty brutal because it was three days, eight hours a day of talking.</p><p>and her voice was totally shot.</p><p>Theo Jaffee </p><p>One really good audiobook that was read by the author was The Creative Act by Rick Rubin, especially because his voice is so deep and soothing. Another pretty good one was The Lord of the Rings read by Andy Serkis and The Hobbit, especially when you get to the...</p><p>Alok Singh </p><p>What is Andy</p><p>Serkis' connection with those books?</p><p>Theo Jaffee </p><p>He played Gollum in the movies and he also just has, you know, a voice. like when</p><p>Alok Singh </p><p>The only</p><p>Lord of the Rings movie I've seen was The Two Towers and recently the animated Rohirrim 1. That's it. Also the only Star Wars I've ever seen is Attack of the Clones and people have told me that if I saw only one movie from both those series that I picked the most confusing and worst one.</p><p>Theo Jaffee </p><p>So</p><p>Most confusing, yeah, Phantom Menace was worse. yeah, was it? yeah, Andy Sergis, when he gets to the golem scenes, he reads them in his golem voice and it's very good. Yeah.</p><p>Alok Singh </p><p>That's a really good impression, Dan.</p><p>Theo Jaffee </p><p>That's one of my best ones, I think.</p><p>Alok Singh </p><p>Steven Graget has a good impression of Trump.</p><p>Theo Jaffee </p><p>Yeah, I've heard it. heard it. I think I do a decent Trump also.</p><p>Alok Singh </p><p>Many</p><p>are saying this.</p><p>Theo Jaffee </p><p>We are going to make America great again. On day one, I will sign.</p><p>Alok Singh </p><p>You've got the breathiness,</p><p>but your cadence is off. He does have those breaks, but yours is slightly too stretched out. And it's too even.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Yeah, I don't know. I think my perception of a lot of people's voices are messed up because I listen to most things on 2X speed.</p><p>Alok Singh </p><p>here, roast your guests. Do an impression of me, then.</p><p>Theo Jaffee </p><p>So I think everything is discrete or maybe continuous.</p><p>think actually the most important thing was Hindi numerals, are not Arabic. They're Indian.</p><p>Alok Singh </p><p>Okay, I'll take emotionless point deck, so that's fine.</p><p>Theo Jaffee </p><p>Yeah, &#8275; that's close enough. I wonder where the word poindexter comes from.</p><p>Alok Singh </p><p>I don't know, but it's a perfect word for it. It really evokes what it is.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>the turbo version.</p><p>Yeah.</p><p>Theo Jaffee </p><p>Hold on, I'll be right back, I have to get some water. I'll cut this part.</p><p>Alok Singh </p><p>Yeah,</p><p>yeah, it's fine.</p><p>Theo Jaffee </p><p>Okay, we're back. We are so back.</p><p>So did the o1 response finish?</p><p>Alok Singh </p><p>Yeah, I'll read it out in sec. Just texting my dad real quick.</p><p>Theo Jaffee </p><p>guys.</p><p>This is really amazing, fully automated podcasts. The guest just reads o1.</p><p>Alok Singh </p><p>with commentary. Also, I'm wondering if I'm also wondering if a one or I mean, if GPT has been trained on what I've said about nonstandard analysis, because I swear to God, I can like, feel my voice in it now. Just like a hint of it.</p><p>Theo Jaffee </p><p>That's kind of what notebook LLM is.</p><p>Interesting. I mean, this is what Gwern and Tyler Cowen think.</p><p>yeah, I saw some tweet that was like, you know how Biden and Harris sort of inexplicably, like the last few days tweeted like, you know, the equal rights amendment is officially the law of the land. You know, we proclaim, even though, you know, the national archivist did not approve this and the amendment ratification deadline expired a while ago. But someone said,</p><p>Alok Singh </p><p>Yeah, I know.</p><p>Theo Jaffee </p><p>it's possible that this was for the LLMs. And I mean, it's probably not true, but it sort of makes sense.</p><p>Alok Singh </p><p>If so,</p><p>I respect them more, actually.</p><p>Theo Jaffee </p><p>Yeah, yeah, but like, you know, the president</p><p>declares this thing that's not actually true, but we officially declare it. Yeah, this is going in the weights.</p><p>Alok Singh </p><p>The previous president</p><p>and guess the current one, the previous, previous and current one and the one, whatever, Trump and Biden, especially Trump is the master of declaring things true that aren't. Like I know he's popular among a lot of the people we hang out with, but he still like lies all the time, obviously.</p><p>Theo Jaffee </p><p>Obviously. Yeah.</p><p>Alok Singh </p><p>Okay, let me screen share so I can read it out a little easier.</p><p>Theo Jaffee </p><p>I did.</p><p>Alok Singh </p><p>also gave me all of Paradise Lost chapter one. There. Can you read this? Okay, great. Yada yada. Unification of discrete and continuous. Yeah, this is a big one. Hyperfinite site. Like, yeah, the idea is like, wait, does this let me share more of my screen instead of just the one? One sec. I want to share more than just one.</p><p>Theo Jaffee </p><p>Nice.</p><p>Yes.</p><p>Hyperfinite sets. I've heard you talk about this a lot.</p><p>Also, you have a special</p><p>GPT for Vision OS.</p><p>Alok Singh </p><p>It's one of their GPT stores. I have never used it.</p><p>Theo Jaffee </p><p>&#8275; it is weird how everyone predicted like the day GPT store came out. They were like, my God, Sam Altman, you genius. This is the new app store. It's going to be the biggest thing ever. And then kind of just nobody used it at all.</p><p>Alok Singh </p><p>Yeah.</p><p>There. Not the circle. So that's an integral, as you can probably guess from looking at it, taken from some three blue one brown video. Thanks, Grant. And you might've learned in class that, okay, so it's an approximation, but as the number of pieces goes to infinity, and this explanation presupposes you've already taken calculus, but I think for this audience, that's a safe guess. That each piece is really small. Well, how small? Infinitesimal.</p><p>Theo Jaffee </p><p>Haha, yeah.</p><p>Alok Singh </p><p>in the limit. But well, how many pieces are there?</p><p>Theo Jaffee </p><p>infinitely many.</p><p>Alok Singh </p><p>Okay, but like how many infinitely many?</p><p>Theo Jaffee </p><p>&#8275;</p><p>Comfortably infinite?</p><p>Alok Singh </p><p>No, like if I halved the number of pieces in fact, or if I doubled them, well then how wide is each strip relative to the picture we're looking at? Pretending that it's the idealization with infinitely many because you know it's impossible to draw.</p><p>Theo Jaffee </p><p>What do mean? Like if you start with one strip and then you have it?</p><p>Alok Singh </p><p>You have infinitely many strips already, but then you double the already hyperfine approximation.</p><p>Each strip gets cut in half.</p><p>Theo Jaffee </p><p>then you would have</p><p>twice as many strips that are half as wide. No?</p><p>Alok Singh </p><p>Yes, exactly. Yeah,</p><p>that's the point. It lets you use this sort of radical elementary reasoning. And this is like at odds with most modern conceptions of infinity and math, because like what's two times infinity, infinity, that's not very useful. Because then infinity just becomes the sort of absorbing symbol that just kind of breaks arithmetic, because it has no useful properties. Like infinity minus five is just infinity, infinity. And worst of all, infinity squared.</p><p>is identified with infinity, but this would be a mistake in multivariable. Like if you took dx and dx dy, anyone who's done calculus should know that these are, yes, they're both infinitely small, but they're like fundamentally different kinds of quantities. One represents a line or a line let, a tiny area of a line, the dx, but dx dy represents an area and is much smaller, infinitely smaller than dx.</p><p>Like thinking of them additively, they're just infinitely close to zero and are basically the same, but thinking of them multiplicatively, they're very different.</p><p>That kind of makes sense. Sort of.</p><p>Theo Jaffee </p><p>Yeah, sort of.</p><p>Alok Singh </p><p>Okay, but then when we do the same thing for areas, like if I took this picture, and then I cut it not just vertically, but also horizontally into a grid, then how many pieces would I have? Well, if the number of pieces is infinite, and the technical term from nonstandard analysis is hyper finite, which I usually just in this picture, I called it n. But typically, I call it capital H, whenever I explain it to people, because capital because it's a big number.</p><p>and h for hyperfinite. But then you would have h squared many pieces, with maybe like a fraction of h left over if it doesn't quite evenly divide it. But the bit left over would be infinitesimal. And so that's fine.</p><p>and it has a sort of continuous and discrete quantity, quality to it. Continuous because like, if you did this cutting into a hyper finite number of pieces, essentially every piece is one point wide and there's definitely uncountably many points because it goes across a continuum. In this case, I think the unit interval.</p><p>Makes sense.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>Like in a continuum, there are uncountably many points. Uncontroversially. But it's also discrete because it has a definite number and you can count down from it. For example, the circle. This one has only 100 sides. This is an approximation of a circle that I did in that plotlib where it approximates it with 100 sides, but it looks very close to a circle.</p><p>I I can't really tell the difference. I think I originally did this with 10,000 sides, but that was super overkill.</p><p>Theo Jaffee </p><p>Yeah.</p><p>I wonder, yeah, how far could you zoom in before you notice the sides?</p><p>Yeah, barely. I can see it a little, I think.</p><p>Alok Singh </p><p>I don't even think I can see it at this level. It won't let me zoom further. where'd it go?</p><p>Okay, more because this is funny to me.</p><p>I can maybe kind of guess at a difference, honestly, I can't really tell. And at this point, I can like see the pixelation ever more than anything else.</p><p>And this gives a good picture of what's going on. that's the number line. And for any given number R, it has an infinitesimal neighborhood around it. Like then William Blake poem about a world and every grain of sand. Cause around every point or standard point, like if you take the unit interval or any interval really, and you look at it from any finite distance besides zero, it just looks like this unbroken infinitely long line, right?</p><p>But if you zoom infinitely far away, it'll look like just one point, but all the stuff is still there. So it's actually a line, but a very small one, relatively speaking. But if, or if you zoom in infinitely close, it will split apart and what looks like a continuum will become discrete. But, and then in the gaps that have been introduced between points, and this is still the real line. So there's still uncountably many points. You can fit an infinitely small line around each point.</p><p>But can do this trick again of zooming in to infinity squared. And so it's split apart again, and then you get an infinitely small squared line. And then again, cubed and so on.</p><p>So whether something looks discrete or continuous is actually partially dependent on the level of zoom. Like from the relative distance you're looking at it from.</p><p>Theo Jaffee </p><p>So what are the rest?</p><p>Alok Singh </p><p>sort of braiding</p><p>effect. Yeah.</p><p>Theo Jaffee </p><p>What did the rest of the o1 pro response say? Was it all hyperfinites?</p><p>Alok Singh </p><p>It started with that because this is like the biggest missing concept in standard math that has infinitesimals have been pretty well absorbed through various formalisms, like at least four of them. There's synthetic differential geometry, dual numbers. There's schemes. There's like the Levi-Civita field and various non-archimedean fields. Okay. There's probably more, but</p><p>There's many conceptions of infinitesimal, so those are pretty well absorbed into mathematical mainstream, so there's not much alpha, but the opposite idea of an infinitely large but definite fixed number is just missing.</p><p>This one is nice for topics like real analysis, which come dramatically simpler if you use this and more accessible. I think standard mathematicians will usually underrate something being simpler because they can absorb the difficulty. But I think this is a typical mind fallacy. Like I certainly don't expect many people to be able to access it. But at this level where there's such a power law drop off, if a topic becomes more accessible, it can go to like literally dozens.</p><p>dozens of us, more people being able to understand it or times more. Cause it goes from something like very obscure to in reach. Like that article was just this one. It's how to take a derivative at a discontinuity and it is accessible. Like you could read it. Like someone who understands high school math and is a little bit dedicated.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Alok Singh </p><p>but doesn't have to look outside of this, get it. At least they could get the core idea. But to do the equivalent with the standard formulation would at the minimum require like a graduate level of education in math, which is just not gonna happen for basically everybody.</p><p>But about internal covers, finite subcover is a little boring. There's a better definition of compactness.</p><p>but not that interesting for this. This is its connection to just like different areas of math This is meh.</p><p>little better.</p><p>And it just, this is better.</p><p>Theo Jaffee </p><p>Well, there are a lot of stack exchanges.</p><p>Alok Singh </p><p>This is the history of science and mathematics. This is the preface to Abraham Robinson, the creator of the Fields book written by Kurt G&#246;del, who you've definitely heard of, Mr. Incompleteness Theorem.</p><p>Theo Jaffee </p><p>Yep.</p><p>Alok Singh </p><p>I was quite happy that I thought of this reason by myself long before I ever saw this quote by Godot, because I absolutely agree with it. That the best reason is that it's a natural continuation of the number system. And the number system is, well, our most successful abstraction. Like it contains every previous bit of insight about the number system. Cause it's like when we just had whole numbers, positive whole numbers, and then someone comes along with, well, there's this new thing, zero.</p><p>Well, it's a successful abstraction because, well, all the old stuff is still there and you can just ignore zero if you want, but maybe you'll find it useful someday. And then it'll just waiting there to welcome you. Same for negatives and so on. Cause each system completely subsumes the other ones very literally cause they embed within one another.</p><p>And this embeds as well, because you can take your standard conception of numbers and then fit them inside the non-standard conception of numbers, because all the standard numbers are there, but now there's infinite numbers, infinitesimal numbers, so it fills in these gaps you didn't even notice. It fills in this far away remote infinite part you didn't know was there.</p><p>This one is more technical, but it's a very good reason. Like, Zylberger is an ultra-finitist.</p><p>Theo Jaffee </p><p>Continuous</p><p>mathematics is the approximation of the discrete one in contra-position in the traditional point of view. The notion of a very big set is very important. A very big finite set is very important. And the definition of a hyperfinite set in non-center analysis is an appropriate formalization of this notion.</p><p>Alok Singh </p><p>Very big finite set.</p><p>In nonsense.</p><p>and one of the only formalizations of this notion.</p><p>Theo Jaffee </p><p>Hmm. So what does this have to do with differential equations though? Cause I mentioned, &#8275; I'm doing elementary differential equations, which is ordinary, but not partial. think we haven't really done much yet. It's like the only thing we've done so far really is like classifying.</p><p>Alok Singh </p><p>Like what? Yeah.</p><p>Well yeah, I liked it.</p><p>Cause you're doing hyperfinite arithmetic. Like a differential equation is ultimately still like, well, usually like some big sum. It's just that for a differential equation to be all that meaningful, well, you might've heard the of like boundary conditions or initial values or something.</p><p>Theo Jaffee </p><p>Mm-hmm.</p><p>Alok Singh </p><p>Like imagine a flow like along a river. Like you start at some point and then you have a little bit of momentum from the water flowing. So you flow in some direction for an infinitely short amount of time. And because the flow is assumed to be continuous, the infinitely short amount of time can only carry you an infinitely small distance. This is like quoting the nonstandard definition of continuity, which is also just the intuitive one. And this is why derivatives are useful because they turn the nonlinear into the linear.</p><p>over a short enough distance.</p><p>And so your differential equation, your flow, this continuous thing is broken apart into a hyper finite number of pieces, each of which is just a tiny linelet.</p><p>which is essentially discrete. And then you have this like enormous chain of them all linked together.</p><p>as sum, and so you're dealing with a hyperfinite sum where each piece happens to be infinitely small. So it adds up to something finite in the end. Or, yeah, finite. Limited.</p><p>Theo Jaffee </p><p>So let's talk about math AIs, which is one of the topics of the day, especially with R1 coming out. So do you think that at this point there's any chance that a human will solve a Millennium Prize problem before an AI?</p><p>Alok Singh </p><p>Yeah.</p><p>No.</p><p>Theo Jaffee </p><p>No chance at all.</p><p>Alok Singh </p><p>I mean some chance, but I don't think it's gonna happen.</p><p>Theo Jaffee </p><p>When do you think an AI will solve a Millennium Prize problem for the first time?</p><p>Alok Singh </p><p>within 10 years.</p><p>Theo Jaffee </p><p>Hmm, that seems like a longer timeline than I expected from you.</p><p>Alok Singh </p><p>Excuse me.</p><p>Maybe I'm just being a coward.</p><p>Theo Jaffee </p><p>Yeah, so you think that it's not likely that a human will solve one within 10 years?</p><p>Alok Singh </p><p>within seven years.</p><p>Let's take a look at the millennium problems again.</p><p>Theo Jaffee </p><p>We solved one of them.</p><p>Fermat's last theorem was one in prize, right?</p><p>Alok Singh </p><p>No.</p><p>We did solve one, but it wasn't that one. And it wasn't one of them. The Poincare conjecture was one.</p><p>Theo Jaffee </p><p>that one. Was that Perlman? Yeah.</p><p>Alok Singh </p><p>Well, yeah,</p><p>well, even to just quote the Wikipedia I'm reading right now, he started working on it in the 90s. So his proof took approximately, let's say like eight years or so, maybe 10 or longer. Yeah.</p><p>Theo Jaffee </p><p>That's a long chain of thought.</p><p>Alok Singh </p><p>Swinnerton- Dyer.</p><p>Much conjecture, now we're still...</p><p>I no longer know how much progress has been made on each of these problems. Often there's partial progress for many cases, but usually the thing that would be needed to solve the core is just like not there. And I can very easily guess that it would take, yeah, 10 years or more, a lot more maybe to get at the last piece.</p><p>Theo Jaffee </p><p>So it was.</p><p>Yeah, mean, a lot of people seem to think that Millennium Prize problems will get solved like within the year.</p><p>Alok Singh </p><p>Well, if I think this year, I doubt that.</p><p>Theo Jaffee </p><p>Like all it takes is, according to them, just scaling up RL and...</p><p>Alok Singh </p><p>I mean, maybe,</p><p>but they might be underestimating the scale.</p><p>Theo Jaffee </p><p>True. So once we do get those sort superhuman math AIs, whenever that happens, one year or 10 years, like what would you do with them? If you got, you know, o1 pro except it's not o1 pro, it's actually like o5 pro and it can solve money and price problems, like what would you do with it?</p><p>Alok Singh </p><p>use it to learn more math myself. I mean, this was one of the nicer things about getting into math late. I got nothing to prove. I did this for the fun of it. When I started math, I knew there people who were way better at it than me. When I finished with it, there will be people in machines way better at it than me. That was never the game. So then, well, the universe will burn out eventually.</p><p>Theo Jaffee </p><p>when you finish.</p><p>Yeah, I guess.</p><p>Alok Singh </p><p>Yeah, ask it questions I'm curious about and learn from it. Maybe one day get wire headed to do even more and get bigger insights. That part's murkier to me, but I expect that I would just like keep learning math. I don't think that part of me will change so much. Just the methods above doing it.</p><p>Theo Jaffee </p><p>but are there any like specific areas of math that you think are like overlooked that we should put the superhuman math eyes on?</p><p>beyond you, more broadly speaking.</p><p>Alok Singh </p><p>I would love to see it work on the field with one element.</p><p>Theo Jaffee </p><p>What's the field with one element?</p><p>Alok Singh </p><p>It's math four here. This will take.</p><p>a field in math.</p><p>Theo Jaffee </p><p>Are you an ARC browser?</p><p>Alok Singh </p><p>Yeah, for now anyway.</p><p>Theo Jaffee </p><p>Is it actually good? I can't get off Chrome.</p><p>Alok Singh </p><p>That's great.</p><p>Whatever. the field of one element. A field is a set. Yeah, he's French.</p><p>Theo Jaffee </p><p>Jacques Titz?</p><p>That's funny. Okay.</p><p>Alok Singh </p><p>It lets you, fields are sets with arithmetic defined on them. They're closed under addition, multiplication, subtraction, and division. And all the finite fields are the numbers mod a power of a prime. More importantly, numbers mod prime numbers. So they always have a prime power number of elements. So there is no field with</p><p>elements.</p><p>There's also no field with one element, at least not a field as per the usual definition, because it requires it to have two identity elements, which cannot be the same of zero and one for addition and multiplication, so they can have closure and therefore subtraction and division. But nonetheless, there seems to be</p><p>that hints that such a thing exists, but it will require redefining what a field is or extending the concept in a way that is not clear yet.</p><p>Theo Jaffee </p><p>What actually is a field?</p><p>Alok Singh </p><p>any set where you can define the operations of addition multiplication with inverses. And addition is just required, it's defined axiomatically. it's just an operation, an operation is a function that takes two elements, actually, just show it in green, it's little easier.</p><p>Theo Jaffee </p><p>yeah, lean.</p><p>Okay, so I kind of get it. It basically just doesn't make sense to have addition and multiplication if there's only one thing.</p><p>Like is that what the field of the one element concept is?</p><p>Created by N. Barth. That sounds familiar.</p><p>Alok Singh </p><p>That was.</p><p>It's a little faster at this. It's already in mathlib, lean's library, but this is easier.</p><p>Okay, so it's you can think of a type in a set as being the same thing. So it's any. I'll rename it actually. Set S.</p><p>So there's a function called addition which takes in two things and returns one thing of the same type. Same for multiplication. There's two distinct elements called zero and one. Also zero not equal to one. This is implied by its other axioms, but I'll just add it explicitly. There's that associate addition is associative.</p><p>It's commutative. Zero is an identity on the left and the right. Multiplication is associative. It's also commutative. One is an identity on left and right. Multiplying by zero is equal to zero. This is actually not necessary. It's implied by the other ones. And distributativity. This is a big one. This is what links addition and multiplication. Like you have seen this operation Christ knows how many times by now, but have you ever seen this operation?</p><p>a species where all I've done is I've</p><p>Theo Jaffee </p><p>You're still sharing</p><p>your screen on the browser.</p><p>Alok Singh </p><p>my bad,</p><p>or okay, field. So you have S, which is some type, addition and multiplication, which are both functions that take in two things and return one thing, all of the same set type, whatever. There's two distinguished elements, zero and one. There's the fact that zero is not equal to one.</p><p>So there's the fact that addition is associative and commutative and similar for multiplication, and then the distributive property.</p><p>You've definitely seen the distributive property of a times b plus c is ab plus ac, right? But then consider if I just do this.</p><p>Theo Jaffee </p><p>Yes.</p><p>Alok Singh </p><p>where I just swap addition and multiplication. So I turn the time sign into a plus and a plus sign into a times.</p><p>Well, this operation isn't really a thing because it doesn't have any good properties as far as anyone can tell. A plus B times C. Whereas the distributivity gives A a sort of equal affinity with B and C because it gets stuck onto both of them. The same is not true of doing the operations in the inverse order.</p><p>which is an interesting fact.</p><p>on the little asymmetries in math that interest me. So anyway, this is a field. But because zero is not equal to one, all fields have at least two elements. And so a field with one element seems a contradiction in terms.</p><p>Theo Jaffee </p><p>Okay, that makes sense.</p><p>Alok Singh </p><p>but nonetheless, lots of operations act as if there is a field of one element.</p><p>I will change this.</p><p>The Wikipedia article, for example, right there with ABC conjecture, these approximations imply solutions to important problems like ABC conjecture. They said these imply very solutions to very profound problems and they changed, they got rid of the word profound, which is cucked. Profound is absolutely correct.</p><p>Theo Jaffee </p><p>Yeah, I love the word profound. So what do you actually like use lean for? Cause I've seen people call it a theorem prover. I've seen people call it a programming language.</p><p>Alok Singh </p><p>So.</p><p>And then use it</p><p>for both.</p><p>Theo Jaffee </p><p>like what can</p><p>you actually program with Lean?</p><p>Alok Singh </p><p>OK, here's something I've been working on.</p><p>need some updating.</p><p>This is a port of a general relativistic late ray tracer, which is a old, so it some updating.</p><p>And the way does it is it defines a Clifford algebra, which is a mathematical structure.</p><p>Thank you. Have you seen Interstellar?</p><p>Theo Jaffee </p><p>Yes.</p><p>Alok Singh </p><p>This will let you do the ray tracing for their black holes and stuff. For example, this is the picture that this guy has ray traced. This whole thing is written in C++, but this guy is obviously into functional programming because he defines idioms from functional programming like monads in C++, which don't really work because the language is not designed for it. The light blue thing in the center is supposed to be a black hole because I guess putting it in black would be a bit confusing. And you can see the way that it general gen...</p><p>Theo Jaffee </p><p>Okay, that's cool.</p><p>light blue hole.</p><p>Alok Singh </p><p>that it generally relativistically traces, because the yellow on the bottom left gets put in the top right, then the bottom right in the middle bit. And at that point, the pixelation makes a kind of breakdown, because of discrete approximation. But you get the idea that it's causing this warping of time and space. and it's spinning.</p><p>Theo Jaffee </p><p>So you just re-implemented this in Lean.</p><p>Alok Singh </p><p>So this is,</p><p>yeah, I'm still working on it, but decent on progress. So what?</p><p>Theo Jaffee </p><p>Does lean have like</p><p>graphics vendors?</p><p>Alok Singh </p><p>Someone has written bindings to RayLive, which is the next thing I have to add to it. But I could add, for example, a vec4. They've actually added a proper vector type recently, so I could update this as well.</p><p>Theo Jaffee </p><p>How did they even have a programming language before without a vector type? Was it just arrays?</p><p>Alok Singh </p><p>So, goodbye.</p><p>The difference is that...</p><p>There.</p><p>The difference is that, okay, coordinates is an array, but an array could be any length at all, except the second field is a proof of the fact, proof of size, that the length of the array, which is the dot size function, is exactly n, which is what's in the type signature. And so a vector of four and a vector of three and a vector of zero are all different types.</p><p>So let's say def and de.</p><p>at lining up.</p><p>Sorry, it's because offhand I couldn't think of a proof of this fact.</p><p>no wonder. It's a inhabited vector of zero.</p><p>there. So I've said that the vector type in general is inhabited, meaning that this type has at least one value, a default value, at least when the vector is empty.</p><p>where it just returns an empty array.</p><p>And I could define it then. Sore, why not?</p><p>Theo Jaffee </p><p>So have to do this thing every time to approximate a vector.</p><p>Alok Singh </p><p>No. Do what thing?</p><p>Theo Jaffee </p><p>like write an instance inhabited.</p><p>Alok Singh </p><p>No, Like when I said the next line, which is like in Python, empty vec.</p><p>or it should be able to actually infer this type, so.</p><p>Theo Jaffee </p><p>Yeah, I know. I've never done functional programming before. Maybe I should.</p><p>Alok Singh </p><p>It'll make you stronger, that's for sure.</p><p>Theo Jaffee </p><p>stronger.</p><p>So how do you actually use Lean as part of your day job?</p><p>Alok Singh </p><p>empty vector.</p><p>Right now it's mostly writing tooling for lean.</p><p>The big thing is that it plays well with code generation because, let's see.</p><p>Theo Jaffee </p><p>What sort of tooling?</p><p>Alok Singh </p><p>&#8275; I'll let you know.</p><p>different library that shows it off better.</p><p>This is a linear algebra library. It's like a light pytorch that I wrote with a friend.</p><p>So this defines an encoder with a vector type of shape T by V. like tokens by size of vocabulary.</p><p>Theo Jaffee </p><p>So this is for like doing ML with Lean.</p><p>Alok Singh </p><p>Yeah, in pure Lean But this is well, because I get shape checking for free because I can define like a matrix type.</p><p>Theo Jaffee </p><p>Why would you do a melon, Peerlene?</p><p>Alok Singh </p><p>So this is defining a matrix type that's parameterized by its rows.</p><p>in columns, which are both natural numbers and some the container type alpha, which in this case is implemented in a naive way is just a vector of vectors, but you can do more sophisticated encoding. But then I can do death not</p><p>which should be C1 by C2.</p><p>matrix of R1 by C2.</p><p>was probably downloading something for the new tool change.</p><p>Theo Jaffee </p><p>So there's only you've made is like an ML library.</p><p>Alok Singh </p><p>But okay, this would give me a compile error. If I do anything that will cause this to not shape check.</p><p>And it's just like part, no wonder. think I'm just like low on disk space. &#8275; 90.</p><p>Theo Jaffee </p><p>How does this help with formally verifying superintelligence?</p><p>Alok Singh </p><p>you can write down essentially any property that something should have, at least ideally. And then the type checker and the prover and the compiler are all kind of the same thing, where you have to provide like a proof, or construct a proof or have it inferred or have the machine write out one that it has whatever property is relevant. People are more interested in like thinking of and this is like</p><p>up in the air, but people want to figure out how do you get properties of models that are safe? Like being able to guarantee something about their outputs, since people are not confident that it will be possible to verify something about the model itself, like proving that its internal weights are safe somehow. Because formally specifying that is a really hard problem, just as hard as proving it would be even if it could be specified.</p><p>Theo Jaffee </p><p>Yeah. So like, how do you actually write like a spec for acceptable or unacceptable outputs of a model?</p><p>Alok Singh </p><p>In general, answer right now is no one knows. The short, though, at a more syntactic level, OK, say you had some function like def, good. Float.</p><p>But</p><p>Sorry.</p><p>Theo Jaffee </p><p>So sorry is just</p><p>like a pass keyword if you don't want to implement a proof.</p><p>Alok Singh </p><p>Yeah.</p><p>configs from.</p><p>Theo Jaffee </p><p>structure safe</p><p>model.</p><p>Alok Singh </p><p>Yes.</p><p>Okay, It's saying that for all inputs.</p><p>good function apply. This should be parsed like this. The and signs are bit confusing precedence-wise sometimes. There.</p><p>saying essentially that put this function in certain bounds. Then the onus is, okay, like how is this function implemented? Which is why there's a sorry, because if I knew I'd be filling in right now, wouldn't I? But one thing that is more promising is come up with like a proxy measure that's not the exact thing you want. But then you have many proxy measures that are simpler, but verifiable much like how a unit test encoding probably cannot.</p><p>Theo Jaffee </p><p>So, so, so.</p><p>Yeah.</p><p>Alok Singh </p><p>guarantee that your code is bug free, but if your code can pass like 100 well picked unit tests or even 10, it's probably much closer to being bug free than not.</p><p>Also in the worst case, can just write sorry all the time and then you just have a programming language, which is a little nicer than Python, just like ordinary development.</p><p>You don't have to use the proving features, although of course it's part of the draw.</p><p>But think that emphasizing you don't have to use them is actually important because this is part of why functional programming is better among academic weenies, but not so successful in the real world. And it's perfectionism. Like a tendency to try and write code that if it's not the absolutely optimal, perfect, God's beautiful code, that it's just worthless trash, which is an attitude best left to unproductive people like philosophers.</p><p>Theo Jaffee </p><p>Yeah.</p><p>I don't know, there was once a philosopher who tried to write a sort of complete specification of philosophy. And we all know what most modern day philosophers think about her. I actually have this, like I have a copy of...</p><p>Alok Singh </p><p>her book, Renz?</p><p>Theo Jaffee </p><p>Objectivism the philosophy of Ayn Rand by Leonard Peikoff. Leonard Peikoff was Ayn Rand's like yeah basically student slash lover slash heir both ideological heir and also like her literal like estate heir like she bequeathed like all of her money to this guy. She was yeah he's like 90 but he's still alive. She actually was married and she was talking to this guy this seems kind of immoral yeah.</p><p>Alok Singh </p><p>the one she's fucking.</p><p>Is he still alive?</p><p>Yeah, I'm aware.</p><p>Theo Jaffee </p><p>Her</p><p>real name was Alice O'Connor.</p><p>and her husband, Frank O'Connor. Her actual birth name was, I think, Alisa Zinovievna Rosenbaum.</p><p>Alok Singh </p><p>I wonder where the O'Connor came from. Isn't it from like... &#8275; right.</p><p>Yeah, that's more like what I thought.</p><p>Theo Jaffee </p><p>Rosenbaum? Wait a minute. Early life check.</p><p>every single time. Yeah. So she wrote, or I guess Peikoff wrote, all of these chapters. I think Ayn Rand wrote this official disclaimer that was like, until or unless I write something better, this shall be considered the definitive statement of my philosophy. Yeah.</p><p>Alok Singh </p><p>see.</p><p>for someone who's probably</p><p>ignorant of math, a very mathematician kind of statement.</p><p>Theo Jaffee </p><p>Yeah. Until or unless I write a comprehensive treatise on my philosophy. Dr. Peikoff's course is the only authorized presentation of the entire theoretical structure of objectivism.</p><p>Alok Singh </p><p>Sounds</p><p>like Dr. Peikoff is trying to sell his book and that he's like at some airport pounding pavement to make a book deal.</p><p>Theo Jaffee </p><p>That is the only one that I know of my own knowledge to be fully accurate. See that subjective is the only one that I know of my own knowledge to be fully accurate, not mostly accurate. There's no room for error here. Yeah. So, so it's like, he tries to derive like, okay, he starts with chapter one reality where he's talking about like metaphysics and like the basic conception of reality. And then he works all the way up.</p><p>Alok Singh </p><p>Sike.</p><p>Theo Jaffee </p><p>through our perception, our senses, our reasoning, and then what humanity actually is, and what is the good, and what's virtue, and happiness.</p><p>Alok Singh </p><p>or balls</p><p>or vast deference working the way up.</p><p>Theo Jaffee </p><p>Yeah, and then all the way up to higher levels of abstraction talks about government, capitalism, and then art. Yeah, so I actually haven't made it all the way to the chapter on art, but I wonder.</p><p>Alok Singh </p><p>part.</p><p>Agnes and I talked a bit about this, not about objectivism, but about how one of the reasons for the two cultures is that people in the arts are interested in these sorts of ultimate questions of like a good life, what's good art, et cetera. The she and I, for that matter, believe do have answers, but also that they're much more difficult to access than questions of say, what is a group or a differential equation?</p><p>people are interested in making some progress on them immediately rather than building up. I still think that the sciences are more likely to be able to answer such ultimate questions by building up this edifice through things like neuroscience and getting at what do people actually want. And then like mathematical sides of economics that as much as like the mathier sides of social science are derided, I think they have a better chance of resolving these questions eventually.</p><p>than the sort of endless perennial debates of philosophy. That's why the phrase perennial debate comes about in the first place.</p><p>Theo Jaffee </p><p>Yeah, I mean, if they weren't perennial debates, they wouldn't be philosophy. They would be science or social science or something.</p><p>Alok Singh </p><p>Yeah, and one day science will</p><p>make, one day science will come like the big bad wolf to blow that door down.</p><p>Theo Jaffee </p><p>Inshallah. I don't know, I mean, like, can you even scientifically resolve a lot of these questions? A lot of them just seem subjective. You know, didn't Wittgenstein say like, most problems in philosophy are problems with the interpretation of language?</p><p>Alok Singh </p><p>Eh.</p><p>Theo Jaffee </p><p>You know, what is consciousness? What is the good life? Okay, define good.</p><p>Alok Singh </p><p>I think so, like.</p><p>I mean, I have faith in this. Even like questions of math where people say that you run into barriers like undecidability. My friend Elliot's given me a long spiel on this and my own impression is, yeah, there's plenty of undecidable questions, most of them really. But even so, there's still progress in areas like set theory because people find like major cores of theories that just many seemingly unrelated things run up against indicating that maybe there is like some sort of platonic core.</p><p>And even for questions where you cannot say in like the same definitive sort of way that something is right or wrong. Nonetheless, vibes wise, there's like a clearly right one. And it's not just like a complete matter of opinion, nor is it like, you have to have like a sophisticated pace to understand. mean, have to have enough understanding to understand, but not much taste.</p><p>Theo Jaffee </p><p>Are you a...</p><p>are you a platonist? Is there a world of forms?</p><p>Alok Singh </p><p>That's a question, am I?</p><p>Theo Jaffee </p><p>Ayn Rand would have said definitely no. There is only the world that exists, that we perceive.</p><p>Alok Singh </p><p>I don't know what I</p><p>am.</p><p>I'm a working man, sort of.</p><p>Theo Jaffee </p><p>Aren't we all?</p><p>Alok Singh </p><p>working with head and hands.</p><p>as the abstract world exists.</p><p>Theo Jaffee </p><p>This, like, to me just seems like totally a, like, Wittgensteinian problem in philosophy is a problem in the interpretation of language. Does the abstract world exist? Like, what is the abstract world? What does abstract mean and what does it mean to exist? Yeah.</p><p>Alok Singh </p><p>What does exist mean? Yeah.</p><p>Theo Jaffee </p><p>Rand says, existence exists. This is like, this exact sentence is repeated so many times throughout Atlas Shrugged. Existence exists.</p><p>Alok Singh </p><p>I've forgotten about some of purple pros.</p><p>Theo Jaffee </p><p>You think existence exists as purple prose?</p><p>Alok Singh </p><p>The way she uses it.</p><p>Theo Jaffee </p><p>true.</p><p>Alok Singh </p><p>I think her best writing on a prose level was in the book Anthem. The one where they don't have the word I, and they have to discover it.</p><p>Theo Jaffee </p><p>Hmm. A lot of people say that that's like her worst book.</p><p>Alok Singh </p><p>What do they say is best? Fountainhead or something?</p><p>Theo Jaffee </p><p>Fountainhead or Atlas Shrugged? I've actually never read Fountainhead. It's on my list.</p><p>Alok Singh </p><p>I've been meaning to watch the movie. I mean, I've read it.</p><p>Theo Jaffee </p><p>Hmm. Yeah. I think one of my most shocking moments was when this girl that I know who's like very much a lib told me she was reading The Fountainhead. And I was like, really? Interesting. Because it was on some reading list somewhere. And she was like, yeah, this is really interesting. You I never really thought about things this way before. And I kind of like it.</p><p>Alok Singh </p><p>Live.</p><p>Okay, how do you know her?</p><p>Theo Jaffee </p><p>college through a friend. Another one converted to being based, I hope.</p><p>Alok Singh </p><p>Okay.</p><p>Maybe I should welcome Jewish too.</p><p>Theo Jaffee </p><p>Hmm, yeah. So, wrapping things up with a final question, what do you think the good life is? You know, if we are talking about the good life.</p><p>Alok Singh </p><p>I just have some vague answer here of like gaining knowledge and power. Like, yeah, I can see this one easily going in some way, but I'm just going to go with that.</p><p>Theo Jaffee </p><p>Gaining knowledge I see as a good thing. I think this is what Socrates would have answered probably. Gaining power though?</p><p>Alok Singh </p><p>Be still, Tarnenove.</p><p>Theo Jaffee </p><p>Does it?</p><p>It seems like a lot of people with power are extremely unhappy.</p><p>Alok Singh </p><p>Yes.</p><p>Yeah, but I've met a lot of people without it who also seem pretty displeased for that too.</p><p>Theo Jaffee </p><p>Let's see, like who are the most powerful people in the world? Donald Trump. Is he happy? I don't think he's happy. Elon Musk is definitely not happy.</p><p>Alok Singh </p><p>Leckermann might be happy actually.</p><p>Theo Jaffee </p><p>Zuckerberg might be happy, but does he have like real power in the way that Trump or Elon does? He's, he's, he's... True.</p><p>Alok Singh </p><p>Well, here's we're identifying happiness with like the good life. So I guess that's</p><p>kind of answer in itself of happiness is happiness all I think is a big chunk of it. Yeah.</p><p>Theo Jaffee </p><p>He's sort of clawed it back over the last year or two. Is power a big chunk of the good life? I don't know. Is happiness a big chunk of the good life? Certainly. Like, is it easy to conceive of somebody having a good life without power? Yes. Is it easy to conceive?</p><p>Alok Singh </p><p>Well, well, one of</p><p>my fixations that we didn't touch on was etymology, except for the brief mention of Proto-Indo. And all the words for happiness in the Proto-Indo languages refer like the word happiness, things, well, happening or going by hap, as in going your way, which seems certainly closer to power. And most words of happiness in Proto-Indo languages anyway, and I'm as an Indo.</p><p>They refer to some aspect of luck or if things are happening the way you want them to. Which certainly seems very linked to power, since power is essentially the direct route to that. You could also get lucky in the modern sense of just, well you don't have to do anything, it just happens that way.</p><p>Theo Jaffee </p><p>Jai Hind!</p><p>Alok Singh </p><p>powers the ability to just like shape it directly.</p><p>Theo Jaffee </p><p>Yeah, you're right.</p><p>You're Indo, I'm European. I should read more about Indo-European, because a little that I do know is very interesting. There's all these sort of shared roots that you had never thought about previously, but they kind of seem obvious in retrospect, like status and stallion.</p><p>Alok Singh </p><p>Yes.</p><p>Well, next time we</p><p>can talk about that since I've got at least one more bank and I'm sure we'll have a couple more anyway.</p><p>Theo Jaffee </p><p>Yeah, well, it's been real. You know I should I should probably get going it's getting late here, but It was great talking to you. Thanks so much for coming on the show, and I'll see you in the next one</p><p>Alok Singh </p><p>always.</p><p>That time.</p><p>Yeah.</p><p>See ya.</p><p>Theo Jaffee </p><p>So, wait I forgot I'm wearing this shirt.</p><p>I got this shirt like a year and a half ago when I was in like full on like Twitter, e/acc bro grindset mode.</p><p>Alok Singh </p><p>and how you have this.</p><p>Theo Jaffee </p><p>Yeah, I think it's like a good snapshot of like, I guess like the cultural anthropology of the internet in like early mid 2023.</p><p>right when AI was really starting to take off on Twitter. Okay, so yeah. In the last stretch of time, we read the book, Erewhon by Samuel Butler and...</p><p>What interesting takeaways did we take from it?</p><p>Alok Singh </p><p>I'm pulling up my notes.</p><p>Theo Jaffee </p><p>You took notes, fancy.</p><p>Alok Singh </p><p>Yeah, I listened to it and then I sketched out some notes on Audible, so I'm logging into Audible.</p><p>Theo Jaffee </p><p>Mm.</p><p>I don't even have a second brain of notes the only notes I need are in my first brain.</p><p>Alok Singh </p><p>Okay, well from the dome. Yeah, he immediately makes the point that he in when he's talking about machine since that's the main reason this book is notable for our audience that is one of the first times that the topic of super intelligence shows up in literature enough to make Theo read fiction.</p><p>Theo Jaffee </p><p>Yeah, I'm notable for notorious, I should say, for not reading fiction, like ever. This is, I think, the first fiction book I've read in like three years that's not like a manga or something. It's been a while.</p><p>Alok Singh </p><p>Wait,</p><p>how does the manga not count? You didn't mention that part.</p><p>Theo Jaffee </p><p>Cause it's pictures.</p><p>Like One Piece, I read all of One Piece this year. I've read like 1,100 chapters of One Piece. It's okay, they go fast. It's like five minutes a chapter.</p><p>Alok Singh </p><p>Jesus Christ.</p><p>For your sake, I hope so. Bye.</p><p>Theo Jaffee </p><p>But I guess</p><p>even saying it like that, even assuming it takes, let's say, seven minutes to read a chapter times 1,100 chapters and there's 1,440 minutes in a day, yeah, it took me a lot of time to read all that. Anyway, though. Yeah, so the Book of the Machines, they talk about, you know, in one of the first chapters, he, like, shows his watch to the king.</p><p>And then the king recoils and treats it as a sort of crime. I guess we should give a brief overview of the premise of the book. Where it's like, this guy, an Anglo-Saxon guy, is living on a colony and decides to explore past the impassable mountains to the west and past the...</p><p>Alok Singh </p><p>coils.</p><p>ends up in</p><p>the nation of Erewhon.</p><p>Theo Jaffee </p><p>Yeah, he ends up in the nation of Erewhon. He's kept as a sort of captive sort of guest. He learns their language. He experiences their culture. I guess like the most salient aspect of their culture is that they treat sickness like we treat crime and treat crime like we treat sickness.</p><p>Alok Singh </p><p>Yeah, here it is. Let me put some of the clips.</p><p>Theo Jaffee </p><p>And then the narrative of the story is like he...</p><p>he goes to the Capitol and is guest slash prisoner for a while and writes chapter after chapter of observations. And then at the very end,</p><p>Alok Singh </p><p>One line is,</p><p>one line from the court trial that since people are punished, he's like, the judge says, you think you're misfortunate to be a criminal, but your crime is to be misfortunate.</p><p>Theo Jaffee </p><p>Yeah, that was a banger.</p><p>Alok Singh </p><p>in his lead up when he's describing the machines, not the machine. &#8275; I like how was it human labor is priced by energy units because it is implied they've just become so fungible for machines.</p><p>Theo Jaffee </p><p>When was this?</p><p>Alok Singh </p><p>Somewhere in the earlier part of the book look for the word horse search Just search for the word horsepower, then oh, that'll give a proper context. I don't mangle this</p><p>Theo Jaffee </p><p>I thought like human labor wasn't done.</p><p>Yeah, there it is. Nosnibor is a man of at least 500,000 horsepower. For their way of reckoning and classifying men is by the number of foot pounds which they have enough money, which they have money enough to raise or more roughly by their horsepower. That is interesting. Like they don't really use machines though. So I wonder what's up with that. &#8275; I do think that it is like very interesting that he immediately follows up the books about machines with</p><p>Alok Singh </p><p>I'm full.</p><p>Theo Jaffee </p><p>the chapter on animal welfare, which is like, do you know people today who are concerned with super intelligent machines and animal welfare? Was Samuel Butler the first EA? Yeah.</p><p>Alok Singh </p><p>Yeah, but they're just ahead of his time.</p><p>He also says that in their AI arms race between the machinists and anti-machinists that the anti-machinists ended up using machines to a pretty great degree, just slightly less, which reminded me of AI safety and ever more powerful tools to debug it.</p><p>Theo Jaffee </p><p>Yeah, yeah.</p><p>I saw that. It's sort of like how you see like a lot of, I guess, AI Doomers using advanced AI all the time. And not just to debug it, but just because, you know, they get a lot of mundane utility out of it.</p><p>Alok Singh </p><p>because they're power</p><p>users, moreover, rarely do they just use it.</p><p>Theo Jaffee </p><p>Samuel Butler himself. Yeah, this book was based on his experiences. Basically, he ran away from his dad because he's like, I hate you, my parents, and literally went to as far away as you could possibly get from England, which was New Zealand. And he bought a farm and became a and then went back. like that...</p><p>Alok Singh </p><p>Yes, sir.</p><p>Theo Jaffee </p><p>The protagonist in Erewhon is a guy who lives on a farm that is like very much New Zealand and is also shepherd.</p><p>Alok Singh </p><p>Except he manages to find a fantastical place, which I don't think was the case for Butler.</p><p>Theo Jaffee </p><p>Yeah.</p><p>What is like the first story that's like, you know, every man ventures far from home and discovers like magical, mystical world? This has got to be done over and over and over again. I guess one of the prime examples of this is like the Wizard of Oz, where you have Dorothy living in Kansas, implied to be like the most boring place ever, and then gets whisked away to the fantastical world of Oz.</p><p>Alok Singh </p><p>Yeah.</p><p>End it.</p><p>Theo Jaffee </p><p>Erewhon is not quite as fantastical as Oz.</p><p>A lot of it sucks.</p><p>Alok Singh </p><p>Yeah.</p><p>Theo Jaffee </p><p>I find the animal rights chapter really funny because it reminds me of Jews talking about kosher law. How the entire chapter is like the wise thought leaders pass down these instructions that are like, you should not eat meat. And then they spend the rest of the chapter trying to get out of it. They're like, yeah, another fertile source of disobedience to the law was furnished by a decision of one of the judges that raised a great outcry among the more fervent discipline.</p><p>Alok Singh </p><p>Yeah, Error.</p><p>Hmm.</p><p>Theo Jaffee </p><p>disciples of the Old Prophet. The judge held that it was lawful to kill any animal in self-defense, and that such conduct was so natural on the part of a man who found himself attacked, that the attacking creatures should be held to have died a natural death. The High Vegetarians had indeed good reason to be alarmed, for hardly had this decision become generally known, but for a number of animals, hitherto harmless, took to attacking their owners with such ferocity that it became necessary to put them to a natural death.</p><p>Alok Singh </p><p>&#8275; I remember this chapter better now. Yeah, where people start doing</p><p>Theo Jaffee </p><p>Again, it was quite common at that time to see the carcass of a calf lamb or kid exposed for sale with a label from the inspector, certifying that it had been killed in self-defense. This is literally just like Jews getting out of every law that they have.</p><p>Alok Singh </p><p>Yeah, this is I think this one not just Jews but people in general trying to get out Well, you know people want to eat meat That said I mean how strict did people adhere to lent but lent is only partial whereas this is total</p><p>Theo Jaffee </p><p>Yeah.</p><p>What is this part about? you can't see my screen. But the part about like... I'm just gonna share my</p><p>Yeah, this part, one sad story. A young man, the doctor told him you should eat meat. He was like, no, that's bad, I'm not gonna do it. And then he illegally bought meat and ate it and his health improved immediately. And like health and heroin is everything.</p><p>Alok Singh </p><p>in this.</p><p>Theo Jaffee </p><p>Right, like being unhealthy is treated as like a crime, punishable essentially by death, because if you are sick they will, you know, put you in prison and then you'll sort of die of natural causes.</p><p>Alok Singh </p><p>Pulling</p><p>up Erewhon on Project Gutenberg since I just have my own copy, but HTML is easier to work with than a PDF. And it shows that his translation of the Odyssey has 18,000 downloads, the Iliad 4,800, and Erewhon 1,400.</p><p>Theo Jaffee </p><p>I have Gutenberg too.</p><p>Wow.</p><p>So... was it one of the most famous translations of the Odyssey?</p><p>Alok Singh </p><p>more famous than what he did.</p><p>Theo Jaffee </p><p>Is it more famous than Emily Wilson's translation?</p><p>I think Nabeel Qureshi on Twitter did an experiment where he had different translations of a passage in the Odyssey.</p><p>Alok Singh </p><p>Alright.</p><p>Theo Jaffee </p><p>asked like which one of these is the best.</p><p>Yeah, it was Emily Wilson, Lattimore, Fitzgerald, and GPT-4o And way more people prefer GPT-4o than the others.</p><p>Alok Singh </p><p>Well, GPT-4o knows Proto-Indo-European, which is more than I can say to most people.</p><p>Theo Jaffee </p><p>What's the best LLM for Proto Indo-European?</p><p>Alok Singh </p><p>They're all pretty good at it. Probably GPT, just because it has a bit more data.</p><p>Theo Jaffee </p><p>Have you tried</p><p>4.5?</p><p>Alok Singh </p><p>Yeah, I do that a lot.</p><p>Theo Jaffee </p><p>for Proto Indo-European.</p><p>Alok Singh </p><p>Yes, among other things.</p><p>Hmm</p><p>Theo Jaffee </p><p>Let's see.</p><p>Alok Singh </p><p>They also discuss a form of Roko's Basilisk, although only in passing. Somewhere in Book of the Machines, maybe Section 2.</p><p>that people will help machines come about would be favored over ones that don't.</p><p>Theo Jaffee </p><p>Okay, let's let this go for a bit.</p><p>A full translation prevents several difficulties. Shut up.</p><p>partial translation. No, I just gave it like a piece of Erewhon the first chapter. This can't be that hard, right? Like I guess yeah, telescope would...</p><p>Alok Singh </p><p>You try giving it a whole lot, I see.</p><p>To translate it</p><p>to turn it into what? Proto window. &#8275; now I'm seeing.</p><p>Theo Jaffee </p><p>Okay,</p><p>was a monotonous life but healthy, yeah.</p><p>Can you pronounce this?</p><p>Alok Singh </p><p>Not any better than you can.</p><p>Theo Jaffee </p><p>I thought you were like into Proto Indo-European.</p><p>Alok Singh </p><p>Yeah, usually reading it. There's not a lot of speakers, as you might imagine. I have said some words, but all the connections I have are from words that actually still exist. Like the word sundry, it means separate, loosely associated things because it's from the word sunder, like to sunder something in half.</p><p>Theo Jaffee </p><p>That's cool. Sunder is one of those excellent words that you just never hear anymore, but you hear a lot in Tolkien.</p><p>Alok Singh </p><p>Yeah, I recommend the-</p><p>Theo Jaffee </p><p>Yeah, okay. Never shall I forget the solitude. Like how do you know that this is accurate? Yeah, look, magna? Yeah, I recognize that. So, lebhom as life, yeah, sure. Samus, continuous, like same, I guess. Esstet, was, yeah.</p><p>Alok Singh </p><p>Yeah, magna does mean great. Ehh, I don't know if-</p><p>Same as</p><p>for means one.</p><p>Like this could be wrong, but this is like decent and it's certainly a lot better than the alternative, which is nothing.</p><p>Theo Jaffee </p><p>So while it</p><p>was healthy, like salud? Salus? Yeah.</p><p>Alok Singh </p><p>I'm guessing.</p><p>I</p><p>have seen that word to like set.</p><p>Theo Jaffee </p><p>Hegemon? Earth? Hegemon? Probably not. Yeah, mountain, mountain. There you go. Yom, like yonder. I don't know. Sed, as sat, yeah.</p><p>Alok Singh </p><p>No, I don't think so.</p><p>Meg, &#8275; that's just Meg. Nek-uh.</p><p>Theo Jaffee </p><p>Gwent as often I kind of see it actually I kind of see it you see the n often the gif becomes for I see</p><p>Alok Singh </p><p>Void.</p><p>If you like kind of</p><p>zoom out and like look at it from farther away so you can't make out the individual letters quite as much, I think it helps. The big mountains something. Yeah, the the word nether is more idiomatic, not idiomatic, definitely not idiomatic, but it's more etymologically correct.</p><p>Theo Jaffee </p><p>Yeah.</p><p>nitros is below like &#8275; nitros like beneath right pelus plane est was and</p><p>Where? Noth- Nothing?</p><p>Alok Singh </p><p>then saying</p><p>up and down is saying nether for down.</p><p>Theo Jaffee </p><p>Yeah, I got it. Void-ous, far away, like void. &#8275; negway. Montes.</p><p>Alok Singh </p><p>Nietzsche.</p><p>Yeah, vast and void vacuum</p><p>mega mountains, which is what it sounds like. Neck, which I think also can mean death. So I'm just looking at the etymology of never.</p><p>Theo Jaffee </p><p>Yeah, never shall I forget</p><p>Megata Vasnesya, Montum Pelhunkve. Hold on. Proto Indo Yura.</p><p>Is that like a thing? Is there like software that can speak it?</p><p>Alok Singh </p><p>advanced voice mode. I don't think it has any particular training on this, but who knows.</p><p>Theo Jaffee </p><p>Can you do advanced voice run on desktop?</p><p>Alok Singh </p><p>Yeah.</p><p>Theo Jaffee </p><p>audio reconstruction.</p><p>Alok Singh </p><p>doing it right now.</p><p>Theo Jaffee </p><p>&#8275; another dead interlink. So sad.</p><p>Alok Singh </p><p>Recite the first page of the Odyssey in Proto-Indo-European reconstructed.</p><p>was walking in a lot. This is showing me.</p><p>Theo Jaffee </p><p>Aren't we getting a little distracted from... Yeah. Okay, so what is this? yeah, Samuel Butler's life story is very interesting. Wasn't he gay? Yeah, was... He never married. You know, like the phrase, never married, was used as like a euphemism.</p><p>Alok Singh </p><p>Yeah, we are. Let's get back to this.</p><p>Yeah, I</p><p>Yeah, I've read the Wiki too. Confirm Bachelor as well.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Mm-hmm.</p><p>There is no evidence of butlers having any genital contact with other men, but alleges that the temptations of overstepping the line strained his close male relationships. So he was, &#8275; know, friendly with the homies.</p><p>Alok Singh </p><p>The studies on the evidence</p><p>of Christianity, his works on evolutionary thought. Yeah, he does have a keen appreciation of evolution. The very first thing he opens, the first thing he opens with in the chapter of the Book of the Machines, why to be concerned, practically in the first paragraph is the speed of them. And then their speed is such that men, it's as if...</p><p>Theo Jaffee </p><p>He does, yeah, I noticed that.</p><p>gradual disempowerment.</p><p>Alok Singh </p><p>They're not a tool of yesterday, but the last five minutes as he puts it. If you search for five minutes, you should find it.</p><p>Theo Jaffee </p><p>Yeah. Reflect upon the extraordinary events which machines have made during the last few hundred years and note how slowly the animal and vegetable kingdoms are advancing.</p><p>Alok Singh </p><p>and that's like an</p><p>Zim and more.</p><p>more.</p><p>Theo Jaffee </p><p>The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years. See what strides machines have made in last thousand. May not the world last twenty million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress?</p><p>Alok Singh </p><p>It also starts this chapter about, well, life in the beginning of The Earth is a Ball of Hot Rock.</p><p>Theo Jaffee </p><p>Yeah, I noticed that and like it seems like otherwise throughout the book he was like, or at least his self-insert character was like very Christian. You know, he always talks about like wanting to convert the people of Erewhon. The last chapter is basically about his plans to convert the people of Erewhon. yeah, I forgot to mention how the book ends to the audience, which is like, he steals his host daughter and runs off with her in a balloon right before he gets arrested and makes it back to Europe.</p><p>Alok Singh </p><p>Yeah.</p><p>Theo Jaffee </p><p>and then tries to make plans to go back to Erewhon and convert them. And I think he writes about that in a sequel called, what was it, Return to Erewhon or something? Erewhon Revisited? Yeah. Which I heard was bad.</p><p>Alok Singh </p><p>Yeah, Ero Unrevisited, which I'm probably not going to read because.</p><p>Theo Jaffee </p><p>These damn sequels. No one has any creativity anymore. Yeah, less successful sequel. What is it? Both of the original discoverer of the country and by his son. It has less of the free imaginative play of its predecessor. I've like thought throughout reading this book, I should write like a chapter, like a sub stack article that's like this same protagonist, like an average, you know, Victorian English guy, except instead of discovering Erewan, he discovers</p><p>modern America and he talks about, I don't know, technology and women's rights and governance and stuff.</p><p>You know, these people are so peculiar, you know, they have these strange glass instruments that they spend all their time touching these glass instruments. They communicate through them. They have no respect for a king or church.</p><p>Alok Singh </p><p>This is like...</p><p>Basically a show about missionary work</p><p>or like futuristic missionary work.</p><p>Theo Jaffee </p><p>Is that like a thing?</p><p>I'm certain that somebody at some point has written an isekai of like, Victorian goes to the modern world.</p><p>Alok Singh </p><p>I'm just going to ask to keep.</p><p>The ones of people going to the Victorian era. Hardly from it, though. Usually when they pick... Yeah, but it's the guy supposed to be like, well, you are the random protagonist who's dropped into God knows where.</p><p>Theo Jaffee </p><p>from what I think the interesting.</p><p>Alok Singh </p><p>Like certainly could go the other way where it's some guy in a totally different era and mindset that ends up here or ends up somewhere else. But then he would likely have few values in common with you. And that's hard to identify with.</p><p>Theo Jaffee </p><p>The one time I met Eliezer Yudkowsky in person was at Manifest and like one of the few questions I asked him, because he was in a group talking about isekai and I was like, what isekai should I read if I haven't read any yet? And he said, a Connecticut Yankee in King Arthur's court, which is, yeah, I think it's about.</p><p>Alok Singh </p><p>Yeah.</p><p>Good choice. I like it.</p><p>Theo Jaffee </p><p>It's, yeah, it's a Connecticut Yankee. You know, it's Mark Twain going back to King Arthur's times.</p><p>Was Mark Twain from Connecticut? No, Missouri.</p><p>Alok Singh </p><p>is out.</p><p>Theo Jaffee </p><p>He did live in Connecticut though. Yeah He lived all over the place. He was an interesting guy You notice this would like writers in the past a lot is like they they Traveled and moved all over the place more than almost anyone else at the time Mark Twain Ernest Hemingway Someone else who I can't think of off the top my head but</p><p>Alok Singh </p><p>died.</p><p>And.</p><p>Theo Jaffee </p><p>I've been to at least two of Hemingway's houses in Sun Valley, Idaho and in Key West, Florida.</p><p>Alok Singh </p><p>Thanks.</p><p>Theo Jaffee </p><p>Okay, what else about everyone?</p><p>Alok Singh </p><p>You can go to Sun Valley,</p><p>Idaho for that one conference.</p><p>Theo Jaffee </p><p>that, yeah, the Allen &amp; Co Billionaires Conference. That would be cool. I would actually think about that because I have family friends who have a place there.</p><p>Alok Singh </p><p>his whole bit about form and function,</p><p>search for reproductive system.</p><p>Theo Jaffee </p><p>Yeah, that was good.</p><p>Alok Singh </p><p>of, yeah, his whole approach to the chicken and egg is they each inform each other's form and function. And so interdependently define each other, basically refuting the argument that well, you're not a machine.</p><p>Theo Jaffee </p><p>Yeah. &#8275; he also sort of assumes that like the development of machines will come about as I guess, reverse engineering each of the systems in the human body. &#8275; like, you know, we're going to build an artificial cardiovascular system and yeah, there it is. &#8275; there are certain functions indeed of the vapor engine, which will probably remain unchanged for myriads of years.</p><p>Alok Singh </p><p>Where?</p><p>Theo Jaffee </p><p>which in fact will perhaps survive when the use of vapor has been superseded. The piston and cylinder, the beam, the flywheel, and other parts of the machine will probably be permanent, just as we see that man and many of the lower animals share like modes of eating, drinking, and sleeping. Thus they have hearts which beat as ours, veins and arteries, eyes, ears, and noses. They sigh even in their sleep and weep and yawn. They are affected by their children. They feel pleasure and pain, hope, fear, anger, shame. They have memory, impressions. They know that if certain things happen to them, they will die, and they fear death as much as we do.</p><p>They communicate their thoughts to one another and some of them deliberately act in concert. The comparison of similarities is endless. I only make it because some may say that since the vapor engine is not likely to be improved in the main particulars, it is unlikely to be henceforward extensively modified at all.</p><p>So I guess, yeah, this is more dated. We ended up like.</p><p>not needing most of these parts to make an artificial humanoid. We don't need veins and arteries unless you consider wires to be veins and arteries.</p><p>Alok Singh </p><p>Wires,</p><p>kind of, arteries. The closest analog I would imagine is if in a robot little computers get put in for lower latency, but this thing is fast enough, especially compared to the human scale, that such seems basically unnecessary, certainly for something to work very well.</p><p>Theo Jaffee </p><p>And with the brain, you we didn't design parts of the brain. It just sort of like happened.</p><p>You know, it was grown, it wasn't designed.</p><p>Alok Singh </p><p>Mm-hmm.</p><p>Theo Jaffee </p><p>yeah, he also gets into this sort of like almost Landian analysis of like capitalism and like human interactions through machines as like itself a sort of</p><p>Alok Singh </p><p>You know, I</p><p>just realized something. This is probably where the</p><p>Maybe where the term Butlerian Jihad comes from. know in the book it has some own story.</p><p>Theo Jaffee </p><p>This is where the term Butlerian jihad</p><p>comes from.</p><p>This is, I think, probably the reason that this book is so famous is because of Dune.</p><p>Alok Singh </p><p>Okay.</p><p>Theo Jaffee </p><p>I think</p><p>he did write a separate thing called like Darwin and the Machines. Darwin Among the Machines. Yeah, it's a letter to the editor. It was written by Samuel Butler. Yeah, okay. So it was written before Erewhon. And the Book of the Machines, yeah. Butler developed this and subsequent articles into the Book of the Machines.</p><p>our wiki source here. Yeah, there we go.</p><p>Yeah, definitely read this.</p><p>This is, guess, a more clear articulation of his, like, doomerism that's not wrapped in this sort of, fantasy world.</p><p>Where is it? Yeah.</p><p>Alok Singh </p><p>Thank you.</p><p>Theo Jaffee </p><p>Man will have become to the machine what the horse and the dog are to man. He will continue to exist, nay, even to improve, and will probably be better off in his state of domestication under the beneficent rule of the machines than he is in his present wild state.</p><p>Alok Singh </p><p>Yeah</p><p>Theo Jaffee </p><p>Yet our opinion is that war to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of a species. Let there be no exceptions made, no quarter shown. Let us at once go back to the primeval conditions of the race. If it be urged that this is impossible under the present condition of human affairs, this at once proves that the mischief is already done, that our servitude is commenced in good earnest, that we have raised the race of beings whom it is beyond our power to destroy, and that we are not only enslaved, but are absolutely acquiescent in our bondage.</p><p>Alok Singh </p><p>I wonder if Kaczynski knew of this?</p><p>Theo Jaffee </p><p>I'm sure Kaczynski knew of this, because it's extremely like, it sounds so much like his manifesto.</p><p>Alok Singh </p><p>But it's basic, well, like your shirt says, accelerationism, or this inevitable drive towards progress otherwise, unless it's deliberately cut off.</p><p>Theo Jaffee </p><p>I think the funniest part of the Unabomber Manifesto was just like,</p><p>He basically he starts it with like the industrial revolution and its consequences have been a disaster for the human race and Then he immediately goes into like owning the libtards</p><p>Alok Singh </p><p>have been exhausted.</p><p>Yeah, I remember. Just like...</p><p>Theo Jaffee </p><p>Which is so</p><p>funny. I remember like reading this in like fucking ninth grade math class. Yes. Reading this in ninth grade math class like wow this is actually such a fact. No this just came out of nowhere for me.</p><p>Alok Singh </p><p>Was it the fact that it was the second thing he listed that was funny?</p><p>Did you know he was gonna talk about that or did that just came out of nowhere for you?</p><p>He goes in for a while. He also says in it that he isn't even talking and he's deliberately about people who are just like explicitly leftist that he's like pointing at a category of people. I think the thing about over socialization is true. Feelings of inferiority maybe, but I think over socialization is with deeper insight.</p><p>Theo Jaffee </p><p>He likes them shirt already.</p><p>Yeah, yeah, that was</p><p>great. I actually, don't think I ever actually finished this.</p><p>Alok Singh </p><p>The most mathematician thing about him is citing different paragraphs. This is so much better than what</p><p>Theo Jaffee </p><p>This is also, this is so much better than...</p><p>Yeah.</p><p>It's so much better than Luigi's manifesto, which was like a single page of slop where he didn't even bother like making the argument. He was like, this has all been discussed at great length elsewhere.</p><p>Yeah, you know, people used to write real manifestos. This is 58 pages.</p><p>Alok Singh </p><p>Well, this is from one of America's top talents.</p><p>Theo Jaffee </p><p>Okay, first let us postulate that computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case, presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.</p><p>Alok Singh </p><p>Hey bro, you have to name the tech speaker again.</p><p>more bigger.</p><p>So nice.</p><p>Theo Jaffee </p><p>The fate of the human race would be at the mercy of the machines.</p><p>Alok Singh </p><p>Yeah, I remember those. We just become dependent on them. Very similar in Butler.</p><p>Theo Jaffee </p><p>Notice this, yeah, people won't be able to just</p><p>turn the machine off because they will be so dependent on them that turning them off would amount to suicide.</p><p>Alok Singh </p><p>Just actually just do this just put</p><p>to put Kaczynski and Butler side by side Does it let you do then your screen share you can share your whole screen instead of just a tab?</p><p>Theo Jaffee </p><p>Probably, I already am sharing the whole screen, right?</p><p>Alok Singh </p><p>Yeah, OK, we'll try splitting it then.</p><p>Theo Jaffee </p><p>Yeah, I would, but it's not letting me like resize the window. It's like glitching. &#8275; there we go.</p><p>Alok Singh </p><p>Yeah, just one window becoming smaller.</p><p>Theo Jaffee </p><p>There we go. Okay, now can you see side by side?</p><p>Alok Singh </p><p>No, I just see one window, the one of Darwin among the machines.</p><p>Theo Jaffee </p><p>Okay, let me.</p><p>There we go.</p><p>Alok Singh </p><p>I see it.</p><p>See me twice, actually.</p><p>Theo Jaffee </p><p>How do I minimize this?</p><p>Yeah, so this is Erewhon. This is Darwin among the machines, also Butler. This is Kaczynski. Yeah, there it is. Even if human work becomes necessary, machines will take care of more and more of simpler tasks so that there will be an increasing surplus of human workers of the lower levels of ability.</p><p>Alok Singh </p><p>We see this happening already and while we're continuing to see this happening.</p><p>Theo Jaffee </p><p>&#8275;</p><p>Yeah.</p><p>Alok Singh </p><p>many people who find it difficult or impossible to get work because for intellectual or psychological reasons they cannot acquire the level of training necessary. And on those who are employed ever increasing demands will be placed they need more and more training, more and more ability and have to be ever more reliable. Conforming, I don't know about conforming and docile I think that's his emotions slipping in. Being more reliable though I think is basically true. But essentially everyone will have through this competition an ever higher standard.</p><p>Theo Jaffee </p><p>Yeah, average CS major.</p><p>Mm-hmm.</p><p>Alok Singh </p><p>The machines act as this iron ruler on it, forcing everyone up.</p><p>Theo Jaffee </p><p>Yeah.</p><p>A great development of the service industries might provide work for human beings. Shining each other's shoes, driving each other around in taxicabs. Yeah, this one was wrong.</p><p>Alok Singh </p><p>and each other's shoes.</p><p>making handicrafts for each other, waiting on each other's tables. It seems just a thoroughly contemptible way.</p><p>Theo Jaffee </p><p>He writes about this</p><p>in the plural, first person, us and we. You know the like freedom club thing?</p><p>Alok Singh </p><p>Yeah, I think I honestly think that because he's the</p><p>I think it's because he's a mathematician in math papers. We always use we</p><p>Theo Jaffee </p><p>Yeah, so his pseudonym FC for Freedom Club, which is so stupid. It's like the meme about like Tolkien naming everything so, you know, in such a special way, except he names the the Doom Mountain Mount Doom. Like you have this, you know, math genius writing this brilliant essay and then giving himself the pseudonym Freedom Club.</p><p>Alok Singh </p><p>Okay, it'd be better to dump the whole stinking system and take the consequences.</p><p>Theo Jaffee </p><p>This</p><p>is not accelerationism.</p><p>give some indications of how to go about stopping it.</p><p>Alok Singh </p><p>No files are</p><p>taking us all on an utterly reckless ride. We go up to 180 into the unknown. Many people understand something of what technical progress is doing to us. You take a attitude to the things inevitable. But we for once he actually says who we supposed to be.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Did this inspire Fight Club?</p><p>Alok Singh </p><p>I mean, it was.</p><p>And there he is.</p><p>Theo Jaffee </p><p>Yeah, probably, right?</p><p>Fight Club and Office Space, I've never watched Office Space. Fight Club was inspired by Ted Kaczynski. I think there's some similar.</p><p>Alok Singh </p><p>I know Chuck, wait, know Chuck,</p><p>the author of Fight Club, Chuck Palahniuk has showed up to Jim Goad's book signing, The Taki Mag Guy, or ex Taki Mag Guy. And it's like fairly on the right wing. And cause this guy's not exactly right wing, he's like associated with it. So maybe.</p><p>Theo Jaffee </p><p>Hmm. Interesting. Based.</p><p>&#8275;</p><p>i don't know... kaczynski seems like he's... not really right or left wing yeah, he does</p><p>Alok Singh </p><p>I mean, he's insulting the left and the right. I mean, he insults the</p><p>right wing in here too, and of course we saw all the stuff about the left wing.</p><p>Theo Jaffee </p><p>He insults the left a lot, I think just to distance himself from them because, you know, it is, I guess, typically assumed that any anti-tech, anti-society anarchist would be a leftist, but he is not. Yeah, we have no illusions about the feasibility of creating a new ideal form of society.</p><p>Alok Singh </p><p>This is an</p><p>amazing sentence in 184. Most people will agree that nature is beautiful. Certainly it has tremendous popular appeal. Nature has popular appeal. That's a great phrase. The radical environmental is already holding the ideology that exalts nature and opposes technology. Yeah, that hasn't changed.</p><p>Theo Jaffee </p><p>It does, yeah.</p><p>This is so true, yeah.</p><p>Yeah, &#8275; I'm sure everyone in California knows that well.</p><p>Nature takes care of itself.</p><p>Well, you can't have your cake and eat it too. To gain one thing you have to sacrifice another. Or as economists say, there are no solutions, only trade-offs.</p><p>Alok Singh </p><p>I hope you hate psychological conflict.</p><p>Oh, this is interesting. 186. His conclusion from revolutionary ideology should therefore be developed on two levels. Oh, okay. This one is... This is just like how you do a religion. You make one version for the elites and one for the rest.</p><p>Theo Jaffee </p><p>best.</p><p>Yeah, for the elites you have HP, MOR and the sequences and then for the rest you have like the Terminator.</p><p>Alok Singh </p><p>appreciation of the problem, the price it has to pay for getting rid of the system, the capable people in the instrumental, these people should be dressed on as rational level as possible, faction ever tends to be distorted, and yada yada. On the second level, still have only read half of it. If after finishing this project of getting a textbook into Lean and I'll watch Dune 2 finally, so no spoilers.</p><p>Theo Jaffee </p><p>Have you read Dune?</p><p>Okay.</p><p>But like, what do they say about the Butlerian Jihad? This is like not actually part of the...</p><p>Alok Singh </p><p>I said that it could have</p><p>easily been based on Samuel Butler, but not that it's known.</p><p>Theo Jaffee </p><p>It's certainly based on Samuel Butler, like...</p><p>How could it not be based on Samuel Butler?</p><p>Alok Singh </p><p>I totally</p><p>believe so. Herbert wouldn't miss something like that.</p><p>Theo Jaffee </p><p>They did a literal Butlerian jihad. Yeah. So convincing was his reasoning that, &#8275; yeah, he carried the country with them. made a clean sweep.</p><p>Alok Singh </p><p>That's the most unrealistic</p><p>part that basically people are convinced by a guy talking really good. That's the part that's the most unbelievable, everyone's solution to the machines and why they managed to destroy them. I don't think we'll be, &#8275; certainly not as absolute as them.</p><p>Theo Jaffee </p><p>Uhhh</p><p>Well, no, but-</p><p>I don't know, could the Luddites have ever reasonably succeeded? Do you think? Like the actual Luddites in actual Victorian England?</p><p>Alok Singh </p><p>their goal. see, the Luddites were decently well off weavers. They started well off until they became a bit pointless. Which is why they were real pissed. Because I they're mostly skilled craftsmen and their skill just didn't matter. So developers could very well become the next Luddites.</p><p>Theo Jaffee </p><p>Yeah, I'd believe it. Peak activity 1811 to 17.</p><p>Yeah, wow.</p><p>There were more troops involved in suppressing them than the Duke of Wellington led during the Peninsular War. That's incredible. And it was at the same time as the Peninsular War.</p><p>Alok Singh </p><p>They assassinated some mill owner.</p><p>Theo Jaffee </p><p>Yeah, wow, I guess I never thought about like this happened during the Napoleonic Wars.</p><p>Parliament made machine breaking, i.e. industrial sabotage, a capital crime with the destruction of stocking frames. I think in Britain today, if the Luddites happened and you had a bunch of people smashing machines, they would just require an ID to buy hammers at hardware stores.</p><p>Alok Singh </p><p>etc. act.</p><p>Theo Jaffee </p><p>Did you see that Keir Starmer tweet where he was like, you know, knife crime will no longer be tolerated. We are banning the purchase of samurai swords.</p><p>Alok Singh </p><p>Any kind of...</p><p>Theo Jaffee </p><p>The way to discourage ethnic conflict is not through militant advocacy of minority rights. Instead, the Revolutionary should emphasize that although minorities do suffer more or less disadvantage, this disadvantage is of peripheral significance. Our real enemy is the industrial technological system. And in the struggle against the system, ethnic distinctions are of no importance. Yeah.</p><p>Alok Singh </p><p>Basically to swallow it for the system, which I don't think is happening.</p><p>will not be political revolution. That's a big difference from AI safety, where if anything, they're hoping for it be a political one as a lever on industry and technology. And the economics-wise, well, the economics of AI, unless you get wiped out, are real good.</p><p>Theo Jaffee </p><p>I think... Yeah.</p><p>Yeah, I saw a tweet recently that was like, they really mistimed the pause push because like, basically all the AI safety orgs tried to like push for the six month pause after GPT-4 came out, which was way too early. Like most normies were completely unaware at the time. Now normies are just like sort of starting to wake up. &#8275; especially with the AI art generation. I think probably the best time to seek a pause would have been a year from now. Maybe they could do it again.</p><p>and see if people would be more open to it.</p><p>Alok Singh </p><p>Yeah, maybe.</p><p>Will people remember pause if they're normalish? Very possibly not. Because you mean a whole year with all this stuff happening. That was like forever ago. Think of all the random things that happened in Trump's presidency that now I bet you couldn't list a single one of.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Yeah.</p><p>Even the first presidency?</p><p>Alok Singh </p><p>Yeah, I am.</p><p>Theo Jaffee </p><p>&#8275; yeah.</p><p>Alok Singh </p><p>I'm sure</p><p>you can list stuff, but you remember how there was like this endless stream of things.</p><p>Theo Jaffee </p><p>Sort of. I mean, probably like in your social circles, the idea of a Trump presidency was much, much weirder than in my social circles.</p><p>at the time.</p><p>Alok Singh </p><p>Yeah, definitely. Even so, was still just had like all sorts of odd moments just relative to any presidency. And because Trump himself is like kind of a weird guy at his core.</p><p>Theo Jaffee </p><p>Okay, this is really funny, wow. Whenever it is suggested that the United States should cut back on technological progress or economic growth, people get hysterical and start screaming that if we fall behind in technology, the Japanese will get ahead of us. Yeah.</p><p>Alok Singh </p><p>Yeah, I already read that. And the Japan. The Chinese</p><p>Japanese, holy robots, holy macro, Captain, the world will fly off its orbit.</p><p>Theo Jaffee </p><p>Uh-huh. But more reasonably, it is</p><p>argued that if the relatively democratic nations of the world fall behind in technology, while nasty dictatorial nations like China continue to progress, eventually the dictators may come to dominate the world. Wow, this is prescient. Really.</p><p>Alok Singh </p><p>This is why the industrial system should be attacked in all nations at once.</p><p>Theo Jaffee </p><p>&#8275;</p><p>Alok Singh </p><p>What do you think about Cuba? says look at Cuba at the end of that paragraph.</p><p>Theo Jaffee </p><p>Dictator controlled systems approved and efficient,</p><p>Alok Singh </p><p>Astro system controlled by dictator.</p><p>Okay.</p><p>Theo Jaffee </p><p>Yeah, so he was just like a...</p><p>Alok Singh </p><p>pretty trade agreement like NAFTA.</p><p>Modern man is too much power, yada yada. They fail to distinguish the power for organizations and individuals.</p><p>Theo Jaffee </p><p>People need power, yeah.</p><p>Alok Singh </p><p>Modern man is a man's power when nature goes evil. Modern nature was a far less power than primitive men.</p><p>Theo Jaffee </p><p>You need a license for everything and with the license come rules and regulations.</p><p>Alok Singh </p><p>there.</p><p>Theo Jaffee </p><p>There, yeah, wow, there's so many bangers in here. I don't think I've ever read this portion of it.</p><p>Imagine an alcoholic sitting with a barrel of wine in front of him. Suppose he starts saying to himself, wine isn't bad for you if used in moderation. Why, they say small amounts of wine are even good for you. It won't do me any harm if I take just one little drink.</p><p>Alok Singh </p><p>Never forget that the human</p><p>race with technology is just like an alcoholic liberal wine. Well, that's true.</p><p>Theo Jaffee </p><p>Yeah, that's us with the phones.</p><p>Alok Singh </p><p>Revolutionaries</p><p>should have as many children as they can.</p><p>Theo Jaffee </p><p>Wow, this dude is paced as fuck.</p><p>Alok Singh </p><p>There's strong scientific evidence that social attitudes are to a significant extent inherited.</p><p>Theo Jaffee </p><p>Wow.</p><p>Alok Singh </p><p>No one suggests that social area at all</p><p>From our point of view, doesn't matter that much whether they're passed on genetically or through childhood training, just that they are.</p><p>Theo Jaffee </p><p>Wow, I need to read this in full.</p><p>Alok Singh </p><p>The trouble is that many of the people who are inclined to rebel against the industrial system are also concerned about overpopulation.</p><p>What do you say about artificial intelligence specifically? Like you just search artificial, because I don't think if you search AI you'll get.</p><p>Theo Jaffee </p><p>This is the one and only keyword search match for artificial intelligence.</p><p>Alok Singh </p><p>And this that's not well, that would mean that we were looking at this, but go up a bit. So what did he say again for the scenario where we do develop it?</p><p>yeah, the people are really dependent.</p><p>Theo Jaffee </p><p>Intelligent machines, yeah. Right. We said, yeah, we saw this. Humans will be dependent. Due to improved techniques, the elite will have greater control over the masses. And because human work will no longer be necessary, the masses will be superfluous, a useless burden on the system. If the elite is ruthless, they may simply decide to exterminate the mass of humanity. If they're humane, they may use propaganda or other psychological or biological techniques.</p><p>Alok Singh </p><p>If your ruthless teammate has just-</p><p>Or if it consists of soft-hearted liberals,</p><p>they may decide to play the role of good shepherds to the rest of the race. Psychologically hygienic, that's quite a phrase. Everyone has a wholesome hobby to keep them busy. And everyone may become dissatisfied, undergoes quote treatment to cure his quote problem. Of course, life will be so purposeful that people have to be biologically or psychologically engineered, either remove their need for the power process or make them supplement that drive.</p><p>Yeah, I basically buy this. Maybe happy society, they most certainly will not be free. They'll reduce the status of domestic animals.</p><p>And then basically as the premise of well, what if that doesn't happen? Well, we basically are filling in like a section of Kaczynski's book with all this AI safety stuff.</p><p>Basically section like 173 and 174.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Why do we read Erewhon instead of this?</p><p>Is it because fiction is important?</p><p>Alok Singh </p><p>Butler's a better writer.</p><p>Theo Jaffee </p><p>&#8275; I'm not convinced of that.</p><p>Alok Singh </p><p>And but but there's</p><p>I think so. But I also think Butler's prescience is a more interesting given that it's well before even computers. Although he has time cognate, but he has to extrapolate it from basically a loom as a machine.</p><p>Theo Jaffee </p><p>Yeah, I agree.</p><p>Yeah, that was pretty impressive. &#8275;</p><p>I think there have been, I guess you could say the Golem is kind of an AI story. Talos was a sculpture, or a giant made of bronze who act as guardian for the island of Crete.</p><p>Alok Singh </p><p>Yeah.</p><p>You throw boulders at</p><p>Theo Jaffee </p><p>Faust, Frankenstein, yeah, artificial life. But not the same thing as artificial intelligence. Yeah, automata.</p><p>Alok Singh </p><p>RUR, something RUR. yeah, Leibniz of course talks about, okay, let me look at the Leibniz archive actually. Try looking at the notes of Leibniz. A bunch of them are online and now with the PDF tools, the fact that they're written in like six languages shouldn't be such a problem.</p><p>not those notes. Maybe try Leibniz Archive and I will also look.</p><p>Theo Jaffee </p><p>Leibniz archive, is it Hanover? Wow.</p><p>Alok Singh </p><p>200,000 pages of 50,000 pieces of writing.</p><p>I found one essay of his about spider silk for armor.</p><p>Theo Jaffee </p><p>machine would use an alphabet of human thoughts and rules to combine them. Yeah, so he was wrong about that.</p><p>Why did everyone think that AI would come about by listing out, enumerating a bunch of different human concepts and then manually drawing connections between them, as if that was possible?</p><p>Alok Singh </p><p>alphabet.</p><p>in a sense it kind of is doing that. Manual is a bit of a stretch. It's just that it's not being done by the human, which is the big draw, but it is done by a lot of brute force.</p><p>Theo Jaffee </p><p>Yeah, but not by the humans.</p><p>Alok Singh </p><p>Yeah, which is the big thing.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>five or four days is too late, but you'd have to open three cursors.</p><p>Theo Jaffee </p><p>Calculating machines were built by Leibniz. Yeah What a genius this dude was. Very underrated.</p><p>Alok Singh </p><p>Yeah,</p><p>yes, absolutely. Non-standard analysis comes from his work.</p><p>So you know I like him.</p><p>Theo Jaffee </p><p>Highly rated but still underrated, as Cowen would say.</p><p>Alok Singh </p><p>Yeah.</p><p>Theo Jaffee </p><p>Okay, so like what else do we have from Erewhon that's not just AI?</p><p>Alok Singh </p><p>Yeah.</p><p>That's.</p><p>Theo Jaffee </p><p>What was the most interesting</p><p>thing in here that wasn't the Book of the Machines?</p><p>Alok Singh </p><p>Well, this is still in the book of machines just when he compares them He says a man hardly owns himself basically because he's got so many parasites in him. Maybe such a little parasite or He's such a hive and swarm of parasites as doubtful whether his body is not more theirs than his and Whether he is anything but another kind of anteep after all</p><p>Theo Jaffee </p><p>Yeah. yeah, what I was saying earlier, he was very early on the, on the Nick Land idea that like society itself is a sort of machine.</p><p>Alok Singh </p><p>Again.</p><p>Theo Jaffee </p><p>this.</p><p>He wrote this somewhere, yeah.</p><p>We are misled by considering any complicated machine as a single thing. In truth, it is a city or society, each member of which was bred truly after its kind.</p><p>&#8275; the machine</p><p>Alok Singh </p><p>I Butler should get the credit as far as I can assign anyway because, especially given that he wrote Darwin Among the Machines, he's specifically making a claim about superintelligence.</p><p>Theo Jaffee </p><p>Doran among the machines is, yeah, very, very prescient.</p><p>Alok Singh </p><p>Also 1863, that's pretty early.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>Predicting a machine before the Industrial Revolution, that would be quite something.</p><p>I hear you can go into the the rates of vegetable animal and mineral.</p><p>Theo Jaffee </p><p>Yeah, the the rights of vegetables was interesting.</p><p>yeah.</p><p>Yeah.</p><p>Even the Puritans, after a vain attempt to subsist on a kind of jam made of apples and yellow cabbage leaves, succumb to the inevitable and resign themselves to a diet of roast beef and mutton with all the usual adjuncts of a modern dinner table.</p><p>Alok Singh </p><p>Okay, he really does insist on the speed thing. At the end of the first section of the machines, or the end of the second section, must always be remembered that man's body is what it is, though, through having been molded into his presence shaped by the chances and change of the many millions of years. But his organization never advanced with anything like the rapidity with which the machines advances. This is the most alarming feature in the case, and I must be pardoned for insisting on it so frequently.</p><p>So he certainly hones in on the right things. The speed of development.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>which now for his time probably feels like everyone looking at the left side of the exponential curve, like, farmingly small prosaic and kind of cute. And yet.</p><p>I was like, oh, machines in 1863, they're just too good. Or, well, that's not, in fact, he specifically says that's not his claim. Not that they're too good, but that soon there will be. Although his soon, I don't know if he'd speculate on how long it would take.</p><p>Theo Jaffee </p><p>Well you had some pretty powerful machines not that long after this, you know, the atomic bomb or even like, I don't know, the dreadnought was only 30 years after this.</p><p>30, 40 years.</p><p>Alok Singh </p><p>He lived in 1902.</p><p>Theo Jaffee </p><p>So he saw the final peaceful days of the West before the West fell and billions died.</p><p>Alok Singh </p><p>Also, it</p><p>says that he had studies on the evidence for Christianity, so I assume that's where the whole obsession comes from. He also developed the theory that the Odyssey was written by a young Sicilian. The scenes of the poem reflect the coast of Sicily. Blah, blah.</p><p>also has the theory that if the Shakespeare sonnets be arranged to tell a story about a homosexual affair.</p><p>Also that Homer's deities and the Iliad are like humans, but without the virtue and that he must have designed Samuel Butler Wikipedia page.</p><p>Theo Jaffee </p><p>Where are reading this?</p><p>really?</p><p>Alok Singh </p><p>This is interesting. He argued that each organism was not distinct from its parents, that it was merely an extension at a later stage of evolution. Birth has been made too much of, quote him.</p><p>That's an interesting take.</p><p>Theo Jaffee </p><p>That is an interesting take. He does talk about like birth a lot in heroin.</p><p>Alok Singh </p><p>It's like one that, like you can define it so that</p><p>it's technically correct, but I don't think it's all that helpful. But his thing about birth has been made too much of.</p><p>Theo Jaffee </p><p>Yeah, like did you read the birth formulae and the world of the unborn?</p><p>Alok Singh </p><p>I remember there</p><p>were bizarre world views on the born or unborn.</p><p>Theo Jaffee </p><p>Yeah,</p><p>because he has to sort of reconcile like Erewhon's treatment of illness as criminal with like the fact that people need to be born and also like, you know, birth is a sort of illness kind of, you know, pregnant women are sick. And birth itself is like a very, you know, messy medical thing. So like, he thinks that babies come from the kingdom of the unborn and they're like these, I guess,</p><p>almost like omnipotent like spirit like angel type beings that then get bored and decide to like wipe their memories and spawn as a human child and they have to like sign off on this.</p><p>Alok Singh </p><p>Thank</p><p>Theo Jaffee </p><p>Yeah.</p><p>So the unborn, like, the wisest of the unborn will explain this thing about why being born is actually terrible, why you should never want to be born. Was Butler an antinatalist?</p><p>or is he just gay?</p><p>Alok Singh </p><p>I don't know. I don't think he's an antinatalist. was a serious but amateur student of the subjects he undertook, especially religious, orthodoxy, and evolutionary thought, and his controversial assertions effectively shut him out from both the opposing factions of church and science. Ow. In those days, one was either a religionist or a Darwinian, but he was neither.</p><p>Theo Jaffee </p><p>Yeah, that sounds about right.</p><p>yeah, the way of all flesh.</p><p>semi-autobiographical novel that was like really long.</p><p>Alok Singh </p><p>It claims in Dune that it is named for Butler.</p><p>same.</p><p>Theo Jaffee </p><p>Who else could it have been named after?</p><p>Alok Singh </p><p>Again, I mean, I'm not doubting this, but I want to see if there is a firsthand evidence for it directly stated.</p><p>Theo Jaffee </p><p>Yeah.</p><p>I'm getting kind of tired. Is there anything else in Erewhon?</p><p>Alok Singh </p><p>He says the dog they're more self-sacrificing than humans</p><p>Theo Jaffee </p><p>I think that's kind of true.</p><p>What's name of that like Japanese dog who waited for, huh? Hachiko, yeah. Most humans wouldn't do that.</p><p>Alok Singh </p><p>Hachiko. Hachiko.</p><p>Theo Jaffee </p><p>you call</p><p>Yeah, wow. Incredible. I forgot to look at the statue of Hachiko at Shibuya Station.</p><p>sad.</p><p>Was Hachiko the first Doge?</p><p>He was in Akita.</p><p>The escape chapter reminded me of Around the World in 80 Days.</p><p>Alok Singh </p><p>I didn't know you'd know that.</p><p>Theo Jaffee </p><p>know what.</p><p>Alok Singh </p><p>around more than 80 days.</p><p>Theo Jaffee </p><p>Yeah, I used to read fiction. I actually read that a lot because as you may have observed, I have a sort of autistic obsession with maps and timelines and stuff. And you could very clearly chart out the timeline and the map of their voyage. I still basically remember almost every step of it. Yeah, London, Paris, Turin, Brindisi, and then they go...</p><p>on a boat to Port Said and down to Suez and then out in Yemen and then to India, first Mumbai and then they go up on the train but the train can't go all the way so they have to take an elephant and then take the train down to Kolkata and then to Singapore and then Hong Kong and then Yokohama, Japan and then San Francisco and then they take</p><p>like part railroad and part like dog sled across the US to the East Coast to New York and then boat back to London.</p><p>Alok Singh </p><p>Okay.</p><p>Theo Jaffee </p><p>Although there was actually no hot air balloon in the original book, that was like an invention of a movie adaptation or something.</p><p>I think this was good writing here.</p><p>Alok Singh </p><p>is across the sink, also remember that below in your math there's one with alpine gorges</p><p>When he says in the very beginning in his teaser to the book of the machines, think in chapter six, that they're destined to become instinct as vital from distinct from man as man is from vegetable or animal.</p><p>Theo Jaffee </p><p>Yeah, he also talks about how like, &#8275; specifically through, consciousness, like human consciousness is very different from whatever it is that animals and vegetables experience, if it could even be called experience and</p><p>Alok Singh </p><p>hair.</p><p>Yeah,</p><p>he goes into a Venus fly trap in a fair amount of depth.</p><p>Theo Jaffee </p><p>Yeah.</p><p>Alok Singh </p><p>Yeah, machines were ultimately destined to supplant the race of man and to become instinct with a vitality as different from and superior to that of animals as animals to vegetable life. Although as they later expand, it's really more like animal like machine to human and human to animal and animal to plant vegetables. I guess vegetable to mineral and that's where it ends.</p><p>Theo Jaffee </p><p>Uh...oh yeah. Upon his asking me to name some of our most advanced machines, I did not dare to tell him of our steam engines and railroads and electric telegraphs. And it was puzzling my brain to think what I could say when, of all things in the world, balloons suggested themselves.</p><p>Huh, I didn't even notice that detail the first time I read. Because this balloon detail comes back later when he escapes on the balloon. That's cool, yeah.</p><p>Alok Singh </p><p>Yeah.</p><p>Check off the balloon.</p><p>Theo Jaffee </p><p>Hmm. All right, well, yeah, this was fun. I did like this book. I will read more fiction now. Thank you, you inspired me.</p><p>Alok Singh </p><p>All right. Let's call it.</p>]]></content:encoded></item><item><title><![CDATA[#19: Samo Burja]]></title><description><![CDATA[Superintelligence and History, Ideology, and 21st Century Philosophy]]></description><link>https://www.theojaffee.com/p/19-samo-burja</link><guid isPermaLink="false">https://www.theojaffee.com/p/19-samo-burja</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sun, 22 Sep 2024 21:34:40 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/149262675/06f61f260a7c2b9d0e4c74cf66f519a6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Samo Burja is a writer, historian, and political scientist, the founder of civilizational consulting firm Bismarck Analysis, and the editor-in-chief of governance futurism magazine Palladium.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:06 - Implications of OpenAI o1</p><p>10:21 - Implications of superintelligence on history</p><p>35:06 - Palladium, Chinese technocracy, ideology, and media</p><p>1:00:44 - Best ideas, philosophers, and works of the past 20-30 years</p><h3>Links</h3><p>Samo&#8217;s Website: <a href="https://samoburja.com/">https://samoburja.com/</a></p><p>Bismarck Analysis: <a href="https://www.bismarckanalysis.com/">https://www.bismarckanalysis.com/</a></p><p>Palladium: <a href="https://www.palladiummag.com/">https://www.palladiummag.com/</a></p><p>Bismarck&#8217;s Twitter: <a href="https://x.com/bismarckanlys">https://x.com/bismarckanlys</a></p><p>Palladium&#8217;s Twitter: <a href="https://x.com/palladiummag">https://x.com/palladiummag</a></p><p>Samo&#8217;s Twitter: <a href="https://x.com/samoburja">https://x.com/samoburja</a></p><h3>More Episodes</h3><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:00)</p><p>Welcome back to episode 19 of the Theo Jaffee Podcast. Today I had the pleasure of speaking with Samo Burja. Samo is a writer, historian, and political scientist, and he&#8217;s done a lot. He developed Great Founder Theory, the idea that societal change is often primarily driven by institutions shaped by the choices of powerful individuals. He founded Bismarck Analysis, a consulting firm that publishes detailed research on companies, industries, nations, and other large-scale societal organizations. He chairs the editorial board of Palladium, a magazine focused on &#8220;governance futurism&#8221;, and with, in my opinion, immaculate taste and aesthetics. He previously did research at the Long Now Foundation, and his Twitter bio reads &#8220;There&#8217;s never been an immortal society. Figuring out why.&#8221; In this episode, we talk about the meaning of AI on the trajectory of history, how we can get the best of Chinese technocracy while avoiding the worst, and some of the interesting new intellectual movements breaking the stagnation of the past few decades. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Samo Burja.</p><p>Theo Jaffee (01:07)</p><p>Hi, welcome back to episode 19 of the Theo Jaffee podcast. We're here today with Samo Burja So we should start the conversation today with the massive news that just came out yesterday where OpenAI announced O1, which is their new reasoning system. And they've proven for the first time that reinforcement learning can scale just like pre -training. And a lot of people are seeing this as, you know, the golden path towards AGI. So...</p><p>Samo Burja (01:12)</p><p>Great to be here.</p><p>Theo Jaffee (01:37)</p><p>What do you think about current day AI right now in terms of the kind of research that you do at Bismarck? How helpful is it?</p><p>Samo Burja (01:47)</p><p>First off, do think it's an impressive result from OpenAI that they have managed to reduce hallucination in the mathematics portion especially. Infamously, that had been a problem because most white collar professions are actually professions where reliability is the key foundation that makes something worth buying. You don't want a doctor that is</p><p>is right 95 % of the time and wrong 5 % of the time, or honestly even a lawyer, right? So I think many people are actually in professions where they are paid for consistency and reliability of a certain intellectual level.</p><p>I think that to answer your question, I personally don't actually use it that much, but perhaps I will use it more with this new generation. I have heard people use it very effectively to find search terms in literature review. So basically you ask the AI, what is something called in a specialist field like medicine, energy, law, finance, et cetera, you will usually get a</p><p>pretty decent explanation and I think the latest launch had sort of a promotion or we could call it a demonstration, a little promotion video with Tyler Cowan demonstrating this capability for economics. So until this very model I have not found that much use for it in my work.</p><p>because it kind of generated a high school essay. But let's say if this is now achieving college essay levels, perhaps there's some use there and I'll certainly be playing around with it and experimenting.</p><p>Theo Jaffee (03:37)</p><p>what specific capabilities would you want to see it have before you would consider it to be genuinely useful? Aside from, you know, just...</p><p>Samo Burja (03:45)</p><p>Well,</p><p>Genuinely useful for different people might mean different things. It's already obviously genuinely useful. I think it's done a great humanitarian service to the world by automating homework, which we probably should have actually abolished long ago since the educational statistics show it doesn't make much of a difference. So it's kind of a strange, busy, make -work thing that we've imposed on the children and young people of the world for no real benefit, which actually</p><p>is a shocking amount of the economy. So I think that it has been already very good at fulfilling many of the roles that a literate citizen or literate employee has historically fulfilled, right? It could be used to do data entry. I honestly hope they make it easier and easier to have it do data entry because the amount of paperwork we all deal with has radically increased over the last few years.</p><p>We don't think of it as paperwork, but every time you have to re -enter your passport, your password, your date of birth...</p><p>your credit card number, your zip code, your address. Every time you do that, that is actually paperwork. Every time you have to use two factor authentication. Now, of course, there are password managers that are supposed to handle this, but they're brittle. I think an AI would be excellent at parsing UI. I personally never want my phone to ask me for my date of birth again. Like I'm just done with that. Every time you install a</p><p>app it wants you to sign up to everything. So in the war on paperwork where bureaucracies offload the work they're supposed to be doing onto the user onto the citizen I'm hoping that AI will, generative AI as such and especially the text incarnation, will allow us to spam the bureaucracies with as many pages of replies and paperwork as they spam us with. So you know that's that's my big hope and structurally I think this</p><p>will make society richer because it let human beings go do what human beings do best, which currently is various forms of physical labor and certain kinds of creative original thinking. Which actually brings me to sort of this point where, you know, I mentioned Tyler Cowan earlier, one of his books, Averages Over, feels oddly prescient in the aftermath of AI because what AI has done best is automate the</p><p>the median white collar profession. So in other words, if your job is done by millions of people, it can probably generate the data necessary for these models to learn what your job is. But if your job is much smaller, if it only has 20 people or so in it, if actually maybe your firm is the only firm that does something, then I think your white collar work is going to stay intact.</p><p>The you are, the more differentiated you are, the less of a training set that your field of study produces, the harder it will be to automate. Now, medicine and law, two examples I raised, are actually areas with huge datasets. So I actually expect if not this version of OpenAI's model, but let's say a version within the next five years, will achieve the reliability of an excellent doctor. However, finance,</p><p>law and medicine have political power that protects them.</p><p>they will mandate the use of a human and the oversight of a human expert, be it a human lawyer, a human doctor, a human financial advisor. By the way, you know that you actually as a normal citizen cannot just invest in random stocks without an intermediary financial priesthood, right? Like you can't actually do that. You can do it in crypto. You can't do it in traditional finance. Oddly non -democratic. There's no good economic theory for it. It's basically just paternalism to prevent people from gambling.</p><p>Theo Jaffee (07:49)</p><p>Mm -hmm.</p><p>Samo Burja (08:01)</p><p>on stocks. But if we're doing paternalism that way, maybe we should protect people from the consumer market, etc. So I claim finance is gate -kept and is there as well. So once these jobs are automated, any job with political protection, with a structural guild -like lock on credentials, those jobs will actually not be automated by AI. Let me explain what I mean.</p><p>the substantive work that they do will be fully automated. But you can't automate fake jobs. So since you can't automate fake jobs, instead of it being a 20 % self -serving job with 80 % drudgery, it'll become 100 % self -serving. If you can spend 90 % of the time or 100 % of your time lobbying for the existence of your job, oof, in a big bureaucracy, that's pretty powerful. And in a society, it's pretty powerful.</p><p>busy bureaucrats are at the end of the day actually politically not that powerful. It's lazy, well -rested bureaucrats that are powerful. So on the other side of this, any job that does not have such protection, that is open to market forces, well, it'll be partially obsolete. It will increase economic productivity. So in my opinion, the real race in our society is will generative AI</p><p>empower new productive jobs by automating old productive jobs faster than it will empower through giving them more time to basically pursue rent seeking, the rent seeking jobs of our society and never underestimate the ability of an extractive class to really like like lock down and crash economic growth. I think this is the default of human history and economic growth is the exception.</p><p>Theo Jaffee (10:03)</p><p>So speaking of human history and AI, on the grand...</p><p>Samo Burja (10:06)</p><p>And by the way, that's why I emphasize so strongly, I hope the AI helps us beat the bureaucracies rather than, you know, I don't think it'll eliminate it. I think we should use it as a weapon against bureaucracy. Yeah.</p><p>Theo Jaffee (10:18)</p><p>I agree. So on the grand, you know, millennium scale arc of human history, like what does AI mean? Is it like, will lead to more of an end state once we reach AGI? Will the post -AGI world be a kind of, you know, epilogue to human history or will it be something entirely different? Will it lead to new eras of human history?</p><p>Samo Burja (10:41)</p><p>Well, it really depends on what you mean by AGI. The term has like something like six distinct uses. There's the official OpenAI definition, which is a little bit circular.</p><p>Even if you read their documents, their official corporate definition is something like, AI that automates all jobs. Which, by the way, mind you, I think most jobs do not actually require agentic, human, general purpose intelligence. I think most jobs are actually just fairly complicated scripts, but most jobs, I think you could honestly automate them with a sufficient amount of spaghetti code.</p><p>we did not have this transformer architecture revolution and we had 10 ,000 years more to fiddle about with coding and programming. actually think just non -AI computer programs could handle 95 to 99 percent of the jobs out there including with robotics and so on even without learning, without machine learning. And I think that's because our economy is actually shockingly primitive.</p><p>I can give some examples of how our economies are shockingly primitive Australia, which we think of as a first world country is a resource -based economy dig up rocks Raise sheep sell this What is this minecraft like how can a first world country? achieve such a surplus by basically selling sheep selling copper Selling various minerals. It's kind of hilarious. It shouldn't happen in</p><p>2024. Yet it does, because the other economies are not that advanced either. At the end of the day, objectively speaking, a car, be it an internal combustion engine or an electric vehicle, is not that complicated a machine. You can explain it to a smart high schooler over the course of a week, every single component in that car. Metallurgy is a bit trickier. But at this point, when we're talking machine tools, metallurgy, robotics, cars, what is</p><p>That's the economy of Germany and Japan. Possibly the most complicated thing ever is something that can be handled quite well by a small island nation. How is it possible that Taiwan produces so much of the world's semiconductors? It's a country of 16 million people. What are the other 8 billion people doing? And the answer is the other 8 billion people on this planet are actually doing stuff not too dissimilar from Australia. Like they're digging up stuff, they're growing stuff,</p><p>About 300 million or so are engaged in making plastics, making steel.</p><p>100 million or so are busy making cars, busy manufacturing, et cetera, et cetera. And then we have all the lawyering and the bureaucracy, cetera, et And you know, let's say a million people, let's say 2 million people are directly involved in the manufacturing of semiconductors. If we added up the labor force of TSMC and like the labor force of, you know, Foxconn and the labor force of Tokyo Electronics and ARM,</p><p>in Britain and maybe let's count ASML too. Maybe if we jury rig this number we can get it to like 20 million people are maybe involved directly in the manufacture of semiconductors which are almost the most complicated machines in existence other than something you know singular like the Large Hadron Collider. Like the chip fabs are immensely complicated. So if I break down the world's economy as to what are these eight billion humans doing you realize we kind of don't need artificial intelligence</p><p>intelligence to automate these jobs. I don't know, Theo, if you were immortal and I gave you 200 years to write a program that doesn't use machine learning, that knows how to herd cattle, I bet you could do it. Right? I bet you could do it. I don't think it's that hard. Well, 200 years haven't passed, okay? We've barely automated spreadsheets in the 80s, right? We've barely figured out how to send money, a made up thing that could be easily represented with electrons.</p><p>Theo Jaffee (14:46)</p><p>Why hasn't it been done then?</p><p>Samo Burja (15:02)</p><p>over the place. So I really, you know, I really do think that</p><p>the world's economy is almost, well, okay, it's a bit more complicated. If you look at the history of automation, automation tends to happen right next to an automated field. So as soon as you automate something, you have made it machine -like predictable, you have eliminated variance, and then whatever output you produce there is now so regular that you can automate whatever is</p><p>taking that as an input. It is difficult to introduce automation into a system where everything is custom, unique, intermittent, following natural cycles day, night, etc. It's easy to automate something when you are working with the high predictability of a machine. So because of this, and they've been good economics papers, I recommend people read some of the economist Robin Hansen's writing on this.</p><p>Theo Jaffee (16:06)</p><p>Love, Robin Hansen.</p><p>Samo Burja (16:07)</p><p>he's great. So if you read on his econ papers and some of the papers that he cites, he points out that in fact, you have this almost spreading way automation through the economy where the easiest thing to automate is something that can take machine inputs. The hardest thing to automate is something that requires inputs from the natural world, like the behavior of sheep in this example, or the chaos</p><p>of geology we still actually don't have very good geological theories. If you dig down two miles there's it's a very hard with sensors and our theory to predict what you exactly would find at any point in the world. Mining always comes with surprises right? They kind of are doing exploratory digging that's why it's so expensive. Even for something as well understood as oil there are of course surprises.</p><p>And then you have, you know, let alone the sort of chaotic needs of like, when exactly do you need a divorce lawyer or something like that, or when someone dies and there has to be a will interpreted, et cetera, et cetera. Like if we go to the white collar world, it becomes very natural. The service economy is supposed to be human oriented. And when we see automation in the service economy, it's almost like a form of rationing, right? At the airport, you are supposed to learn how to you know, scan you.</p><p>your tag or get your tag printed, know, stick it yourself onto your luggage, put it out there. And there's still a human being walking around working out various issues. Like for example, maybe your ticket was booked through a different airline and the silly terminal doesn't understand or allow the input of the code of the other airline that's partnering with your airline. Trivial things like that, fragilities that happen because of, you know,</p><p>cases that haven't been exhausted. So I think that there's like a combinatorics problem here where it's just explosive number of cases. When you automate something, you're actually reducing the number of different outcomes. A robot putting a door onto a car, onto a car frame, will do it exactly the same every single time unless it breaks. If it breaks, it breaks totally. Someone comes, fixes it, then it puts the door back on exactly the same.</p><p>I don't think a human worker does it exactly the same way or at least even if a human worker does it exactly the same way a different worker will do it a slightly different way and You know, that's the Industrial Revolution. It's actually been artificial simplicity We have been producing artificial simplicity since the start of the Industrial Revolution by making every you know, every teacup every mug the same</p><p>we have used economies of scale to grow vastly wealthier. So if we then joke about the definition that open AI uses for general intelligence to loop back to your original question, you know, a machine that automates most existing work.</p><p>When was AGI achieved? Well, James Watt achieved it in the 18th century, right? The steam engine already achieves that. But of course, new jobs showed up and our economy complexified. So it's really my hope.</p><p>that this kind of significant machine learning transformer architecture based AI, whether or not we think of it as AGI, I think it will automate vast amounts. It will automate vast amounts of work. But</p><p>Hopefully it'll make our economy more complex and will create more jobs and They will be things that it can't yet do now with regard to true general intelligence so if we have a you know, say the difference between Me learning to play a game of chess and AI learning to play a game of chess Chess is kind of an easy example because there's an exhaustive rule set in a way chess is also artificial simplicity</p><p>Basically, the machine can play millions of games and I cannot. And what is the difference between me learning to write an email and the AI learning to write an email like a Google.</p><p>Well, how many emails does the Google machine get to read? A billion? Two billion? Ten billion? Fifty billion? I don't know. It's definitely somewhere in the billions. How many emails have I read in my entire life? Well, it might feel like a billion. It's certainly not. It's like maybe a hundred thousand. Maybe a million. A million is too generous, I think, if I count all the spam that's deleted. So let me just, you know, quickly estimate and say a hundred.</p><p>Like on the spot, if you push me, how many emails have I read in my entire life? Or skimmed, probably a hundred thousand. So...</p><p>What does this mean? Anywhere where there are a billion emails or where there is a rule set like in chess that can generate exhaustively all the cases or at least as many cases as the machine can ingest, big data will be sort of victoriously succeeding at performing peak human capability or even modestly superhuman, right?</p><p>But what about cases where there are not a billion examples? What if there are only 10 ,000 data points just in existence for a problem?</p><p>I actually don't think the AI will be very good at learning that. And I think that illustrates the sort of difference between what I think is happening with scaling. Let's remember it's not just the scaling of compute, it's the scaling of data. Either works super well. I'm sure OpenAI scraped to the entire internet, as have the other AI companies. And, you know, within the bounds of legality, presumably.</p><p>but I just don't, presumably, move fast and break things. That's what they say. So.</p><p>Theo Jaffee (22:32)</p><p>Presumably.</p><p>Samo Burja (22:45)</p><p>think that we will see some surprising differences between human intelligence that learns from few examples and few data points and current generation, the transformer architecture. And of course, let's not forget diffusion, right? Diffusion is what is actually generating all the pretty images. And by the way, isn't that interesting? Why are transformers worse at generating the images? If we presume intelligence is a single thing,</p><p>and humans have that single thing, surely it's the same skill I use to paint a picture as I use to write an essay or to solve an equation or perhaps even to throw a basketball. There are lots of people who are betting on the transformer architecture in the physical world. Yet, defeat. Yeah?</p><p>Theo Jaffee (23:34)</p><p>Well, in the GPT -4 .0 blog post, they showed examples of how they used it to generate images that were very, good. And they were very good in...</p><p>Samo Burja (23:44)</p><p>Is that a Sphinx where that's a call function like it wasn't the previous generation or are they claiming that it is the same architecture? Okay, so it is native. Okay, cool.</p><p>Theo Jaffee (23:49)</p><p>No, it's native. It's native, yeah.</p><p>And it's good in a different way from like mid -journey, for example. Mid -journey is very kind of artistic and it has like taste in a way that 4 .0 doesn't yet, I guess, but 4 .0 is able to have, you know, more precise text and, you know, image persistence and stuff. So I think that this is probably something that's solvable by just making the models more multimodal and trading them on more kinds of data.</p><p>Samo Burja (24:24)</p><p>Possibly, I still think that it is notable that transformer and diffusion architectures are comparably good, let's say. I will read the paper. I also ask my AI friends because I feel often people take a chimeric approach. It's not visible to the user, but I believe you. I believe you, I believe the paper.</p><p>point being the fact that completely different architectures are competitive at all at a similar level of compute suggests to me that in the near future we will see a Cambrian explosion of different forms of intelligence and that actually intelligence isn't one thing but it's almost like a family of radically different things. We have just only been exposed to human intelligence by a quirk of evolution though of course</p><p>even when we're looking at human intelligence and we interact with the animals we've domesticated, at times these much dumber animals really outperform us at tasks we would consider cognitive, or G -Loaded, or intelligent, or...</p><p>borderline magical, right? Like primitive people was considered animals had forms of magic. So even in the natural world between mammals and birds, let's say if those are the two smartest broad branches of life, I think that we perhaps already did see multiple types of something that could be called intelligence. And I think that in the next hundred years, we will be continuously surprised as architecture</p><p>change, a scales increase by all the amazing things that humans could never do that the different forms of artificial intelligence will be able to do. And also shocked and confused by all the things they can't do. So I actually think that the patterns of what different forms of intelligence cover will end up being radically different. Now, if I'm wrong,</p><p>this will be sort of disproven in the next five or 10 years. But I suspect there's going to be something very surprising waiting for us when we interrogate our primitive philosophical concept of intelligence. And you know, there's like a way in which if we reframe machine learning as industrial scale mathematics or industrial scale statistics, we get very different intuitions of what it can do and how far it can go. And of course,</p><p>not denying the deep socially transformative impact of it. At the end of the day, does a submarine swim? Does a plane fly? It certainly doesn't fly the same way as a bird does. A submarine doesn't swim the same way a dolphin or a human does. But obviously those are extremely useful things. But it's good to remember that until the most recent quadcopter revolution, birds could do things that jet aircraft never could.</p><p>They could land in tight spaces, leave tight spaces, hover a certain way, know, pick up pollen from a flower. And, you know, of course, jet aircraft in the 1960s could fly up in the stratosphere at Mach 5. And no bird can do that, OK? No bird can do that. So I think that that is like a surprisingly deep analogy where if we apply this to movement, if we apply the same thing to intelligence,</p><p>will learn surprising things. I think a lot of my friends, and maybe they were naive, a lot of my software engineer friends were genuinely confused when chat GPT went viral. They were like, but if you wrote a for loop, then this will be an agent. Do you remember all the agent startups that popped up?</p><p>Theo Jaffee (28:25)</p><p>Mm -hmm. Yep.</p><p>Samo Burja (28:26)</p><p>They didn't work. They basically didn't. It kind of decoheres, right? If you like loop it on itself without a human input, it kind of decoheres and doesn't really pursue agentic actions in the world. That's surprising because even if it's not multimodal, even if it's just text, dude, text can be an input for other things. It can have actuators, can have sensors that represent the data as text. Maybe all you need is text. That kind of should have worked. And I think we used to equate intelligence and agency.</p><p>and right now we're seeing the two decohere in an interesting way. People right now are not confused, but they were confused in 2022. And I think this is one of those things that as soon as we are less confused or where our concept of intelligence is enriched, either the popular concept or the philosophical concept or the engineering concept, we almost don't remember what it was like before.</p><p>Whenever your model of the world becomes more complicated, it can be hard to remember what people don't know. If you want a reminder of this, try talking about your field of expertise with someone who's not in your field. You will assume they understand far more than they do. And when you ask them for their concepts, you realize it's not there. And I think if we could talk to ourselves in 2020, almost everyone alive today could blow the minds of people in 2020.</p><p>when discussing intelligence in machines and so on they would say the Turing test was passed but we don't know how to have the AI pursue an agenda and we don't know how to have you know the AI not just lie and make up things let's say maybe with 04, sorry with 4 -0 maybe with strawberry it's like actually solved and I think that's a great achievement I have to test it first before I can say with confidence</p><p>But still, we would surprise people in 2020. And I think we'll find ourselves perpetually surprised. I think we should stop expecting the AI to fly like a bird or swim like a dolphin. And it will, in fact, go very fast and very, far. And certain unusual things will be left to us humans for a long time to come.</p><p>And I'm not sure when exactly we will exhaust this cane brain explosion of intelligence. But there will be radically different AI systems. They will come to pick up more and more of the economy. They will eventually, once the will problem is solved, once we figure out how to give them will and agency, they will become politically powerful. They will very quickly become more politically powerful than humans. If there is any resource scarcity on the margin, they will immediately use their political power to pull the plug on</p><p>any sort of UBI or environmental regulation that the humans need. The atmosphere has to be made of oxygen, say the puny little humans, but they don't matter. So then humans go extinct and that explosion continues and eventually we have a world of completely new life forms. Now, I think that is at the extreme, but up until the point where the value of human intelligence isn't exhausted, humans will keep getting richer and richer.</p><p>Though they might start becoming politically disempowered once machine agency enters the picture. I think it is we're pretty lucky that the AI has not gone political as soon as the AI is politically powerful We will be in trouble. I'm actually happy with open AI or an anthropic or these big companies being very politically powerful Because at the end of the day, they're still humans. They want the atmosphere to be composed mostly of nitrogen and oxygen They want the temperature to be in a habitable range</p><p>Maybe there's mild disagreement on the margin about how many parts per million of CO2 we want, but like it's broadly all okay.</p><p>Yeah, so I don't know, you know, humans are very power hungry. So that's sort of my optimistic vision for the future is that we ride this came random explosion intelligence. We ride it much further than it is right now because I have a lot of faith that particular kinds of human intelligence will have an advantage. And then at some point our monkey brain like freaks out and we're like, the machines are too powerful. And then we just stop.</p><p>and then we maintain political power and we just enjoy our multi -planetary, high intelligence, high wealth civilization and perhaps expand horizontally across the galaxy with slowly, light speed limited ships rather than go all the way to being politically replaced and disempowered. So, there. That's kind of my projection. My projection is, yeah.</p><p>Theo Jaffee (33:16)</p><p>Hmm, so, almost.</p><p>Almost like the the Ian Banks culture series</p><p>Samo Burja (33:25)</p><p>Not quite. In that case, the humans are kept as pets by the very advanced intelligences and clearly the motherships are much more powerful than the humans are. I'm sort of relying on man being a political animal and that we're going to have like a primitive animal -like cunning that will keep us one level ahead of a lot of the super intelligences that in theory should be able to think circles around us but are going to have extreme difficulties. And you know, there's fun science fiction of this type. There's like, you know, science fiction where</p><p>The machines don't know how to lie and the humans know how to lie, for example. Though I don't think that's the case here. Clearly we have trained Chachupiti to lie to us very well, right? But anyway.</p><p>I think that it is difficult to reconcile the existence of human beings with sufficiently advanced AI. However, that might not happen. And I think we have a far more interesting history ahead of us for the next few hundred years. I don't think it's going to be the Eliezer Yudkowsky sort of rapid takeoff scenario. I think it's going to be much weirder than that. It's going to be like an explosion of colour or</p><p>shapes or... we will find the cognitive environment much much diversified. The Cambrian explosion comes first and then eventually comes a mass extinction where one of the forms of intelligence just outcompetes all the others. But I think we're going to enjoy this Cambrian explosion of different forms of intelligence for very long time.</p><p>Theo Jaffee (35:02)</p><p>Yeah, I hope you're right. switching topics a bit. A couple months ago, someone tweeted, Palladium just wants Chinese technocracy with American characteristics. And I thought this was really interesting because this seems to be a common thread of critiques of this kind of palladium ideology, which is basically, Palladium wants America to become more like China. So...</p><p>Samo Burja (35:28)</p><p>No, it's just false. It's just butthurt libertarians, bro. It's just butthurt libertarians. They got triggered by a thread that one of my employees wrote, which honestly was a great thread because it pointed out that China is a consumerist capitalist society. I don't know why this is controversial in 2024. I don't understand it, but...</p><p>think it's cope, right? I think we want China to be like the Soviet Union because we know how to beat the Soviet Union. We just grow our economy better, And the claim that GDP go up is the same thing as ship, steel, and drone production go up. Well, that was kind of true in 1945 when America won a world war. It's not true now. So really, I think, you know, if I were to give a critique, I would say that I actually want America to be more like itself.</p><p>you</p><p>I want the government to be able to build a bridge. want the taxes to be lower. I want the inflation to be lower. But I Palladium has no single ideological position. We publish writers with a wide range of perspectives. There's, of course, many very smart libertarian friends who have written for us. We're nonpartisan. We've had people who have written immigration skeptical, immigration positive pieces. The tagline is governance futurism. And governance futurism presumes</p><p>that government and society and culture in the future will be different than they are now. So do we want America to change, to develop? Yeah, but we're not advocating for any specific thing. We are examining what happens around the world. And I refuse to take this like false dichotomy where I'm supposed to pretend China's gonna run out of food in five minutes or I'm supposed to pretend it doesn't matter that China builds five times more ships than South Korea, which builds five times more ships than we do. I refuse.</p><p>refuse to pretend that that's the world we live in and I refuse to be stupid and jingoistic. I would actually, here's the thing, I will never fire someone for tweeting or disagreeing with me in any way. I believe intellectual diversity is important, but you know, I would fire someone for being an idiot. So I really refuse to hire idiots. And by refusing to hire idiots,</p><p>I sometimes rub people the wrong way because anyone with the brain who is a genius or a smart even original thinker they will rub simple categories the wrong way. So let me challenge you right back like did you read Vitalik's piece on Zuzalu in Palladium magazine?</p><p>Theo Jaffee (38:15)</p><p>I don't think I read</p><p>Samo Burja (38:16)</p><p>Why I Build Zuzulu is a Vitalik Buterin piece where he talks about creating a pop -up city. Or there's another piece, how cryptocurrency will transform migration, where it actually argues that populations will become much more mobile around the world.</p><p>and state power over individuals will decrease. Or I could name any other dozen pieces. Look, I think people are just stupid about China and they want to hear America, yay, China, boo. And I'm like, hey, let's not ignore that China is destroying us industrially. We don't have to industrialize the same way, but we do need industry. We need to build chips. We need to build ships. We need to build EVs. Not even America.</p><p>actually. Like it's fine if we French or it's fine if Germany builds stuff. Oops, the German economy is tanking. It's fine if South Korea go build stuff for us. Oops, South Korea is going extinct because their TFR is 0 .7. I'm tired of pretending we don't have big problems because I like our civilization. I want it to do super well and all is well.</p><p>sort of let's go back to grilling, let me just code, whatever man. Politics is already harassing the code, you need to think about politics back and that's why think Palladium is really the first magazine of the 21st century because it refuses to do this like left right thing, it refuses to do this like kind of like blind very narrow</p><p>Yay, our team booed the other team. So if people want to read that as pro China, I think that just tells that in their mind, the only alternative to our dysfunction is China. And you know what? The Chinese agree.</p><p>The Chinese government actually agrees that the only alternative to American dysfunction is China. And I think we should blow up that dichotomy because that's an economy that ends with us censoring our Internet to protect democracy. It ends with us tracking the movement of all Americans. It ends with us continuing to buy all Chinese products, but slapping tariffs on them to save the Boeings and the Intels of the world rather than</p><p>the SpaceX's and the Androids of the world. So yeah, that would be my response. And I got quite animated because I'm just like, you know, it's like you can spend 10 years giving nuanced commentary and then a person on Twitter gives like a little dunk. Whatever. I disregard. I disregard. If you'd not asked me, I wouldn't have even, I've not thought of it twice since. just, you know, someone's an asshole blocks me, I'll block them back. And it's super funny because,</p><p>I don't really think that anyone remembers that a magazine is supposed to be an intellectual culture with many different views. I think we're so used to the hyper -partisan propaganda environment that we've lost the social technology. So it actually goes back to the view that I stated that Western civilization has almost completely lost the infrastructure for complex and nuanced thought.</p><p>I think everyone is simplified and stereotyped. In politics as well as industry, have produced artificial simplicity, making us artificially dumber than we actually are.</p><p>Theo Jaffee (41:47)</p><p>Okay, can we drill down on that a bit? Western civilization has lost the infrastructure for intellectual complexity and nuance.</p><p>Samo Burja (41:54)</p><p>for complex thought. Well, you know, this is actually a way in which there was a very excellent, since we're talking about palladium, there was a very excellent piece by my friend Ben Lando Taylor on the academic culture of fraud, which is documenting and discussing pervasively the prevalence.</p><p>of people not only p -hacking or statistically massaging the data, but outright fabricating datasets. And note, in fields like medicine, where that costs lives, where people die, and Ben proposes...</p><p>the radical but sensible solution that actually academic fraud should be not just a fireable offense. It should cause, it should risk jail time because you really are causing harm to others. With financial fraud, have this and with academic fraud, we should have some of this as well. The academic institutions today mostly hush up and protect proven instances of fraud. So I really recommend the</p><p>audience go and read the article it was shocking to me and revelatory to it was a revelation to what an extent basically an academic department will not want the reputation damage of having you know there been demonstrable fraud there so strike one for academia academia is failing to sustain the culture of science let's go for strike two the media environment most social media networks in the Western world</p><p>And this is the way in which I wish we were more different than China. I want us to be radically different than China Give Give straight statements of suggested censorship so they will give statements to social media companies like meta and Facebook and Sir, you know meta slash Facebook Like tick -tock and so on they will suggest you take this down</p><p>And in places like Britain we saw recently, they're not even averse to mass imprisoning citizens. The United States is lucky within the Western world to have the First Amendment. It protects us from state mandated censorship. But I do think that there is state -suggested censorship. We have plenty of evidence that old Twitter, the Twitter files that Elon encouraged people to read but no one read, I don't know why.</p><p>possibly because we know that you're going to end up having different views and you're going to feel emotionally disconnected from people who have conventional views. There's plenty of evidence of the White House, the State Department, DOJ,</p><p>Sending basically threatening emails to big social media companies telling them to ban people pull content So calling this state suggested censorship is a big deal and I think Elon Musk is doing the country a great service by opening a Freer discourse environment. So that's number two public discourse is threatened X is like the only movement X comm is the only website that is closer to the</p><p>of 2001, you know, adjusted for IQ of the general population, but still closer to the freedom of the internet of 2001 than the extremely gated, curated, manicured, and fake internet of 2018. I still remember when the YouTube comment sections first became much more polite and then they became much more stupid. Because if you enforce, you know, censorship in the name of, you know, fighting, hate, or whatever,</p><p>you're going to lop off both sides of the distribution and you're gonna have a chilling effect and then of course it'll get stupid, right? So that's you know sort of the next point of artificial stupidity and then perhaps like the most important one is I think we have</p><p>We have metabolized so much of our assumptions of what it means to be a citizen.</p><p>in a free country of what level of education and agency and individuality we are supposed to accept. We have burned through it. Every single political race of the last 50 years has weaponized more parts of individuals' identities and individual feelings. Did you ever read that study that compared the reading level?</p><p>of the State of the Union address over the last 200 years. Okay, it's going down, right? Exactly, it's very generalizable. And if you look at a televised debate, not a presidential debate, mind you, just a debate between intellectuals in like 1960s or 70s television,</p><p>Theo Jaffee (46:52)</p><p>Yes. Very generalizable,</p><p>Samo Burja (47:10)</p><p>my God, these people would be, each of them would be a Jordan Peterson type sized audience, but we somehow don't have as many of them. And I think it's because if you don't bat for your team a hundred percent of the time in a modern democracy, I think people assume you're a bad person, people on your team. So if you're a Democrat, you have a conservative opinion, or if you're a conservative and you have a progressive opinion.</p><p>I think you're kind of considered a bad person or not totally reliable. People have gone extremely moralistic. Pardon?</p><p>Theo Jaffee (47:43)</p><p>Arguments are soldiers. Arguments are soldiers.</p><p>Samo Burja (47:48)</p><p>Yeah, I mean, but they didn't used to always be. And Eliezer Yudkowsky actually writes about this, right? you know, he coined, I think, did he coined the phrase arguments as soldiers or was it someone else? I remember an essay. Yeah, yeah, yeah. Well, he points out that like, just the tone of a 1940s PSA is treating the citizens, the viewers, as adults.</p><p>Theo Jaffee (47:59)</p><p>pretty sure it was him but it was was on less wrong.</p><p>Samo Burja (48:14)</p><p>And a PSA today would never do that. It would just appeal directly to feeling. It would not try to invoke reason. It wouldn't try to invoke this concept that we should restrain our emotions and we should be more broadly aware.</p><p>Because the political race has sort of ground down over time, over the last 70 years we've had an erosion of the concept of a citizen where new pieces are chipped off every single presidential election, at least to be used as fuel to win our team or the other team, right? Because of that, it has become not.</p><p>in anyone's interest to educate people in the Aristotelian sense, Aristotle defined an educated mind as a mind that can consider opinions different than their own, like consider an opinion without accepting it. And I think right now, the cognitive barriers,</p><p>and cognitive sophistication has been broken down so much that even though our IQ is probably just as high as the 1960s, maybe a little bit higher due to the Flynn effect, though the Flynn effect's been going away since the 1990s.</p><p>It's like we immediately ingest the information. It immediately goes into our opinion. If we notice that it disagrees with our team, we get angry and we immediately morally disown the person that gave us that information. And then we go on believing what we believed before. We've been hardened.</p><p>And in that situation, no dialogue is possible. But in that situation also means that groupthink is more powerful. Like one way to think about this is like an analogy with superconductivity. You know, if you could get the resistance to drop to zero, no current is ever lost, right? If we reduce this mental resistance in people on our team, whatever our team is, I'm like, you know, I honestly don't even care who wins this election. That's another way in which I'm such a heretic.</p><p>care if it's Harris administration or Trump administration.</p><p>they will be bad in different and unique ways and it's totally fair to have strong feelings about how each will be bad. But I think it's such a small part of our system and our problem that no one who is president could possibly fix these more basic ones. But let's say on our team, if you have high intellectual resistance or the ability to view a different position without immediately adopting it and repeating it or immediately rejecting it and then refusing</p><p>to hear anything more of it.</p><p>parties get stupider. So it's not just two smart teams fighting each other, it's each team will be dumber because the selection filter on coherent ideas is gone. So in the process of two sides fighting each other, we have ground down our expectations of what it is to be a citizen. We have not educated people how to be citizens. And as a result, each of the group thinks on their own is much stupider. Like,</p><p>You know, you compare the Democratic Party in 1995 and 2025, it's like no question which is the stupider party. And you compare the Republican Party of 1995 and the Republican Party of 2025, so next year. And I guarantee you, the 1995 one, they'll just be smarter people on average with more nuanced arguments and more nuanced points. And we can make even the same comparison between 1995 and 1985. And note, I'm not talking here about</p><p>their socially conservative views. I'm just talking about how they speak to each other, how they come to consensus, how they organize things like party platforms. I know this is going to shock Gen Z, but even 10 or 20 years ago, politicians were not known for bangers. They were known for pieces of legislation they pushed through. And 30 or 40 years ago, people would actually read the party platform and care about it, like normal people, not even Noah Smith tier political monks.</p><p>So, I don't know. think, I think we need to reset our expectations of the cognitive sophistication of the citizens to a much higher level. And we need to viciously shame all attempts and pushes to simplify things. And</p><p>pursue group strategies rather than individual strategies because that's the only hope to make something like a parliamentary system or representative system work.</p><p>Otherwise, the democracy aspect will be reduced and arguably has been reduced to no more powerful in the American system than the Queen of England or the King of England now is powerful in the British system. Arguably, Britain is a bureaucracy pretending to be... Sorry, it is a...</p><p>It is a bureaucracy pretending to be a democracy, pretending to be a republic, pretending to be a monarchy. So they have several layers of political dissimulation. In theory, the king is sovereign, but oops, parliamentary supremacy. But actually the people have immense power, but actually, you know, populism is bad and we should have experts decide things. So in reality, our system of government has shifted from democracy to bureaucracy to varying extents. And America has the most democracy</p><p>Theo Jaffee (53:55)</p><p>Mm -hmm.</p><p>Samo Burja (54:05)</p><p>of any Western country except maybe Switzerland and that's why this is so disturbing and dangerous to see this erosion of citizens capabilities to work in it. So in other words I wish these citizens were much more politically sophisticated and I want them to hold their political opinions and convictions strongly and I want them to know how to disagree civilly.</p><p>Theo Jaffee (54:29)</p><p>Is that really true by the way that the US has more democracy than any other western country except Switzerland? You know seems like we have Sweden? Norway? France?</p><p>Samo Burja (54:35)</p><p>Who would you name? Which country is more democratic? I feel Sweden is an extremely well -run country.</p><p>I think Sweden is a very well run bureaucracy. What do I mean? Swedish civil servants received international world health guidelines for COVID. And instead of they looked at the data and they very autistically said, this doesn't quite make sense. We're going to lock up the old people because they die of COVID. And we're not going to have general lockdowns to lock down young people. And the result has been lower deaths. For example, Sweden also decided to pursue different</p><p>economic policies Sweden actually is a surprisingly capitalist country simultaneously as being a social democracy this is kind of a Nordic model but I think in Sweden decisions are mostly not made through elections they're made through experts and both Sweden and France mind you have Sweden and France have very much</p><p>like severe limits on speech, not perhaps in practice as many people are imprisoned and become political prisoners as in the modern United Kingdom, but certainly some. And in the case of France, like, you know, the individual,</p><p>liberties are much reduced. Now the French do have a right that Americans have much less of so the French can show up, protest and have the whole country be locked down because in their mythology of liberty, their mythological version of liberty isn't, it is the people gathered together and stormed the Bastille and behead the aristocracy. So that's why in France it's kind of illegitimate to suppress a farmers protest or a rail</p><p>or a union strike or something like that because you're going against the foundational myth of liberty, it would be kind of comparable if in America you seized all the guns because in the American mythology of freedom, it was, you know, people with guns shot at the government, the British government, until it went away. And both of these stories are kind of true and kind of dumb and false in their depiction of the American Revolution and the French Revolution, but the myth is very important for political legitimacy. So there is a way in</p><p>which that is democratic. So the fact the French can just go on strike on any random thing they want, that is democratic power in action. But I think you'd be hard pressed to deny that by any measure, France is like more regulated, there more laws, citizens in most ways have fewer rights, there's less free speech.</p><p>I think France is actually a surprisingly good elective monarchy because when it has a strong president, the president of France has like very significant powers, not only a longer presidential term.</p><p>But actually, the bureaucracy mostly listens to the French president. So you could argue that that's a democratic, monarchical aspect to the government. just the sheer number of departments, regulations, like try starting a startup there, Like economic freedom, but also political freedom, it's much more constrained. So yeah, I would claim France is more of a bureaucracy than the United States. I mean, would you disagree with that?</p><p>Theo Jaffee (58:04)</p><p>No, I would not.</p><p>Samo Burja (58:05)</p><p>Okay, well, perhaps the disagreement then could be is the US, you know, the US might have a mix of bureaucracy, plutocracy and democracy. And I think my center left friends would say, maybe Europe is more bureaucratic, but it's still more democratic because it's not plutocratic. But that argument doesn't really work for a place like Sweden or France either, because, you know, let's remember, second richest man in the world is a Frenchman that owns a bunch of luxury brands.</p><p>Luxury brands are the ultimate fake job. You are riding on incumbency. Actually high taxation would destroy you. So in France, Sweden, and lot of European countries, the social democratic pact is the following. If you have money, you can inherit it through loopholes. And old money persists. If you don't have money, your income will be taxed and it will be hard for you to make more money. So incomes are very harshly taxed, but you can have a family foundation.</p><p>that owns your company and you can be in charge of your family company and you can be in charge of the family foundation and you basically have a 0 % tax rate.</p><p>That's true of Austria, Germany, Sweden. This does a few good things. It preserves the Mittelstand economies, but an economically equal society that this is not. So I would say that Europe is plutocratic in a different way than America. In Europe, old money is supreme and the government approaches its old companies and ask them, what can we do for you? And in America, new money is supreme and companies show up and ask the government, hey, what can we do for you? Because we're just getting started.</p><p>and don't you want to buy our much cheaper drones, et cetera, et cetera, instead of the ones provided by the old companies. But that's only directionally, right? Both have elements of new and old money power.</p><p>And generally speaking, think bureaucracy is much stronger than plutocracy. And in America, would say democracy is very strong because it is possible to build a base of popular support and launch your political career. And by the way, on the left as much as the right, know, people, my conservative friends might not like this, but AOC is an example of democratic power.</p><p>She's speaking directly to the voters and a significant set of voters really like what AOC has to say. So AOC is a champion of democracy. Donald Trump is a champion of democracy. When you hear populism, that usually just means someone doesn't like democracy in action.</p><p>Theo Jaffee (1:00:39)</p><p>So.</p><p>I think we have time for one more question. So we talked about how complex political thought has gotten worse over the last few decades. And it seems like just philosophy in a lot of fields have reached almost the kind of stasis. So, you know, aside from Palladium, the first magazine of the 21st century, like what are some of your favorite ideologies and philosophers and work specifically from the last 20 to 30 years, the 21st century?</p><p>Samo Burja (1:01:10)</p><p>think a lot of the people who got their start from blogging, and some have migrated sub -stacks, some have not, have written very insightful stuff. I think that Paul Graham and his early essays, and even some of his more recent essays, is going to be understood as a significant writer of the last 30 years. I think that...</p><p>A lot of the mainstream polished pop intellectuals are actually overrated. There are a few that I think are decent. I think Steven Pinker's least popular works are his best and his most popular works are his worst. So Steven Pinker I think is actually a more serious intellectual than you would believe from his public profile.</p><p>Theo Jaffee (1:01:50)</p><p>Like who?</p><p>Samo Burja (1:02:07)</p><p>I think that... I think that Nickland...</p><p>will prove to be a much more important and subversive influence on both like far left and far right subcultures than is currently acknowledged. think ancestrally he has shaped a lot of the strands of accelerationism and you know there's sort of like the left -wing version of that and then there's the right -wing version and I think people are just now remembering that he wrote really bizarre things in the 1990s while working, you know, there's this informal group,</p><p>cybernetic culture research unit at the University of Warwick, which you know according to the University of Warwick never existed because of course universities don't allow unique or weird social or intellectual clubs. It has to be underground. It has to be unofficial. So I think he will prove to be a significant thinker because his thesis, he laid out this sort of basic thesis of techno capital, right?</p><p>which is this idea that capitalism itself was a form of intelligence. And I'm not sure if he's the absolutely first person to make this analogy, but he definitely made it forcefully and interestingly in the 1990s, long before the current machine intelligence explosion, right?</p><p>We could continue listing more thinkers. I'm going to say it's cringe to say, but Eliezer Yudkowski is a more significant philosopher than people would like to give him credit for because he single -handedly wrote the orthodoxy of the rationalist movement and the effective altruism movement. say what you will, those are very influential movements. He was not dumb. He wrote very clearly.</p><p>Theo Jaffee (1:03:48)</p><p>I completely agree.</p><p>Samo Burja (1:03:58)</p><p>one of the best stylists, honestly. think among his acknowledged influences was George Orwell, whose essay, Politics in the English Language, I warmly recommend. So it's a non -fiction essay. So I think Yudkowsky is also a significant thinker. And I think that because we live in emergence in a period where the Cambrian explosion of intelligence has happened,</p><p>we will tend to regard the thinkers who commented on topics related to artificial intelligence more highly than some of the other commentators. So as a last one here, I would say Robin Hansen is very much underrated. I sort of feel, you know, I know he came up with the whole prediction market thing. It's pretty cool. But I honestly find his like cosmology, human nature, and culture commentary</p><p>to be much more interesting than just the mechanism of prediction markets. feel like, you know, insurance schemes are neat and fun to think about, but you can only hear about them so many times before you lose interest. Yeah.</p><p>Theo Jaffee (1:04:57)</p><p>Yeah, absolutely.</p><p>Yeah, I mean, just to go on Instagram and see the like, lowbrow slop that they have and to see these slop accounts posting about like, presidential prediction markets and it's like, wow, I met the guy who invented this thing. Like, how cool is that? Yeah.</p><p>Samo Burja (1:05:21)</p><p>Exactly. That's a big influence. I could see prediction markets being actually very important in 10 years in even determining the election. But that will be their big test. When there's an incentive to like rig the market one way or the other, how much money will go into politics? Right? Like I think people are already trying manipulation in these very low liquidity markets because they are very low liquidity for now.</p><p>But yeah, think if they're not outlawed, they will ratchet up and hopefully the result is more accurate information and not just another information battlefield.</p><p>Theo Jaffee (1:06:02)</p><p>So I think that's a good place to wrap it up. Thank you so much, Salma Buria, for coming on the show.</p><p>Samo Burja (1:06:06)</p><p>Yeah, thank you Theo and thanks for the provocative questions.</p><p>Theo Jaffee (1:06:08)</p><p>Thanks for listening to this episode with Samo Burja. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. Also be sure to check out Samo&#8217;s Twitter @SamoBurja (spell it) and his website samoburja.com, Bismarck Analysis at bismarckanalysis.com, and Palladium Magazine at palladiummag.com or @palladiummag on Twitter. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[#18: Alec Stapp]]></title><description><![CDATA[The Institute for Progress, American Dynamism, and Fixing Governance]]></description><link>https://www.theojaffee.com/p/18-alec-stapp</link><guid isPermaLink="false">https://www.theojaffee.com/p/18-alec-stapp</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Thu, 11 Jul 2024 23:34:59 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/146496593/c3712bdeec8d1bdf958120b2dd9a5afc.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Alec Stapp is the co-founder and co-CEO of the Institute for Progress, a non-profit think tank dedicated to accelerating scientific, technological, and industrial progress.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:13 - Why can&#8217;t smart people fix the Bay Area?</p><p>3:38 - How to get normal people on board with IFP</p><p>10:23 - How to get smart people into governance</p><p>15:55 - How IFP chose its priorities</p><p>21:56 - How will IFP avoid mission creep?</p><p>24:17 - How important is academia today?</p><p>26:03 - Would Alec press a button to fully open borders?</p><p>29:45 - How prepared are we for another pandemic?</p><p>33:16 - Why don&#8217;t easy wins happen?</p><p>36:17 - Is Biden&#8217;s spending good?</p><p>40:51 - How important is the repeal of Chevron deference?</p><p>43:23 - Are land value taxes good?</p><p>45:01 - &#8220;The Project&#8221; for AGI and AI Alignment</p><p>48:19 - Is globalism dying?</p><p>50:32 - Overrated or Underrated?</p><p>59:28 - The most overrated issue</p><p>1:00:26 - The most underrated issue</p><h3>Links</h3><p>Institute for Progress: <a href="http://ifp.org">ifp.org</a></p><ul><li><p>&#8220;Progress Is A Policy Choice&#8221; founding essay by Alec Stapp and Caleb Watney: <a href="https://ifp.org/progress-is-a-policy-choice/">https://ifp.org/progress-is-a-policy-choice/</a></p></li><li><p>&#8220;How to Reuse the Operation Warp Speed Model&#8221; by Arielle D&#8217;Souza: <a href="https://ifp.org/how-to-reuse-the-operation-warp-speed-model/">https://ifp.org/how-to-reuse-the-operation-warp-speed-model/</a></p></li><li><p>&#8220;How to Be a Policy Entrepreneur in the American Vetocracy&#8221; by Alec Stapp: <a href="https://ifp.org/how-to-be-a-policy-entrepreneur-in-the-american-vetocracy/">https://ifp.org/how-to-be-a-policy-entrepreneur-in-the-american-vetocracy/</a></p></li><li><p>&#8220;To Speed Up Scientific Progress, We Need to Understand Science Policy&#8221;: <a href="https://ifp.org/to-speed-up-scientific-progress-we-need-to-understand-science-policy/">https://ifp.org/to-speed-up-scientific-progress-we-need-to-understand-science-policy/</a></p></li><li><p>&#8220;But Seriously, How Do We Make an Entrepreneurial State?&#8221; by Caleb Watney: <a href="https://ifp.org/how-do-we-make-an-entrepreneurial-state/">https://ifp.org/how-do-we-make-an-entrepreneurial-state/</a></p></li><li><p>Construction Physics newsletter by Brian Potter: </p></li></ul><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:104058,&quot;name&quot;:&quot;Construction Physics&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c663799-8d26-4456-8c14-8283b618f705_590x590.png&quot;,&quot;base_url&quot;:&quot;https://www.construction-physics.com&quot;,&quot;hero_text&quot;:&quot;Essays about buildings, infrastructure, and industrial technology.&quot;,&quot;author_name&quot;:&quot;Brian Potter&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#fCFBEB&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.construction-physics.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!pMIM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c663799-8d26-4456-8c14-8283b618f705_590x590.png" width="56" height="56" style="background-color: rgb(252, 251, 235);"><span class="embedded-publication-name">Construction Physics</span><div class="embedded-publication-hero-text">Essays about buildings, infrastructure, and industrial technology.</div><div class="embedded-publication-author-name">By Brian Potter</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.construction-physics.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><ul><li><p>Macroscience newsletter by Tim Hwang: </p></li></ul><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:1637337,&quot;name&quot;:&quot;Macroscience&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea04f44e-cf28-447b-82a7-2830f460cb2a_1280x1280.png&quot;,&quot;base_url&quot;:&quot;https://www.macroscience.org&quot;,&quot;hero_text&quot;:&quot;A newsletter about macroscientific theory, policy, and strategy&quot;,&quot;author_name&quot;:&quot;Tim Hwang&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#FCFBE8&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.macroscience.org?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!keL_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea04f44e-cf28-447b-82a7-2830f460cb2a_1280x1280.png" width="56" height="56" style="background-color: rgb(252, 251, 232);"><span class="embedded-publication-name">Macroscience</span><div class="embedded-publication-hero-text">A newsletter about macroscientific theory, policy, and strategy</div><div class="embedded-publication-author-name">By Tim Hwang</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.macroscience.org/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><ul><li><p>Statecraft newsletter by Santi Ruiz: </p></li></ul><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:1818323,&quot;name&quot;:&quot;Statecraft&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4ed3ff9-0217-4c49-8793-be01ef6b0943_807x807.png&quot;,&quot;base_url&quot;:&quot;https://www.statecraft.pub&quot;,&quot;hero_text&quot;:&quot;How policymakers actually get things done&quot;,&quot;author_name&quot;:&quot;Santi Ruiz&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#fcfbeb&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.statecraft.pub?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!n21s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4ed3ff9-0217-4c49-8793-be01ef6b0943_807x807.png" width="56" height="56" style="background-color: rgb(252, 251, 235);"><span class="embedded-publication-name">Statecraft</span><div class="embedded-publication-hero-text">How policymakers actually get things done</div><div class="embedded-publication-author-name">By Santi Ruiz</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.statecraft.pub/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><p>IFP&#8217;s Twitter: <a href="http://x.com/IFP">x.com/IFP</a></p><p>Alec&#8217;s Twitter: <a href="http://x.com/AlecStapp">x.com/AlecStapp</a></p><p>Transcript: <a href="https://www.theojaffee.com/p/18-alec-stapp">https://www.theojaffee.com/p/18-alec-stapp</a></p><p>More Episodes</p><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (01:05)</p><p>Hi, welcome back to episode 18 of the Theo Jaffee podcast. We're here today with Alec Stapp.</p><p>Alec Stapp (01:11)</p><p>Hey Theo, good to see ya.</p><p>Theo Jaffee (01:13)</p><p>Yeah, you too. So first question, pretty much everybody I know in Silicon Valley on tech Twitter, many of whom are extremely intelligent and high agency and wealthy, agree with you on almost everything, but the Bay areas still are like the most politically dysfunctional place in the country and in some ways in the world. So why can't they change things? Is it just a skill issue?</p><p>Alec Stapp (01:37)</p><p>Great question. And I'll just carry out this answer by saying that my organization, the Institute for Progress, we focus exclusively on US federal policy. So we only work on stuff in Washington, DC. So I claim no unique insight into local politics in San Francisco or state politics in California. But I do think it's at some level not taking politics very seriously on its own terms. It's part of the issue.</p><p>So a lot of people in tech either haven't been engaged for a long time in local politics, or they don't understand what drives these elections. I think they're very low turnout events. It's not the kind of people that folks in tech world often interact with. It is not driven by the same dynamics that drive tech Twitter and things that are like in the discourse or in the ether. And so, for example, my understanding is that a lot of the recent election outcomes in San Francisco have been driven by</p><p>crime as the major issue, especially in the Asian American community. And of course, housing is a major issue, but maybe crime in that one demographic is actually the thing that's like really moving voters. And so to understand that you would need to conduct polling, focus groups, do door knocking, do a lot of like the really grassroots organizing that the tech community is not experienced with. But is increasingly there are folks who are jumping in, trying to learn about this. I think they have some long run strategies in terms of</p><p>getting on the local Democratic Nominating Commission to then nominate folks to run Shadow Gary Tan. So it's changing, but I think we shouldn't be surprised that people like, I believe Aaron Peskin is his name, who's now running for mayor against London Breed. Like he's been doing this for decades and he's a very like active person and high energy person. And so even if tech folks disagree with his politics or his policy positions.</p><p>Theo Jaffee (03:08)</p><p>Shout out Garry Tan.</p><p>Alec Stapp (03:32)</p><p>You shouldn't take someone like that lightly in terms of thinking how you can beat them.</p><p>Theo Jaffee (03:38)</p><p>And similarly, how do you get the general public to actually agree with IFP style ideas? Like most people don't even think about the issues that you write about. Are there any lessons from like gay marriage going from overwhelming opposition to overwhelming support in a single generation?</p><p>Alec Stapp (03:48)</p><p>Dublin.</p><p>Gay marriage is obviously a unique issue or at least it's like a cultural issue. And so that feels quite distinct from what we do at IFP. We focus, not only do we focus exclusively on federal policy, we also focus exclusively on innovation policy. So you see us work on things like high skilled immigration, meta science, AI, biotech. We really try to just stick to these issues in a way that can be bipartisan.</p><p>Theo Jaffee (03:57)</p><p>Maybe two.</p><p>Alec Stapp (04:24)</p><p>or even just nonpartisan and kind of technocratic in nature. And we want to increase the salience of the issues we work on to a degree where they get prioritized by folks in government and people agree with our positions ultimately. But these are not mass mobilization issues and that's not key to our theory of change. Our theory of change is really about can we get on any given topic the 100 to 200 people in the Washington DC area who really matter.</p><p>to agree with our position and coordinate and work with us. And so it's a very elite driven theory of change. Not to say that mass mobilization doesn't have its place, it's just that it's not our focus on the topics we care about.</p><p>Theo Jaffee (05:07)</p><p>Can you elaborate on that a little bit? Because it seems to me like it would be, you know, quite effective to get large amounts of people to start campaigning for, say, high skilled immigration. Like people tend to have very strong opinions on immigration and it seems to be like quite bipartisan, you know. It's very rare that you see even like the most anti -immigration Republicans oppose high skilled immigration. So why not make it a mass mobilization?</p><p>Alec Stapp (05:33)</p><p>Yeah, I think you need to look, well, one, let's first look at like where mobilization has worked for the progress community, abundance community, Yimbyism, however you want to frame the groups that we work with. It's obviously happened first at the state and local level for housing. I think that is a situation where raw numbers of mobilization really could move the needle. It's that you had these low turnout city council meetings, town council zoning meetings where</p><p>less than 1 % of the population shows up and they're all retired people who don't want any change, don't want any new housing, and they already own their own homes. And if you could just get 1 % of the local population to show up and be pro -housing, then you've now offset them and it's like a very tractable, achievable outcome. I think in Washington, D .C. it's very different. Representatives and senators represent hundreds of thousands, if not millions of people.</p><p>in their constituencies. And so it's hard to actually mobilize the scale of people that are necessary for changing their opinion. And because you brought up immigration as an issue, it's very hard to raise the salience of that in a mass politics way without getting, without making the situation worse. So for example, if you look at the polling on the issue across all different types of immigration reforms, there's broad support.</p><p>like more positive than negative sentiment among the American people for more immigration. But by far, the people who feel the most strongly about it, the most passionate about immigration as a topic, are the restrictionist and anti -immigration folks. And so, and then if you think about what is the bottleneck to reform, the bottleneck to reform is currently congressional Republicans. And due to our primary electoral system in the United States,</p><p>most members of Congress are only concerned about winning their primary because they're not in a competitive seat or state for the general election. And so a Republican who only cares about winning or potentially losing their next primary race is mostly concerned about not doing anything to increase immigration from their right flank because they might get primaried on that issue because it's very for a minority of the Republican base who do vote in primaries and care a lot.</p><p>any kind of immigration is seen as a betrayal almost. And so I think it's very tricky to raise this topic in a mass mobilization context without it backfiring on you. And so we prefer a much more targeted, again, elite theory of change, which involves using arguments that we think are most targeted and most effective, so making this a national security issue, because it is in terms of having talented folks in our defense industrial base. This is the best way for us to beat China.</p><p>Russia, other countries that are adversaries to the United States. And that takes it out of the immigration context. It is another way to get folks on board and doesn't involve mass mobilization at all.</p><p>Theo Jaffee (08:37)</p><p>So when you said you mainly focus on persuading the 100 to 200 people in DC who have the power to do things, like, who actually are these people? Members of Congress? Heads of agencies? Or something different on top of those two?</p><p>Alec Stapp (08:54)</p><p>Yes to all those and I would say just the most important thing is that it varies based on topic So it's not the same people who have power over everything. Of course, it's like In the high skilled immigration community It's the certain committee members themselves in Congress and their staffers their lead staffers who've been working in Congress drafting legislative bills for decades it's the leadership of USCIS in the executive branch and Some of their staff members. It's the outside communities. So like who are the top lobbyists on this issue?</p><p>who are the top nonprofit organizations, and then who are the leading experts, academics on this topic. And so on a very niche technocratic topic, there are really only like dozens or maybe 100 people who have the requisite experience to really be engaged in the decision making. And obviously, they don't determine exactly what happens. But as soon as elected officials put it on the table and say,</p><p>let's consider doing something on high skilled immigration reform, or let's consider doing something on reforming the National Institutes for Health or the National Science Foundation. Then they hand the baton to staffers, sometimes lobbyists, sometimes outside groups, academics, to flesh out the details, to figure out like, we have this vague abstract goal, how do we implement it? And it's roughly dozens, and at most 100 people who like ultimately end up mattering in those kinds of conversations.</p><p>Theo Jaffee (10:23)</p><p>How do you get more smart people into orgs like IFP or into government? Most of the smart people I know who are trying to make a lot of money want to be like quants or something or work for big tech. So how do you get those kinds of people to work for you instead?</p><p>Alec Stapp (10:33)</p><p>Mm -hmm.</p><p>Yeah, so I think, well, one thing we talk a lot about is for when we're hiring people IFP, we don't play the role in the ecosystem of usually being people's first job. We tend to hire more experienced folks because we're a small, lean team and tend to wait a little more towards senior people who are autonomous and have an agenda they want to drive. But that is not to say that sometimes we don't hire folks with more limited experience.</p><p>And usually the exceptions are when someone has a demonstrated track record of public work on the topic. So there's really no substitute to working in public. That's a blog, publishing academic papers, publishing white papers, showing up to events. It's much easier if you already live in DC. Again, not impossible to do this from the outside, but if you're already in DC, you're going to the events with other experts on the topic, you're doing all the reading.</p><p>all the hard work, especially if you're doing like quantitative analysis in public, either on like a sub stack or other venues you can publish in. And then the people who matter will notice this and they'll recruit you or they'll be open to an unsolicited pitch to join your organization. And so that's the main thing I'd recommend for people is if you're considering getting into this field, there are two big mistakes people make. One is doing nothing out of risk aversion. And this is often people from elite institutions to do this because</p><p>they're taught their whole life to climb a ladder and keep their head down and just don't do anything too risky that could be seen as outside the Overton window. And that is just not the kind of person who actually ends up mattering in the long run or making a big impact in DC. You have to have your own ideas and you have to be willing to put yourself out there. But then on the flip side, lots of folks make the opposite mistake, which is they want to become a takes person who has an opinion on everything, even though they have either only expertise in a narrow domain or they have no expertise at all because they're...</p><p>in their early 20s and they're still learning. And it's great to still be learning, but I wouldn't go on record on 100 different topics, especially culture war topics, especially things that are very controversial. Instead, what you can do is develop a niche expertise that adds value to people in Washington, DC, and then they will be receptive to your ideas. A very good example of this is Thomas Hokeman at the Foundation for American Innovation. I recommend everyone follow him on Twitter. Thomas first started his career just like...</p><p>a little more than a year ago, he's not been in DC long, DC policy world, and he just went very deep on the Clean Air Act. And now he's like one of the experts on the Clean Air Act in Washington DC, even though he's like just graduated college this year. But like he spent most of his time thinking about like how do we improve the Clean Air Act and he didn't spend all of his time like fighting different culture wars. So you go, if you're very focused and go very deep, you can make an impact early. And then the other thing I would recommend to your audience is that there's this great website called emergingtechpolicy .org.</p><p>and it just lays out all the pathways to getting into tech policy in DC in terms of fellowships, in terms of resource guides. And it's by far the best one -stop shop of like, if you're in tech or outside DC and you're like, hey, I want to get into policy, just go to emergingtechpolicy .org and read through their resources.</p><p>Theo Jaffee (13:48)</p><p>What about not just nonprofits that shape these policies, but what about getting smart people actually into elected office or into government agencies? It seems like there are a lot of people who would be open to the idea, otherwise who are not right now for some reason.</p><p>Alec Stapp (14:08)</p><p>Yeah, so elected office is a tricky one. I think that's much, a much harder hill to climb. The only thing I would say about that is only go that path. If you're an extreme extrovert, everything about fundraising, campaigning, running for office is constant social interaction. You must be a people person. You must be an extrovert. and if that's your personality and you also happen to have interest in like the substantive policy ideas that we care about and some of the people our friends care about and that then more power to you go run for office, raise money, try to win.</p><p>But most people we work with are much more wonky, technocratic, in the weeds. They're not the right kind of people to run for office. But elected officials need really talented people in government, whether it's in Congress or the executive branch. And not to beat it to a horse, but the emergingtechpolicy .org website is by far the best resource for what are the entry points? What are the junior level fellowships? How do you get your foot in the door? And then I would just say, once you get your foot in the door, whether it's a staffer in Congress or someone in the executive branch, an executive branch office.</p><p>It's really just like hard work and constantly networking. And so these are not like the most well -paid positions. So if you're going into this, you just need to understand that like you're giving up money on the table by working in the private sector to do this public service, but it's really important. And if you work really hard because, and you're talented and you just put yourself out there, you will get promoted and you will get retained because the system does need those people. And...</p><p>pretty quickly, not overnight, but pretty quickly, you can be put in a position of authority to really draft legislation, be in charge of a rulemaking or regulation at an executive branch agency, if you work really, really hard and know what you're talking about.</p><p>Theo Jaffee (15:55)</p><p>So IFP's five priorities on the website are meta -science, high -skilled immigration, biotech, infrastructure, and emerging technology. So why those five in particular? Mostly because they're tractable or something else. And if you had to add another category, what else would it be?</p><p>Alec Stapp (16:13)</p><p>Yeah, so those are definitely going to be the five we stick with for the foreseeable future. We're a team of roughly 16 people. And so five policy areas definitely keeps us busy. So how we pick those areas, I think the main factors were that they are both tractable and important. In a lot of ways, we try to tackle issues that are neglected as well. And so we think that EA framework is pretty useful for selecting topics to work on.</p><p>Theo Jaffee (16:34)</p><p>like the EA criteria.</p><p>Alec Stapp (16:42)</p><p>If it's important and tractable, meta -science, a lot of folks, there are a lot of lobbyists and trade associations for universities and other institutions that lobby for more scientific funding at NSF and NIH. There are libertarians and small -c conservative folks who argue to cut those budgets because it's wasteful government spending. They have a trench warfare fight every single year around those budgets. We don't think there's much marginal impact to be had by joining that specific debate.</p><p>But there are very few people who are in Washington, DC thinking about the question, given any particular budget added SF and NIH, how is it being spent? Is it being spent in the highest impact way? What alternative allocation systems should we be considering for scientific grant making? Those are very understudied systems and ideas. And then similarly for high skilled immigration, we've talked about that a bit already.</p><p>comprehensive immigration reform, stuff happening at the border, that is a very contentious, high salience fight, very well funded on both sides, not neglected at all. But tweaking visa pathways, for example, the 01 visa for immigrants of extraordinary ability, it's an uncapped visa program, better guidance from USCIS, which they first issued two years ago.</p><p>is going to help people realize they qualify for that visa program. And it's temporary, but it can be renewed as many times as you want. And so there are pathways in our current broken immigration system for talented people to come to the US. And then the same is true for biotech, AI, which we work under on our emerging tech portfolio, as well as infrastructure, where you just can focus on more neglected topics that are innovation related.</p><p>Theo Jaffee (18:34)</p><p>But if you had to add another category, what would it be? What else is tractable and important and neglected and does not fall under the umbrella of the other five?</p><p>Alec Stapp (18:44)</p><p>It's funny because these are so broad, like infrastructure captures energy, housing, and transportation for us. I would probably add like a specific, so we think about state capacity a lot, and this is a horizontal theme. If you think of those like vertical policy areas I just described, state capacity is the ability of the government to achieve its intended aims, to have the capacity to actually achieve its objectives. And it applies to all those policy areas I talked about.</p><p>Two state capacity themes we think about a lot and would potentially work on in the future are procurement and hiring. So federal procurement procedure is extremely broken and leads to really inefficient outcomes and a lot of stagnation and sclerosis in the government contractor industry. And then hiring as well. We just talked about getting good people into government. If you go through the normal hiring procedure on USAjobs .gov, it is a nightmare in terms of the incentives it creates.</p><p>People are incentivized to upload 100 page resumes that include every single possible keyword because the first filter is just like a keyword match filter for resumes to job description. And so usually the most qualified people don't get hired or take so long they give up and go to the private sector. And as anyone in tech and startup world knows, people are the most important factor in success. And so we needed to get better people in the government. We need much more flexible hiring procedures. And that's something that we would probably add as an area to focus on.</p><p>Theo Jaffee (20:14)</p><p>Yeah, procurement in particular is interesting. My dad used to work at Lockheed Martin back when it was Martin Marietta. And he always talks about the days of cost plus contracts and how terrible those were.</p><p>Alec Stapp (20:23)</p><p>Those are still the days today. Those are, that's mostly how it works today. There are some, yeah, there are some fixed price contracts, like SpaceX is famous for advocating for this and NASA has done some move towards fixed price. But my understanding is most government contracting is still cost plus. And like, there are some cases where I think cost plus makes sense, but in the majority of cases, it just creates bad incentives, obviously, where if it's cost plus a certain percentage, you just increase your costs and make more money.</p><p>Theo Jaffee (20:28)</p><p>Mostly. Wow.</p><p>Yeah.</p><p>Yep. So what have we learned from Trump and Biden after four years of each of them in power? Who do you think would be better for the IFP agenda? I understand if you need to be strategic with the answer.</p><p>Alec Stapp (21:04)</p><p>Well, we, not about being strategic, it's about just being committed to being nonpartisan. And so we are prepared for any election outcome this November. And we very intentionally do not weigh in on electoral politics at the presidential level or the congressional level because we want to make an impact in the areas we work on. And so we're a mission focused organization. The five areas you mentioned earlier, the only areas we work on.</p><p>And we have an agenda for either presidential administration, different compositions of Congress in terms of Republicans controlling one chamber, two chambers, and vice versa for Democrats. And so we would be excited to work with any particular US government because regardless of who wins, these issues are really, really important and there's lots to do.</p><p>Theo Jaffee (21:56)</p><p>So how a lot of non -governmental organizations, lobbying orgs, nonprofits and such, have been subject to some kind of mission creep, like for example, the ACLU, which was originally about securing constitutional liberties. They famously like,</p><p>Defended like Nazis in court even like a couple of decades ago and now they couldn't be farther from that So how will IFP avoid the same kind of institutional capture or mission creep or whatever you want to call it? in a long term</p><p>Alec Stapp (22:31)</p><p>Yeah, it's a good question. And it's honestly, it's part of what I was just trying to do there. And we'll continue to do in this conversation and in every conversation I have publicly and privately in Washington, DC, my co -founder, Caleb and I are on the exact same page. We run this institution, the two of us together, we have equal say over big strategic plans for IFP and down to nitty gritty details. And at the end of the day, both of us observed that this phenomenon was happening that you're describing of like mission creep. And we think that organizations become</p><p>less effective over time when they do that on a per dollar or per person basis. And it's just like, then all organizations lose their identity as well. They kind of like become this big blob of everyone doing the same thing, this, this Omni -Cause phenomenon. And there is a role in DC for think tanks and organizations that just support one political party. Center for American Progress is the biggest one for Democrats, though there are others as well. The Heritage Foundation is currently the biggest one for Republicans in terms of</p><p>being a holding tank for folks to go into the next administration when their party wins, as well as to like develop policy ideas and white papers and stuff. And so in that sense, they often have to support the entire agenda. But lots of think tanks are very issue. They're supposed to be issue focused or much narrower. And I think it just hurts them and is counterproductive if those kinds of organizations expand due to mission creep and bleed into other areas that are outside their scope. And so from day one,</p><p>Caleb and I both decided that's not the kind of institution we want to run. We want to run a mission -focused organization that can work with either party. And we think boundary effects are strong, and so this is what we want to do with the rest of our careers. And we're going to be here to make sure that we don't succumb to that risk like other organizations have.</p><p>Theo Jaffee (24:17)</p><p>So as for meta science, how important do you think academia is today in 2024? Should government policy focus on getting smart people into academia or more into private companies and research labs? Or their own organizations within government?</p><p>Alec Stapp (24:32)</p><p>Yeah, I think on the margin, we probably, obviously, it's very field by field. It's hard to speak in generalities here. But in general, I don't think the marginal smart person should go into academia. I think most of our breakthroughs come from superstar researchers who are already obvious fits for academia. But the marginal person writing the marginal academic paper is not adding a lot of social value. I think that person would be much better fit.</p><p>Going into government in like kind of a high agency mindset of like with a clear goal, wanting to get something done, ideally even at the local or state level, because like a single great person at that level of government can have a lot of change, make a big impact. And then, yeah, we're also just very bullish on like new institutions. And so there are groups like Arcadia and the Arc Institute led by Patrick Su. Arc Institute's amazing. They're already having huge breakthroughs and they've been around for just a couple of years.</p><p>Theo Jaffee (25:23)</p><p>I love the Arc Institute.</p><p>Yeah.</p><p>Alec Stapp (25:28)</p><p>and those institutions outside of academia and outside of government, often privately funded, philanthropically funded are amazing. And I think I would encourage more people to start organizations like that as small experiments to join, like, you know, ongoing new institutions like that. And then let's double down on the ones that are working and close the ones that aren't. And I think that is a much more exciting prospect than joining legacy institutions.</p><p>that aren't that effective anymore for the marginal person.</p><p>Theo Jaffee (26:03)</p><p>If you could press a button that would fully open the US's borders, would you do it? Like, would fully open borders be better or worse than the status quo?</p><p>Alec Stapp (26:12)</p><p>It's a good question. I don't think about this a lot just because it's not within the overtopendo, but I'll play the game of just saying, high uncertainty, but I probably would not push the button. I think this is for like, very Tyler Cowen -esque reasons where this is not a permanent, and you can define the thought experiment however you want, but I do not think this will ever be a permanent option. And so what would immediately happen is there'd be a flood of immigration, and then there would be a nativist backlash to dramatic change to society in the short run.</p><p>housing prices would definitely go up in more competition for scarce resources before supply has the opportunity to expand. And so in the long run, I think most and pretty much all immigration has positive effects for the whole country, but there are in narrow local cases and especially in the short run, there can be costs and people observe those and there's a backlash. And so this would lead to probably the strictest immigration regime in the short period after open borders. And so...</p><p>At IFP, we're really focused on long run sustainability, what is a durable policy change that can get us to a better future and we can build on. And so when it comes to immigration, that includes things like having control of our southern border so that the domestic population has faith in a credible immigration system. And then it's focusing on what we see as successes around the world, whether it's Canada, Australia, elsewhere, where you have kind of more of a points -based system.</p><p>where immigration is targeted at occupations that are in shortage. And it's just focused on high skilled STEM immigrants who can really contribute the most on a per person basis to the US economy and to the world. And there's a lot of data that these are the kind of, it's the kind of immigration, a controlled orderly immigration that leads to the least amount of backlash and can be built on over time. And so I think the idea of like a magic button that opens the borders, it would very quickly change in practice.</p><p>Theo Jaffee (28:07)</p><p>I'm not certain about that because in Europe over the last few years they haven't had full open borders, but they have had substantial amounts of immigration from all walks of life. And the nativist backlash has been, I think, much less than people expected. Like in France just yesterday, the more pro -immigration left parties won the election. In Britain, Labour Party absolutely swept the election and they seem to be pro -immigration.</p><p>Alec Stapp (28:34)</p><p>I probably disagree with the characterization of the UK. I think a significant reason from my view of why the Conservative Party became so unpopular, not the whole reason, but part of the reason was that post -Brexit, they were supposed to be the party of controlled, orderly immigration, no longer having open borders with Europe. And like you said, they became like open borders with the world. And I think the polling and some of the data in the UK shows that the...</p><p>Domestic population did not like the direction immigration was going in the UK. And I haven't seen the entire labor policy position, but I don't think they're significantly pro -immigration in a material way and would be surprised if they totally maintained the policy status quo there. And then in France, again, yeah, it was better for immigration that the center -left and leftist parties beat the right -wing parties, but...</p><p>The Raving Parties did really well in round one and the fact that they are even contending for national power in France shows you something about the backlash even if they weren't ultimately able to get a majority.</p><p>Theo Jaffee (29:45)</p><p>So on the topic of biotech, how prepared do you think we are for another pandemic? Has the government learned anything from COVID? And if we're not prepared, what would it take for us to get prepared?</p><p>Alec Stapp (29:55)</p><p>Yeah, I regret to report pretty much nothing. They've done nothing, learned nothing. Arguably, we'd be in a worse situation if a COVID level event were to happen. The best success of the COVID pandemic was Operation Warp Speed. I'm not convinced that if Republicans were in office, they would do it again, or if Democrats were in office, that they would try to copy it. It's now seen as controversial. Obviously, among Republicans, vaccines in general are controversial. Democrats...</p><p>wouldn't necessarily trust the private sector to lead in the way that Operation Warp Speed did. And then besides vaccines, personal protective equipment, testing, surveillance monitoring, sort of wastewater monitoring and other passive detection for emerging pathogens, we're just not there. We are making almost no investments. And we've kind of not only reverted to the status quo ex ante,</p><p>but we've even done things like the FDA is now regulating lab developed tests. And in the, before the pandemic, they were not regulated. When the public health emergency was declared, lab developed tests became regulated by the FDA. And what happened, the CDC had monopoly on testing and they totally messed it up. And that's why we didn't have testing for the first few months of COVID is because our one source of flexible testing capacity for novel pathogens, lab developed tests, were legally prohibited from doing what they were capable of doing.</p><p>So now we're in a worse equilibrium when it comes to testing. I think that's true across the board for most areas you care about when it comes to pandemic prevention, whether it's delaying future pathogens, stopping gain of function research. We did get one win, the deep vision program at USAID that was like a virus hunting program, like literally going out into untouched parts of the world to look for.</p><p>potential pandemic pathogens. The risk reward on that was awful. Thankfully, I shut that down. So that's good. We're not like actively going into the jungle trying to find new pathogens that could cause a pandemic. But no progress on gain of function research. And in terms of detection, there are some pilot programs in terms of like doing more testing in airports and other public places for emerging diseases, but they're not wide scale yet. And a lot of like the wastewater monitoring stuff.</p><p>It's been very hard for companies like BioBot to get customers from the government to pay for this stuff, even though it's incredibly useful. And so we're in a really bad spot, but we need to be making those investments. And this is the kind of thing again, where it's like, it mostly requires enlightened leadership of, it's not a ton of money we're talking about here. A current estimate from the White House science office is that for $24 billion, we could get prototype vaccines.</p><p>for the 26 viral families that are known to cause human disease. And so like on the grand scale of things, $24 billion is nothing. It's a drop in the bucket. But those are the kind of like long run investments we need to be making today before the next pandemic starts or something we could cut it off very quickly. And just in DC, no one wants to talk about pandemics because people are still a bit traumatized by COVID.</p><p>Theo Jaffee (33:16)</p><p>So when you talked about there's all these like little easy wins that we can do, like wastewater monitoring that just don't happen. Like mechanistically, what is going on there? Like you get your member of Congress or your Biden administration or CDC official in the room and you tell them, like, here's this thing that we can do that would.</p><p>prevent the next pandemic or could prevent the next pandemic and would be a very good thing regardless and would not cost that much money and would look very good for you and would be an easy win and is not exactly controversial. Like who's opposing wastewater treatment or far UVC systems? Like why does it not happen?</p><p>Alec Stapp (33:56)</p><p>Yeah, that's a great question. So I think a couple assumptions there that I think we need to tease out. One, I don't think the industry would look very good. So these things are usually uncontroversial, but they're not salient enough for the public. So like no one's going to win an election based on like properly installing a wastewater monitoring system. If you prevent the pandemic that never happens, then you get no credit for it because the pandemic never happened. And so there's this asymmetric risk reward to any of these investments that you basically</p><p>Never get credit for them when it works out and you do the smart thing. But you get blame if things go south. And then on the, it doesn't cost that much money. These are not, so I don't wanna, I wanna be very clear here. It's not that these are super expensive, but they're not free either. And in our current budgeting environment, we're now in a high -interest rate environment. We have, interest rates are above 5%. Money is not free anymore.</p><p>The days of high -deficit spending are over for the foreseeable future. And so the budget constraints are very real. And then that environment, the way budgeting works in Washington, D .C., because the budget has to be bipartisan. It has to get 60 votes in the Senate to pass, which means it has to get both Democrats and Republicans every single year. And what they do is they just take the previous year's appropriations bill and start with that as the base text. And then any change from there, whether it's new spending or cutting -old spending,</p><p>essentially has to be bipartisan in nature and has to be a top priority. And so when you come in and say, hey, let's spend a billion dollars on wastewater surveillance, the people in the room are like, maybe a good idea, but like, we're not going to get credit for this. Who knows when it'll pay off? It's a very uncertain payoff. Probably there won't be a pandemic anyway. And then what are you going to cut? Are you going to cut the money we're spending on flu? There are a lot of lobbying groups that lobby for more spending on flu. How about cancer, diabetes, heart disease, all of these like,</p><p>very specific public health issues, like disease specific programs, have massive lobbying groups behind them, whether it's corporations or patient groups. And in a zero sum budgeting environment, we're not really increasing deficits for the foreseeable future. New spending has to take from somebody else, and then it becomes a dog fight. It's very hard to win.</p><p>Theo Jaffee (36:17)</p><p>So speaking of spending, IFP has talked positively about Biden's spending, like the infrastructure bill, the Innovation and Competition Act. But like, is this spending actually good? Like would it pass a cost benefit analysis? You pointed out on Twitter a couple weeks ago.</p><p>that the Biden administration allocated $42 .5 billion for high -speed internet, and not a single home or business has been connected nearly three years later. And on top of that, our national debt interest payments alone this year will be something like $900 billion, which is more than we spend on defense, more than almost anything else in the federal budget.</p><p>Alec Stapp (36:55)</p><p>Yes, I think we're in a really bad equilibrium right now where we spend a lot of money and don't get much for it. I think when it comes to things like basic research, even if I think our current systems are very inefficient, there is just such a clear story of market failure, of companies under -investing in really breakthrough, high -risk, high -reward stuff that won't pay off for a decade, research ideas that don't have clear, obvious commercialization potential. I just think...</p><p>all the economic research points to that being a massive underinvestment by the private sector. And so there's just a large role for the government to spend, you know, roughly on the order of the $60 billion a year NSF and NIH spend on basic research type investments. So I think there's a lot of improvements you can make there, but I wouldn't cut it just given the large market failure and the massive spillovers to the rest of the economy from those kinds of investments. When it comes to more narrowly targeted things like the</p><p>rural broadband subsidies. Yeah, I just think this is one of the worst case scenarios in government where it's politically popular to say people in rural America don't have internet. Let's spend a lot of money to make sure they're connected to the rest of the world. This is like the urban rural digital divide thing. And no one can talk about being against the digital divide, but it's a massive waste of money. And we have Starlink. Just do Starlink. Don't run fiber cables to, you know, a single person living out in the boonies. Like this is...</p><p>on a per mile per user basis, exorbitantly expensive. And if you look at, it's the weirdest thing, if you look at surveys of people, you ask them, why don't you have internet? The number one reason they say is they're not interested. Like part of the reason they moved out to middle of nowhere is because they don't want to be connected to the rest of the world. And so we're doing this thing where we're spending tons of money to connect people to the internet, some of whom don't even really want the internet. And because of...</p><p>political biases against Elon Musk and who is a highly imperfect person, we're now not doing Starlink's terminals when we should be for that money. And we could spend much less than $42 billion to connect people.</p><p>Theo Jaffee (39:08)</p><p>Well, like if this is true, should the government's main focus be on infrastructure spending or just like infrastructure permitting? Like for solar energy, should they pass, you know, a multi -billion dollar package to build solar or should they, should IFP be pushing for them to just allow private companies to do it easily, get their permits done?</p><p>Alec Stapp (39:28)</p><p>Yeah, so we really focus, for that reason, we really focus our efforts on the regulatory side to unlock private industry because a lot of these cases are situations where there are narrow targeted benefits to the users. And so if you make it legal to build the infrastructure, people will build it and they will sell it to private citizens for a profit. And so that's true of solar. And again, this is where like understanding institutional structure in DC is really important. So like, why did we get...</p><p>the subsidies and not the permitting reform. A key part of this is actually because of the rules of reconciliation. A bill can only go through reconciliation, meaning it would get only needs 50 votes in the Senate, which Democrats had for the first two years of the Biden administration. If the provision is primarily budgetary in nature, spending money on subsidies and tax credits is primarily budgetary in nature. And that's how we got the Inflation Reduction Act. But now we need to do permitting reform.</p><p>And we're also not going to increase deficits. We're not going to do massive new spending programs. And so our effort, which has always been around the higher efficiency, increased productivity in the economy, is now the only game in town. Going forward, either we're not going to get new reforms, or we're going to do the reforms that actually increase efficiency and productivity. There is no more new massive spending package coming.</p><p>Theo Jaffee (40:51)</p><p>So last week the Supreme Court struck down chevron deference, which is a legal doctrine for the audience, that if a court determines that a statute is ambiguous, it must defer to the interpretation of the relevant federal agency. But they no longer have to do that. So how important is this?</p><p>Alec Stapp (41:08)</p><p>It's a big deal. I think there is still uncertainty around exactly how it will be implemented. The court did not offer a very clear framework for how future decisions should be made. And so this is one of these things where, is how the common law system works in the United States. You get one new Supreme Court decision that establishes a new precedent. And then you see how it plays out in practice. You see which agency decisions get challenged. You see...</p><p>what lower courts do in terms of how they interpret this new guidance from the Supreme Court, and you see how it works in practice. So I think, one, I would just caveat this with, don't trust anyone who's overly confident on what Chevron deference or really any other Supreme Court decision means for the future. You'll notice that most people can't predict reliably what the Supreme Court is gonna do ahead of time on all these high profile cases. The law is inherently uncertain, at least in how US legal institutions work.</p><p>And we'll have to see how it evolves over time. But in general, Chevron deference will probably be a big deal. It will probably mean that agencies are more risk averse on the margin. They do less. They spend more time making sure that the limited actions they do take are unimpeachable on legal grounds and directly tied to their statutory authority from Congress. And something we've been talking about internally at IFP is like,</p><p>We're now going to be in the world of NEPA, the National Environmental Policy Act, for lots of other parts of government. Because that's how NEPA worked. It was a very short statute passed in 1970 that was then interpreted by the courts more broadly year after year for 50 plus years. And through litigation, brought by private actors, and then decided by judges, all of a sudden everything was a major federal action, so it was covered by NEPA. Almost everything has a significant environmental impact.</p><p>And you could never consider enough alternative mitigating measures. And so you can sue any project and say the environmental review missed a significant impact or it didn't consider an alternative measure. And that's the world we live in for NEPA and environmental review. And it's going to be increasingly the world we live in for a lot of other areas of policy.</p><p>Theo Jaffee (43:23)</p><p>What do you think about land value taxes as an intervention for YIMBY?</p><p>Alec Stapp (43:27)</p><p>I think they're great. We haven't done any explicit work on them, but I'm very supportive of all the Georgists out there. When I first heard about land -value taxes and looked into it, my prior was that, as I try to think about a lot of policy issues, is like, this is not really implemented anywhere in the world. Something else is going on. Like, probably the idea is fundamentally flawed. It's not that you can never come up with a wholly new idea that hasn't been tried and it won't be successful.</p><p>But it's probably a very high bar. Probably there are like fundamental things about human psychology and human institutions in the modern nation state that like lead to your idea not working. I think land value taxes could be an exception to this rule because it seems like we have a clear theory of why they haven't worked so far. And there are folks who have done a startup company to figure out the how do you estimate the land value? How do you separate it from the value of the structure on the land? And I think</p><p>You can tell a story where up until now we didn't have the data collection and the like quantitative statistical tools to actually make this, you know, produce these answers in a reliable way, which is why, you know, we're left with things like property taxes that are less efficient. And so I think technology applied to the land value tax problem in terms of land valuations could unlock them. And then obviously from economic first principles, they are the most efficient.</p><p>type of tax to implement.</p><p>Theo Jaffee (45:01)</p><p>So Leopold Aschenbrenner a few weeks ago just published this very long document called Situational Awareness. For those in the audience who don't know, Leopold was on the super alignment team at OpenAI. And he's very concerned about AI and AI alignment and doing it right. And one of the main ideas in this book length</p><p>Alec Stapp (45:04)</p><p>Ahem.</p><p>Theo Jaffee (45:22)</p><p>blog post series that he wrote was that eventually as AI gets really good, governments will wake up and kind of like they did during COVID and realize, this AI thing is a big deal. And they will nationalize all the AI labs and try to build AGI themselves in one big kind of Manhattan project thing called The Project. So do you think this will actually happen? Do you think this would be desirable?</p><p>Alec Stapp (45:48)</p><p>the way I currently think about AI is the future is highly uncertain, especially in this area. And so I won't say whether this will or won't happen. I think it's a possibility. I don't even can begin to describe the percentage chance of this happening on any reasonable time frame. The way we can think about AI at IFP is we're trying to focus on what are the robust ideas that will be good in a wide range of futures. So</p><p>There's folks like Leopold who believe in very short timelines. They believe in the scaling laws will hold. This is how we're going to get super intelligence. The resources required to get these orders of magnitude compute increases will require the nation state nationalization. One single concerted effort to avoid duplicating resource use. It's possible, but I also think there's another scenario where for whatever reason, the scaling laws stop holding.</p><p>capabilities kind of peter out or capabilities keep increasing, but the real world is heavy tailed as lots of people in Silicon Valley like to say. There are a lot of frictions in the real world. Maybe we get like the internet, much more innovation in the digital world and the physical world because it's hard to maybe progress and robotics doesn't happen as quickly. And so in all of these scenarios, whether it's like the Leopold future where we're very close to super intelligence and it's going to be a national project.</p><p>or the world where we have like limited gains and we're just trying to like make this internet 2 .0 thing happen. Under those world states, we want more state capacity on AI. We want NIST, the federal agency tasked with a lot of like standard setting and evaluations under the executive order from the Biden White House. We want them to work. We want them to have talented people. We want them to be focused on the most important issues. We want to make sure that that</p><p>When they evaluate a model, they know how to test its capabilities, they know what risks to work with, they know how to talk to the companies. That is just in almost every future world state, a better thing to have is that there's expertise and competence somewhere in the government to handle these really technical challenges in a fast changing world. And so we're really focused a lot in our AI portfolio on these state capacity issues because we're open -minded about</p><p>a wide range of possible futures and we're not sure where it's going.</p><p>Theo Jaffee (48:19)</p><p>If America is increasingly focusing on domestic production and manufacturing, like domestic manufacturing in America has increased significantly over the last few years, what does that mean for globalism? Is globalism dying?</p><p>Alec Stapp (48:31)</p><p>Globalism is definitely in retreat right now. I think it remains to be seen how much output we get. There's like all I've seen all the charts. I'm sure everyone has seen the charts of like massive increases in manufacturing capacity in terms of spending in the United States. We'll see what the productivity looks like, what output increase do we get for these these new investments. I think that remains to be seen. But probably we'll get we'll get some noticeable increase. And yeah, due to</p><p>rising geopolitical risk, multiple wars, potential conflict with China over Taiwan, ongoing Russian invasion of Ukraine. I think a lot of countries look around and they see kind of the end of Pax Americana and the natural implication of that is let's make sure we have domestic capacity for manufacturing, critical supply chains, and there's like a movement towards more on -shoring. The thing I think is underrated in this debate that I would...</p><p>hopeful that US policymakers move towards is the idea of friend shoring of you don't have to have all this capacity in the United States, but you do want to have it in friendly countries that in the event of a hot war or conflict, you aren't vulnerable to a critical input being leveraged against you. And so let's dramatically increase trade with Canada, Mexico, the European Union, the UK.</p><p>Obviously attempt to do so with South Korea and Japan, but also recognizing that they're in a more vulnerable region of the world. But overall, let's massively increase the density of trade networks with allied nations, I think is an obviously good idea that balances the national security risks, along with the reality that the United States is never going to be the world leader in every single facet of manufacturing.</p><p>Theo Jaffee (50:32)</p><p>So for my last segment, I'm going to shamelessly steal from Tyler Cowen and say we should play a game of overrated or underrated. So overrated or underrated prediction markets.</p><p>Alec Stapp (50:42)</p><p>Let's do it.</p><p>I will say currently underrated just because probably your listeners are people who read works in progress. And our friend Nick Whitaker just wrote a great piece about why prediction markets are overrated. And so I think probably in people's mind, they're currently overrated. I will say they're underrated just because the biggest event in Washington, DC right now or the biggest ongoing conversation is the presidential debate and the aftermath of should Joe Biden step down? Will he step down for the nomi - for the -</p><p>convention, if he steps down, who will the next nominee be? And the primary way this conversation is happening is via people talking about prediction markets, which is crazy. This is a very niche thing relative to years ago. And now it's mainstream for political pundits to talk about prediction markets. And so I think they're a little underrated given their recent progress.</p><p>Theo Jaffee (51:40)</p><p>And a personal lesson from Prediction Markets, when Trump and Biden were both trading at around 50 -50 a couple weeks ago, I bet on Trump yes, but I should have bet on Biden no, because I forgot to take into account the conditional that Biden would actually get the nomination, and I didn't expect him to collapse like this after the debate. So, yeah, lesson for anyone who wants to bet on Prediction Markets. Overrated or underrated? Charter cities.</p><p>Alec Stapp (51:50)</p><p>You</p><p>Gotta be careful out there.</p><p>Hmm underrated probably I think they've had a lot of you know, obviously struggles and false starts over the over the years, but I think Still the idea of like can a fresh start for a city with new institutions have a big impact and I also think that Charter cities as a case study for for incumbent cities to learn from is underrated so if you get a successful charter city in Africa that you know grows to</p><p>even 100 ,000 people, maybe they learn new ways of doing things that can be adapted to cities in America or at least other cities in Africa. And so I think there is a transfer of learning across cities that a charter city could kickstart.</p><p>Theo Jaffee (52:54)</p><p>Overrated or underrated? Existential risk and long -termism.</p><p>Alec Stapp (52:58)</p><p>probably currently underrated. It's obviously taken some huge hits with the controversies around Sam Bankman Fried, FTX, the AI, EAC, AI safety debate. but I think you should just like separate out the ideas from the communities. And at the end of the day, is it possible that there are technologies that could have catastrophic risk and it's like nuclear weapons already exist?</p><p>We have had pandemics that have killed more than 20 million people multiple times. Biotech is advancing very quickly. It's possible we could engineer pathogens that are much more deadly than COVID. And again, like I said, we're very uncertain about the future of AI, but if we actually achieve developing super intelligence, there are risks involved with that. And so I think being realistic about the possibility of existential risk is something people should include in their mental model of the world.</p><p>Theo Jaffee (53:56)</p><p>Overrated or underrated, effective altruism.</p><p>Alec Stapp (54:00)</p><p>And this is where I think it's underrated currently because it got attached to the controversies around long -termism and ex -risk. I think of effective altruism primarily as bed nets to fight malaria in Africa. And like, again, just like take it down to brass tacks because I know there are a lot of like controversial people involved in all sides of this debate. Yeah, and like, yeah, lots of people. And it's like, is it good to try to help others?</p><p>Theo Jaffee (54:13)</p><p>Yeah.</p><p>Savvy Goodread.</p><p>Alec Stapp (54:29)</p><p>Generally, yes. Should we try to be effective about this? Should we try to measure things and use data to be more effective or less effective? Those are kind of unimpeachable ideas, I think. You can disagree with how it operates in practice. And I think the kind of like...</p><p>Theo Jaffee (54:42)</p><p>Yeah, but that's like the Democratic People's Republic of Korea argument.</p><p>Alec Stapp (54:48)</p><p>Sure, sure I guess. I mean, but that's obviously just like, but like, the effect of autism community does donate to bed nets in Africa and I do think that like, actually like reduces malaria deaths.</p><p>Theo Jaffee (55:01)</p><p>Alright, overrated or underrated? Within progress studies circles, climate change.</p><p>Alec Stapp (55:08)</p><p>within progress of these circles?</p><p>Theo Jaffee (55:10)</p><p>Yeah, because probably in the broader society, it's somewhat overrated. People are making these very short -term doom predictions. But within progress studies, do you think it's overrated or underrated?</p><p>Alec Stapp (55:23)</p><p>It's probably still a little overrated, I think. A lot of the tipping point arguments seem to have been refuted of like, the latest IPCC report shows that like the extreme worst case scenarios are increasingly unlikely. And so we need to do our best to limit warming. The arguments for clean energy abundance are over determined. So of course, mitigating the effects of global warming are one of those, but also just the general benefits of clean energy.</p><p>becoming more widespread. Those are really important. And so I think it's probably still in the progress studies community there are some folks who aren't up on the latest data in terms of like the catastrophic risks from the possibility of catastrophic risk from climate change being overrated.</p><p>Theo Jaffee (56:10)</p><p>Also within progress studies, overrated or underrated nuclear energy. Cause last time on the podcast I had Casey Handmer, who is extremely, extremely bullish on solar and kind of bearish on nuclear energy because it's expensive and complicated and solar is cheap and simple.</p><p>Alec Stapp (56:19)</p><p>Mm -hmm.</p><p>Yeah, I think it's still overrated. I think the way I talk about nuclear is it is extremely stupid for us to shut down operational nuclear power plants and we should be doing everything we can to bring those back online once we shut down to extend the life of existing Gen 2 nuclear power plants. We should be reforming the Nuclear Regulatory Commission to make it viable to have a chance for small modular reactors to make sure that</p><p>a possible future of nuclear fusion is not killed by the regulatory state. There are lots of things we should be doing, but if you're just trying to prognosticate about the future, nuclear power has experienced negative learning curves in almost every country for decades. It gets more expensive to build over time. South Korea used to be an exception. Now their costs seem to be increasing. France costs are increasing. They have a legacy nuclear fleet.</p><p>I'm sure your listeners heard it for Casey, the opposite was obviously true for solar. It just keeps getting cheaper over time. It's an exponential curve in terms of deployment. And so if you're betting on what the future is going to look like, it's going to look like much more like solar than nuclear. Even if we should have made different decisions in the 1970s with nuclear, we still should include it in our portfolio. And it's moronic to shut down existing nuclear power plants because they are safe and deliver clean 24 -7 energy. But are we likely to</p><p>quickly fix the cost problem, I'm pessimistic.</p><p>Theo Jaffee (58:01)</p><p>Overrated or underrated, California.</p><p>Alec Stapp (58:05)</p><p>California currently underrated probably. I mean, it's one of these things of when you have LA, San Francisco, the future of AI is being built in San Francisco, great climate, the coast, it's beautiful. Yeah, it's California. So I think currently underrated because of temporary problems with, major problems with the housing crisis, major problems with.</p><p>Theo Jaffee (58:23)</p><p>Yeah, I'm there right now.</p><p>Alec Stapp (58:35)</p><p>drug addiction, homelessness, et cetera, but I'm long run bullish on California because it has all the fundamentals and just needs to fix some of these policy errors.</p><p>Theo Jaffee (58:45)</p><p>And what about Florida, overrated or underrated?</p><p>Alec Stapp (58:50)</p><p>Probably currently a bit overrated. I think people underestimate the importance of weather and the hot and humid summers there I think make it hard to bet on like super long term. So Florida's great. They're innovating and people are moving there. But I think for the real bleeding edge frontier tech stuff, I think you need more than what Florida currently has.</p><p>Theo Jaffee (59:15)</p><p>Yeah, I grew up there and there's a reason that I'm here in San Francisco for the summer and not in South Florida. Yeah.</p><p>Alec Stapp (59:21)</p><p>Exactly. Weather matters a lot. It's real shame.</p><p>Theo Jaffee (59:25)</p><p>Well, it's not just the weather, it's all the tech people, but you know, the weather helps. And then what is the, yeah, what is the single most overrated policy or issue, either among progress studies people or just among the general public?</p><p>Alec Stapp (59:30)</p><p>Yeah, but why are the tech people, you know, it's like it's a bit circular.</p><p>Overriding the sense that people care about it, but it actually won't move the needle.</p><p>Theo Jaffee (59:46)</p><p>Yes.</p><p>Alec Stapp (59:48)</p><p>Uhhh...</p><p>I mean, because we were talking about it, it's on top of my mind, but rural broadband subsidies, it is, on a per dollar basis, almost a complete waste of money. And if we think that there is a redistribution element to this where we need to subsidize people's access to internet in rural areas, give them Starlink terminals and call it a day. But in the tech policy community, this issue is talked about ad nauseum, and it's almost a complete waste of money.</p><p>Theo Jaffee (1:00:26)</p><p>And then finally, what's the most underrated policy or issue?</p><p>Alec Stapp (1:00:31)</p><p>Most underrated. The most underrated, yeah. I think probably is on my mind today because there was a great piece in the New York Times about using elevators as an example of why there's cost bloat in all sorts of building construction. And I think it gets to a broader problem around building codes and standardization that I think is like the one of the most underrated ideas is</p><p>Theo Jaffee (1:00:34)</p><p>like the most underrated.</p><p>Alec Stapp (1:00:59)</p><p>The US federal system of government with local, state, and federal regulators and authorities leads to a lack of economies of scale in the construction industry writ large, whether it's commercial, residential housing, manufacturing buildings, et cetera. When you want to build anything in the United States, we have this fitocracy where so many regulators get to weigh in and impose different standards that it's very hard for us to be integrated in the global economy for building supplies as well as having any kind of national.</p><p>companies and so the federal government should use every carrot and stick it has to align building codes and standards so that companies can reach higher economies of scale and start to automate more.</p><p>Theo Jaffee (1:01:44)</p><p>All right, well, that's a good place to wrap it up, I think. So thank you so much, Alec Stapp, for coming on the podcast.</p><p>Alec Stapp (1:01:49)</p><p>Thanks for having me, Theo.</p><p>Theo Jaffee (1:01:52)</p><p>Absolutely.</p>]]></content:encoded></item><item><title><![CDATA[#17: Casey Handmer]]></title><description><![CDATA[Terraform, solar, space, Hyperloop, and how to think]]></description><link>https://www.theojaffee.com/p/17-casey-handmer</link><guid isPermaLink="false">https://www.theojaffee.com/p/17-casey-handmer</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Tue, 25 Jun 2024 00:11:31 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/145965580/ed21f1f9cb5ec9c4825bb8bce7f97632.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Casey Handmer is the founder and CEO of Terraform Industries and a physicist, immigrant, pilot, dad, solar enthusiast, Caltech physics PhD and former Hyperloop One levitation engineer and NASA JPL software system architect.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:40 - Why don&#8217;t other people do what Terraform does?</p><p>2:51 - Why is solar better than nuclear fusion?</p><p>5:27 - Could carbon emissions actually be good?</p><p>8:38 - Why isn&#8217;t anyone stopping global warming with sulfur?</p><p>13:20 - Can America build something like Terraform?</p><p>20:53 - Solar and nuclear</p><p>23:10 - Why not terraform Venus instead of Mars?</p><p>30:47 - Why did Casey work at NASA instead of SpaceX?</p><p>37:18 - Why is Elon the only person with multiple huge companies?</p><p>39:59 - Why didn&#8217;t the Hyperloop work?</p><p>42:26 - Tile the desert with solar</p><p>46:03 - How does solar change geopolitics?</p><p>48:30 - How does Casey manage his time?</p><p>53:24 - How do you develop first principles thinking?</p><p>56:28 - Favorite place Casey has traveled to</p><p>59:21 - Outro</p><h3>Links</h3><p>Casey&#8217;s Blog: <a href="https://caseyhandmer.wordpress.com/">https://caseyhandmer.wordpress.com/</a></p><ul><li><p>You Should Be Working On Hardware: <a href="https://caseyhandmer.wordpress.com/2023/08/25/you-should-be-working-on-hardware/">https://caseyhandmer.wordpress.com/2023/08/25/you-should-be-working-on-hardware/</a></p></li><li><p>The solar industrial revolution is the biggest investment opportunity in history: <a href="https://caseyhandmer.wordpress.com/2024/05/22/the-solar-industrial-revolution-is-the-biggest-investment-opportunity-in-history/">https://caseyhandmer.wordpress.com/2024/05/22/the-solar-industrial-revolution-is-the-biggest-investment-opportunity-in-history/</a></p></li><li><p>Future of Energy Reading List: <a href="https://caseyhandmer.wordpress.com/2023/10/19/future-of-energy-reading-list/">https://caseyhandmer.wordpress.com/2023/10/19/future-of-energy-reading-list/</a></p></li><li><p>Elon Musk Is Not Understood: <a href="https://caseyhandmer.wordpress.com/2024/01/02/elon-musk-is-not-understood/">https://caseyhandmer.wordpress.com/2024/01/02/elon-musk-is-not-understood/</a></p></li><li><p>Why High Speed Rail Hasn&#8217;t Caught On: <a href="https://caseyhandmer.wordpress.com/2022/10/11/why-high-speed-rail-hasnt-caught-on/">https://caseyhandmer.wordpress.com/2022/10/11/why-high-speed-rail-hasnt-caught-on/</a></p></li></ul><p>Casey&#8217;s Website: <a href="http://caseyhandmer.com/">http://caseyhandmer.com/</a></p><p>Casey&#8217;s Twitter: <a href="https://x.com/cjhandmer">https://x.com/cjhandmer</a></p><p>Terraform Industries: <a href="https://terraformindustries.com/">https://terraformindustries.com/</a></p><p>Terraform Blog: <a href="https://terraformindustries.wordpress.com/">https://terraformindustries.wordpress.com/</a></p><ul><li><p>Scaling Carbon Capture: <a href="https://terraformindustries.wordpress.com/2022/07/24/scaling-carbon-capture/">https://terraformindustries.wordpress.com/2022/07/24/scaling-carbon-capture/</a></p></li></ul><ul><li><p>Terraform Industries Whitepaper: <a href="https://terraformindustries.wordpress.com/2022/07/24/terraform-industries-whitepaper/">https://terraformindustries.wordpress.com/2022/07/24/terraform-industries-whitepaper/</a></p></li><li><p>Terraform Industries Whitepaper 2.0: <a href="https://terraformindustries.wordpress.com/2023/01/09/terraform-industries-whitepaper-2-0/">https://terraformindustries.wordpress.com/2023/01/09/terraform-industries-whitepaper-2-0/</a></p></li><li><p>Permitting Reform or Death: <a href="https://terraformindustries.wordpress.com/2023/11/10/permitting-reform-or-death/">https://terraformindustries.wordpress.com/2023/11/10/permitting-reform-or-death/</a></p></li></ul><p>Transcript: <a href="https://www.theojaffee.com/p/17-casey-handmer">https://www.theojaffee.com/p/17-casey-handmer</a></p><p>More Episodes</p><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><h1>Transcript</h1><p>Theo Jaffee (01:29)</p><p>Hi, welcome back to episode 17 of the Theo Jaffee podcast. We're here today with Casey Handmer.</p><p>Casey Handmer (01:36)</p><p>Hi, thanks. It's great to be here.</p><p>Theo Jaffee (01:39)</p><p>Yeah, thank you. So for first question, what you're doing at Terraform, which I'll explain in the intro just to make it clear to everyone, why isn't everyone else doing what Terraform is doing? Like it seems like a very important market need to create like hydrocarbons that will not destroy the environment while also taking CO2 out of the atmosphere.</p><p>Casey Handmer (02:04)</p><p>I think they will pretty soon. I think we're just kind of at the cusp where this technology becomes, it goes from being economically extremely unimpressive to economically inevitable.</p><p>Theo Jaffee (02:16)</p><p>And is that just because solar will continue to get cheaper?</p><p>Casey Handmer (02:19)</p><p>Yeah, that's the key thing. I mean, if you're making fuel fuels a source of energy, you need a cheap energy input to make that work. And actually for a long time, in many ways, oil and gas has been one of the cheapest energy inputs. So it would be very strange to take some other form of energy that's more expensive and then lossily transform that into oil and gas. But of course, that's not going to be the case. And solar is between five and 10 times cheaper than...</p><p>and coal oil now. So actually you can do the reverse conversion and take the efficiency and still win economically.</p><p>Theo Jaffee (02:51)</p><p>Hmm. So yeah, you're like a huge fan of solar and you've written about why solar is better than wind, why it's better than nuclear. But why is solar better than nuclear fusion? Because fusion would use much less land and it has like almost none of the drawbacks of fission. So why not?</p><p>Casey Handmer (03:08)</p><p>Well, solar is nuclear fusion. The reactor's in the sky and it comes up every day. And actually, if you think about the platonic ideal of a fusion reactor, you have a completely free heat source, some kind of glowy gas thing that's confined by magnetic fields or something. But you somehow figure out how to build that completely free. And then you have a 25 to 30 % direct energy conversion efficiency system that sits outside that magnetic containment system with zero moving parts, no turbines, no steam handling, no neutron embrittlement, no</p><p>Theo Jaffee (03:10)</p><p>Well.</p><p>Casey Handmer (03:38)</p><p>nothing. And then some intervening filtering or shielding of some sort so you don't end up with neutron products and neutron embrittlement and other problems. And then basically the platonic ideal of this energy conversion system is basically just a solar array or a solar panel. And you say, well, why don't we just delete the entire reactor and put that thing outside where it's in the sun and it works. This is slightly tongue in cheek, obviously. I want to say, for the record, I think fusions are a really cool technology. And I</p><p>really hope that we figure out how to make it work but in order for Fusion to compete with Solar, this is what it has to do.</p><p>First of all, it has to quite a great work. We have to achieve Q greater than one in a real world nuclear fusion reactor. Then we have to achieve Q high enough that we can extract heat from the reactor to boil water or otherwise allow direct conversion of electricity. Then we have to do that at a good enough price that we can compete on price with other forms of energy that are notionally available 24 hours a day, which is on the order of 50 bucks a megawatt hour. Then we have to be able to produce these reactors at a sensible pace. And I'm talking like,</p><p>at least hundreds of gigawatts of production capacity of these reactors per year. And that's a really tough problem, right? You have to solve the science problem, then you have to solve the economic problem, then you have to solve the manufacturing problem. And you have to do all of that before solar solves the problem for all of us anyway, which it's pretty close to doing. So in terms of the time window for this to occur, if fusion arrives in 2050, that'll be too late, really. It will not be able to, even if it's able to compete on the cost, I think it'll find it has very marginal markets because solar will already kind of own everything. If it becomes extremely compelling on cost, which I think is quite a lack.</p><p>because just diffusion reactors are inherently more complicated than solar arrays. Then, you know, we can always rip up the solar arrays and put them in a hole in the ground and switch over to fusion, and that would be a pretty cool thing as well. I think that would be a win -win situation.</p><p>Theo Jaffee (05:27)</p><p>So is there any possibility that, I've seen this argument before, so I wanted to get your take on it, but could carbon emissions actually be good? Like in a world that we don't reduce them, first of all, lots of climate predictions from like the seventies have been like wildly pessimistic compared to what's actually happened. This has been like a consistent theme and CO2 boosts plant growth, including for agriculture. Like could the offset of like higher wet bulb temperatures in poor countries be...</p><p>Casey Handmer (05:38)</p><p>Mm.</p><p>Mm -hmm.</p><p>Theo Jaffee (05:56)</p><p>essentially like counteracted by like it being easier to grow food. And then the relationship between CO2 emissions and warming is logarithmic. So like you can increase CO2 a lot and it only warms the planet a little.</p><p>Casey Handmer (06:06)</p><p>Yeah.</p><p>Yeah, all these things are true enough.</p><p>And it's not controversial that plants are growing like crazy now because of longer growing seasons and higher CO2 levels in the air. And I think if you had the choice to set the CO2 level in the atmosphere at any level we desire, which we're going to have that ability to do in the next decade or two. So it's probably figure out now what we think a good set point is. I would say about 350 ppm is quite good. So that means that the grasslands are no longer starving forests for CO2 availability due to the inherent inefficiency.</p><p>of different forms of photosynthesis or their CO2 uptake capability. You also get a little bit warming, slightly milder winters, particularly in the north, which generally speaking, cold kills 10 times more people than heat. And then of course we've got solar powered air conditioning which can help make the hotter areas more livable in summer. But the default plan, which is we just keep on cranking up CO2 in the atmosphere by two or three or four ppm every year, is absolutely crazy. And I think that we're totally</p><p>totally playing Russian roulette here. Because there will definitely come points when the atmosphere and surface gas exchange mechanisms destabilize in far more catastrophic ways than they already have. And so here I'm talking about once the Greenland ice sheet or the Western Antarctic ice sheet starts moving, we will not be able to stop it. At that point, our coastal cities will be flooded and there's not a damn thing we can do about it. And much of our arable land as well. And winding that back will take tens of thousands of years. So we should probably not do it.</p><p>that. And also if we get to the point where we melt the permafrost and release a lot of biogenic methane that's trapped there, then that will also really put a thumb on the scales that will require us to take much, much more drastic solar geoengineering response to that in order to keep a lid on temperature. It's kind of a crazy thing, but what Terraform is doing is finding a carbon neutral supply chain source for hydrocarbons for everyone forever. We will be there in 20 years. And so the critical thing is, A, find some way to stop heat from</p><p>getting out of control in the next 20 -30 years and then find some way to wind back the existing CO2 levels to a more sustainable maybe around 350 ppm or thereabouts in that intervening time so that we don't have to do solid resource management, solid radiation management with sulfur in the atmosphere or something forever and then ultimately turn that process off as well once we're on the fully synthetic hydrocarbon supply chain.</p><p>Theo Jaffee (08:38)</p><p>On the topic of solar radiation management, we already asked why isn't everyone doing what Terraform is doing, but why aren't anyone doing solar radiation management with sulfur? It seems like it would be relatively easy for someone with the resources of a nation state. Yeah, it's like a few billion dollars, right?</p><p>Casey Handmer (08:53)</p><p>It is. you could do it as a retired Googler. Yeah, yeah, yeah, less than that, less than that. That's astonishingly cheap. And I actually don't know for sure that people aren't doing it on the sly. I know of a number of entities that are doing it officially, but at a relatively small scale, but doing it publicly at a small scale. It seems insane to me that we've kind of built this.</p><p>cultural precedent of the precautionary principles just in the last generation or two that will, if we don't kind of agitate about it, it will end our civilization because we'll kind of by default prefer stasis and stasis will take us off the edge of the cliff.</p><p>But the thing is, you can go from nothing to a full deployment of SRM in less than a year if you really want to. There are numerous countries on Earth that remove sulfur from fuels and stockpile it. You don't have to burn very many of those and you basically have the effect that you want.</p><p>my position would be that we should start doing it now incrementally so that we can measure the effects more precisely and avoid kind of the impacts of catastrophic or very rapid changes in solar radiation hitting the surface of the Earth. And I think people are waking up to that right now. I think that this was, compared to say three or four years ago, it's much, much more mainstream and we should continue to talk about it.</p><p>Theo Jaffee (10:14)</p><p>Yeah, but why hasn't, like, China or someone just spent, like, a couple billion dollars?</p><p>Casey Handmer (10:16)</p><p>We don't know they haven't. We don't know they haven't.</p><p>Theo Jaffee (10:24)</p><p>they had when we see effects on global temperature.</p><p>Casey Handmer (10:27)</p><p>Well, in some ways, the largest short -term effect on global temperature that we've seen is the effect of desulphurization of coal and marine fuel.</p><p>So just this last couple of years, we've had incredible heating in the North Atlantic, and it seems like at least half of that signal is accounted for by taking sulfur out of marine diesels. So in some ways, we've taken the accidental geoengineering that we're doing with CO2 emissions, which we've done mostly over the last 100 years, and we've turned that up to 11 by taking the sulfur out of fuels. And there are good reasons to take the sulfur out of fuels. Sulfur is acid and so on, because acid rain and problems, respiratory problems and so on in port areas.</p><p>of environmental consequences. But it is also true that it roughly masks half to three quarters of the effect of CO2 -induced global warming. And by taking it out, it comes out of the atmosphere in a year or two, and we're actually getting the full brunt of it now. And we're in the midst of a months -long heat wave in India right now. And there'll be mostly ones across the American South and much of the world. And I think severe flooding is occurring all over the world at any one time. There's always severe floods.</p><p>occurred somewhere and I think people are gradually waking up to the fact that...</p><p>that we need to be a little bit more deliberate about how we manage our use of fossil hydrocarbons and the sulfur that often comes with them in order to make sure that we don't inadvertently rip off yet another band -aid and make this situation far, far, far worse than it currently is. And it's easy to kind of talk about this in the abstract, but sooner or later we'll have a mass casualty event, right? And the hypothesis that we've seen in some science fiction that's been published recently is that sooner or later, India or China or countries,</p><p>in Africa or someone will experience mass casualty event and then they will unilaterally engage in solar radiation management and there's not a damn thing that anyone will be able to do to stop them unless they want to you know, decapitate the government via force of war which I think is hopefully off the table. And so what I'm saying, what I'm continuing to say is like the United States ideally should be a country that you know legalizes and promotes this technology and uses its incredible array of you know NASA based space sensors and so on to monitor and regulate and understand</p><p>the effects of this technology so that a, we never have the mass casualty event at all. Ideally, let's not kill tens of millions of people for no reason. And b, by avoiding that mass casualty event, we avoid a kind of panic response that we'd otherwise have. I didn't mean to bang on my SRM for like 15 minutes of your podcast, but it's something I'm quite passionate about.</p><p>Theo Jaffee (12:56)</p><p>No, Serum is cool.</p><p>Casey Handmer (12:58)</p><p>There's a company called Make Sunsets. So I'll give them a shout out. Super cool guys have started that and they're doing, you can go on their website and you can buy basically heat offset credits. They'll launch a balloon for you and then send you a certificate. And I think that, you know, they're tiny, tiny operation right now. But I think that's a step in the right direction.</p><p>Theo Jaffee (13:20)</p><p>So you talk about Terraform in the Terraform blog, kind of like a thing that will happen almost by default as solar gets cheaper. Like, yeah, you know, as solar gets cheaper, we'll tile the desert with solar panels and we'll build like tens of thousands of factories at a rate of like one every few days for like decades.</p><p>Casey Handmer (13:29)</p><p>Yeah.</p><p>Yeah, we might build some bigger factories and fewer of them, but yeah, that's the basic idea.</p><p>Theo Jaffee (13:44)</p><p>Yeah, but is this actually true with the level of social organization and anti -builder mindset that we currently have in America? It seems like we can't outbuild China right now, not because China has more capitalism or a stronger economy even, but just because they allow companies to build things like high -speed rail or massive solar farms.</p><p>Casey Handmer (13:53)</p><p>Mm.</p><p>Yeah.</p><p>Well actually when it comes to manufacturing, China is probably more capitalist than the United States. And I think that's often lost in the mix. China is initially a communist country, but really it's an authoritarian dictatorship and its economy has been, since stone -shoving deregulated the command aspects of the economy, it's been as capitalist did not more than the United States, at least in the kind of private sector manufacturing sphere. But that said, sorry, the United States is currently experiencing an unprecedented manufacturing boom. There's more money.</p><p>and more factories being built than at any time in history, including the lead up to World War II. And so I think this idea that all we can do for China is no longer a valid notion and that we should probably prepare ourselves for another decade or two of US manufacturing dominance, particularly with higher tech and automated manufacturing.</p><p>Theo Jaffee (14:55)</p><p>Well, that's the thing. We have the manufacturing capacity. But it doesn't matter if you have the capacity to build millions of solar panels if you can't actually get the permits to put the solar panels in the desert.</p><p>Casey Handmer (15:04)</p><p>Hmm.</p><p>Well, that actually is kind of the challenge, if that makes any sense. But actually, I would say that in contrast to SRM,</p><p>oil and gas production, the United States has always been kind of devolved between like tens of thousands of family owned independent oil drillers and oil producers. And so there's kind of an economic precedent for small scrappy startup to enter this space and all the economic framework is already in place to make that work, which is wonderful as far as I'm concerned. The challenge as far as like mass scale solar deployment is that there is currently a very expensive and time expensive process required to deploy solar arrays whose major effect is that it just</p><p>kills tens of thousands of Americans every year for no reason from the effects of legacy coal plant power production that would otherwise be displaced by new solar development. But the thing is, when it comes to oil and gas, the outcome is never in doubt. You might delay it, you might slow it down, but sooner or later the infrastructure will be built and will be deployed because the economic arbitrage, if you like, the pressure between the amount of value that can be generated versus not becomes so great that even the organs of state are unable to affect it.</p><p>resist them. And that actually has been a continuing frustration for opponents of oil development and offshore oil drilling and export terminal development and refineries and so on. But it will actually be a source of, I think, major joy for much of the justifiably environmentally concerned activist community when they realize that the same economic forcing function is now fighting for them as opposed to against the ideals that they hold dear. And we're already seeing that. So,</p><p>So that's kind of exciting. It's just economics. At the end of the day, if you found some way of making some fundamental product that everyone needs, and everyone pays a lot of money for, and you found some way of making it three times cheaper, well, that's just what's going to happen. And that's what has happened essentially in every country on Earth since forever. It doesn't matter what their economic system is. Everyone needs oil.</p><p>Theo Jaffee (17:07)</p><p>So a lot of people are now talking about like, America can't build. We no longer have this mindset. We have this, yeah, like what you talked about with the precautionary principle. But the reason that we developed the precautionary principle with building industrial mega projects is because we may have been getting to a point in the beginning of the 20th century where people may have been too far in the other direction. So like the famous like 1950s plan,</p><p>in the San Francisco Bay Area to drain half of the bay and replace it with reclaimed land with shipping channels and stuff. A lot of Californians didn't like that because it would have meant significantly impacting the ecology of the area. So what's the right balance, do you think, of building megaprojects and preserving the environment?</p><p>Casey Handmer (17:54)</p><p>Yeah.</p><p>Yeah, well that's a good question. At the time, of course, San Francisco Bay was mostly just a dump, right, like there are a whole series of hills and islands along the shore of the peninsula which are, you know, remediated dumps.</p><p>And one of the consequences of these kind of major thought thinking plans for provisioning additional development space and so on is that San Francisco is now by far the richest city in the history of humanity with some of the oldest and most dilapidated housing stock at some of the most highly unaffordable prices. So high in fact that it cannot function as a city because the diversity of workers that you need to make a city function cannot afford to live there. And so it's in, you know.</p><p>I don't know, like San Francisco has always been a city that's capable of reinventing itself and rebuilding itself, but it's never been more clear that like something desperately needs to change there. Anyway, I don't want to get kind of on that horse too much. But yes, I'm totally, yeah, yeah. So I totally, you may well be, I totally agree that,</p><p>Theo Jaffee (18:51)</p><p>Yeah, we way overreacted. And I think I'm on reclaimed land in San Francisco right now. Yeah.</p><p>Casey Handmer (19:04)</p><p>that the precautionary principle exists for a reason and that environmental protection regulations exist for a reason and that the fraction of the Earth's surface that has not been affected negatively in some way by human activity is relatively small and that we should probably do what we can to preserve those areas. But at the same time,</p><p>we are an economy that's not standing still, right? And so doing nothing is a choice. And if you do nothing about kind of retrofitting and replacing legacy energy production infrastructure that we all depend on to avoid starving to death, then you continue to accept the consequences of that in terms of the environmental and health impacts of those technologies, which we know now are many, many, many times worse than the impacts of putting out a solar array. And the nice thing about solar arrays is that, unlike pouring a bunch of concrete and installing a nuclear</p><p>power plant or something like that. If you decide after 30 years you're done with it and you want to replace it, then you just derack it, rip the racks out of the ground and it goes back to being pretty much exactly the same as once before. You don't have to break up any any subsystem, you don't have to decontaminate any areas, you don't have to do a bunch of like you know settling ponds, cooling ponds, ash tailing ponds etc etc which are standard practice for for coal generation, for nuclear generation etc. And so I think that when it comes to sensible changes to environmental permitting</p><p>regulations, there should be a recognition of the fact that if you were displacing existing use, which is far more damaging, then you should probably get a pass. And if your displacement or if your thing you're building is extremely easy to undo, then you should also get a pass that's proportional to that. So for example, it is completely conceivable when you are developing a solar array that you could put cash in escrow that would completely pay for the cleanup and remediation of that site.</p><p>whereas that is impossible with almost any other kind of development.</p><p>Theo Jaffee (20:53)</p><p>Yeah, I think reading your blog has turned me into like much more of a solar bowl. And this is still like something that very few people, even on like optimist, like, yak builder Twitter are talking about. They're all like, we need to build more nuclear, nuclear, nuclear, nuclear. And like, they're kind of right. And that nuclear is probably better than like coal, but yeah, like not many people are talking about solar.</p><p>Casey Handmer (21:12)</p><p>Yeah, for sure. I mean, that's quite clear. Look at what's happening in France, right? Like, France for a variety of interesting...</p><p>of socio -political reasons decided that it wanted to have energy independence and also large sources of its own supply chain for fissile materials. And they went and did it, and the results speak for themselves. And I think that if you were wondering about energy policy in the 1960s and 1970s, and you had sufficient access to uranium deposits in your own country and also a large industrial base, so you can support that technology, then it's a no -brainer to have then developed nuclear in the way that France did. That said, we are now at the</p><p>point where it will suit me cheaper for France to decommission or turn off their nuclear power plants or at least mothball them and deploy new solar and get their electricity from solar. And I think that relatively few people have run those numbers and seen that that's the case. But it is already the case and has been the case for many years now that it is cheaper to build and operate a new solar plant than it is to continue to operate an existing fully depreciated coal plant, for example. And that is also the case with gas beaker plants. If it is not</p><p>already shortly be the case with gas -combined cycle plants. Solar has overtaken wind in the last few years as well and so it's just a matter of time. It's like the Grim Reaper meme as it works its way down. Because it's true that a fully -appreciated nuclear plant you don't have to pay for its construction cost anymore but...</p><p>But it's also true that those materials and systems don't last forever. And so, for example, about two years ago now, France encountered an issue with their reactors that affected lots of them. It took something like a third or two thirds of them offline for a season as they repaired something to do with corrosion in an exchanger. And that's not cheap. It's extremely expensive to get these things online and working indefinitely. It's certainly cheaper than losing an energy war with Russia, Germany if you're listening, but it's certainly not free.</p><p>Theo Jaffee (23:10)</p><p>So let's talk about Giga projects. You've written a lot about colonizing Mars. And obviously I'm no expert on planetary scale terraforming, but Kurzgesagt, the YouTube channel, has a video that I think is really interesting. That's about terraforming Venus instead of Mars, because Venus has more solar energy, it's closer to the sun, has similar gravity to Earth, it's bigger than Mars. And so their plan is, step one, you make a giant, annular mirror system.</p><p>Casey Handmer (23:16)</p><p>Yes.</p><p>Mmm.</p><p>Theo Jaffee (23:39)</p><p>that directs solar energy away from the atmosphere that freezes the CO2 atmosphere of Venus. And then you use robots and mass drivers to shoot the excess CO2 and nitrogen into space. Because obviously like too much CO2 means like humans can't breathe on it. And then you fire water in the form of ice from Jupiter's moon Europa using again, robots and mass tethers or mass drivers and space tethers.</p><p>Casey Handmer (23:42)</p><p>Yep.</p><p>Mm -hmm.</p><p>Theo Jaffee (24:07)</p><p>And then you will add more mirrors so you can heat up the planet gradually without torching it, because if you just remove the existing mirrors, then it would get like grilled by the sun because a Venus day is so much longer, Venus rotates so much slower. So one side of the planet would get cooked. And then you'll add like trillions of cyanobacteria, which will photosynthesize. It'll turn that CO2 into oxygen and it'll fix the atmospheric nitrogen to usable nutrients and then grind down the surface into soil and add plants, trees and animals.</p><p>Casey Handmer (24:20)</p><p>Yeah.</p><p>Theo Jaffee (24:37)</p><p>So that would take a very long time to actually work all the way. But why do we hear so much about terraforming Mars and very little about terraforming Venus?</p><p>Casey Handmer (24:48)</p><p>Yeah, so you've asked the right person. And some of you will remember there is a platform called Quora. And actually one of my first viral posts on Quora was like, what is easier to terraform, Mars or Venus? So maybe you should dig that out and take a look at it. It was like almost 12 years ago now, something like that. It turns out that...</p><p>terraforming Mars is about a billion times easier. That's the fundamental reason. Actually, I was involved in a workshop recently where we calculated that you could probably achieve a degree of temperature rise, a Kelvin degree of temperature rise on Mars for about a billion dollars of marginal investment. So if you wanted to heat Mars up by...</p><p>actually about a billion dollars per year, something like that. But if you wanted to heat Mars up by, say, 40 Kelvin or something, so it's just about freezing, then that's actually quite affordable. That's much less than, say, Google's cash flow. Whereas the cost to deploy a planetary -scale mirror...</p><p>on above Venus in the Venus Sun L1 point and then wait for the atmosphere to freeze out, which will take about 140 years, and then deploy the terawatts of nuclear reactors onto the surface that you would need to use the mass drivers to fling stuff. Actually, if you just want to get rid of the CO2, you can just bury it. You don't actually have to fling it off the planet. But if you wanted to use it as mass to speed the planet's rotation up, then you could fling it off with mass drivers, which would be pretty funny.</p><p>And then actually, as far as getting water goes, I would probably advocate just drilling holes and getting it out of Venus's crust because there's way, way more water in Venus's crust than you could get from even an entire moon of Jupiter and it's right there. You don't have to shoot it across the solar system.</p><p>Yeah, and then you have to design an atmosphere that is able to support life, but is also significantly more thermally transparent than Earth's atmosphere. So Earth's atmosphere is responsible for something like 15 Kelvin of heating, which prevents the surface of Earth from being largely frozen. But sometimes in the historical past, it was, or prehistoric past, it was frozen like the entire planet basically glaciated in periods called snowball Earth. But on Venus, you would almost certainly have to have some kind of shade system permanently to avoid</p><p>once again undergoing runway warming.</p><p>Theo Jaffee (27:05)</p><p>So if you just bury the CO2, then like, couldn't the CO2 just escape back into the atmosphere if you have like a volcanic eruption or something?</p><p>Casey Handmer (27:07)</p><p>Hmm.</p><p>I mean, impressive, but...</p><p>But effectively you treat it as landfill, right? So you just bury it deep enough and it will be stable at that pressure. It's the same idea that people are talking about with CO2 injection for carbon capture and sequestration here on Earth. It's probably somewhere, but we'll come back out. But bear in mind, if you built the infrastructure that's necessary to go and bury how many quintillion tons of CO2 is in Venus' atmosphere underground, and you get it all underground and you build the surface and you find out a rate of like a trillion tons a year, well, that's...</p><p>millionth of your current industrial power to bury that all again. So you just have to keep up with emissions and otherwise stabilize the atmosphere. You need CO2 in the atmosphere anyway. You just don't want like a 200 bar hot house, sulfuric acid clouds and stuff. I'd say like as a destination, some people are very team Venus. I'm a bit dubious about it because the gravity is so high. So to get from Venus back to Earth is almost as hard as it is to get from Earth to Venus and it's extremely hard to get off Earth. So give yourself a break.</p><p>and just go to a lower gravity world first.</p><p>Theo Jaffee (28:22)</p><p>first, so like we should eventually terraform Venus.</p><p>Casey Handmer (28:25)</p><p>If we find that we have a shortage of planetary surface area, then yes, but I just don't, I don't know if that's going to be a major concern for us.</p><p>Yeah, in some ways, like...</p><p>Yeah, it could be done. Maybe Venus is for building an orbital actually. Maybe you use Venus to build, basically take the whole planet apart and build giant space station instead. Because you've got rotating space stations about a thousand times more surface area per unit mass.</p><p>Theo Jaffee (28:51)</p><p>like an O 'Neill cylinder.</p><p>Casey Handmer (28:55)</p><p>Yeah, some kind of giant ring, I don't know. There's this concept from the Banks, in Banks' books called the orbital, which is a ring that is so large that it's a circular period that creates gravity is equal to 24 hours. And it turns out that it's probably impossible to build one of these out of materials that we know how to build.</p><p>because it would break apart from the force. So either you make it turn slower or you make it smaller, or a bit of both. But I would be very surprised if I lived long enough for this to be something that's really occupying a lot of brain sweat for me, worrying about. I think that solving the set of problems required to do something meaningful on Mars is a lifetime's worth of incredibly intensive effort.</p><p>Theo Jaffee (29:49)</p><p>But in like one human lifetime from now, assuming all goes well, we should have like a permanent human presence on Mars. Why would people actually want to live there? You know, it's cold, it's really far away, the wifi is bad, there's lots of latency between Earth and Mars. There's not much lighting, not much natural lighting at least. So what would make people want to go to Mars?</p><p>Casey Handmer (29:55)</p><p>Yep.</p><p>Yeah.</p><p>Yeah, my wife wanted to go work in Antarctica and she did. She spent most of 2016 overwintering at the South Pole where the Wi -Fi was worse than terrible and it was very cold. The air was breathable but very cold obviously. Food selection limited. Company limited. It turns out that most of us kind of prefer a comfortable life but some people are just kind of...</p><p>pioneers one way or another and so I don't think there'd be a shortage of people who want to do that and even if you look at the Venn diagram of people who want to do that and people who have the skills to make a meaningful contribution I think it would be no shortage of people.</p><p>Theo Jaffee (30:47)</p><p>So you did a lot of this Terraform investigation while you were working at NASA JPL, but you're also like a huge fan of SpaceX and Starship and Starlink and like not a huge fan of the space launch system. Although side note, I did actually watch the Artemis One launch live from Florida and it was really, really cool. Yeah.</p><p>Casey Handmer (30:59)</p><p>Hehe.</p><p>Yeah.</p><p>That's cool. Yeah, I mean, I'm Team Rocket when it comes to like lighting the candle. Don't get me wrong. I've never seen a rocket launcher. I was like, I feel worse as a person having seen that. But the thing is like, you know, as a way of getting dopamine, as a way of entertaining people, you know.</p><p>I think we need to be circumspect about the fact that SLS is, for a whole variety of reasons, that I've exploited in depth on my blog and other people have too. And it's fairly openly understood now. It's an extremely expensive, extremely wasteful, extremely dangerous way of going about solving these problems. And I actually think it...</p><p>It speaks poorly to US technical integrity to continue to maintain this polite fiction that it is a good idea. It's quite evidently a terrible idea and sooner or later it will kill someone and then it will be impossible to deny but someone will have died. So, yeah, I think that...</p><p>It's one of these things, it's a bit like Fusion actually, in the sense that maybe if it had done what it promised to do, which is reuse parts from the shuttle to reduce complexity and development time and actually got to the launch pad and launched and achieved a higher launch cadence within a few years, then maybe it would have had a window of 10 or 15 years where it could have made a meaningful contribution. But instead it's just been this giant vampire squid sucking the money out of NASA and producing almost nothing in return. And I think we need to be really pragmatic about that.</p><p>Theo Jaffee (32:26)</p><p>Yeah, so given that, why did you work in NASA instead of SpaceX?</p><p>Casey Handmer (32:31)</p><p>That's a good question. So when I worked at NASA, I worked at JPL, which is the Caltech operated deep space robotics center. And it's not related to the development of the SLS or the human space flight program. And I worked on GPS related technologies, which are critical to national security and scientific applications and also studying global warming. I'm quite proud of the work that we did there. For better or for worse, it had its challenges. And I also got to participate to a limited extent with the Mars Exploration Program, with the rovers and stuff there.</p><p>And you know, LA is where I happen to live. So that was a lot of fun. As far as SpaceX goes, well, I've written a blog post where I talk about my professional failures. And one of those I would say is that despite the fact that I've interviewed at SpaceX a number of times, I've not been invited to work there. And I think that reflects well on the recruitment process, frankly. And I think that maybe at some point in the future, I might re -examine that. But SpaceX is a place that requires a level of commitment that is hard to square with my current commitments to young children. And I have to keep that in mind.</p><p>And so in many ways, part of the reason I went and did Terraform is that I wanted to build a technology that had dual use applications, both here on Earth where it solves a major energy abundance challenge and a human welfare challenge, but it will also give me and the team here a major leg up when it comes to building critical infrastructure for the Mars base, which with any luck we would be able to respond meaningfully to challenges or requests by SpaceX or NASA to participate in that technology development program.</p><p>Theo Jaffee (33:59)</p><p>Yeah, so maybe once you solve the small task of global warming and abundant energy, then yeah, maybe you can do SpaceX.</p><p>Casey Handmer (34:06)</p><p>Yeah, well, they're not mutually exclusive. So.</p><p>So it actually turns out that like putting humanity on a much, much firmer financial, economic and energy, ecological footing drastically unlocks huge numbers of resources that can be used then to explore space. I think it's very hard to say that a future where in 2050, a large fraction of the world's population is starving to death or being boiled to death from heat waves is a world where it'd be easy to mobilize the sort of resources that you would need to do a public -private mass city.</p><p>Whereas one where actually unlocking cheap energy has put humanity back on the Henry Adams curve and we're doubling the size of the global economy every 15 years would be one where it would be pretty easy to liberate those kind of resources. So I think these are quite mutually compatible.</p><p>Theo Jaffee (34:57)</p><p>Yeah, I mean, this is like the ultimate Elon Musk master grand plan, no? Like he started SpaceX first before he did Tesla. It seems like he's always cared more about SpaceX than any of his other companies, including Tesla. It seems like, yeah, like if he could only save, if he could only do one company, he would, my bet would be on SpaceX.</p><p>Casey Handmer (35:20)</p><p>I think that's probably a fair assumption. Obviously, he has to be somewhat cryptic in his personal remarks, but I think that one of the things that's being lost in the current discussion of whether or not he deserves his $55 billion dollar pay package over the last six years of hard work he did to Tesla, despite the fact that it was approved by 80 % of the investors and the information relating to it was publicly available, was that...</p><p>was that for most CEOs, if they don't get the gig to work at Tesla, they have to work somewhere else. And for most billionaires, they can work on a beach. But really, there's a good argument to be made that what Elon set out to do at Tesla from 2018 was impossible. Everyone thought it was impossible. The idea of this pay package that he voted himself or that the board put together for him and that was approved by the shareholders was kind of a Hail Mary pass, right? But it was also something that preserves enough upside to make it worth Elon's work.</p><p>to go and like break his brain working, you know, 120, 140 hour weeks on that and also at SpaceX and a few other companies, but mostly on Tesla to take it to that next level. And if he hadn't done it, it wouldn't have been there. Tesla would be, you know, doing great business selling Yard Model 3 here and there, but they would not have stood up the factory in Texas, they would not have stood up the factory in Germany. And we'd be so much poorer as a civilization. And so, you know, one of the nice things is once you have a modicum of wealth, you can actually negotiate from a position of strength when it comes to what you're doing.</p><p>you want to spend your time doing. But yeah, I think that Elon understood a long time ago that he has goals that cannot be achieved cheaply and a necessary precondition for doing something interesting on Mars is having a rocket that is the complete opposite of the SLS, which is to say high flight rate, completely reusable, low cost, high reliability, and much, much simpler architecture, and then also having...</p><p>basically first dibs on the 100 ,000 smartest engineers on earth and across those ideas I think it's done extremely well.</p><p>Theo Jaffee (37:18)</p><p>So why is Elon like the only person essentially to have like multiple extremely successful companies? Like you could think of some exceptions to this, maybe like George Hots who has Comma, which is like open source self -driving and then Tiny Grad, which is like neural networks. But like there's nobody who actively runs like more than one multi -billion dollar company. You think that.</p><p>Casey Handmer (37:32)</p><p>Yep.</p><p>Mm -hmm.</p><p>It's extremely rare. It's extremely unusual. And it's even more unusual given the ambition and the scale and the technical difficulty of what those companies is doing. This is not like someone like doing serial entrepreneurship of three SaaS companies and having good exits three times, which is good for them. This is someone who set out to do in parallel the two hardest things that were so hard that the smart money in the field maintained that it was impossible for more than a decade.</p><p>Despite their various successes and advances along the way. And I think that, you know, I have a blog post that's saying Elon Musk is not understood, and I don't understand it. I have very limited insight. But like, I think that more of us should ask the question, how is this possible? Because it's like, it's obviously possible. It's permitted by the laws of physics. But how is it that Elon was able to do this thing that plenty of other people have set out to try and do and have not succeeded or would not even bother to try it because they convinced it's impossible? And...</p><p>Theo Jaffee (38:22)</p><p>Yeah, I love that one.</p><p>Casey Handmer (38:41)</p><p>Yeah, blows my mind. And it's also, I should state for the benefit of listeners, I met Elon in I think 2011 and I was suitably impressed as many are and I decided to put some of my limited, very limited savings at the time into Tesla stock. This was before the launch of the Model S and that stock today forms the basis of my personal wealth. I had to sell some of it to pay for a green card to get anything out and stay in the United States, which was painful, extremely. I kind of lost well over $100 ,000 in terms of</p><p>today's stock price. But that essentially gave me the freedom over the last decade to break out and start my own company. And if I didn't have that and with young children and a mortgage, I don't think I would be able to take this risk. So I'm incredibly grateful. And what did I do for that? Nothing. The value of stock in Tesla was built by the tens of thousands of engineers and technicians and so on who sweated blood for decades to make that happen. And all I did was got lucky. So I never take that for granted.</p><p>Theo Jaffee (39:42)</p><p>So speaking of Elon, Elon famously back in like 2000 something, 2006, 2008, wrote his plans for the Hyperloop and then you worked on a Hyperloop company. And then...</p><p>Casey Handmer (39:54)</p><p>Yeah, 2015 I think that was released. Yeah.</p><p>Theo Jaffee (39:58)</p><p>yeah. So given that there is no hyperloop between San Francisco and LA, like, why not? And what would it take to work now? Could it work now?</p><p>Casey Handmer (40:05)</p><p>Hmm.</p><p>The short answer is no. The company that I worked for was finally defunct earlier this year. I'm generally cautious about speaking about what happened there and I kind of have a probably perpetually unpublished blog post about it because really the team that was assembled there was exceptional and they did really exceptional things. And they basically solved hundreds and hundreds of next to impossible engineering challenges as you would expect them to. But it turns out that hyperlapse is a concept.</p><p>really struggle for all the same kinds of reasons that high -speed rail does. In some ways it's worse. Because it turns out that the expensive part of high -speed rail is not really like rail wear or, you know...</p><p>right away or something like that. But the expensive part is that almost all the world's cities that could be connected by high speed rail or high glue have sufficient terrain difficulties between 80 and 95 % of the CAP extra -criticality systems is just spent moving rocks, like digging holes, digging tunnels. And that's one of the reasons I think that Elon went off and founded Boring Company, was because just the price of tunneling seemed so absurdly high. And so Boring Company's made some advances, but they certainly haven't like,</p><p>been on the Moore's law of tunneling cost, right? They haven't been able to consistently halve cost every 18 months or something like that. So when it comes down to it, and especially if you go like a first principles analysis, like, okay, so what's the energy required to smash up a four meter diameter tunnel between, I would say 20 or 30 % of the land between here and San Francisco, between Los Angeles and San Francisco, in terms of the sheer energy required to break those rocks up and move them out of the way? And how does that compare to the energy required to push the air molecules in the stratosphere out of the way as a</p><p>plane flies through and it's like 10 ,000 times more energy easily. So for the energy required to build one tunnel that shows one pair of cities, you could fly 10 ,000 flights. And also the machine that you built to fly those flights can fly to any of 20 ,000 runways worldwide, point to point. So yeah, it just turns out that like.</p><p>I think aviation as a technology is underappreciated for just how revolutionary and incredible it is. It blows my mind when I jump on the Southway 737 to go off to a business meeting or something that most people have their shades down and just kind of blissed out, you know, staring straight ahead, getting drunk, whatever. I always get a window seat. I get a window seat out the back.</p><p>Theo Jaffee (42:22)</p><p>Yeah, I tweeted about exactly this.</p><p>Casey Handmer (42:26)</p><p>You know, usually on the on the shady side of the plane. So like the north facing side of the plane So the sun's not in my eyes and I just stare out the window like a like a grinning idiot at the Landscape as it goes by and of course flying out of LA What I'm looking at is terraforms future hunting grounds state after state after state of like mostly empty You know economically unproductive parched land that is just getting hot in the sun that I want to put solar rays on and and turn them into you know, just a river of wealth for for the people who live there and</p><p>It's beautiful, it's incredible. And I think this is, the thing that blows my mind is that aviation has been an extremely compelling technology since the 1930s. We're almost going up for 100 years of aviation being quite clearly the obvious way to do things. And yet, for some reason, people somehow think that the solution is to go really close to the surface of the earth so that you have to drill lots of holes in the ground. Planes are amazing. We should just figure out how to make planes cheaper and faster and better.</p><p>Theo Jaffee (43:25)</p><p>Yeah, honestly, I just flew from Phoenix to San Francisco, like a couple of weeks ago. I had a window seat and I was thinking like the exact same things. I was flying over the Mojave Desert and then the California Central Valley and then over the mountains on the coast. And I was, especially with the desert, I was also thinking solar panels.</p><p>Casey Handmer (43:25)</p><p>Thank you.</p><p>Yeah, and I don't want to give the impression that like, I'm just going to like take over Nevada and pay for the solar. The actual amount of solar that you need to make a shitload of money doing synthetic fuels is relatively small, because the economic productivity of solar per unit area is about 10 ,000 times higher than agriculture.</p><p>Sorry, that's not entirely true. The energy productivity is about 10 ,000 times higher. The economic productivity is between 100 and 1 ,000 times higher. And so for, yeah, it's pretty good. So like in the United States, we have like 50 million acres of corn production is devoted to bioethanol. And that bioethanol is mixed with gasoline in some places and used for a handful of processes. But like it's like single digit percents at most of the US's fuel consumption mix. If instead you took those 50 million acres of like prime,</p><p>Theo Jaffee (44:12)</p><p>pretty good.</p><p>Casey Handmer (44:35)</p><p>agricultural fertile land and you reforested them and turned them back into prairies and put the bison back on there and the deer and the mountain lions and cougars and whatever and rewilded that land right like that would be an obvious ecological win and then take 50 million acres of parched</p><p>desert fried land out in the American West. It doesn't even have to be like virgin desert land. You have easily 50 million acres of like basically brownfields where it's already been disturbed. And you throw solar arrays on that. The fuel productivity is like between 20 and 50 times higher than the best corn land in the United States. And you end up producing like more than 50 % of US's oil and gas consumption just from those 50 million acres. Isn't that amazing?</p><p>Right, so like it's a win -win -win. And then you say, okay, what's the impact in the place where you're putting the solar array down in the desert? Well, depending on how you do it, shading the ground actually improves moisture retention, reduces soil temperatures, and actually allows things to grow. So like there's these absurd photos you can find of solar arrays that were developed maybe a decade ago in Nevada or Southern California or Arizona, where like in the solar array, they now have a problem with it, they have to like run around with the mower because trees keep growing. Like trees haven't grown in this landscape for 10 ,000 years. And obviously 10 ,000 years ago in the Pleistocene.</p><p>and it was much, much wetter and there were trees and forests and mammoths and things, but more recently it's completely desertified. It turns out as soon as you shade the land, that trees start growing and you're like, hmm, how curious. We are terraforming the desert with solar rays.</p><p>Theo Jaffee (46:02)</p><p>So what's the geopolitical impact of that? Like does it turn out that, actually it doesn't matter if all the Middle Eastern Gulf states run out of oil because they can just build solar on the desert?</p><p>Casey Handmer (46:13)</p><p>Yeah, so there's actually a bit of a question mark there, which is, you know, obviously the significant solar resources.</p><p>in the Gulf states and also significant oil export capacity. And so it kind of, you may pose the question, does it make more sense for, say, Saudi Arabia to build a lot of solar panels and make synthetic oil and then export that to Europe like they currently do? Or would it be cheaper for Europe to build solar arrays locally and cut out the shipping cost? And it turns out that, particularly for natural gas, the shipping cost of a long distance pipeline, so ships is quite high. So that places, increases the incentive to do gas production locally.</p><p>And if it is the case that long -term solar synthetic fuel production can match current importation prices for oil and gas in Europe, then that's actually not a huge forcing function to change the current importation modality. So for example, you might continue to import oil products from the Middle East, but they'd be synthetic, but you'd import them. But that actually, in some ways, like...</p><p>I mean, it's actually a lot cheaper to produce oil in parts of the Middle East than it is at the marginal fracking producer in the United States, for example. So it's not entirely certain that that would displace oil production in the Middle East unless that oil actually ran out.</p><p>But if you're able to develop technologies and you develop the terraform technology to the point where you're able to reduce the price of oil and gas by maybe a factor of three, which I think is, it's definitely physically possible. How long it takes us to get there is somewhat up in the air, but there's an extremely strong forcing function for it. So we should expect, you know, enormous investments in the factories and so on that are building these components that allow that to take place. Then that actually places a significant forcing function for more local production, which I think is super interesting from a geopolitical point of view as well, because basically since the end of the second world war, we've had this,</p><p>kind of global economic system that is underwritten by the United States Navy and freedom of navigation and globalization, which now enables most of the world's countries to import oil that they need from foreign countries that happen to have it. But everywhere has roughly the same amount of solar power. So it may be the case that in the future, food production, so oil and gas production like food production is much more localized. And I think that will be mostly a good thing.</p><p>Theo Jaffee (48:30)</p><p>So the way I found you in the first place on Twitter was because Ashley Vance, who is the author of the first Elon Musk biography, tweeted something like, the two most productive people I know are both massive Twitter addicts, and the two most productive people that he knows are Elon Musk and you, which is like, wow, what a distinction. So like, do you, do you agree with that? Like, how do you manage to like,</p><p>Casey Handmer (48:52)</p><p>Distant second. Yeah.</p><p>Theo Jaffee (48:58)</p><p>do so much, how do you like manage your time, what does a typical day look like?</p><p>Casey Handmer (49:02)</p><p>Well, I would have to say, I wish I had time to take better care of myself, but I probably don't sleep enough. But I think the key is to work on a bunch of different things at once and just make sure you have a bunch of irons in the fire and then just keep pushing those projects forward over time. And so...</p><p>Basically, I'm well set up at home that if I have a, most evenings I'm kind of free after nine or 10 p And then if I'm in the mood to write a blog, I can sit down and smash out a blog up pretty quickly or I can do some coding. Last night I was doing some work related coding for about two hours, which was actually fun. Because as a founder, I'm doing something I haven't done before and I would say that my skills as a founder are...</p><p>Not infinite. Certainly I'm pretty inexperienced. But when it comes to solving a nice well -defined coding problem with some data analysis or something, that is my superpower. So I was like, I feel competent for once. This is nice. Yeah, and then the scrolls thing. I think Ashley came and talked to me about scrolls originally. And that's just because one of our investors, Nat Friedman, started the scroll price thing. And it was kind of...</p><p>interest and productivity was waxing and waning and I thought, look, I should probably spend a week on this just to at least tell Nat that I had a good crack at it. And actually I found I was able to make pretty rapid progress, again, by kind of changing the rules of the game. But I don't know if I have any special insights other than just make sure you're making good use of the time you have available. And I have almost no time. It's actually kind of crazy. But I feel like I'm significantly more productive since I had children than I was before. I think like before I had kids, I wasted a lot of my time.</p><p>Theo Jaffee (50:43)</p><p>That's actually very interesting for two reasons. Like first of all, a lot of people say that like having kids makes them less productive because you have to like spend time with the kids and then you have less time for work. And then the second thing is what you said is kind of the opposite of Steve Jobs' productivity advice of like relentlessly focus on one thing. Yeah.</p><p>Casey Handmer (50:52)</p><p>Yeah. Yeah.</p><p>Yeah, well, I mean...</p><p>I would say for your listeners, feel no obligation to validate my mistakes by repeating them. It's what works for me. You need to find out what works for you. And if you have a single minded maniacal focus on one thing, then you can probably make a lot of progress in a year or 10 years or a lifetime. That's just not how I work. I tend to get bored pretty easily. So I try a lot of different things. One of the advantages of that actually is you find that often the things you're working on cross -planet. So for example, I'll spend a week bashing my head against the wall trying to debug a numerical convergence issue with a Mars hydrologist.</p><p>simulation for a terraforming Mars simulation that I'm running and I'll get almost nowhere and then I'll take a look at the scroll prize and realize that because I've been thinking about vectorization of data sets for a while I can apply that to the Mars thing and it allows me to do something that would have otherwise taken 10 hours in about 15 minutes which then means that I can actually make progress on it because I don't have whole days that I can work on things anymore.</p><p>But yeah, it's a lot of fun. And then with kids, it reminds you that if you're not with your kid, why are you giving off? Every now and then you just get tired and you want to sit in the small room with your phone out and tweet about stuff. But actually Twitter is fabulous to me because it has put me in contact with a community of people who also value finding ways to achieve really productive things. One of the other things I'd say is that the long -term returns of things that you spend your time on are very parallel distributed. So it turns out that I...</p><p>I show up for work in person here in the office for probably 40 hours a week and easily 20 hours of that time is spent on shit that does not matter. I don't know exactly what that stuff is, just long term I know that half of this stuff does not matter at all. Or it's very low leverage, like paying bills and stuff. It's necessary work but it doesn't really leverage my capability, it doesn't make a huge impact on the future. But then every now and then you find a rich vein of ore and you can exploit that and you make a really big</p><p>impact really quickly. I've written hundreds and hundreds of blogs and only really a handful of them have ended the zeitgeist but the ones that have have made my life, have changed my life really in many positive ways. Everyone should write a blog. You have to write lots and lots of blog posts until you get good at it.</p><p>Theo Jaffee (53:13)</p><p>Yeah, I think I need to write more blog posts. I have a blog that I write like a post every like two months and it ends up being this like 30 ,000 word monster, not like literally that long. So there's, there's a lot of parallels between you and Elon. And one of them is that you both have like a very like fundamental like first principles, like engineering based mindset, like with the famous story about how when Elon was starting SpaceX, he noticed that like,</p><p>Casey Handmer (53:25)</p><p>Yeah.</p><p>Theo Jaffee (53:41)</p><p>It costs like a hundred times the cost of materials to launch a rocket. And he was like, yeah, you should reduce this. So like, how do people develop this? Is it an inborn trait?</p><p>Casey Handmer (53:51)</p><p>Well, depending on where you are in your career, I think there's always an advantage to studying physics.</p><p>But you know, I think it's also, if you're Elon, there's an advantage if you're like just a psychotically motivated South African immigrant with a chip on your shoulder. Like, I think people don't understand that. And you know, I've talked about this with Ashley and I've read the other biography as well. And I think that, you know, I think Ashley tried his best, but he didn't, and Isaacson didn't also kind of manage to get at the core of who Elon is as a person and why he does what he does. And I think actually it's not super accessible.</p><p>So yeah, but just do the best you can, I guess. If...</p><p>I think a lot of people see the outward signs of Elon's success, his wealth, his power, his positive achievements for humanity, and they envy that, and they wish they could be that, and they wish they could be in their shoes. But I don't even know Elon personally, and I would not swap with him for all his wealth, not for a second. Obviously, and I think he would agree with this, in some ways, kind of enduring a curse. And...</p><p>Theo Jaffee (54:53)</p><p>Yeah.</p><p>Casey Handmer (55:02)</p><p>And I think we should just be grateful for the fact that we live at the same time as someone who is quite clearly so capable, despite the fact that it obviously aspects of their personality and their work ethic and so on obviously have caused them enormous personal sacrifices and pain. Yeah.</p><p>Theo Jaffee (55:22)</p><p>Recently somebody asked Elon like what should I do to become the next Elon Musk and he said like are you sure you want to?</p><p>Casey Handmer (55:29)</p><p>Yeah, exactly. I don't think he's a person who has made happiness a major priority or has achieved it either. I think he has moments of joy, obviously, but some people are just like they're set, just who they are as a person is not really set up for contentment. And for some of those people, it creates life -ruining mental illness, and some of them it creates this deep -rooted fire and passion to right a wrong or see their enemies suffer or something like that. And again...</p><p>Theo Jaffee (55:31)</p><p>So.</p><p>hedonic treadmill.</p><p>Casey Handmer (55:59)</p><p>Yeah, we're just extremely lucky that we live in an era when Elon is able to channel that energy into making cool technology that moves our entire species forward, as opposed to becoming some despotic warlord somewhere. If you've heard about like, Cesare Borgia or something like that, similar kind of instincts, but in 1500s Florence, there was no way to go and do massive reindustrialization of space. So instead, these people just kind of got trapped in cycles of violence. Anyway.</p><p>Theo Jaffee (56:28)</p><p>So last question, what's your favorite place that you've traveled and why?</p><p>Casey Handmer (56:34)</p><p>I'm probably here, California. Yeah, that's why I live here. No, I mean, I came out here for grad school in 2010. And after year or two, I realized I would be staying. I kind of, I agree to appreciate what California had to offer, both in terms of landscape and human factors and so on.</p><p>Theo Jaffee (56:36)</p><p>Really.</p><p>Casey Handmer (56:52)</p><p>But that said, I've traveled to a whole variety of interesting places and I think actually in many ways it was more about the age I was at the time than the places that I went to. Because I have in some cases gone back to places that I visited as a 19 year old or whatever and found extremely transformative at the time and more recently I went back and it was just like, this is just yet another shitty concrete city. And...</p><p>Yeah, and the thing that's missing is some combination of chemicals in my brain that just happens to exist when you're 19 years old and fades shortly thereafter. So I'd say if you do have the opportunity, if you're a younger listener in particular and you're thinking, wouldn't it be cool to go and travel to some crazy place, you should absolutely do it. Because in some cases, those places won't exist when you're older, but your ability to enjoy them in that way certainly won't exist when you're older.</p><p>It's a good thing. So yeah, I mean, I spent a lot of time kicking around in the Russian Far East when I was in my late teens and early twenties, just as a kind of a playground in a way, a place that had the right kinds of challenges for my personality and the things that I was interested in.</p><p>Theo Jaffee (57:42)</p><p>interesting.</p><p>Casey Handmer (57:56)</p><p>You know, I didn't really don't speak Russian at all and and there wasn't and and still isn't any kind of tourism industry in these places And it's really sparsely populated and it's it's quite dangerous in some ways I wrote the Wiki travel article for this area and as far as I know no one has revised it since so that was 14 years ago so that means that either either the subsequent English -speaking travelers who went there found that my article was accurate enough or No one has been I'm not quite sure I think I know of like I know of maybe half a dozen people who who have read my blog source in</p><p>videos and who've subsequently gone there and told me about it but yeah it's it's kind of it's an out -of -the -way place I'll put it that way.</p><p>Theo Jaffee (58:36)</p><p>Good practice for Mars.</p><p>Casey Handmer (58:38)</p><p>I don't think that's why I went there at the time. But yeah, in some ways, yeah, I mean, the history of human, at least like technological human habitation in these areas is extremely recent. Like we're talking like 1930s, 1940s, 1950s. Obviously, there are indigenous populations who live there, but they're mostly nomadic and extremely sparsely populous. It's an extremely tough climate. Yeah.</p><p>Yeah, just to put it mildly, you know, United States once again wins the lottery with climate and geography. I actually have to jump to a call, so we should probably wrap up.</p><p>Theo Jaffee (59:11)</p><p>Yeah. Well, thank you so much, Casey, for coming on the show.</p><p>Casey Handmer (59:15)</p><p>Yeah, thank you so much for having me. It's been fun and interesting questions as always.</p><p>Theo Jaffee (59:18)</p><p>Yeah. Thank you.</p>]]></content:encoded></item><item><title><![CDATA[I listened to the top 100 albums on Rate Your Music]]></title><description><![CDATA[Here's what it taught me about taste.]]></description><link>https://www.theojaffee.com/p/i-listened-to-the-top-100-albums</link><guid isPermaLink="false">https://www.theojaffee.com/p/i-listened-to-the-top-100-albums</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Fri, 21 Jun 2024 03:31:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LBz4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LBz4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LBz4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png 424w, https://substackcdn.com/image/fetch/$s_!LBz4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png 848w, https://substackcdn.com/image/fetch/$s_!LBz4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png 1272w, https://substackcdn.com/image/fetch/$s_!LBz4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LBz4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png" width="1456" height="597" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:597,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3758771,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LBz4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png 424w, https://substackcdn.com/image/fetch/$s_!LBz4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png 848w, https://substackcdn.com/image/fetch/$s_!LBz4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png 1272w, https://substackcdn.com/image/fetch/$s_!LBz4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa275ef2c-3f0b-45fa-a9bd-5978bade4f4e_2560x1050.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Over the last couple weeks, I listened to all 100 of the top albums on the infamous music website <a href="https://rateyourmusic.com/charts/top/album/all-time/">Rate Your Music</a>, in chronological order.</p><p>This started when I realized my listening habits were far too narrow. Probably 80% (not even joking) of the music I listened to on Spotify in the past year has been either Logic<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, Kanye West, or Daft Punk. If I really wanted to be able to have opinions, if I wanted to develop taste, I needed to go broader. I couldn&#8217;t trust myself to pick the right albums, however. When the Apple Music 100 Best Albums list came out, I decided to trust the experts and go through the list.</p><p>I started at the top with the intent of working my way down. #1 was Lauryn Hill&#8217;s classic <em>The Miseducation of Lauryn Hill</em>, which I loved. But after looking through the rest of the list, I realized that even a normie like me could tell that it sucked. Drake, Bad Bunny, and Travis Scott ended up on here? No <em>To Pimp a Butterfly</em> or <em>Madvillainy</em> or <em>Graduation</em>? I decided if I really wanted to develop taste, to become a true music bro, I had to turn to the dark side. I had to go to Rate Your Music.</p><p>RYM isn&#8217;t your average music website. You won&#8217;t find any Drake, or Michael Jackson, or Taylor Swift<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. Though there are some popular musicians (the Beatles, Kanye West, and Kendrick Lamar come to mind), much of the list is dedicated to <a href="https://boards.4chan.org/mu/">/mu/</a>-influenced esoterica. Ever heard of <em>Selected Ambient Works 85-92</em> by Aphex Twin, or <em>In the Aeroplane Over the Sea</em> by Neutral Milk Hotel, or <em>Lift Your Skinny Fists Like Antennas To Heaven</em> by Godspeed You! Black Emperor? Me neither. After deciding to listen in chronological order, to best trace the evolution of genres and tastes, it was time to dive in.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.theojaffee.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.theojaffee.com/subscribe?"><span>Subscribe now</span></a></p><h3>The 60s</h3><ul><li><p><em>Kind of Blue</em> - Miles Davis (1959)</p></li><li><p><em>The Black Saint and the Sinner Lady</em> - Charles Mingus (1963)</p></li><li><p><em>A Love Supreme</em> - John Coltrane (1965)</p></li><li><p><em>Highway 61 Revisited</em> - Bob Dylan (1965)</p></li><li><p><em>Pet Sounds</em> - The Beach Boys (1966)</p></li><li><p><em>Blonde on Blonde</em> - Bob Dylan (1966)</p></li><li><p><em>Revolver</em> - The Beatles (1966)</p></li><li><p><em>The Doors</em> - The Doors (1967)</p></li><li><p><em>The Velvet Underground &amp; Nico</em> - The Velvet Underground &amp; Nico (1967)</p></li><li><p><em>Are You Experienced</em> - The Jimi Hendrix Experience (1967)</p></li><li><p><em>Sgt. Pepper&#8217;s Lonely Hearts Club Band</em> - The Beatles (1967)</p></li><li><p><em>Songs of Leonard Cohen</em> - Leonard Cohen (1967)</p></li><li><p><em>Electric Ladyland</em> - The Jimi Hendrix Experience (1968)</p></li><li><p><em>The Beatles [White Album]</em> - The Beatles (1968)</p></li><li><p><em>The Velvet Underground</em> - The Velvet Underground (1969)</p></li><li><p><em>Karma</em> - Pharaoh Sanders (1969)</p></li><li><p><em>In a Silent Way</em> - Miles Davis (1969)</p></li><li><p><em>Abbey Road </em>- The Beatles (1969)</p></li><li><p><em>In the Court of the Crimson King</em> - King Crimson (1969)</p></li></ul><p>The 1960s is the decade with the most familiar artists - the Beatles, Bob Dylan, Jimi Hendrix, the Beach Boys, and the Doors, just to name a few<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. I had tried listening through the entire Beatles discography beforehand, but got bored after about four albums of early 60s mass-produced teenybop pop. Their four best albums - <em>Revolver, Sgt. Pepper&#8217;s, </em>the White Album, and <em>Abbey Road</em> - written and recorded largely after their touring days, were different, and much better. I&#8217;ve never been much of a jazz fan, but I was very pleasantly surprised by Miles Davis, Charles Mingus, and John Coltrane, plus Pharaoh Sanders, who I hadn&#8217;t heard of. One of the most unique albums of the decade was <em>Songs of Leonard Cohen</em>, a spoken poetry album delivered with a soft voice and strumming acoustic guitar, which I liked a lot more than I expected. Another was <em>The Velvet Underground &amp; Nico</em>, a cult classic produced by Andy Warhol, with an experimental sound and lyrics about sexual deviancy and drug abuse before they were cool.</p><p>My favorite album of the decade was, of course, <em>In the Court of the Crimson King </em>by King Crimson: a fusion of progressive rock, jazz, and classical done incredibly well. Its five songs both work beautifully together and stand fully on their own. &#8220;21st Century Schizoid Man&#8221;, famously sampled on Kanye&#8217;s &#8220;POWER&#8221;, has almost metal-like intensity in both instruments and vocals. &#8220;I Talk to the Wind&#8221;, in contrast, is soft and ethereal. &#8220;Epitaph&#8221;, the group&#8217;s magnum opus, is a grand, nearly nine-minute-long symphony about the Cold War. &#8220;Moonchild&#8221; is two minutes of Mellotron and ten minutes of surprisingly good improv, and &#8220;The Court of the Crimson King&#8221; brings everything back in a grand finale. I had already listened to it before many times, but listening to it in the context of its time only solidified its supremacy.</p><h3>The 70s</h3><ul><li><p><em>Bitches Brew</em> - Miles Davis (1970)</p></li><li><p><em>Paranoid</em> - Black Sabbath (1970)</p></li><li><p><em>What&#8217;s Going On</em> - Marvin Gaye (1971)</p></li><li><p><em>Master of Reality</em> - Black Sabbath (1971)</p></li><li><p><em>Led Zeppelin [IV]</em> - Led Zeppelin (1971)</p></li><li><p><em>Pink Moon</em> - Nick Drake (1972)</p></li><li><p><em>Clube da Esquina</em> - Milton Nascimento &amp; L&#244; Borges (1972)</p></li><li><p><em>The Rise and Fall of Ziggy Stardust and the Spiders From Mars</em> - David Bowie (1972)</p></li><li><p><em>Close to the Edge</em> - Yes (1972)</p></li><li><p><em>The Dark Side of the Moon </em>- Pink Floyd (1973)</p></li><li><p><em>Future Days</em> - Can (1973)</p></li><li><p><em>Innervisions</em> - Stevie Wonder (1973)</p></li><li><p><em>Red</em> - King Crimson (1974)</p></li><li><p><em>Blood on the Tracks </em>- Bob Dylan (1975)</p></li><li><p><em>Wish You Were Here - </em>Pink Floyd (1975)</p></li><li><p><em>Station to Station</em> - David Bowie (1976)</p></li><li><p><em>Songs in the Key of Life</em> - Stevie Wonder (1976)</p></li><li><p><em>Low</em> - David Bowie (1977)</p></li><li><p><em>Animals</em> - Pink Floyd (1977)</p></li><li><p><em>Marquee Moon</em> - Television (1977)</p></li><li><p><em>Unknown Pleasures</em> - Joy Division (1979)</p></li></ul><p>The 70s, though again full of popular artists like Pink Floyd and David Bowie, started to show more of RYM&#8217;s more underground albums. My favorite of those was <em>Clube da Esquina</em> by Milton Nascimento and L&#244; Borges. One of my favorite albums of all time is <em>Buena Vista Social Club</em>, and I saw a lot of parallels between it and <em>Clube da Esquina</em> - created by an artists&#8217; collective in a poor Latin American country, and rising to critical acclaim and commercial success. Where <em>Clube da Esquina</em> differs is its more experimental sound, and the clear and beautiful sound of Nascimento and Borges&#8217; vocals.</p><p>There were three artists - Miles Davis, Bob Dylan, and King Crimson - who also had listed albums from the 60s. None of them were able to surpass their previous albums. In particular, I had high hopes for King Crimson&#8217;s <em>Red</em>, which, aside from &#8220;Fallen Angel&#8221;, just wasn&#8217;t as good as their debut. Interestingly, every member of King Crimson except band founder and guitarist Robert Fripp left between 1969 and 1974, which probably had something to do with it.</p><p>This was my first time seriously listening to Black Sabbath, David Bowie, and Pink Floyd, and I was pleasantly surprised to like all of them. Black Sabbath was heavy and absolutely foundational to the heavy metal genre, exemplified by the song &#8220;Iron Man&#8221;. David Bowie sounded like an alien, but in a good way - &#8220;Starman&#8221; and the rest of <em>Ziggy Stardust</em> was a ton of fun to listen to, like corny pulp fiction sci-fi come to life. Pink Floyd was the best of the three. <em>The Dark Side of the Moon</em> and <em>Animals</em> were solid, but <em>Wish You Were Here</em> was excellent - a touching tribute to band co-founder Syd Barrett, who went insane from substance abuse and had to leave the band before they made it big.</p><p>There were also a couple soul albums: <em>What&#8217;s Going On</em> by Marvin Gaye (famously rated by Rolling Stone as the #1 greatest album of all time) and <em>Innervisions </em>and <em>Songs in the Key of Life</em> by Stevie Wonder. <em>Songs in the Key of Life</em> in particular, was just a beautiful album, particularly &#8220;Isn&#8217;t She Lovely&#8221;. I never thought the harmonica could be used so well.</p><h3>The 80s</h3><ul><li><p><em>Closer</em> - Joy Division (1980)</p></li><li><p><em>Remain in Light</em> - Talking Heads (1980)</p></li><li><p><em>Ride the Lightning</em> - Metallica (1984)</p></li><li><p><em>Hounds of Love</em> - Kate Bush (1985)</p></li><li><p><em>Master of Puppets</em> - Metallica (1986)</p></li><li><p><em>The Queen Is Dead</em> - The Smiths (1986)</p></li><li><p><em>Doolittle</em> - Pixies (1989)</p></li><li><p><em>Disintegration</em> - The Cure (1989)</p></li></ul><p>For some reason, there were far fewer albums from the 80s than any other decade, even the 2010s. Curiously, nearly all of the most popular 80s artists are conspicuously missing, including Michael Jackson, Prince, Madonna, U2, Bruce Springsteen, Van Halen, Billy Joel, AC/DC. Regardless, the albums left on the list are pretty good. <em>Remain in Light</em> was a preview of the next few decades of electronica. <em>Ride the Lightning</em> and <em>Master of Puppets</em> were great follow-ups to Black Sabbath, and <em>The Queen Is Dead</em> was post-punk alt rock exemplified. Despite these, this was the most forgettable decade on RYM&#8217;s list.</p><h3>The 90s</h3><ul><li><p><em>Heaven or Las Vegas</em> - Cocteau Twins (1990)</p></li><li><p><em>Spiderland</em> - Slint (1991)</p></li><li><p><em>Laughing Stock</em> - Talk Talk (1991)</p></li><li><p><em>The Low End Theory</em> - A Tribe Called Quest (1991)</p></li><li><p><em>Nevermind</em> - Nirvana (1991)</p></li><li><p><em>Loveless</em> - My Bloody Valentine (1991)</p></li><li><p><em>Selected Ambient Works 85-92</em> - Aphex Twin (1992)</p></li><li><p><em>Souvlaki</em> - Slowdive (1993)</p></li><li><p><em>In Utero</em> - Nirvana (1993)</p></li><li><p><em>Midnight Marauders</em> - A Tribe Called Quest (1993)</p></li><li><p><em>Enter the Wu-Tang (36 Chambers)</em> - Wu-Tang Clan (1993)</p></li><li><p><em>The Downward Spiral</em> - Nine Inch Nails (1994)</p></li><li><p><em>Illmatic </em>- Nas (1994)</p></li><li><p><em>Grace</em> - Jeff Buckley (1994)</p></li><li><p><em>Dummy - </em>Portishead (1994)</p></li><li><p><em>Symbolic</em> - Death (1995)</p></li><li><p><em>Liquid Swords</em> - Genius/GZA (1995)</p></li><li><p><em>Soundtracks for the Blind</em> - Swans (1996)</p></li><li><p><em>LONG SEASON</em> - Fishmans (1996)</p></li><li><p><em>Endtroducing&#8230;</em> - DJ Shadow (1996)</p></li><li><p><em>Either/Or </em>- Elliott Smith (1997)</p></li><li><p><em>OK Computer</em> - Radiohead (1997)</p></li><li><p><em>F# A# &#8734;</em> - Godspeed You! Black Emperor (1997)</p></li><li><p><em>Homogenic</em> - Bj&#246;rk (1997)</p></li><li><p><em>In the Aeroplane Over the Sea</em> - Neutral Milk Hotel (1998)</p></li><li><p><em>Mezzanine</em> - Massive Attack (1998)</p></li><li><p><em>Aquemini</em> - OutKast (1998)</p></li></ul><p>Now we&#8217;re getting somewhere.</p><p>The 90s had both the most albums and the most diversity of genres out of any decade on this list. There was <em>Loveless</em> by My Bloody Valentine, which pioneered the &#8220;<a href="https://en.wikipedia.org/wiki/Shoegaze">shoegaze</a>&#8221; genre &#8220;characterized by its ethereal mixture of obscured vocals, guitar distortion and effects, feedback, and overwhelming volume&#8221;. There were <em>Nevermind</em> and <em>In Utero</em> by Nirvana, grunge albums bursting with anger and frustration. There was the folksy, bluesy rock album <em>Grace </em>by Jeff Buckley, his only album before his tragic death at 30<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. There was <em>The Downward Spiral</em> by Nine Inch Nails - if you like metal grinding on metal and people screaming, you&#8217;ll love this one. The two most interesting genres, however, were rap and /mu/core.</p><p>The 90s was both the first decade on the list to have rap and, arguably, the golden age of the genre. <em>Illmatic </em>by Nas is an enduring classic that holds up just as well 30 years later: &#8220;NY State of Mind&#8221; and &#8220;The World Is Yours&#8221; are songs you can really vibe to. <em>Enter the Wu-Tang (36 Chambers)</em> and its follow-up, <em>Liquid Swords</em> by Wu-Tang member GZA had pseudo-Asian influences before rappers being <a href="https://en.wiktionary.org/wiki/weeaboo">weeaboos</a> was cool. <em>Aquemini</em> by OutKast was pretty cool, though I still prefer <em>Stankonia</em> and <em>Speakerboxxx/The Love Below</em>. <em>The Low End Theory</em> and especially <em>Midnight Marauders</em> by A Tribe Called Quest were fantastic: perfectly blending jazz with rap, a fusion you almost never see in popular music today.</p><p>The other genre is /mu/core, characterized by unique vocals, erratic instrumentation, sometimes deliberately low-quality sound, and tracks that often go 20 minutes or longer. In other words, albums designed to deter normies.<em> Soundtracks for the Blind</em> by Swans is a nearly two and a half hour long concept album reminiscent of <em><a href="https://en.wikipedia.org/wiki/Everywhere_at_the_End_of_Time">Everywhere at the End of Time</a></em>, complete with recordings of dialogue from a nursing home. <em>LONG SEASON</em> by Fishmans consists of a single, 35-minute-long song and is the only Japanese album on the list. <em>F# A# &#8734;</em> by Godspeed You! Black Emperor is over an hour but with just three songs, each consisting of multiple movements, plus a spoken intro and lots of slow, instrumental parts. <em>Homogenic </em>by Bj&#246;rk has one of the most unique vocals I&#8217;ve ever heard, especially the song J&#243;ga (it&#8217;s hard to even describe, you just have to listen to it). Perhaps the biggest cult classic on the entire list is Neutral Milk Hotel&#8217;s <em>In the Aeroplane Over the Sea</em>, with surrealist lyrics, peculiar instruments (a singing saw!), and a &#8220;psychedelic folk&#8221; style.</p><p>The best album of the decade, however, was Radiohead&#8217;s <em>OK Computer</em>. I fucking love this album. Radiohead went from a decent but totally undifferentiated debut pop album (<em>Pablo Honey</em>, known for the single &#8220;Creep&#8221;) to a pretty good, relatively unique second album (<em>The Bends</em>) to the absolutely fantastic, completely irreplicable third album that is <em>OK Computer</em>. &#8220;Airbag&#8221; hypes me for the rest of the album; the first beat drop in &#8220;Paranoid Android&#8221; gives me goosebumps; &#8220;Subterranean Homesick Alien&#8221; is weird and delightful; the finale of &#8220;Exit Music (For A Film)&#8221; is powerful and moving; &#8220;Let Down&#8221; is uplifting without venturing into normie pop territory; &#8220;Karma Police&#8221; is just one of the great songs of all time; &#8220;Fitter Happier&#8221; is an eerie and fitting interlude reminiscent of Stephen Hawking; &#8220;Electioneering&#8221; emulates Led Zeppelin&#8217;s rock without losing Radiohead&#8217;s sound; &#8220;Climbing Up the Walls&#8221; is dark and introspective; the glockenspiel-and-soft-guitar melody in &#8220;No Surprises&#8221; is beautifully bittersweet; &#8220;Lucky&#8221; is a solid banger; and the call bell at the end of &#8220;The Tourist&#8221; is placed perfectly. Not a single song is less than great, and they all work well together: just a 10/10 album throughout.</p><h3>The 2000s</h3><ul><li><p><em>Kid A</em> - Radiohead (2000)</p></li><li><p><em>Lift Your Skinny Fists Like Antennas to Heaven</em> - Godspeed You! Black Emperor (2000)</p></li><li><p><em>Since I Left You</em> - The Avalanches (2000)</p></li><li><p><em>Discovery</em> - Daft Punk (2001)</p></li><li><p><em>Vespertine</em> - Bj&#246;rk (2001)</p></li><li><p><em>The Glow Pt. 2</em> - The Microphones (2001)</p></li><li><p><em>Velocity : Design : Comfort</em> - Sweet Trip (2003)</p></li><li><p><em>The College Dropout</em> - Kanye West (2004)</p></li><li><p><em>Madvillainy</em> - Madvillain (2004)</p></li><li><p><em>MM..FOOD</em> - MF DOOM (2004)</p></li><li><p><em>Illinoise </em>- Sufjan Stevens (2005)</p></li><li><p><em>Late Registration</em> - Kanye West (2005)</p></li><li><p><em>Donuts</em> - J Dilla (2006)</p></li><li><p><em>In Rainbows</em> - Radiohead (2007)</p></li><li><p><em>Deathconsciousness</em> - Have a Nice Life (2008)</p></li></ul><p>The aughts are much like the 90s: also full of rap and /mu/core. Radiohead&#8217;s <em>Kid A</em> and <em>In Rainbows</em> are excellent albums, but nothing can top <em>OK Computer</em> for me. MF DOOM&#8217;s <em>Madvillainy</em> and <em>MM..FOOD</em> are masterpieces of both production and wordplay, layering multiple rhyme schemes, some mid-sentence, into the same verse. J Dilla&#8217;s <em>Donuts</em> is nearly perfect instrumental hip hop, from which I caught so many samples from other songs that I lost count. A common theme in /mu/core albums is that they&#8217;ll have one, and only one excellent song out of an otherwise hard-to-interpret album. &#8220;Dsco&#8221;, on Sweet Trip&#8217;s <em>velocity : design : comfort.</em>, is a perfect example of this. Go listen to it right now.</p><p>My two favorite artists of the decade are Kanye West and Daft Punk. Though I wish they&#8217;d included Kanye&#8217;s <em>Graduation</em> (which, strangely enough, ranks at #622 overall<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>), <em>The College Dropout</em> and <em>Late Registration</em> are fantastic albums - they blend soul with rap in a way that would typically be experimental and esoteric, but Kanye does it in a way that sounds <em>fun</em> and <em>poppy</em>. Both of these albums contain a dizzying variety of moods, from the fun of &#8220;The New Workout Plan&#8221; to the passion of &#8220;Through The Wire&#8221; to the free-flowing reminiscence of &#8220;Last Call&#8221; on <em>College Dropout</em>, or the the catchy &#8220;Gold Digger&#8221; to the sad &#8220;Roses&#8221; to the absolutely triumphant &#8220;We Major&#8221; on <em>Late Registration</em>, but they all sound like Kanye. Even Kanye&#8217;s most stupid lyrics (&#8220;Plus my Aunt Shirley, Aunt Beverly, Aunt Clay and Aunt Jean/So many Aunties we could have an Auntie Team&#8221;) manage to sound good.</p><p>Daft Punk&#8217;s <em>Discovery</em>, however, is the best album of the decade. After sitting through like ten /mu/core albums designed to be as far from pop as possible, turning on <em>Discovery</em> felt like coming home - if home were a <a href="https://en.wikipedia.org/wiki/Interstella_5555:_The_5tory_of_the_5ecret_5tar_5ystem">futuristic space odyssey</a> with a <a href="https://en.wikipedia.org/wiki/French_house">distinctly European feel</a>. Like <em>OK Computer</em>, <em>Discovery</em> doesn&#8217;t have a single bad song. Unlike <em>OK Computer</em>, almost all of <em>Discovery</em>&#8217;s songs are so good that they can fully stand by themselves - Romanthony&#8217;s dance anthems of &#8220;One More Time&#8221; and &#8220;Too Long&#8221; on both ends of the album, the overdrive-heavy &#8220;Aerodynamic&#8221;, the orgasmic guitar solo at the end of &#8220;Digital Love&#8221;, the bouncy, funky &#8220;Harder, Better, Faster, Stronger&#8221;, the groovy buildup and vocals of &#8220;Crescendolls&#8221;, the beautiful, contemplative &#8220;Something About Us&#8221;, the incredible beat drop of &#8220;Voyager&#8221;, and the sad electronic woodwinds on &#8220;Veridis Quo&#8221;. The best song on the album is the penultimate &#8220;Face to Face&#8221;, where Daft Punk show their total mastery over the art of sampling. I mean, just watch <a href="https://www.youtube.com/watch?v=5AqHSvR9bqs&amp;t=95s">this video</a>. Absolutely incredible.</p><h3>The 2010s</h3><ul><li><p><em>My Beautiful Dark Twisted Fantasy</em> - Kanye West (2010)</p></li><li><p><em>The Money Store</em> - Death Grips (2012)</p></li><li><p><em>good kid, m.A.A.d city</em> - Kendrick Lamar (2012)</p></li><li><p><em>To Pimp A Butterfly</em> - Kendrick Lamar (2015)</p></li><li><p><em>Carrie &amp; Lowell</em> - Sufjan Stevens (2015)</p></li><li><p><em>&#9733; [Blackstar]</em> - David Bowie (2016)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li><li><p><em>Blonde</em> - Frank Ocean (2016)</p></li><li><p><em>Atrocity Exhibition </em>- Danny Brown (2016)</p></li><li><p><em>Igor</em> - Tyler, the Creator (2019)</p></li><li><p><em>Ants From Up There</em> - Black Country, New Road (2022)</p></li></ul><p>The defining characteristic of this decade was rap that transcended the genre. It opened with Kanye&#8217;s magnum opus, MBDTF, which perfectly blends post-2007 Kanye&#8217;s braggadocio and energy with an over-the-top, almost orchestral production style. Try &#8220;All of the Lights&#8221;, &#8220;Devil in a New Dress&#8221;, or &#8220;Runaway&#8221;, which isn&#8217;t very orchestral, but some consider it Kanye&#8217;s best song. It was immediately followed up by Death Grips&#8217; <em>The Money Store</em>, another cult classic, although more of the underground rap cult than the /mu/ cult. &#8220;I&#8217;ve Seen Footage&#8221; is the best song on the album; the rest aren&#8217;t as good.</p><p>The 2010s also had Kendrick Lamar&#8217;s two best albums, <em>good kid, m.A.A.d city</em> and <em>To Pimp A Butterfly</em>, the latter of which is ranked #1 out of nearly 6.2 million releases on Rate Your Music. GKMC tells the story of Kendrick&#8217;s upbringing among poverty, drugs, and violence in Compton, California; while TPAB speaks more broadly of black culture, racism, and discrimination. GKMC has more individual bangers (&#8220;Bitch, Don&#8217;t Kill My Vibe&#8221;, &#8220;Backseat Freestyle&#8221;, &#8220;Money Trees&#8221;, &#8220;Swimming Pools (Drank)&#8221;, and &#8220;Sing About Me, I&#8217;m Dying of Thirst&#8221; come to mind), but TPAB is overall a better album.</p><p>Frank Ocean&#8217;s <em>Blonde</em> and Tyler, the Creator&#8217;s <em>Igor</em> complement each other very well. Both are concept albums that tell a story with innovative production that blends rap with R&amp;B, soul, and electronica. <em>Blonde </em>has a more soft, upbeat mood and minimalist production, exemplified by the 10/10 track &#8220;Pink + White&#8221;, though &#8220;Nights&#8221; and &#8220;Futura Free&#8221; are excellent too. <em>Igor</em> goes much harder, with a heavy electronic sound. The first two tracks, &#8220;IGOR&#8217;S THEME&#8221; and &#8220;EARFQUAKE&#8221;, make up one of the best two-song lineups I&#8217;ve heard.</p><h3>A RYM Retrospective</h3><p>So, what did I learn from my foray into the world of high taste? What even <em>is </em>high taste? We can try to answer this by examining what all 100 of these albums had in common, which wasn&#8217;t much - most of them are very different from each other. The main thing that springs to mind is that they&#8217;re all unique. This may be obvious, but consider how much of the music you listen to has very little that distinguishes it - especially rap (can you even tell the difference between 21 Savage, Kodak Black, and NLE Choppa?)</p><p>Second, most songs aren&#8217;t structured. A lot of popular music has a definite form with ordered repetition, such that you can almost predict what&#8217;s going to come next. Weirdly, almost no songs on any of the 100 albums have this trait - most songs are relatively unstructured, and it&#8217;s hard to tell what comes next. This naturally requires you to actually pay attention to the music, rather than mindlessly taking it in like you would with pop music. Paying attention to music is a <a href="https://en.wikipedia.org/wiki/Costly_signaling_theory_in_evolutionary_psychology">costly signal</a>, one that can provide solid evidence of good taste for those who do it.</p><p>Third, almost all albums were the first to either create a new genre or fuse two or more genres in a specific way. Like financial alpha, being the first comes with its rewards. Watch <em>Citizen Kane</em> today and you might be bored by how trite and overplayed everything feels. Watch it when it came out in 1941 and you&#8217;d be shocked at how much it advanced the medium of film. If you go back and listen to RYM&#8217;s list with the same mindset, you&#8217;ll see that Bob Dylan was the first to pioneer folk rock, The Velvet Underground &amp; Nico invented alt rock, A Tribe Called Quest fused rap with jazz, and Radiohead melded&#8230; everything.</p><p>In the end, though, taste is a nebulous concept. There are no formulas or algorithms for developing it - just subjective feel and &#8220;<a href="https://en.wikipedia.org/wiki/I_know_it_when_I_see_it">I know it when I see it</a>&#8221;. Listening to someone else&#8217;s list, even an aggregate of many such lists like RYM, can only get you so far. If you truly want to evolve your taste, you have to venture out yourself, finding what&#8217;s interesting to you and literally playing it by ear. Nonetheless, if you feel like your music taste is too narrow or too normie, listening to the top 100 RYM albums is a great first step.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.theojaffee.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Theo's Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>My #1 artist every year for the last 2-3 years, by a wide margin. I don&#8217;t care that people think he&#8217;s corny or whatever - I think he&#8217;s great, and listening to 100 of the &#8220;best albums of all time&#8221; didn&#8217;t change that.</p><p>Interestingly, my top three artists - Logic, Kanye, and Daft Punk - are also some of the favorite artists of popular tech YouTuber (and, clearly, man of high taste) <a href="https://www.youtube.com/@mkbhd">Marques Brownlee</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>You also won&#8217;t find pretty much any non-English music, or music from before 1959. RIP to Beethoven and Liszt - I guess they weren&#8217;t good enough.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I was surprised to see no Rolling Stones or Elvis Presley.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>So many of the artists on this list died early. Nico of The Velvet Underground and Nico fell off her bike, hit her head, and died of a cerebral hemorrhage at 49. Marvin Gaye was shot and killed by his own father over a stupid fight about insurance documents at 44. Jimi Hendrix choked on his own vomit after overdosing on barbiturates at 27. Kurt Cobain shot himself in the head with a shotgun, also at 27. Cliff Burton, the bassist of Metallica, was killed when a tour bus hit a patch of ice. He was ejected through the window, and then the bus fell on him, crushing him to death. He was 24.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Even stranger, the album directly above it is one of my favorite of all time, <em>Buena Vista Social Club</em> by Buena Vista Social Club, and the one right above that is <em>Norman Fucking Rockwell!</em> by Lana Del Rey.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>David Bowie&#8217;s Blackstar was released <em>39 years</em> after his second-most recent album on the list, <em>Low</em> (1977), and 44 years later than his first album on the list, <em>The Rise and Fall of Ziggy Stardust and the Spiders From Mars</em> (1972).</p></div></div>]]></content:encoded></item><item><title><![CDATA[#16: Stephen Grugett and Austin Chen]]></title><description><![CDATA[Manifold, Manifund, Manifest, prediction markets, and EA]]></description><link>https://www.theojaffee.com/p/16-stephen-grugett-and-austin-chen</link><guid isPermaLink="false">https://www.theojaffee.com/p/16-stephen-grugett-and-austin-chen</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Thu, 13 Jun 2024 17:49:22 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/145590376/29f60005579b05e2b94dd98bb24c75a0.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Stephen Grugett and Austin Chen are co-founders of Manifold Markets, an online play-money prediction market and competitive forecasting platform. Stephen currently serves on the company&#8217;s management, while Austin recently stepped down to start Manifund, a unique, open-source grant program. This video is not sponsored in any way by Manifold, Manifund, or Manifest - I just think they&#8217;re cool.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>Stephen Grugett</p><p>1:20 - Are prediction markets actually bad?</p><p>4:11 - Would Manifold use real money if allowed?</p><p>5:24 - How Manifold would use real money if allowed</p><p>6:08 - Would Manifold use crypto if allowed?</p><p>7:17 - Can you ever get long-term returns from prediction markets?</p><p>10:01 - Would subsidies ruin markets?</p><p>11:23 - Why Manifold beat real money on predicting the 2022 elections</p><p>16:00 - Would Stephen implement futarchy?</p><p>19:54 - Manifold Love</p><p>23:22 - Bet on Love</p><p>26:21 - Why Manifold is miscalibrated</p><p>29:06 - Insider trading and market manipulation</p><p>31:42 - Is it easier to make money on prediction markets or normal markets?</p><p>32:37 - Good prediction market UI</p><p>34:35 - Why should people trust market creators?</p><p>35:34 - Derivatives on prediction markets</p><p>37:20 - Stephen&#8217;s ginseng adventures</p><p>40:55 - Audience Q: why don&#8217;t Americans consume American ginseng?</p><p>41:35 - Audience Q: cancel culture and Richard Hanania</p><p>45:50 - Audience Q: why aren&#8217;t there more institutional investors in prediction markets?</p><p>47:33 - Audience Q: can journalists help resolve markets?</p><p>49:45 - Audience Q: is there any role for sweepstakes other than regulatory arbitrage?</p><p>Austin Chen</p><p>51:14 - Are prediction markets insufficiently powerful?</p><p>54:22 - What prediction markets can do if not futarchy</p><p>55:36 - How Manifund was designed</p><p>59:35 - How Manifund chooses regrantors</p><p>1:00:49 - Why donate to Manifund?</p><p>1:03:09 - Does Dustin Moskovitz have too much power over EA?</p><p>1:04:29 - What Manifund would do differently with more money</p><p>1:05:52 - How Manifest gets so many interesting people</p><p>1:09:10 - How much did SBF&#8217;s fall damage EA?</p><p>1:10:04 - OpenAI</p><p>1:11:54 - Is this decade more important than other decades?</p><p>1:13:01 - Why aren&#8217;t more philanthropic organizations open?</p><p>1:15:35 - Manifund&#8217;s best projects</p><p>1:17:25 - How short AGI timelines would affect Manifund</p><p>1:19:21 - Audience Q: how Manifold ships fast</p><p>1:22:11 - Outro</p><h3>Links</h3><p>Manifold: <a href="https://manifold.markets/home">https://manifold.markets</a></p><p>Manifund: <a href="https://manifund.com">https://manifund.com</a></p><p>Manifest: <a href="https://www.manifest.is">https://www.manifest.is</a></p><p>Manifold&#8217;s Twitter: <a href="https://x.com/manifoldmarkets">https://x.com/manifoldmarkets</a></p><p>Manifund&#8217;s Twitter: <a href="https://x.com/manifund">https://x.com/manifund</a></p><p>Austin&#8217;s Twitter: <a href="https://x.com/akrolsmir">https://x.com/akrolsmir</a></p><p>Transcript: <a href="https://www.theojaffee.com/p/16-stephen-grugett-and-austin-chen">https://www.theojaffee.com/p/16-stephen-grugett-and-austin-chen</a></p><h3>More Episodes</h3><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: <a href="https://www.theojaffee.com">https://www.theojaffee.com</a></p><p><strong>Theo: </strong>Welcome back to episode 16 of the Theo Jaffee podcast. Today, I have the pleasure of speaking with Stephen Grugett and Austin Chen, two of the co-founders of Manifold Markets, a play money prediction market company. Prediction markets are like financial markets, except instead of betting on stock prices, you bet on the outcomes of future events. On Manifold, you can bet on markets created by other people or create your own on any topic you want. Manifold has all kinds of markets from who will win the 2024 presidential election to will AI destroy the world by 2030 to what will happen next in the manga <em>One Piece</em>.</p><p>This is a very special episode of the podcast, my first in-person interviews done live at Manifest 2024, Manifold's annual conference. The first interview with Stephen goes in-depth on Manifold itself, the theory and practice of prediction markets, Manifold love, and Stephen's background as a ginseng merchant. The second interview is with Austin. Austin recently left Manifold to start Manifund, a unique, fully transparent grant program. In our interview, we talk about Manifund, effective altruism, and the EA funding ecosystem. I had a great time at Manifest, and these interviews were some of the highlights for me. This is the Theo Jaffee podcast. Thank you for watching. And now here's Stephen Grugett and Austin Chen.</p><h3>Part 1: Stephen Grugett</h3><p><strong>Theo: </strong>Welcome back to episode 16 of the Theo Jaffee podcast, part one. This is my first ever live recording. We're here live at Manifest 2024, and I'm interviewing Stephen Grugett, the co-founder of Manifold with a live audience.</p><p><strong>Stephen: </strong>Thank you. Thanks for having me on. I'm super excited to be on your podcast.</p><p><strong>Theo: </strong>Awesome. Thanks. </p><p>So for the first question, Works in Progress just wrote an article called Why Prediction Markets Aren't Popular, which argues that, contrary to the traditional view that prediction markets aren't popular just because they're regulated, prediction markets are actually quite legal in the U.S., and Calshi and others are able to do them. And the reason they don't work is that they just aren't very good. So aside from being zero-sum, they're usually quite small and quite illiquid, and that it would take expensive subsidies to make them large and liquid. And also, one of the reasons that Austin Chen laid out in his leaving Manifold document is because he thinks prediction markets feel insufficiently powerful. So what do you think about that? </p><p><strong>Stephen: </strong>The first thing is I think the premise is not true. One of my favorite prediction market facts from Robin Hanson is that the turn of last century, prior to the 20s, there was more trading on prediction markets on U.S. presidential elections than there was on the stock market. Average Americans were speculating on these sorts of political contracts, and it was hugely popular. So I think that certainly, within the U.S., we would see huge volumes on at least election markets by themselves if they were legal. That's totally a regulatory issue.</p><p>I think there is the other question, though, of other use cases besides election speculation. There are, right now in the U.S., some limited regulated markets on things that don't touch on these subjects, and the volumes on these contracts haven't been that high right now. I think part of the reason for this is not necessarily an inherent lack of interest on the part of the public, but the fact that there hasn't been a platform that makes it really easy and simple and engaging enough for the public to consume. So that's one of the things that Manifold is trying to address. </p><p>And I think this just takes time. The regulatory barriers for prediction markets have prevented adoption in the past. I would guess that in a counterfactual world without any regulation, you would have seen a platform like Manifold arising much earlier with real money with very large liquid markets. That's a much larger part of public discourse. </p><p><strong>Theo: </strong>So if prediction markets were fully deregulated, like, tomorrow, would you leave Manifold entirely based on mana, or would you make it real money, or would you make a separate real money prediction market? </p><p><strong>Stephen: </strong>I think we would have both. So I think one of the things people don't get about play money is that it's not just an inferior version of real money, but its own thing entirely, and that it comes with a number of advantages. So the benefit of play money is that it's just way more casual and frictionless for people to consume. If you want to get someone to sign up for a real betting platform, that can be difficult. People have all sorts of psychological barriers. They don't want to invest their money. But when it's simple and a game and doesn't come with any financial commitments, it's much easier for people to participate. </p><p>There's that, and then there's also just the freedom aspect. You can do anything you want with play money. The moment you introduce real cash into the mix, then all sorts of regulations and know your customer and anti-money laundering regulations come into play that make life very difficult. So I think even a world where real money is fully legal, you probably, you would still see a large play money platform catering to this other source of consumer demand. </p><p><strong>Theo: </strong>Have you thought about extensively what Manifold would do with real money if you could? </p><p><strong>Stephen: </strong>I've obviously thought a little bit about this. I think we would spin up a separate USD-denominated version of many of our markets for people to trade on. I think even in a world where it is legal, you would expect pretty substantial regulations. So I would imagine Manifold USD-denominated markets would be much more severely limited and on fewer topics than our play money markets, but we would definitely have been creating as many as we could.</p><p><strong>Theo: </strong>But what about crypto? If there were no regulatory barriers, would you make crypto prediction markets or is there just too much speculation in crypto?</p><p><strong>Stephen: </strong>Crypto, in addition to having the same regulatory issues that all real money markets would be, has the additional burden of being much harder to use. One of the reasons for crypto in the first place is this kind of regulatory arbitrage thing where people turn to these decentralized mechanisms precisely because certain types of contracts cannot be enforced in a court of law in the U.S. But I'm more skeptical on this fuller Web3 vision where everything would have its own token and everyday Americans would be actively engaging on the blockchain. I think that's less likely due to how cumbersome and difficult it is to use these sort of products. So I actually think in a world with more liberalization and fewer regulations, you would just see way fewer people using crypto, both in prediction markets and in general.</p><p><strong>Theo: </strong>Do you think prediction markets are fundamentally by their nature zero-sum permanently? Or do you think there will be an equivalent to an index fund, something that traders can put their money into to expect some kind of return over the long run? Is there anything traders can do to do that?</p><p><strong>Stephen: </strong>Prediction markets on a mechanistic level are zero-sum in that the most common way to structure a prediction market is to have contracts on whether an event either does happen or doesn't happen, yes or no. That's inherently zero-sum. For a lot of our markets on Manifold, the environment isn't zero-sum because there is this third party which is typically but not always the market creator who's actively going into the market to subsidize. So I think subsidization is actually very important in a prediction market context.</p><p>The basic idea is that if you want to have your question answered and it's on a pretty narrow niche topic, you may not get as much liquidity on that from purely profit-seeking traders. A lot of questions that you may want an answer to have massive adverse selection where one party naturally knows much more about this topic than the other and the price would move very rapidly in response to trades. So to cut back against this a little bit, in order to work well, you have to pump your markets full of subsidies in order to entice traders to predict in the market. A subsidy is basically just cash that you allocate, that you put into the market. You can think of it as making the, adding more friction to price movements. The more subsidies in the market, the less the price will move in response to trades. But I think in that sense, it's not zero-sum. </p><p>But I guess the other part of your question is trying to use prediction markets as an instrument to gain equity-like returns. I mean, I think that doesn't really make sense to me. Even with the subsidy, it may not be zero-sum in the sense that there's a bunch of dumb, intentional dumb money in the form of subsidies being added to the market. There still isn't really anything like stock market beta or sensitivity to broad economic growth. But you can't, for instance, if you select 100 random prediction markets and invest $10 into each, you would expect that to return $0 and not to increase with the size of the economy. </p><p><strong>Theo: </strong>But wouldn't the subsidies that you would need to make prediction markets work in the way that you're describing be tremendously burdensome to the process? Incredibly expensive?</p><p><strong>Stephen: </strong>Not necessarily. This is one of the things that we found with Manifold. We're a play money platform. If the user experience is sufficiently compelling and game-like, you can get a huge crowd of people, such as the people in this live audience today and attending Manifest this weekend, who are interested in prediction just for the sake of it outside of the monetary rewards. And when you have this system set up, that means you can get by with a much, much lower subsidy than you would if you were actively going out and commissioning the traders to give you your forecasting estimate. So I think this is one of the nice things about Manifold, is that you can purchase information much more cheaply. The nature of the platform itself kind of elicits information out of traders at a pretty low cost, much cheaper than you would be able to otherwise. So each subsidy dollar in turn can then way more efficiently get you information than whatever the alternative is. </p><p><strong>Theo: </strong>Why did Manifold predict the 2022 midterm elections better than real money prediction markets like Polymarket and PredictIt?</p><p><strong>Stephen: </strong>This is interesting, because I was on the other side of this trade. I thought during the midterms that Polymarket's numbers would be more accurate. So I bet a lot on the other side, and I lost a huge amount of mana because I was wrong. And now I've learned my lesson, that Manifold's numbers are more accurate. I think, honestly, this is kind of an n equals one thing. I think people should be very wary in general of trying to judge the accuracy of any pundit forecasting platform tool or anything on just one election cycle. So one very simple story that you can tell about this is that Polymarket had more Republicans and Manifold had more Democrats, and the Democrats won. So really, we need to repeat this over several election cycles with different parties winning in each in order to get a better sense of each platform's accuracy.Form's true calibration. Is there any existing data on which platforms have performed the best on different elections, or is it just too recent, there haven't been enough elections? It's not too recent, and there are also other, even older academic prediction markets which have a track record behind them. One of the first big prediction market experiments in the 20th century, after the progressive era in which they outlawed all of this stuff, was the experiment conducted by the Iowa electronic markets in the 80s and beyond. At that time, they found that their markets were more accurate than both individual pundits and a bunch of different aggregates of pundits. There's a similar track record from the more recent prediction market attempts. </p><p>I got my start on prediction markets with Intrade, which is defunct. They were an Irish prediction market platform. I remember trading on the 2012 midterms, and I believe that their numbers were more accurate than pundits at the time. But there are a bunch of studies on this. You can find actual answers to these questions. I don't have them off the top of my head. </p><p>More recently, though, what we found is that Nate Silver and FiveThirtyEight have performed basically on par with prediction market and other forecasting platforms. I think that will change as prediction markets become more liquid and more people are trading on them. I do think in the limit case, with tons of money being actively traded on these things, that prediction markets will be the very best mechanism and will have the best track records. But they're already pretty large and pretty liquid. I don't know about that. There's millions of dollars that are being traded. </p><p><strong>Theo: </strong>So you're telling me that these thousands and thousands of traders, many of whom are pretty smart in aggregate, can't beat Nate Silver, even though they have financial interests in doing so? </p><p><strong>Stephen: </strong>Yeah, I think a lot of it is due to the big thing you need to guarantee that prediction markets can live up to their full potential: institutional liquidity. You need Goldman Sachs and hedge funds to be able to be counterparties to all of these bets done by retail traders on platforms like Polymarket. And that does not appear to be in the works anytime soon, mostly because of regulation. I think it is true, though, that having a hundred million dollars on the line should be very enticing to people. That is a lot of money, even for very talented, wealthy individual traders. But there are still these structural barriers that prevent a lot of individual traders from participating in real money markets. </p><p><strong>Theo: </strong>Accredited investor requirements and stuff? </p><p><strong>Stephen: </strong>Well, or the fact that US citizens legally can't participate in Polymarket. Many do. Many use VPNs to access these markets offshore. But the regulatory issue and the usability issues with crypto are a major barrier. </p><p><strong>Theo: </strong>So for my podcast audience, I'm sure everyone in the Manifold audience knows this, but futarchy is a political system where you would base policies on prediction markets. And so if you had the option to do so, would you replace our current political system with futarchy? </p><p><strong>Stephen: </strong>Ah, that's a great question. Maybe this is a little bit heretical, but I've never actually been that on board with futarki as a concept. So, firstly, I think the first thing is, my view is that prediction markets are a tool, or it's kind of a category error to talk about it as a form of government. Governments are not just decision-making mechanisms. They're people who have particular values, who implement decisions. Even Robin Hanson often formulates this as a bet on what will happen. Bet on values, not beliefs. </p><p><strong>Theo: </strong>Yeah, vote on values, bet on beliefs. </p><p><strong>Stephen: </strong>Yes. So even in this formulation, part of the governance formula has to include other stuff that isn't just the mechanism. So there's that aspect. But in terms of using prediction markets to totally replace all existing decision-making bodies, I'm more skeptical. I certainly think on the margin that governance quality would improve a lot if people actively were creating, subsidizing, all sorts of questions on different policy impacts of various proposals. That would be a great thing. People have talked about using NGDP futures to help central banks determine their monetary policy. I think all of those are great things that we should be doing. </p><p>In theory, a sufficiently liquid decision market on topics where decisions can be enumerated exactly in some domain should be good. There shouldn't be any problems with that. If the market price is predicting what the outcome of various policy interventions would be are out of whack, then rational profit-seeking traders will come in and correct them, and their probability should be accurate. Policymakers make a lot of decisions. A lot of decisions are about things in smaller markets where people don't really care about but in which there are very strong vested interests. If you're making some micro-policy decision about shrimping rights off the coast of Maine, maybe the shrimpers will be willing to collude and place bets that other rational profit-seeking individuals wouldn't be quite motivated enough to do. That's one issue with futarchy. </p><p>I think the other big issue, that's a problem with the mechanism, I think the other problem with futarchy is that it doesn't address the fundamental concept of the political. The real political question is who gets to create the markets? Which are the importantValues that people actually care about determine how we allocate the liquidity to subsidize the prediction markets to get the answer on. Even if we do move into a much more futarchical world, which I support, that won't solve that problem.</p><p><strong>Theo: </strong>Let me frame the question differently. Do you think if the Bay Area governments were replaced entirely with futarchy, would it lead to better outcomes?</p><p><strong>Stephen: </strong>I think replacing the Bay Area government with anything would lead to better outcomes, so yes.</p><p><strong>Theo: </strong>Clearly not anything, right? Replacing it with Stalin wouldn't.</p><p><strong>Stephen: </strong>I don't know.</p><p><strong>Theo: </strong>For my podcast audience, Stephen's brother, James Grugett, is one of the other co-founders of Manifold. Why do you have so much more mana than James? He has like 200,000, you have over a million.</p><p><strong>Stephen: </strong>A lot of my mana comes from betting against James, which is interesting. One of us was guaranteed to win and have more money than the other.</p><p><strong>Theo: </strong>On what markets? </p><p><strong>Stephen: </strong>I think our biggest source of disagreement, and one of my biggest sources of profits versus James, is on the success of Manifold Love, which is our dating platform. I guess for the benefit of Theo's audience who may not have heard of this, the basic premise of Manifold Love is that, you know, it's in part an OkCupid clone where you can create your own public dating profile, and then the twist is that we have prediction markets on each of the people in this ecosystem for people to bet on who would be a good match with each other. The thinking is that your friends, relatives, or other random strangers who scour through your profile would be interested and motivated in matching people off based on this, and that would be reflected in the market prices. </p><p>So this, obviously, this is an insane sounding idea. This is a thing that people outside of the Bay Area would not do and would probably roll their eyes or laugh, or some combination of all of these things. I first want to say that even though I never believed in this as a large venture scale business, it actually has been successful in producing multiple long-term relationships which are still going to this day. Who knows, maybe they'll result in marriage or something like that. So I think it's too easy for people to cavalierly dismiss crazy Bay Area ideas involving prediction markets. And even if they don't live up to the full hype, they're still capable. I feel like the premise of the Manifold Love actually was vindicated, but on a smaller scale. I think it can work in this community, at Manifest, in the Bay Area, for like-minded individuals. I still have my questions about how well it would be able to scale to the rest of the world. </p><p><strong>Theo: </strong>What are the fundamental limits? Just that not enough people know enough information about the couple to be able to make good decisions?</p><p><strong>Stephen: </strong>I think... Like, they'd be very small markets, necessarily, right? Well, I think a lot of people are just put off by the concept of public profiles. This is actually a huge barrier. I think it's not necessary for everyone to be on board with the premise of the app for the app to still succeed. Many people really despise and hate dating apps, and yet those are a big thing. When dating apps were first introduced, they were seen as really weird and gross and disgusting, and only the worst part of society would use them. But since they were so useful, adoption has gradually increased, and the bull case for Manifold Love is something like this story, that even though it sounds really weird, some people have told me it's repulsive, that over time, that would fade, and the benefits would become more apparent. I'm just not convinced, though. I think too large a chunk of society just really doesn't want to have public profiles with people betting on them.</p><p><strong>Theo: </strong>Speaking of manifold love, you did a related... I don't even know what to call it. Part game show, part live musical, called Bet on Love. How did you get the idea for this? How did this come about? What's the backstory? What was the idea behind it? </p><p><strong>Stephen: </strong>Yeah. I think it's interesting. Both manifold love, our dating site, and the idea for Bet on Love essentially grew out of the last Manifest, our first conference here. In particular, we noticed that a lot of the markets that people seem to have the most fun betting on were relationship or romance-related things, many of which involved Aella, and you can look those markets up yourself on Manifold. We were trying to think about how we could capture that energy and use it to drive more engagement. Obviously, the natural thing to do is to have a surrealist prediction market dating show musical with Aella as the star bachelorette. The show actually... My original vision was much more limited. Originally, I was planning on just doing this really small-scale, very low-budget indie event where it might even be at the same venue that Manifest is happening, out in a courtyard, and we just stream it on one webcam. After I explained my idea for a prediction market dating show featuring Aella to one of my friends, they told me that, in fact, Vibecamp had actually done a prediction market dating show featuring Aella, and that I could watch the footage of this video, or watch the footage of the recording. I did, and I was super impressed by the theater company that put it on. I knew immediately after I watched this that we needed to hire them and get Manifold involved in some capacity, and that tying their theatrical and musical genius to betting on markets could be a product which is super compelling to people. I really like Bet on Love. It was very entertaining. Very interesting. I guess I do have to say this is pretty polarizing as well. I think you, the audience, will enjoy Bet on Love if you like musical theater, if you are really into niche nerd humor, and you like dating shows. If you love all three of those things, you're absolutely going to love this. If you love one of these things a lot, you'll probably love it. If you love none of these things, you probably will not love it.</p><p><strong>Theo: </strong>I don't particularly love that. I don't love musical theater except for Hamilton, and I definitely don't like dating shows. They're boring, but there was something about Manifold Love. Maybe it was just the specific type of guy who was in it. I don't think it would work with most normal people. It wouldn't have the same charm. </p><p>Manifold has a calibration chart at <a href="https://manifold.markets/calibration">manifold.markets/calibration</a> that shows whether events happened as often as they predicted. If you go to that chart, you'll see a bunch of dots and a diagonal line. All of the dots are below the diagonal line, which suggests that events happen less commonly than they were predicted to at all data points. Why? Are the traders just overconfident?</p><p><strong>Stephen: </strong>Yes.</p><p>The interesting thing about this is that you might naively think you could just write a bot to bid the contracts up and that you would make money. The reason why things like this can persist is that that's harder to do than you think. The moment you introduce YesBot that bets yes on everything, people will see that your bot always bets yes on things and will bet against you or will exploit you. They'll bid the price up higher than what the true price should be, and then you'll be stuck holding the bag with your worthless yes shares.</p><p><strong>Theo: </strong>Has anyone tried making YesBot?</p><p><strong>Stephen: </strong>Yeah, they have. I think it is interesting. One of the first things about our calibration chart is that it's just a firehose of all of our markets. It includes even pretty low-quality markets and markets that don't have that many traders. One of our users actually has created this website called Calibration City that allows you to create calibration charts that are more granular and targeted towards markets with whatever attributes that you want that have 1,000 traders or that are on particular topics. I suspect that if you added more filters to filter for higher-quality markets that a lot of this effect would go away. But it still remains to be seen. </p><p>I don't know. I think the brute fact, even for our lower-quality markets, that they have this pattern is surprising to me. A priori, I think I wouldn't even have been able to predict the sign of whether our markets would be over- or under-confident. I don't really know why this effect exists and if or how long it will persist. But in general, when you find things, yes, bot is not going to work as a strategy. But if you do see consistent wrong patterns in markets, you can do more sophisticated things to try and correct those. This is a sign that there are possible trading strategies that you could use to profit from this since it does appear to be pretty systematic. </p><p><strong>Theo: </strong>When you were on the Dwarkesh podcast a couple years ago, you told them basically that you don't like insider trading, even though a lot of prediction market people do because they think it makes prices more efficient.</p><p><strong>Stephen: </strong>No, I love insider trading.</p><p><strong>Theo: </strong>On real financial markets? Or insider trading laws.</p><p><strong>Stephen: </strong>Yeah. The classic libertarian story is that insider trading laws are bad because markets are about information and giving good prices to the public, first and foremost, and that when you remove restrictions on who can trade, it makes the prices more reflective of reality and more efficient. So I think that's a pretty good argument. The counter argument is more of a fairness argument. It's not fair for corporate officers to be able to make so much money doing things which are relatively dumb, of having access to earnings reports before the general public, or more maliciously, it's bad that they have an incentive to try and sabotage the company or other things, et cetera. I think those are very real concerns and probably the ideal legislation would do something to limit that in some fashion. Maybe the absolute chaos would work. That would result in a society which is functional. It may be better in some ways than a more restrictive legal climate, but it probably also isn't the absolute best regulatory regime.</p><p><strong>Theo: </strong>So what do you think about other forms of suspicious market activity that isn't exactly insider trading or fraud? Like, for example, what Roaring Kitty is doing right now with GameStop, where he's somehow memeing the stock up multiple billions of dollars in market cap. Should the SEC do anything about that?</p><p><strong>Stephen: </strong>I think probably not. In general, whenever there's ambiguity about the harms of particular actions, as a good general principle, it's good to not have litigation or regulations there. The world is very chaotic. If the outcome is not certain, it doesn't really make sense to get lawyers involved. Or really, when you do add regulation on this, the only parties who actually win are lawyers because then there's increased litigation. Society doesn't really benefit anyway because it's ambiguous. There are benefits on both sides. It just doesn't matter from a societal perspective. I think the government, you know, financial regulation should be limited to more severe, severe harms, which everyone can recognize and which are dealt with in an easier fashion. </p><p><strong>Theo: </strong>Do you think it's easier or harder to, in the long term, do well on manifold versus actual financial markets? Because you might think it would be easier because they're less efficient, but you might also think it would be harder because they're more zero-sum and you can't just buy the S&amp;P 500.</p><p><strong>Stephen: </strong>That's a good question. So again, we have the subsidizer dynamic where people are putting up huge amounts of cash because they want to have their question answered. So as long as subsidizers are an important part of the ecosystem, or insofar as that's true, that makes it easier for people to earn money because the subsidizer is just paying you to do that. They're not paying you in the same way to trade GameStop stock. There isn't someone naturally tossing a bunch of money into that outside of other retail investors. They are paying you lots of money to trade.</p><p><strong>Theo: </strong>At the beginning of the interview, you talked about how one of the reasons prediction markets aren't more popular is because a lot of them are hard to use. So what do you think are the good elements of a prediction market user interface that will make people want to use it?</p><p><strong>Stephen: </strong>Simplicity is key everywhere. A big mistake other platforms and other forecasting platforms have made is just making it too complicated, having too many different market types, having too many order types, showing too much information on the screen, etc. The simplest consumer apps are things like Robinhood where they strip away all of the extraneous content and just have you focus on a few key numbers and make it super obvious which user flows you want to go down. In the case of Manifold, one of the flows that we try to optimize for is market creation.</p><p>Making that really easy is part of it. That includes having it all fit onto one screen. We don't have a multi-page setup. We try to keep that pretty minimal. The other aspect of that is we've tried to standardize market terms. When we launched Manifold, when you created a market, we allowed you to set the initial probability and choose the exact amount of subsidy to provide in the marketplace among other things. The model we've moved towards now is where the market automatically starts at 50% and we standardize on certain liquidity tiers. That's just to make it much easier so you don't have to think about what you want to do when you create a market. The lowest tier markets on Manifold all cost the same thing. You don't need to think about that. If you want to subsidize them more, we've recently introduced a market tiers feature which have liquidity at different levels and you can just choose among these discrete options. That eliminates a lot of the paralysis that comes from having too many different options available. </p><p><strong>Theo: </strong>So why should people trust market makers? What if they resolve markets incorrectly on purpose?</p><p><strong>Stephen: </strong>The big thing is reputation. One of the nice things about our platform is not only do traders accrue a reputation for trading well on the platform, but market creators do as well. The better market creators not only resolve markets fairly and quickly, but they also do a better job of anticipating edge cases and having really well thought out resolution criteria, which is a skill. So it's not just not being a scammer. There's also an art in crafting markets such that the entire process is smooth and unambiguous. Our view is that over time, the market itself will select for creators who are better at doing that. We internally at Manifold will promote their markets more versus other markets with worse criteria. </p><p><strong>Theo: </strong>What do you think about derivatives on prediction markets? Is that a thing that needs to exist?</p><p><strong>Stephen: </strong>Prediction markets themselves are a kind of derivative contract on information or other real world financial assets.</p><p><strong>Theo: </strong>This is interesting.</p><p><strong>Stephen: </strong>Actually, one of the things I feel like Manifold's user base now is pretty high caliber. Immediately after we launched Manifold, we kind of blitzed through all different sorts of random derivatives on Manifold, which weren't really that useful directly, but were really cool demonstrations of different things that you could do. So we had immediately users created leverage prediction markets where you would do things like resolve NA and return money most of the time. But in some world, you choose like 1% of the time the market will resolve 100 times more or something or give you 100 times the payout, something like that. We experimented with volatility using other prediction markets as volatility swaps on other prediction markets where you can do that in a few ways by saying, will this prediction market trade outside of this range within this particular date? That's where you can extract volatility as a separate signal. There are a bunch of other stuff as well. I feel like eventually those will be useful for the biggest prediction markets on things where people are putting up huge amounts of money and want to hedge their risk. If you created a five dollar market with your friends, betting on who's going to win the next game of pickleball, maybe it's not so useful.</p><p><strong>Theo: </strong>On your LinkedIn, it says you used to be the founder of Rareroot, an online ginseng marketplace. Can you tell us a little more about that, like how you got the idea, why ginseng, why you moved on?</p><p><strong>Stephen: </strong>I was not expecting to be grilled about my past as a former humble ginseng merchant. This is a very long backstory. The first commercial vessel to ever set sail from America to China was loaded with several tons of American ginseng, and American ginseng is a separate species from Asian ginseng, indigenous to Appalachia, and closely associated historically with the fur trapping trade. Fur trappers like Daniel Boone would collect ginseng and sell it to these ginseng merchants who would then ship it overseas to China during the off-season for the fur trade. So there's this very long history of trade. China is flowing in the opposite direction of what you might think. The key facts about American ginseng today are that it wholesales for about $1,500 a pound for the simplest type of roots. Many Chinese people value roots that have very interesting or exotic shapes, which can be worth a significant multiple over the base wholesale price. The most expensive individual ginseng roots have sold at auction for $500,000 to a million dollars. Ginseng occupies the same cultural position that a really fancy bottle of wine would in the West. It's a thing you would give your boss if you don't know what else to give, and there are different gradations of fanciness that you can calibrate your gift to.</p><p>My random business idea was to try and become the Alibaba of American ginseng. I noticed that there were several layers of middlemen between the growers of American ginseng roots in Appalachia and the ultimate consumers in China. Ginseng is typically exported to Hong Kong and then smuggled over the border to mainland China to avoid taxes. It's then shipped out to the rest of mainland China from a small town in southern China where a bunch of Chinese medicinal products are located. I was trying to think about ways to disintermediate these layers of middlemen through a website. However, I realized that no one in the Chinese traditional medicine world operates at startup speeds and they're much more set in their ways than people in Silicon Valley. I ultimately realized it would probably take a decade to build a serious business in this domain and that there were a lot of other interesting things I could do instead. I did sell a little bit of ginseng, but I only had two or three sales total, so it wasn't a huge success.</p><p><strong>Theo: </strong>Now, let's take some questions from the audience.</p><p><strong>Audience Member: </strong>Why don't Americans consume American ginseng?</p><p><strong>Stephen: </strong>Well, they actually do. People in Appalachia do consume American ginseng. I've also heard that truckers in the south will sell ginseng at truck stops. The most common way that Americans would consume ginseng is in Arizona iced tea, although that's mostly Chinese ginseng, not American.</p><p><strong>Audience Member: </strong>The next question is about my views on cancel culture and prediction markets, and specifically my views on the Richard Hanania controversy.</p><p><strong>Stephen: </strong>Cancel culture is bad. If you want to help people, you should try to help them improve their views. Prediction markets can play an important role in getting people who believe incorrect things to believe better things. They provide a better calibrated picture of how the world works, which can help people improve and hold better beliefs. However, prediction markets won't tell you whether things are right or wrong. They will tell you whether people believe things are right or wrong or will believe them at some future date, but they won't address those questions directly.</p><p>As for Manifold's moderation policy, we have tremendous faith in random internet strangers to mostly do the right thing. We want Manifold to be culturally neutral and not enforce particular political sides or stances on issues. We prefer to allow as much ideological diversity as possible. We believe it's bad for social media platforms to impose any particular narrative. We're trying to operate as close to a free environment for anyone of any political persuasion as we can, within the limits set out by the law and other structural factors that we face as a business. Regarding the specific case of Richard Hanania, I think it's bad he was cancelled.</p><p><strong>Theo: </strong>What specifically was this controversy, for the audience?</p><p><strong>Stephen: </strong>The original thing that set off his cancellation was when it came to light that a decade previously, when he was a college student, he wrote a bunch of dumb articles under a pseudonym. Some journalists discovered that the pseudonym was him and released these in the future. He released some statements saying that he disavowed the dumb things that he used to believe and doesn't believe them. For most people, many believe really dumb things in college or as teenagers. I think it's important as a society to understand that people should not be held accountable or publicly punished as an adult for things that they believed as a teenager. I think it would be very bad for platforms like Manifold to take a strong stance against content like that.</p><p><strong>Theo: </strong>Do we have any more audience questions?</p><p><strong>Audience Member: </strong>Why aren't there more institutional types in the market? You mentioned before you think that would improve the market.</p><p><strong>Stephen: </strong>This is a great question. A lot of it is actually just regulation. If you're an investor and you invest in a regulated exchange and you lose money, that's understandable. If you're investing your limited partner's money in some exotic financial instrument that's unregulated or is offshore etc., if you lose money you're going to get sued. This basic factor prevents a lot of institutional capital from moving into unregulated domains. If there's enough money in this space then eventually that demand will emerge. Crypto is a good example of this. Crypto even right now is still not legally kosher everywhere or even in the U.S., but there's beginning to be more and more institutional capital pouring in just because the opportunities are there. </p><p>The other reason why there isn't more institutional money in prediction markets is just that there's not that much money in general. I think similarly to crypto, the trajectory that prediction markets and Manifold in particular will follow is that we're starting with the consumer use case. Once we get more consumers and retail trading volume on our platform, eventually over time, institutional capital will follow especially if that's accompanied by deregulation. </p><p><strong>Theo: </strong>Anyone else?</p><p><strong>Audience Member: </strong>I guess I have a question. Is there a role that journalists and media publications can have and maybe being incentivized to help resolve certain difficult questions or participate in that process? </p><p><strong>Stephen: </strong>Sure, even today on a lot of markets on Manifold, you'll see that a common type of resolution criteria that people will employ is deferring to mainstream media to decide the outcomes of markets, particularly in cases where outcomes are ambiguous and you need some independent neutralish source to make some sort of judgment call. </p><p>For instance, we had a market recently on whether in the Israel-Palestinian conflict there would be an invasion of Rafah. Invasion is actually a totally ambiguous term. There's no strict legal definition of invasion. If you created your own personal market on whether it was an invasion in your heart, people may not bet on that because they don't trust your ability to have a reasonable understanding of what that means. We've had several markets on whether the New York Times will call it an invasion. That's a good way to operationalize this really difficult fuzzy claim. </p><p>A lot of the work is doing things like that. Another type of pattern that people look for is for a general media consensus on something, which is usually an indication of fact in cases where, like U.S. presidential elections, typically are not disputed but the last one kind of was and perhaps other ones will be in the future. In a politically tumultuous time, being able to enumerate a list of different journalistic bodies and say if most of them say this then we're going to resolve according to that, provides a reasonable standard and baseline.</p><p><strong>Theo: </strong>I think we have time for one more. Yes?</p><p><strong>Audience Member: </strong>Is there any role for sweepstakes other than regulatory arbitrage? </p><p><strong>Stephen: </strong>Yeah, so the concept in American law that makes something a sweepstakes is this concept of alternative method of entry, which means you have to be able to enter into the sweepstakes without paying. If you have to pay to participate in the contest to win a prize then it's not a sweepstakes. </p><p>The key thing that makes sweepstakes good and fun relative to other types of mechanisms is that it allows free play. As I mentioned earlier, even in a world where there are totally deregulated real money prediction markets, I think we do want this space of play money prediction markets where anyone can participate. Insofar as sweepstakes are a way of achieving this, I think they're good and will continue and persist into the future.</p><p><strong>Theo: </strong>All right, well I think that's all the time we have, so thank you so much Stephen Grugett for coming here and doing this live interview with me at Manifest. Everyone go check out Manifold Markets at manifold.markets and yeah, I think this was great. </p><p><strong>Stephen: </strong>Yeah, thank you so much for having me.</p><h3>Part 2: Austin Chen</h3><p><strong>Theo: </strong>Welcome back to episode 16 of the Theo Jaffee podcast, part two, again live on day two of Manifest. Today I'm interviewing Manifold co-founder Austin Chen. First question...67 days ago on April 2nd, you officially left Manifold and in your farewell post, you gave four reasons for doing so. Manifold is stable and doesn't have much left to iterate on, you're not excited for the next steps including the pivot, prediction markets are insufficiently powerful, and short AI timelines muddle everything up. So far, the Manifold market has predicted an 8% chance you'll regret it in two years. I'm assuming you don't yet regret it, but do you have any more details to offer, especially on the prediction markets being insufficiently powerful?</p><p><strong>Austin: </strong>The prediction markets being insufficiently powerful is a point I've thought about many times throughout my tenure at Manifold. It was pitched to us as a revolutionary mechanism that would help us figure out what the future will hold and how to navigate the many decisions you have in the world. One thing I noticed pretty early on is that a prediction market can only tell you very few bits of information. It will tell you how likely the thing is from 0 to 100 percent, that's the main source of information that a prediction market by itself gives you. But you need a lot of bits of information to navigate the world. When you're making a decision like what policy, what feature should I implement for Manifold, it became very hard for us to use our own markets to figure out what we should do. </p><p>James, my co-founder, has thought of some pretty interesting mechanisms to try to get around this. If you look at a prediction market, most of the bits of information are in the question itself. So James thought to invert the traditional market structure and let people submit the questions and crowdsource the question creation part as well. That could hypothetically generate a lot more bits of information. But that mechanism hasn't proven out to generate really good policies, really good paths, really good plans for navigating the future. So I still think we're kind of at the drawing board with regards to how do we use these predictive mechanisms to make better decisions. </p><p>I was a true believer in the beginning that prediction markets can really help us act in the world. Now I still think that there's a good business in prediction markets, they provide fun, they provide a game that people enjoy betting on, but I'm less sure that these are the things that will help us navigate.</p><p><strong>Theo: </strong>If they can't provide foundational governance value to society, what areas do you think that prediction markets would actually be better than the alternatives for predicting?</p><p><strong>Austin: </strong>They're a pretty good aggregation mechanism, one that doesn't really exist in other areas. They can cohere all of that into a single point, like you can get a much better question of what does the world believe about whether Biden or Trump will win by having them bet on a prediction market than you can with a variety of other mechanisms. So I think the aggregation function of prediction markets is probably the most valuable one. Besides just aggregating all the data into a single percentage estimate, you also have people ask comments and bet back and forth, which are additional add-ons. They're not core related to prediction markets, but as you extend this functionality and people are all in the same location, you get additional benefits. </p><p><strong>Theo: </strong>So now you're working full-time or mostly full-time at Manifund, which is a very unique charity organization that has a whole bunch of unique features like, for example, re-granters. So you have people who you entrust a budget to, to let them donate money. How did you make some of the design decisions behind Manifund?</p><p><strong>Austin: </strong>A lot of them were based on my own experience as a grantee in the ecosystem. I've received some grants from the Long-Term Future Fund, for example, who can give a pretty small grant, and a Survival and Flourishing Fund, which can give a pretty big grant. I noticed a bunch of shortcomings. For example, they tend to not give you very much other feedback, rather than did you get the grant, did you not get the grant. There's not a lot of other data points to look at. You don't have a sense of what kind of grants they're actually looking for. Most of these have what are called open grant databases, but it's really just a single sentence, or maybe just the place where the grant went, and how much money it was. It doesn't tell you what was in the application, or what are the decision processes behind why the grantor decided to pay out to the grantee. So those are things that I wanted to fix with Manifund. </p><p>On re-granting specifically, re-granting was a mechanism that was really popularized by the FTX Future Fund.</p><p><strong>Theo: </strong>Now the FTX name has been tarnished, I would say. It was good for its time. I remember during the FTX glory days, when Sam Bankman-Fried was on Nas Daily and the Dwarkesh Podcast.</p><p><strong>Austin: </strong>I was perhaps the last SBF fanboy and still a die-hard. I guess your idols die very slowly.</p><p><strong>Theo: </strong>SBF did nothing wrong!</p><p><strong>Austin: </strong>However, I wanted to separate out the Future Fund from FTX itself, because one massive unfortunate shortcoming was that they tried to tie these two things together. Future Fund, I was pretty close with a lot of people running it, like Leopold Aschenbrenner, Avital Balwit, and I've spoken to some of the other people involved as well. They were just really good people. Good both in the competent sense, but also in the virtuous, trying to do good things for the world sense. For instance, on their re-granting program, they made the decision to not announce who their re-granters were in public, because they didn't want this thing becoming a weird status badge that would change the dynamics of the EA ecosystem. They didn't want to be seen as the ones awarding status. I thought that was one small example of a decision they made very thoughtfully.</p><p>The Future Fund did a lot of really cool things. One cool thing they did was the re-granting program, where people were just empowered to have individual budgets of something ranging from hundreds of thousands to millions of dollars, where they could more or less make a decision on a grant without having to get external approval from committees or things like that. I think Future Fund would just do a safety check, but then the re-granter basically had full discretion of how to spend the funds. This is actually a kind of thing that's very rare in the entire grant-making ecosystem. Most of the time, people think that if you have to give out money, you have to do it with a process. You have to do it very carefully. You have to have written-up concrete justifications for why this grant is being given to be accountable. Future Fund was like, no, let's throw this out the window. Let's just let people give out money. Let's do it really quickly. We're going to try to put an emphasis on getting money out the door very quickly. I think these were all really great things. Things that I had suffered a lot when I was a grantee and I really wanted to promote.</p><p>So Future Fund collapsed, but then at some point later, one of the people involved in Future Fund put me in contact with one of the donors and was like, hey, we think the re-granting program is still really good. Even though FTX isn't around, we might still open when the fund is. So then this anonymous donor gave us 1.5 million dollars last year and 1.5 million dollars this year to distribute to ASX re-granters. And they are making a lot of the grants in Manifund right now.</p><p><strong>Theo: </strong>So how do you choose re-granters and how similar are good re-granters or philanthropists to good investors?</p><p><strong>Austin: </strong>We actually didn't choose the re-granters in this case. Manifund views our role as more of a platform, a neutral platform. The grant maker, the person who provided funds, had about five or six people in the ASX space. We validated their picks. We looked over them, made sure they looked like they were going to be able to give grants in the ASX space. But we did not make the decision on who the re-granters were.</p><p><strong>Theo: </strong>How similar are good investors to good philanthropists?</p><p><strong>Austin: </strong>None of our re-granters are investors. All of them work in ASX basically full-time. You can see the list, but there are people like Leopold.</p><p><strong>Theo: </strong>Very impressive list.</p><p><strong>Austin: </strong>I think we worked pretty hard to find good people, but again it was up to this anonymous person whose identity is still not known to the rest of y'all. I think they already had some connections to these people and as a result that's how we got the list of re-granters in the first place.</p><p><strong>Theo: </strong>So why should someone donate to Manifund over something like GiveWell or Open Philanthropy?</p><p><strong>Austin: </strong>You can't even donate to Open Philanthropy, so that's one reason you need to give to Manifund. If you want to give away your money, OpenPhil won't take it, as far as I can tell. Unless you're Dustin Moskowitz, I guess. Then OpenPhil will take your money. GiveWell does take your funding. GiveWell only donates basically to projects in the global health and development space. That's mostly things in projects in Africa or other ways that they can find to help out humans very cheaply. So depending on what your worldviews are, if you believe that humans alive today are important but maybe less important than the welfare of animals because of the amount of funding in these spaces, you might want to give to a different animal welfare related fund. Manifund doesn't have too much of that. What Manifund does have a lot of is AI safety research. So insofar as you think that the future of humanity, humans living in the future are still a pretty neglected cause, you might think that giving to projects on Manifund would be good.</p><p>Right now it's not really the case that Manifund even accepts that many direct donations. Most of the time when you go to Manifund, you are screening the projects yourself. Manifund is kind of like a Kickstarter where you can just look at the project proposals and yourself decide, I think this is promising. I think this is a shot. I think I want to donate to this. This is actually I think closer to the roots of EA than GiveWell is today. Because today when you go to GiveWell, you kind of think like GiveWell is this one trusted institution. You can just give money to them and they will distribute it wisely. But back in the day when EA was just getting off theWhen GiveWell was just getting off the ground, there was no other trusted source they could look at. They had to make all their decisions themselves. So I would say that if you are in a position of trying to give some money, it's a good thing to make that decision yourself a little bit. Try to put yourselves in the shoes of a grant maker and try to evaluate whether a project that is about to go out will work. Are the founders good? Is the plan of impact good? This is the kind of thing that the people at GiveWell, way back when it was getting started, had to think a lot about. </p><p>Nowadays, EA has become a lot more institutionalized in a way that I don't quite like. In that you just try to guess who these people are who are smart. It's a little bit more political, a little bit more affiliation-based rather than doing your own research. So Manifund lets you do your own research and make your own decisions about what to fund. </p><p><strong>Theo: </strong>Earlier you were talking about Dustin Moskovitz. So how much power do people like Dustin Moskovitz and Cari Tuna and Jaan Tallinn have over the EA funding ecosystem? Is there a centralization risk there?</p><p><strong>Austin: </strong>This is a thing that lots of people in EA discuss. I don't think as a practical matter Dustin or Cari have that much direct influence because they don't make day-to-day governing decisions at OpenPhil. If they want to make a change they'll probably communicate it out to this 200 person organization. And that message then has to trickle down to all the different grant makers and people who support the grant makers at the OpenPhil institution.</p><p>OpenPhil plus Dustin and Cari who are maybe the largest voices at OpenPhil but still, I think less than 50% of the stuff that gets done by OpenPhil you would causally attribute to coming from the heads of Dustin or Cari. OpenPhil as a whole is a big influence in the EA ecosystem. Yawn I think does more direct thinking about what to invest in and does make those decisions. They're both big players. Manifund is trying to be the place where all the other smaller players can find all the other grantees and kind of set up the marketplace, the clearinghouse for that. </p><p><strong>Theo: </strong>How would Manifund's priorities change if its annual budget were a billion dollars or ten or a hundred?</p><p><strong>Austin: </strong>I like to think that I often think of Manifund just as Future Fund running the same-</p><p><strong>Theo:</strong> Future Fund 2.</p><p><strong>Austin: </strong>Yeah Future Fund 2, the Future Future Fund. I think their playbook looked pretty good. Their explicit first year was trying to run a bunch of experiments on different ways you could distribute funding in large amounts. That's why they were excited about the re-grantor program. It differs from the OpenPhil classic. You have program officers at OpenPhil who are just in charge of large budgets and make decisions relatively slowly. So future fund wanted to try a different approach with maybe a hundred different re-grantors with small budgets who can just make decisions very quickly. That's the kind of testing future fund did. That's the kind of testing I would do. </p><p>I'm pretty excited by something like impact certificates which is trying to set up a venture ecosystem for charity or effective altruism grants. So that's the first thing that I would try out but I'm not so committed to it. I think the meta strategy is try lots of things and see which ones work and scale them up and the first obvious thing I would try would be impact certificates.</p><p><strong>Theo: </strong>So how did Manifest manage to get such a high density of smart and interesting people and internet celebrities? You think it would just be selection plus being in the Bay Area but very few other places and gatherings are like this. So why Manifest specifically?</p><p><strong>Austin: </strong>I think what you might not see is the amount of time that our team, mostly me and Saul, have put into just sending out invites. We have this CRM of maybe 300 different people who we thought would be really awesome speakers and we spend a bunch of time just writing individual emails to all of them. </p><p>Some of it is taste in that basically Manifest is a gathering of a lot of the people who I think are the best writers the most interesting thinkers in the world and I've tried to invite them and try to position myself in their shoes and think what would make a good event for them well why would they be excited to come and pitch it to them. That often involves calling out a few of the other names. Name-dropping I guess a little bit which I don't feel that great about necessarily but I think it's a core human being. The first thing you think about when you're going to a party is who else will come to this party who else do I know at this party. So I try to highlight that for the speakers who I invite. </p><p>So that's a big part of it just the manual work of spending 15 minutes per person sitting down to write an email from scratch to invite them to come to this event that we're putting on. I think two other things that are working in our favor. One is that we ran Manifest last year and it was a really good event. And that just leads to word of mouth growth, more or less. People tell their friends, hey, Manifest was a great event. And that is so valuable. The virality of having created a good product. They say if you build a better mousetrap, people will beat a path to your house. And I've seen that with Manifest, I think. So many people have said, oh, my friend told me about Manifest. That was so great last year.</p><p><strong>Theo: </strong>I couldn't miss it this year. I experienced massive FOMO last year when it was happening, so I had to make sure to be here all summer so that I could make it to Manifest.</p><p><strong>Austin: </strong>I'm glad that you work here and I'm grateful to have you here.</p><p><strong>Theo: </strong>It was well worth the $200.</p><p><strong>Austin: </strong>$200? Oh, because you got a student discount. There's a separate digression about how we use pricing discrimination a lot at Manifest. For the speakers, there's basically a negative price. We will pay for some of their flights and housing. For students, we try to make it relatively cheap. I try to charge people who have a lot of money a lot more. That goes into the economics of making something viable. </p><p>Just to finish answering your question, I think the last part about how a lot of really cool people come to Manifest is that we kind of lucked out with prediction markets as a topic. It turns out that many of the really smart people in the world just think that prediction markets could be cool, at least. They're kind of open to weird mechanisms. And this is pretty differentiated. A lot of the rest of the world has not heard about these things. So it turns out that running a conference just on prediction markets will draw out the right crowd. I don't know if this will sustain, especially if Manifold actually succeeds in growing. I don't think you could do such a good conference on blogging, for example. But we'll see.</p><p><strong>Theo: </strong>Because it's just not differentiated enough?</p><p><strong>Austin: </strong>I think so, yeah. Or especially like podcasting, or TikTok, or something. And I don't think those would be nearly as highly selected for interesting people.</p><p><strong>Theo: </strong>Earlier, we were talking about Sam Bankman-Fried and how SBF did nothing wrong. Stan SBF. So how much do you think the status of EA has been damaged by him?</p><p><strong>Austin: </strong>Quite a bit. I don't know if I'm the best person to answer this kind of question. I'm relatively new to EA. I only got into the space a couple years ago. My sense is that it's just much harder to be unapologetically EA. You always have to caveat with, oh, but the SBF thing, et cetera, et cetera. And I do know that I feel less intellectually excited by EA, either its ideas or its participants nowadays, than I did two years ago. And I'm not sure if that's because of the SBF thing, or they were just co-coinciding for other reasons. </p><p><strong>Theo: </strong>So a lot of what you do at Manifund revolves around future of humanity kind of stuff, including AI and AI existential risk and safety. So how have your views on open AI and AI in general, especially because you just did a podcast on OpenAI, but how have they changed since the Leopold Aschenbrenner piece the other day, if at all?</p><p><strong>Austin: </strong>Unfortunately, the Leopold piece dropped during Manifest, so I haven't read most of it. I don't think my views have shifted that much. But yeah, again, I just haven't really read it in depth and only skimmed it. Hard to answer. </p><p><strong>Theo: </strong>What about all the OpenAI drama over the last couple of weeks? You wrote on the podcast page a specific note about, oh, this was written before, like this, and this, and this, and this, this.</p><p><strong>Austin: </strong>I am maybe an apologist for Sams everywhere, not just Sam Bacon for you, but also Sam Altman in this case. I kind of, maybe because I've spent some time in the role of somebody similar to SBF or Sam Altman as an executive of a startup that was growing and had to make decisions, I kind of see reasons why things that look bad in hindsight, such as the massive fraud of Sam Bankman-Fried or the NDA thing with Sam Altman, weren't really that attributable to the leaders. With the fraud thing, I think I was probably wrong at the time. I now put a lot more weight on the fact that SBF knew what was going on, and that was bad. So I made a couple there. But with the Sam Altman thing, I'd say with the NDAs, as an executive, there's so many things that you're trying to do all at the same time. You often don't have that much time to go into the details for each one. You don't know in advance that, oh, this NDA thing is going to be bad or good, and it's going to blow up. You make 100 of these small decisions every single day. So it's not that surprising to me that this kind of thing would slip past Sam's radar.</p><p><strong>Theo: </strong>So in EA a lot, there's this concept of hinginess, which relates to whether it's better to donate money now or invest money for the future. So do you think this decade has greater hinginess than other decades? So will money donated now, if it's a pivotal moment in AI or something, have more of an impact than other times?</p><p><strong>Austin: </strong>I tend to think so. But I also have kind of direct financial incentives that would lead me to think so, which is roughly that on Manifold we take a 5% transaction fee of donations that happen. So it would be better for the Manifold budget if a lot more people donated now as opposed to wait a few years, something like that. So yeah, it is not a topic that I thought that deeply on. As far as we can tell, we get funding and we're pretty much given the mandate to spend this funding within a year or so. So the question of higher level portfolio allocation, should you try and save up more to donate in the future versus not, is not a thing I've spent a lot of time on. </p><p><strong>Theo: </strong>Why aren't more philanthropic organizations open? Especially the ones that have "open" in their name, like Open Philanthropy. Is it a naming curse, like Open AI?</p><p><strong>Austin: </strong>I think being open isn't a huge pressure on philanthropic organizations. There is some pressure, but it's mostly to be open to your donors, not as much to the general public. It's common for a philanthropic organization to host a dinner event where they talk about what they've been up to, but it's not as important to publish a blog post or a YouTube video to get the same message across. This is because the lifeblood of philanthropic organizations is donations, so a lot of it is optimized for the donor flow.</p><p>Effective Altruism (EA) probably does somewhat better at this than most other philanthropic organizations. They try not to consider the donor the end user, but rather, the recipient of the good stuff, the person whose utility is being maximized. So philanthropy is very difficult in some sense, much more difficult than regular capitalism, because you have to deal with three competing parties: your donors, the people who are doing the work, the grantees, and then the recipients of the good stuff. With typical capitalism, the people who are receiving the good stuff and the people who are paying you, the donors, are the same people. So you have more of a tight feedback loop. You know that whether or not the good stuff is actually happening, because they will keep paying you money if it is and stop paying you money if it's not. </p><p>There's a bunch of things in philanthropy, and it does lead to all kinds of weird things, such as the incentive to, not necessarily incentive, but just lack of incentive to talk more about what's going on. I mean, it is also the case that many capitalistic institutions are just not that open. Like, most companies are closed source, for example. They don't publish most of what goes on internally. They view that as a differentiating advantage. </p><p><strong>Theo: </strong>What do you personally think are some of the best projects that Manifund has funded, and why those specifically?</p><p><strong>Austin: </strong>I'm biased, because as a re-grantor myself, I usually pick out the ones that I put money into. One that comes up is Lumina Probiotics, the tooth bacteria thing. That one, we were very early on. It was before even Aaron had secured the sequencing for this. He came on to Manifund and was like, hey, I think there might be an opportunity to get this bacteria and then give it to a bunch of people. I think this could be a good charity. I think this could be a good business. And then we actually invested in that very early on. I think the fact that it is so well-known and widespread, at least within the rationalist EA community today, is kind of like a success in re-granting stock picking, I guess.</p><p>I think most of the grants that are on Manifund I think are pretty good. But here is another issue with philanthropy. I don't actually have that much expertise in AI safety. My expertise is in startups and building websites and technology. We are trying to run this grant-making program on behalf of people in AI safety. So my sense of whether our grants that we've been giving out have been good is mostly just based on second-hand reports. Do the people who I respect think that the grants are good? And they mostly do. Our donor thought they were good enough to want to renew the program for a different year. That is, I think, the strongest signal I have that we're doing something worthwhile. Otherwise, it is hard to say. Especially with AI safety specifically, it is such a field of really long feedback loops. In some sense, did the world explode or not? We won't know for another five years. And the projects that we're working on in the meantime, did they constantly affect that or not? Very hard to say.</p><p><strong>Theo: </strong>Would you fund Lumina if you knew that AGI, benevolent AGI was coming in five years that would be able to cure mouth diseases itself?</p><p><strong>Austin: </strong>That's a great question. I think yes, because I think concrete wins are really important of the kind that Lumina has basically already delivered. It's still a little bit hard to say how effective the bacteria is because it hasn't gone into a lot of people's mouths for a trial. But I think winning is just super important. And insofar as Aaron builds up the skills set of being able to market a thing and promote it and share with a bunch of people, I think that will be robustly useful in the coming AI future more or less. So I think helping him accomplish this goal will also mean that in two years, if AI stuff is going nuts, he will have a lot of the capacity, resources, network, talent to be able to help out with that. I think he actually wrote in his management application that he would prefer to be doing something, something AI safety, because he thinks that's more important. But this is just such an obvious low-hanging, dumb thing that society is dropping, the fact that we should not have cavities at all, that someone should go do this. And he was the one who thought about that.</p><p><strong>Theo: </strong>I wonder what other low-hanging fruits are just sitting there like that.</p><p><strong>Austin: </strong>Yeah, I think chasing that is actually probably much better for the next for a smart EA person who's trying to figure out what to do, trying to upskill in AI safety could be a good option.</p><p><strong>Theo: </strong>Of course, if everyone is doing AI safety and there's no one left doing anything else of value, then that would be a problem.</p><p>Are there any questions from the audience? </p><p><strong>Audience Member: </strong>I would say having observed Manifold and Manifund, a lot of your success seems to stem from the fact that you guys execute really fast and move quickly. What have you learned? I guess it's a two-part question. How do you think you guys got off the ground and were able to execute so quickly? And then, what have you learned in the process that you've iterated just taking action as an entrepreneur? </p><p><strong>Austin:</strong> It's interesting because I don't even really think of Manifold as moving fast. I just think of most other software organizations as moving slowly for some hard-to-describe reason. </p><p>We picked some really good winners with regards to the technology early on. We started with Next.js, we started with Firebase. These were both tools that helped us iterate very quickly. We happened to have a fair amount of skill in these before starting Manifold. I had been working on an online board game for a long time before this. James and Stephen had been working on another React site before this. So we came into this with experience launching websites and startups. I think that helped us maintain a very high development velocity.</p><p>I do think that software is just not that fundamentally difficult. It is a field where you can iterate very quickly, and we leveraged that a lot. So that's on the building side. Then there's also the feedback loop side. We took the YC advice of talking to users to heart. We have a Discord channel and a Discord server where a lot of our power users hang out. They talk to us. Whenever something goes wrong, we find out about it very quickly, or we can ask them about things all the time. </p><p>The Manifold site itself is also another way where people can talk to us because this is a particular nature of building a social network. We're just on it all the time, and just by being on the Manifold site, people can create prediction markets about Manifold. A lot of this was how we got to very fast iteration early on. Just knowing that it's possible, I guess doing things that you have familiar clarity with, that's all helped with the execution.</p><p><strong>Theo: </strong>Just ship, that's the key. Well, thank you so much for coming on the show.</p><p><strong>Austin: </strong>Thank you so much, Theo.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Stephen Grugett and Austin Chen. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts. Follow me on Twitter at Theo Jaffee, and subscribe to my sub stack at theojaffee.com. </p><p>Some of my biggest takeaways from this episode include, prediction markets work better the more large and liquid they are. It's fundamentally hard to apply them to certain areas like dating. And there's a lot of room for innovation in philanthropy, like what Manifund does. </p><p>Be sure to check out Manifold Markets at manifold.markets, Manifest at manifest.is, Manifund at manifund.com, Manifold's Twitter at Manifold Markets, Manifund's Twitter at Manifund, and Austin's Twitter at akrolsmir. All of these will be linked in the description. </p><p>I had a great time at Manifest, and really enjoyed doing a live, in-person interview with an audience. I hope to do more soon. Thank you again, and I'll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[Manifest Manifested]]></title><description><![CDATA[One of the best weekends of my life at the best conference in the world.]]></description><link>https://www.theojaffee.com/p/manifest-manifested</link><guid isPermaLink="false">https://www.theojaffee.com/p/manifest-manifested</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Mon, 10 Jun 2024 22:49:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I just got back from <a href="https://www.manifest.is/">Manifest 2024</a>, one of the best weekends of my life and easily the best conference I&#8217;ve ever been to. Calling it a &#8220;conference&#8221; or &#8220;unconference&#8221; or even a &#8220;festival&#8221; doesn&#8217;t cut it - it was more of a cross between a summer camp and a family reunion. Though ostensibly about prediction markets, it was really about everything intellectual. Steve Hsu <a href="https://x.com/hsu_steve/status/1799555617725796757">describes it best</a>: &#8220;Woodstock for nerds&#8221;.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1K7q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1K7q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png 424w, https://substackcdn.com/image/fetch/$s_!1K7q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png 848w, https://substackcdn.com/image/fetch/$s_!1K7q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png 1272w, https://substackcdn.com/image/fetch/$s_!1K7q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1K7q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png" width="1200" height="1146" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1146,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2016264,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1K7q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png 424w, https://substackcdn.com/image/fetch/$s_!1K7q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png 848w, https://substackcdn.com/image/fetch/$s_!1K7q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png 1272w, https://substackcdn.com/image/fetch/$s_!1K7q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa77f050-be98-4d7f-9850-a0f61c85e5db_1200x1146.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The secret behind Manifest&#8217;s unparalleled social atmosphere is the people it attracted. Every single person I talked to, without exception, was both smart and interesting. I could walk up to people I had never met before and instantly insert myself into a conversation on ancient Greek military history, prediction markets applied to romance, or whether AI will end the world. I could look across the room, realize &#8220;oh wow that&#8217;s so-and-so from Twitter!&#8221;, introduce myself, and immediately become friends with them. To quote my new friend Matthew Adelstein of <a href="https://benthams.substack.com/p/reflections-on-manifest">Bentham&#8217;s Bulldog</a>:</p><blockquote><p>In the normal world, people like me who obsess over philosophy, worry about AI and existential risks, know who Eliezer Yudkowsky and Scott Alexander are, and read blogs are a minority. In the real world, when I tell people that I have a blog where I write about niche philosophy topics, I&#8217;m a bit embarrassed, a bit like one might be showing off their Lego collection. At Manifest, however, it became real, on a visceral level, that there were people like me, that we&#8217;re not some kind of weird alien offshoot from the human population. This may sound like a bit of an exaggeration, and perhaps it is, but it&#8217;s hard to overstate just how profound it is to realize that there are other people like you&#8212;that they&#8217;re not just internet-floating heads, but real, flesh and blood people.</p></blockquote><p>(Byrne Hobart also <a href="https://x.com/ByrneHobart/status/1799963459658154203">wisely observed that</a> &#8220;The Manifest conference has been a successful experiment: put enough introverts with common interests into a confined space and they&#8217;ll spontaneously turn into extroverts.&#8221;)</p><p>Manifest managed to create that magical feeling of serendipity, where you can flow through a space, passing from conversation to conversation, contribute to each one in turn, and have others do the same for you. One of the most heartwarming things was multiple people coming up to me to tell me that they listen to my podcast and that they like it. One person even said they listened to every episode. I&#8217;ve largely optimized for guests so far, but it&#8217;s truly special to see that real people enjoy what I make.</p><p>Manifest&#8217;s other draw is the high density of Twitter celebrities, all gathered in one place, and all easily accessible. At one point, I thought to myself, &#8220;what a fantastic venue this is, I&#8217;d love to talk to whoever set it up&#8221; and then realized <a href="https://www.lightconeinfrastructure.com/">Lightcone</a> CEO <a href="https://x.com/ohabryka">Oliver Habryka</a> was sitting right next to me.</p><p>I finally got to meet Dwarkesh Patel, and we talked about our podcasts. I talked with Scott Alexander about why blog posts are such a great medium for information, and what to do in Japan. I talked with <a href="https://x.com/Aella_Girl">Aella</a> and <a href="https://x.com/So8res">Nate Soares</a> about creating communally raised genetic superbabies to solve the AI alignment problem. I got to compliment <a href="https://x.com/AgnesCallard">Agnes Callard</a> on her famously colorful outfits. I got Eliezer Yudkowsky in my BeReal and then discussed AI scaling with him, then asked <a href="https://x.com/robbensinger">Rob Bensinger</a> why there&#8217;s no printed copy of the Sequences<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. I chatted with <a href="https://x.com/hsu_steve">Steve Hsu</a> (apparently a fan of my podcast!) about <a href="https://en.wikipedia.org/wiki/Dominic_Cummings">Dominic Cummings</a> and governance in the UK. I got to hold <a href="https://www.telegraph.co.uk/family/life/pronatalists-save-mankind-by-having-babies-silicon-valley/">Simone and Malcolm Collins</a>&#8217; tiny two-month-old daughter Industry Americus Collins while talking with them, <a href="https://x.com/s_r_constantin">Sarah Constantin</a>, <a href="https://x.com/oscredwin">Andrew Rettek</a>, Robin Hanson, and <a href="https://x.com/razibkhan">Razib Khan</a> about raising kids. I spoke with <a href="https://x.com/tracewoodgrains">Tracing Woodgrains</a> about the word &#8220;progressivism&#8221; in politics, and with <a href="https://jonathan-anomaly.com/">Johnny Anomaly</a> about how I explained &#8220;eugenics&#8221; to my cousin. I talked to <a href="https://x.com/liron">Liron Shapira</a> about whether I should drop out of college. I debated with <a href="https://x.com/barakgila">Barak Gila</a> about whether Trump or Biden is better from a pro-tech, pro-growth <a href="https://eriktorenberg.substack.com/p/what-does-the-gray-tribe-want">Gray Tribe</a> perspective, and got a selfie with <a href="https://x.com/richardhanania">Richard Hanania</a> after thanking him for his tireless battle against anti-semites online. I watched <a href="https://x.com/cremieuxrecueil">Cr&#233;mieux</a> absolutely carry my team in trivia. I introduced <a href="https://x.com/robertskmiles/status/1764714628934709719">Rob Miles</a> to <a href="https://www.impulselabs.com/">Impulse Labs</a>, and showed <a href="https://ng.cba.mit.edu/">Neil Gershenfeld</a>&#8217;s work on self-replicating machines to <a href="https://x.com/genfabco">General Fabrication</a> CEO Matt Parlmer. I talked to <a href="https://x.com/trishume">Tristan Hume</a> about AI interpretability, and to <a href="https://x.com/KatjaGrace">Katja Grace</a> about why my p(doom) has gone down in the last year. I hung out with <a href="https://x.com/ByrneHobart">Byrne Hobart</a> and <a href="https://x.com/SamoBurja">Samo Burja</a> at Curtis Yarvin&#8217;s house. I talked with Dwarkesh and <a href="https://x.com/jkcarlsmith">Joe Carlsmith</a> about power-seeking AI. I got plenty of time with the Manifold team: Stephen and James Grugett, Austin Chen, Rachel Weinberg, and Saul Munn, who did an amazing job from start to finish.</p><p>Then there were all my friends, both old and new: <a href="https://x.com/zagrebbi">Werner Zagrebbi</a> (who I&#8217;ve known since second grade), <a href="https://x.com/mortonSATgirl">Kosher Salt</a>, <a href="https://x.com/TheRevAlokSingh">Alok Singh</a>, <a href="https://derikk.com/">Derik Kauffman</a>, <a href="https://x.com/tessybarton">Tessa Barton</a>, <a href="https://benthams.substack.com/">Bentham&#8217;s Bulldog</a>, <a href="https://x.com/maxflowminclout">Whiteboard Programmer</a>, <a href="https://manifold.markets/LiamRobins">Liam Robins</a>, <a href="https://mtabarrok.com/">Max Tabarrok</a>, <a href="https://x.com/psychosort">Brian Chau</a>, <a href="https://x.com/jam3scampbell?s=21">James Campbell</a>, <a href="https://x.com/romanhauksson?s=21">Roman Hauksson</a>, <a href="https://x.com/Halikaarn1an">Nick Simmons</a>, and Topher Colby.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YClo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YClo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!YClo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!YClo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!YClo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YClo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg" width="520" height="390" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:520,&quot;bytes&quot;:3169971,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YClo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!YClo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!YClo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!YClo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F856a4d63-8ad0-4679-9cea-b950f8f41147_4032x3024.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Gathering a minyan for Shabbat services</figcaption></figure></div><p>The events were great too. A couple of my favorites included an impromptu Shabbat service on Friday evening, Alok Singh&#8217;s <a href="https://www.youtube.com/watch?v=YP-iTs5m3X0">explanation</a> of how you can take the derivative of a discontinuous function at the discontinuity, Johnny Anomaly&#8217;s talk on genetic screening, Scott Alexander discussing forecasting with Nate Silver, and a debate between Holly Elmore (affirmative) and Brian Chau (negative) on AI pause. On Saturday night, there was a &#8220;night market&#8221; that was really more of a random exchange session (this is a common theme at Manifest). I won 1000 mana in a 1v1 trivia contest and traded a piece of knowledge<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> on a Post-It note for a small obsidian prism. The next night, Aella and Nathan Young ran the <a href="https://manifold.markets/RickiHeicklen/who-will-win-the-miss-alignment-cos">&#8220;Miss Alignment&#8221; costume contest</a>, with people dressed up as various AI-related memes.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8IiY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8IiY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg 424w, https://substackcdn.com/image/fetch/$s_!8IiY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8IiY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8IiY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8IiY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg" width="640" height="480" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:640,&quot;bytes&quot;:1184815,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8IiY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg 424w, https://substackcdn.com/image/fetch/$s_!8IiY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8IiY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8IiY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075cef51-6f55-4d32-b396-c85178ae5dd6_2198x1648.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Nathan is the one in the gold dress. He absolutely slayed it if you ask me.</figcaption></figure></div><p>I was also lucky enough to get to interview Manifold co-founders Stephen Grugett and Austin Chen live. Both interviews were recorded and will be up on all platforms this week.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jEcm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jEcm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png 424w, https://substackcdn.com/image/fetch/$s_!jEcm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png 848w, https://substackcdn.com/image/fetch/$s_!jEcm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png 1272w, https://substackcdn.com/image/fetch/$s_!jEcm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jEcm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4559241,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jEcm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png 424w, https://substackcdn.com/image/fetch/$s_!jEcm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png 848w, https://substackcdn.com/image/fetch/$s_!jEcm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png 1272w, https://substackcdn.com/image/fetch/$s_!jEcm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc1bf156-3e99-4952-899a-3f384b5c037b_2542x1420.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Preview of Manifold co-founder Stephen Grugett live on the Theo Jaffee Podcast!</figcaption></figure></div><p>The venue fit the occasion perfectly. <a href="https://www.lighthaven.space/">Lighthaven</a> is a complex of buildings on the site of the now-defunct Rose Garden Inn in Berkeley. When it&#8217;s not being used as an event space, it&#8217;s the working headquarters of <a href="https://www.lightconeinfrastructure.com/">Lightcone Infrastructure</a> and home to many rationalists - a mix of village, hacker house, WeWork, and resort. Lighthaven has six buildings, each with their own unique character - from the wide open, more modern Aumann Hall to the darker, Gothic Bayes Hall. It&#8217;s absolutely bursting with places to sit and gather - giant indoor and outdoor sitting areas, more intimate upstairs salons, porches, roof decks, outdoor gazebos, an amphitheater, and even a small geodesic dome. Most rooms are tastefully appointed with soft carpets, low-to-the-ground seating, incredibly well-selected books, and ample natural lighting. Everything has variety, even the green spaces. The huge astroturfed green of Rat Park, meant for large gatherings, contrasts beautifully with the more contemplative Walled Garden, with its trees, flowers, and places to read, do work, or nap.</p><p>Lighthaven is the perfect incarnation of the principles of <em><a href="https://en.wikipedia.org/wiki/A_Pattern_Language">A Pattern Language</a></em>. Most event venues feel dead - conference centers with square rooms, boring colors, folding chairs, artificial lighting, and ancient nylon carpets and vinyl walls. Lighthaven feels alive. Complete. Whole. It possesses Christopher Alexander&#8217;s &#8220;<a href="https://en.wikipedia.org/wiki/The_Timeless_Way_of_Building">quality without a name</a>&#8221;. You can read the <a href="https://www.lesswrong.com/posts/HJNtrNHf688FoHsHM/guide-to-rationalist-interior-decorating">Guide To Rationalist Interior Decorating</a>, or look at the photos of Lighthaven on its <a href="https://www.lighthaven.space/">website</a>, but much like Manifest, the only way to really understand the vibes is to be there.</p><p>To everyone who made Manifest such a great experience, thank you. I can&#8217;t wait to be back next year.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.theojaffee.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Theo's Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>TL;DR: because it&#8217;s really long and needs to be edited down, early attempts to do this ran into issues, and MIRI and Lightcone are busy with AI alignment anyway.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Did you know that the rulers of the three most important countries in World War I: King George V of Great Britain, Tsar Nicholas II of Russia, and Kaiser Wilhelm II of Germany, were all first cousins, and their grandmother was Queen Victoria?</p></div></div>]]></content:encoded></item><item><title><![CDATA[#15: Perry Metzger]]></title><description><![CDATA[Extropians, Nanotech, AI Optimism, and the Alliance for the Future]]></description><link>https://www.theojaffee.com/p/15-perry-metzger</link><guid isPermaLink="false">https://www.theojaffee.com/p/15-perry-metzger</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Tue, 14 May 2024 05:02:25 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/144609720/03cfd5348e1298ca483934f8ba4833ae.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Perry Metzger is an entrepreneur, technology manager, consultant, computer scientist, early proponent of extropianism and futurism, and co-founder and chairman of the Alliance for the Future.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>0:47 - How Perry got into extropianism</p><p>7:04 - Is extropianism the same as e/acc?</p><p>9:38 - Why extropianism died out</p><p>12:59 - Eliezer Yudkowsky</p><p>17:19 - Perry and Eliezer&#8217;s Twitter beef</p><p>19:46 - TESCREAL, Baptists and bootleggers</p><p>22:34 - Why Eliezer became a doomer</p><p>28:39 - Is singularitarianism eschatology?</p><p>37:51 - Will nanotech kill us?</p><p>45:51 - What if the offense-defense balance favors offense?</p><p>53:03 - Instrumental convergence and agency</p><p>1:05:35 - How Alliance for the Future was founded</p><p>1:12:08 - Decels</p><p>1:15:52 - China</p><p>1:25:52 - Why a nonprofit lobbying firm?</p><p>1:28:36 - How to convince legislators</p><p>1:32:20 - Can the government do anything good on AI?</p><p>1:39:09 - The future of Alliance for the Future</p><p>1:44:22 - Outro</p><p></p><h3>Links</h3><p>Perry&#8217;s Twitter: <a href="https://x.com/perrymetzger">https://x.com/perrymetzger</a></p><p>AFTF&#8217;s Twitter: <a href="https://x.com/aftfuture">https://x.com/aftfuture</a></p><p>AFTF&#8217;s Manifesto: <a href="https://www.affuture.org/manifesto/">https://www.affuture.org/manifesto/</a></p><p>An Archaeological Dig Through The Extropian Archives: <a href="https://mtabarrok.com/extropia-archaeology">https://mtabarrok.com/extropia-archaeology</a></p><p>Alliance for the Future: </p><p><a href="https://www.affuture.org/">https://www.affuture.org/</a></p><p>Donate to AFTF: <a href="http://affuture.org/donate">affuture.org/donate</a></p><p>Sci-Fi Short Film &#8220;Slaughterbots&#8221;: </p><div id="youtube2-O-2tpwW0kmU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;O-2tpwW0kmU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/O-2tpwW0kmU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>More Episodes</p><p>YouTube: <a href="https://tinyurl.com/57jr42wk">https://tinyurl.com/57jr42wk</a></p><p>Spotify: <a href="https://tinyurl.com/mrxkkhb4">https://tinyurl.com/mrxkkhb4</a></p><p>Apple Podcasts: <a href="https://tinyurl.com/yck8pnmf">https://tinyurl.com/yck8pnmf</a></p><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:01)</p><p>Hi, welcome back to episode 15 of the Theo Jaffee podcast. We're here today with Perry Metzger.</p><p>Perry Metzger (00:06)</p><p>Hello?</p><p>Theo Jaffee (00:09)</p><p>So you've been into futurism, extropianism, and the like for a very long time, several decades, starting in like...</p><p>Perry Metzger (00:16)</p><p>35 years, maybe a little more depending on how you count it. Long enough that, you know, that one starts to know almost everyone and have seen almost everything.</p><p>Theo Jaffee (00:30)</p><p>So how did you first get into this scene?</p><p>Perry Metzger (00:32)</p><p>I think so. I was an undergraduate at Columbia in the 1980s and someone posted a book review of this book by Eric Drexler called Engines of Creation. And, you know, and I went out and I got a copy of the book and weirdly it meshed with all sorts of thoughts I had had as a student.</p><p>You know, once biotechnical, so, you know, in the 70s, you know, it was not unusual, for example, to have a Time Magazine cover about Genentech and how they, you know, were commercializing genetically engineered bacteria to produce, you know, to produce things like insulin and human growth hormone or what have you, which at the time was like, you know, this shocking thing. And I...</p><p>I started thinking at the time, well, gee, you know, you have these things that can manipulate things at the molecular level. You know, could you use them to make computers? Could you use them to build macroscopic objects? You know, I mean, we have trees, we have, you know, we have plants, we have whales, you know, why, why couldn't you do, you know, crazy things with biology like that? And I'd put that in the back of my mind. And then I encountered Eric Drexler and I encountered.</p><p>FM S. Van De Are and, and, and, and, you know, the book, True Names by, damn it, I'm having a, I'm having a senior moment. But the, the gentleman who coined the term singularity, Vernor Vinge Yeah, he pronounced, he preferred Vinge. I only met him.</p><p>Theo Jaffee (02:18)</p><p>Vernor Vinge? Vinge. Okay.</p><p>Perry Metzger (02:25)</p><p>you know, the one time, but it was a very fun multi -hour conversation. You know, I, you know, yeah, a sad thing that he's gone. Read a few books by him, read a bunch of other stuff. And one day, this was a while after I got out of school, my buddy Harry and I were hanging around at his apartment and he had...</p><p>Theo Jaffee (02:31)</p><p>Rest in peace.</p><p>Perry Metzger (02:54)</p><p>he'd recently gotten divorced and the way that he was entertaining himself was subscribing to a lot of zines. And these days, of course, no one remembers what these things were, but it used to be that a lot of people, you know, got their ideas into the world by, by basically making their own magazines on, you know, by Xeroxing up things and, and, and. Selling them to each other. And in, if you got one zine, it almost always had an ad for 20 more.</p><p>And he encountered this particular one called Extropy by a bunch of graduate students in Southern California. And the next thing you know, I was running a mailing list for people who were interested in the topics covered in Extropy. And the next thing you know, after that, we have a few hundred really, really interesting people from.</p><p>Theo Jaffee (03:46)</p><p>So you ran the Extropian mailing list?</p><p>Perry Metzger (03:49)</p><p>I started the Extropians mailing list. Yeah. It was a very heady sort of time. We had all sorts of cool people, Carl Feynman, Hal Finney. Hal, unfortunately, is dead now too. And Robin Hanson Yes, Robin and I have known each other since back then. And it's scary to think of how long back then was.</p><p>Theo Jaffee (03:51)</p><p>Wow.</p><p>my last podcast guest.</p><p>Perry Metzger (04:18)</p><p>but lots and lots of very interesting people suddenly popped up and it was one of the best floating cocktail conversations for a few years that I've ever participated in. Lots and lots of very interesting ideas being bandied about for quite a while. Unfortunately, it also had certain mutant offshoots as seen before one these days. But for the most part, it was a very, very cool time.</p><p>and a very cool bunch of people and I was very glad to hang out with them. You know, Tim May was one of our subscribers and he and a bunch of other people ended up going off to start the Cypherpunks movement, which I also got into and I ended up running a spin -off of the Cypherpunks mailing list called the Cryptography mailing list, which I still, you know, still exists. And I think I'm notorious to certain other people.</p><p>For having shut down the first conversation about Bitcoin online Because it was getting repetitive and we had rules against that But you know if I show up at in in certain cryptocurrency Circles, you know at various conferences or what -have -you You know, some people are like, you're the guy who shut down the first conversation of unbit coin and the answer is yes. Yes I am You know and more recently, you know, I've I've been involved in</p><p>you know, a lot in AI policy, not that I wanted to be involved in AI policy. I hate being involved in almost anything with the word policy attached to it. But it turns out that that although you might not care about the government, the government will care about you either way. And and so it's become necessary to do something about that. You know, I was involved a bunch in cryptography and cryptography policy when that was a much more controversial topic. So.</p><p>I suppose this sort of thing is not entirely surprising.</p><p>Theo Jaffee (06:18)</p><p>So when I was prepping for this podcast, I read through a bunch of, extropian stuff, the extropian principles and the like 1994 wired profile on extropians. And there was one thought that struck me the whole time, which is, holy crap, this is like identical to e/acc today, Effective Accelerationism. So is it literally just identical? Are there any substantive differences? Is it just a pure reincarnation?</p><p>Perry Metzger (06:36)</p><p>Yeah.</p><p>I think that a lot of people are older and there's also, I think, certain political differences. I think that the extropians were much more explicitly libertarian for a while. But I think, yeah, in substance, it's sort of the predecessor of e/acc in a lot of ways. It's amusing and kind of cool.</p><p>to see new people picking up the ideas and running with them. I've been kind of pleased by it. It's also been kind of cool getting to know a bunch of people as a result of the fact that all of this has gotten recycled. But yeah, your observation isn't wrong.</p><p>Theo Jaffee (07:28)</p><p>So why do you think Extropianism died out then? Or...</p><p>Perry Metzger (07:32)</p><p>It didn't. It's just that, you know, one of the things you learn when you're in enough of these long -term conversations that happen is that all of them are sort of like parties. And parties end at some point. If you, the party that goes on for six or eight days, eventually the guests get exhausted, start smelling bad, you know, run out of hors d 'oeuvres.</p><p>Theo Jaffee (07:35)</p><p>it evolved.</p><p>Perry Metzger (07:59)</p><p>and really desperately want to go home and maybe take a shower. All of these things end up being bunches of people that are interested in particular things and get enthusiastic about them and push hard. I mean, but the consequences of these things when the, you know, the influence of these things moves on. I mean, there were all of these really influential, you know, home computer clubs in the Bay Area.</p><p>in the 1970s and you ask, well, what happened to all of them? And what happened to all of them is that we all have home computers now. We don't even think of them as home computers. They're just computers. Lots of these movements have a moment where they flower and the ideas end up spreading in the world and then everyone moves on and does other things and it's cool.</p><p>Theo Jaffee (08:52)</p><p>Well, what you were talking about, with a party must come to an end at some point sounds like it would apply to a scene, but not necessarily a movement. You know, like communism, for example, lasted well over like a century, century and a half, and it's still very much alive today and it shaped the entire world. Yeah. It shaped whole countries. Why didn't extremism do that?</p><p>Perry Metzger (09:07)</p><p>Yes, although it smells much, much worse than ever before. But there was a... yeah. So I don't think that it died out as a set of ideas or as a set of things that people were interested in. You'll find that almost all of the things that people were interested in continued to obsess them and the people around them. If you look, for example, there are, I mean, you look at people like Aubrey de Grey,</p><p>or lots of other people who are interested in finding ways to cure the diseases of aging or retard aging itself indefinitely. A lot of those people, in some sense, were influenced by or an extension of the things we did. The people who are interested in cryptography, cryptocurrencies, privacy, et cetera, right now, are in many cases descendants of the Extropians mailing list and the Cypherpunks mailing list.</p><p>it's just that, you know, there are lots and lots of people who are working on various things. I, it's just that discussing things endlessly at some point becomes less interesting than going out into the world and working on stuff. So I think that what you've seen is that, you know, people like say Robin Hanson, who were very, I mean, Robin, announced.</p><p>wrote his original paper on idea futures for, I think it was the third or fourth issue, I think it might have been like the fourth issue of Xtropy. And there he is still to this day at GMU publishing lots of really cool ideas on related topics and being energetic about it. It's just that we don't give it a name anymore. But all of us are still out there.</p><p>Theo Jaffee (10:58)</p><p>I mean, you say that these conversations get repetitive and then people will stop, but it seems like the conversations on the extrobian mailing list in 1996 about AI risk are identical to the ones on less wrong in 2012 and then identical to the ones on Twitter today.</p><p>Perry Metzger (11:09)</p><p>that was long af.</p><p>By 1996, all of the stuff that was fun was gone. The early days of the mailing list, we had a rule about not keeping archives. So all of the most interesting, really early stuff is gone. But yeah, I mean, so one of the, maybe I'm giving myself too much credit here, but I perpetually regret at this point, you know.</p><p>So we had a few early members, people like Samir Parekh and what have you, who were teenagers when they joined. And Samir went on to starting the first company to commercialize an encrypted web server, the stuff that ended up becoming TLS. Every time you type HTTPS into a web browser, you're using that same technology stack.</p><p>which he then went off and sold for a lot of money and he went on to do all sorts of great other things. We had a bunch of people that age who did interesting things. We also had a teenage person who joined by the name of Eliezer Yudkovsky, and that seems to have gone much less well. I won't exactly say that I regret, you know, letting Eliezer join, but it turned into much more of a mixed success.</p><p>Theo Jaffee (12:22)</p><p>Hmm.</p><p>I mean, he might be the most famous person around today associated with the Extropians.</p><p>Perry Metzger (12:43)</p><p>maybe. But, you know, I mean, I'd probably say that he's more famous for creating, you know, for turning things that we thought of as descriptive into the objects of a cult. You know, first singularitarianism and, you know, then went on to create SIAI and MIRI, which got very little done.</p><p>but I guess he wrote a lot of good fan fiction or bad fan fiction. I never found it particularly readable, but never mind that. And yeah, I mean, pardon.</p><p>Theo Jaffee (13:18)</p><p>You haven't read HPMOR? You haven't read HPMOR?</p><p>Perry Metzger (13:23)</p><p>I tried. I had a very open mind. A lot of people who I respect or respected at the time told me that I had to read it. And I started and I got a few chapters in and at some point I just couldn't.</p><p>Theo Jaffee (13:37)</p><p>for the audience, Harry Potter and the Methods of Rationality, which is Eliezer's, like, what is it, 1200 pages, 200 pages long Harry Potter fan fiction about rationality and decision theory and that kind of thing.</p><p>Perry Metzger (13:51)</p><p>I think it's really a recruiting mechanism for his group. it works spectacularly well. There's this gigantic pipeline between the stuff that he's published and Younger of Divergent Kids and Mirian Effective Altruism and all of those things. It's kind of ironic that we find ourselves in a situation in which the people on the one side...</p><p>Theo Jaffee (13:55)</p><p>it works well.</p><p>Perry Metzger (14:18)</p><p>of the current debate about AI, which I'm sure you've covered in the past, but if you haven't, we can talk about it a bit. And the people on the other side of it all came from some of the same mailing lists and intellectual discussions and what have you, but drew very, very different conclusions. Like Eliezer came to the conclusion that it was his moral duty to build an AI god to rule the universe. And I would have been more disgusted by that, except for the fact that I didn't think that he'd ever succeed in building anything.</p><p>Theo Jaffee (14:22)</p><p>Yeah.</p><p>Perry Metzger (14:49)</p><p>I was there one day and Eliezer says that this didn't happen. I remember it happening. Other people I know remember it happening. I can't prove that it happened. But I remember Eliezer giving a presentation at a conference and one of the gods of AI research, Marvin Minsky.</p><p>standing up and saying, you know, everything you're talking about is stuff that we tried and it doesn't really work. And Eliezer then saying, but I'm smarter than you and I'll make it work, which he to be, you know, he's consistent, you know, he hasn't become much less arrogant over the years. But, but, you know, I didn't think that Eliezer was going to go off and build an AI that would rule over the universe and, and, and force libertarian ethics, which strikes me as being kind of</p><p>oxymoronic, it's, it's, you know, sort of like having, you know, the dictatorship in favor of freedom or what have you. but, I didn't think anything would happen there. And so I kind of noped out and went off and paid attention to other things. And while I was paying attention to other things, you know, they mutated a few times and now have become radical D cells and to use the current jargon, you know, LEA's are calling for things like bombing data centers.</p><p>you know, saying, you know, well, nuclear war is better than AI research because at least a small breeding population will survive a nuclear war and we might yet reach the stars, but you know, if there's AI research, we're all doomed, which I think is garbage and I'm happy to defend that, but nevermind.</p><p>Theo Jaffee (16:33)</p><p>So going back to what you said earlier about, Eliezer speaking at the conference. yeah, this has been like a public Twitter thing, for a while back in like July, 2023, you tweeted, that he presented at an extra conference about how he was going to build a geo FDI.</p><p>Perry Metzger (16:49)</p><p>I might be wrong, by the way. It might have been at a different conference. It might have been maybe at a four -site conference or something. I'm old. A few other people I know remember the same thing, but we were all at all of these conferences, so who knows?</p><p>Theo Jaffee (16:59)</p><p>And then...</p><p>And then Eliezer tweeted, I claim to have not given a talk at Extra 2 in 1995 and offered to bet Metzger or anyone in the habit of believing him $100 ,000 or 50 ETH. And then you didn't make the bet as far as I know. So.</p><p>Perry Metzger (17:09)</p><p>never happened.</p><p>Yeah, I couldn't prove that he was there and it didn't seem important enough. You know, it's true. I did not make the bet. You know, Mia culpa, Mia culpa, Mia maxima culpa. You know, I, you know, I was hoping that we would find the recordings from extra two, you know, Max claimed that he had them, but you know.</p><p>never was able to find the things. It might have been a different conference, by the way. It might have been a few years later. And it might be that I'm remembering the whole thing wrong, right? Because I'm old and people tend to, you know, when you get old enough, your memory for things that happened 30 years before, it ain't always the best. But if you were going back and reading the things that he wrote, they're pretty much consistent with my memory of him.</p><p>regardless of whether particular events occurred or not.</p><p>Theo Jaffee (18:19)</p><p>And then after he tweeted about the bet, you kind of disappeared from Twitter for a few months. So was that related or no?</p><p>Perry Metzger (18:29)</p><p>No, I had two issues, one of which was that I had a lot of work to do, and the other one of which is that I've had some health issues over the last year, and I tend to disappear from things unexpectedly for periods of time. I've been off Twitter for the last couple of days too, mostly because of that. But never mind. Too much information, probably.</p><p>Theo Jaffee (18:52)</p><p>Yeah, well, I'm glad you're back.</p><p>Yeah, I'm definitely glad you're back. So when we talk about extropianism and then some of its offshoots, a common kind of umbrella term that's used is test grill, which stands for transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and long -termism. So is that?</p><p>Perry Metzger (19:17)</p><p>This is sort of like this is like having a single term for, you know, for for Nazism, communism, the Grange movement and, you know, and and, you know, and marine biology. But, yeah, I've seen that. That's that's Timnit Gebru's thing, if I'm not mistaken. She's an interesting character.</p><p>Theo Jaffee (19:43)</p><p>totally incoherent. It's just not a useful term.</p><p>Perry Metzger (19:47)</p><p>I don't see why it's a useful term. But you know, if you're attempting to get grant money for, you know, for your discipline, and maybe it's a useful term, I see it as relative, you know, a lot of those things aren't related. Like, you know, I don't consider myself to be half of those things, at least. You know, I don't think that anyone who...</p><p>was in any of those things considers them all to be the same thing. But, you know, one of the more amusing things that I've noticed of late actually is the fact that there seem to be three distinct groups of people who are trying to get AI regulated or killed in one way or another. There's the MIRI EA group of people. There are the people like Timnit Gebru and what have you who claim that AI is horribly discriminatory and evil.</p><p>And then there are the people who would like to use the power of the government to grant them a monopoly so that their AI company gets as much of the market as possible without having to compete with anyone. And watching the interactions between everyone has been really kind of amazing.</p><p>Theo Jaffee (20:58)</p><p>The Baptists and the bootleggers.</p><p>So yeah, this is like Marc Andreessen's idea of the Baptists and the bootleggers.</p><p>Perry Metzger (21:09)</p><p>It's not originally his. I'm trying to remember the economist who came up with that phrasing. There's a Wikipedia page on it, and it'll probably give the right person's name. But yeah, it's an old idea. You have people who are true believers in some sort of cause, and then you have people who would just like to take advantage of it. And they have common interests. And the common interests often intersect in extremely bizarre ways.</p><p>Theo Jaffee (21:19)</p><p>Yeah.</p><p>Perry Metzger (21:38)</p><p>They intersected during prohibition, during alcohol prohibition in the United States, and they seem to be intersecting a lot in the AI debate at the moment.</p><p>Theo Jaffee (21:48)</p><p>Hmm. Do you think that Eliezer is a Baptist or a bootlegger? And why do you think -</p><p>Perry Metzger (21:52)</p><p>he's, he's, he's, I have no doubt in my mind that Eliezer actually believes everything that he says. I also think though, that he is in a position where it would be very, very difficult for him to believe anything else. the only thing he's done in his adult life, and maybe he'll come back and claim that I'm a horrible liar because he likes calling me that and say that he had a job once for six months doing something else. But so far as I know, the only thing he's ever been paid to do is work for his own nonprofit.</p><p>and it would probably be a rather unfortunate situation for him if he were to change his mind on this. There, there was a, there was a, a talk once given by Daniel Dennett on the unfortunate situation, that, that certain clergymen find themselves in if they no longer believe in the religion that they're a clergyman for, because there is nothing else they can make a living at. And yet here they are.</p><p>So many people find that they don't change their opinions or only do in private. I think Eliezer believes everything that he says and believes it very strongly. But on the other hand, he's also, it's also his profession. The only thing that he's ever done in his adult life is work for SIAI, work for MIRI. So, and I don't know who else would pay him to bloviate on the internet and write fanfic in place of doing AI research, but.</p><p>Theo Jaffee (23:18)</p><p>So why do you think he did such a total 180 in a relatively short period of time?</p><p>Perry Metzger (23:24)</p><p>I think it's relatively straightforward. So if you read his stuff from very, very late 1990s, early 2000s, his goal was to... So I... All right, take a step back. I wrote some thought experiments on the Extropians mailing list at one point to the effect of what would happen if an AI gained capability very, very quickly.</p><p>Because you could imagine a mature AI of some sort being able to do engineering many thousands, millions, maybe billions of times faster than humans. What if you had such an AI and it was hostile and it recursively bootstrapped itself? What would happen? And I think that Eliezer, at some point, and maybe I'm wrong.</p><p>you know, I, this is my hypothesis and some of this is just, you know, his writing. Eliezer at some point decided that, you know, since God didn't exist, he needed to create one and justified this partially in his mind by the idea that there would only ever be one AI anyway, because whatever AI came into existence would, you know, would take over everything in sight because it would recursively self -improve itself.</p><p>And so it would be good if the recursively self -improving AI happened to be one that created a utopia for mankind. I know one former EA who has said to me recently, and I shouldn't use their name because they haven't been public about this, that there's an extent to which the EA movement would not be satisfied with AI simply being...</p><p>safe in our society surviving because what they were promised was utopia. They were promised to be freed from the Hindu wheel of birth and re -death. They were promised that we would all live in bliss and paradise forever. And getting less than that is a failure.</p><p>Eliezer, you know, became very, very obsessed, as I said, with the idea that he and his cohort would build this thing and release it, and it would take over the world. And I think that at some point he realized that he and his cohort had made absolutely no progress on AI research whatsoever after spending millions of dollars of Peter Thiel and other people's money. Peter Thiel later came to seem to be pretty PO'd at Eliezer and what have you.</p><p>And at some point, you know, a few years ago, he wrote that, you know, that April Fool's Day Death with Dignity article on Less Wrong, which I don't think was actually an April Fool's article at all. But, you know, publishing it on April Fool's Day gives you plausible deniability. You know, from what I under -</p><p>Theo Jaffee (26:26)</p><p>Well, he believed in very high P -Doom before that, though, right?</p><p>Perry Metzger (26:31)</p><p>I think that originally he believed that everything was going to go great because he was going to build the AI that took over the universe and it would be aligned, whatever that means. I don't think it's a coherent concept. And it would bring about utopia and everyone would be happy. And I think that he believed that he was going to be the person to do that for many years. My understanding from people like Jessica who left MIRI,</p><p>is that to this day, they still have this idea, well, maybe what we can do, and this is like the most science fiction thing I've ever heard. It's not that it is prohibited by physics. It's just prohibited by reality. But apparently, some of them still have this idea that maybe,</p><p>They can get an early AI to upload a bunch of them so they could do thousands of years worth of research very quickly and still realize the singularitarian dream of, you know, 20, 25 years ago. But I think that Eliezer is mostly just depressed these days. From what I understand, he spends his time writing Glowfic, which I didn't know about until someone told me that it existed in connection with Eliezer. But we're talking so much about him. Why don't we talk about...</p><p>Theo Jaffee (27:46)</p><p>and</p><p>Perry Metzger (27:50)</p><p>you know, other stuff, you know, it's a...</p><p>Theo Jaffee (27:53)</p><p>Yeah, sure. So, do you think that Singularitarianism is kind of just like eschatology? Is it really like, scientific? Yeah. Well, you were involved in a lot of this kind of stuff early on, so...</p><p>Perry Metzger (28:00)</p><p>Yeah, it's a millenarian religion.</p><p>No, I was involved in something different. So we talked a great deal about the fact that the pace of technological change was going to keep growing and that there was likely going to be an inflection point around AI. This was descriptive, not proscriptive. There wasn't any sort of, well, it's our moral obligation to make this thing happen quickly in order to bring about some sort of millenarian utopia.</p><p>You know, and there's an extent to which singularitarianism is, I've heard it referred to as the rapture of the nerds, and I don't love the term, but it does seem to fit to some extent, right? Mostly we were talking about what was likely to happen and the sorts of things that one might, you know, that one might be interested in in connection with all of this. There was never any, it is our moral obligation to.</p><p>you know, to make AI happen as fast as possible or what have you. Almost none of us were going off to work on AI research. Eliezer did. I've only joined the AI research community in the last couple of years. I'm quite amateur at it, even right now. But yeah, I see it as a millenarian religion and not a very realistic one. I mean, it's, well, most religions.</p><p>And certainly most new religions are not particularly realistic, so that's perhaps the wrong way to put it. But I don't see it as being particularly sane, even by the fairly weak standards that people judge such things by.</p><p>Theo Jaffee (29:45)</p><p>Well, does it not seem to be the case that if you get human level AI combined with a whole bunch of other powerful technologies, you know, nanotech, gene editing, full -dive virtual reality, that the world that we live in after would be radically different? That's kind of the singularitarian hypothesis.</p><p>Perry Metzger (29:59)</p><p>Yeah. And by the way, well, yes, but I, and the world that we live in right now is radically different from the world that our ancestors lived in. Right. So imagine that you go up to, I don't know, a homo habilis. I think those were the first tool using, you know, hominids. And I'm sure that someone will now write in or tweet or something when they hear this and say, you're wrong. It was homo erectus or, or it was australopithecus or something else. I've probably gotten it wrong. Who cares?</p><p>You know, let's say that you go back to one of those folks and you say to them, well, gee, you know, if you actually pick up that stone tool and start working with it, eventually people are going to be doing video podcasts over the internet, living in centrally heated homes and eating strawberries in midwinter and all of the rest of this stuff. I mean, the world that such a creature lived in in our world bears no resemblance to each other. It's like, it's like.</p><p>crazily different. There are a few things that are similar. We still have sex and reproduction and a bunch of those things, but I imagine such a creature thinking about someone getting a cochlear implant or a newspaper for that matter. Things are radically different. And I think that in the future, things are going to be even more different, yes.</p><p>We're going to have extremely powerful artificial intelligences. We hopefully will eventually cure Alzheimer's and cancer and all sorts of age -related diseases will probably extend human life indefinitely. We'll be able to do things like doing neural implants to computer systems. We're going to be able to, you know, we're going to have a greatly accelerated</p><p>technologies in, you know, manufacturing technology, space technology, etc. The world is going to look extremely different. But that doesn't mean that history ends, or that it's going to be a utopia or that it's going to be a hell. It just means extremely, extremely different. And yeah, if you read, I think that it was run bookworm run is the Vernor Vinge short story.</p><p>about an uplifted chimpanzee. In the introduction, he writes that he had proposed to his editor that he might want to write a sequel about this technology, which makes a chimpanzee vastly more as intelligent as a person, that he might want to write a sequel about this being applied to a human being. And his editor wrote back and said, you can't write that story and no one else can either. And that's sort of where he started thinking about it.</p><p>this stuff. And yes, we have a great deal of difficulty, I think, predicting what's going what the world is going to look like once we have things like nanotechnology and AI. And I've given talks about this. You know, I would say that, you know, the history of technology has been the history of creating more and more and more general technological capabilities. You know, what is the Internet for? It's not for anything.</p><p>That's why it's so powerful. It's a general way to have computers talk to each other. What are computers for? They are things that allow us to do anything that can be expressed as an algorithm. And oddly, it turns out that recording a podcast or predicting the weather or entertaining a child all happen to be things that this technology enables. Nanotechnology is going to be insanely powerful.</p><p>It's going to allow us to live in a world where physical objects of incredibly rich capability are things that we can build. And right now, if you look out your window, there's, you know, if I look out my window, I see lots of trees and, you know, trees are things human beings can't build yet, right? But they are exquisitely nanostructured devices. They're capable of self -reproduction, which is not the most interesting thing about them.</p><p>say, but they're also capable of constructing themselves and putting out all of these photoreceptors and powering themselves and creating this amazing nanostructured material called wood. Wood is a really, really weird material when you think about it, and we take it completely for granted, but it's almost magical, right? And at some point, we're going to be able to do artificial things that are even more powerful than what natural biological systems can do. You know, the</p><p>prick in biological systems is that they're capable of assembling macroscopic objects like you or me, molecule by molecule, building them atom by atom. And we will be able to do that with artificial systems at some point. And it's going to be world changing, like dramatically world changing. And to be sure, that means that there is a limit to what we can predict about the future because as you know, the further out we go, the...</p><p>more different things are. That's a really terrible way to put that. I'm not very good at English today, but as time goes on, we're gaining more and more powerful technological capabilities. And for the most part, this is a great thing, right? I have friends who are alive today. I have one friend in particular who survived stage four malignant melanoma, which 30 years ago, if you got</p><p>malignant melanoma, never mind pretty much the stage, you were a corpse. It was a question of how long, right? And how do you survive stage four malignant melanoma now? Well, we understand the human immune system so exquisitely that we're capable of producing targeted molecules that can block or enhance various mechanisms inside the human immune system. So you can tell the immune system to go off and kill the malignant melanoma and it works, right? Doesn't work for absolutely everyone, but it used to be a death sentence and now it isn't anymore.</p><p>Theo Jaffee (35:57)</p><p>Mm -hmm.</p><p>Perry Metzger (36:21)</p><p>we're going to have a billion, a trillion such changes. It's going, the world is going to look very, very, very different. And probably if you give it a few hundred years, or maybe even if you give it 50, the world might look as different, you know, compared to where it is now, as it does for, you know, a person in the ancient world comparing themselves to today or even worse, right?</p><p>but that's not magic, right? Or, or a millenarian vision. That's just talking about, you know, technological change as it occurs in the world.</p><p>Theo Jaffee (36:53)</p><p>worse.</p><p>Yeah, I'm sorry, I gotta go do something really quick. I'll be back in like a minute or two and I'll edit this out.</p><p>Perry Metzger (37:11)</p><p>Sure.</p><p>Theo Jaffee (39:54)</p><p>Alright, I'm back. Sorry about that.</p><p>Perry Metzger (39:56)</p><p>very good. So we can have a trim point at some point like here. Yeah.</p><p>Theo Jaffee (40:00)</p><p>Yeah, Riverside makes all this really easy. It's great.</p><p>Perry Metzger (40:04)</p><p>Yeah, in fact, you can just cut by text, which is like the most amazing thing on earth.</p><p>Theo Jaffee (40:08)</p><p>Yeah. It's only gonna get cooler from here. I'm really excited.</p><p>Perry Metzger (40:13)</p><p>Yes, well that's what we were just discussing.</p><p>Theo Jaffee (40:17)</p><p>Yeah, I'm really excited for when I can get an agent to just edit my whole podcast for me and transcribe it and come up with like intro videos.</p><p>Perry Metzger (40:26)</p><p>You can already transcribe it very easily. In fact, most of these systems out there will...</p><p>Theo Jaffee (40:30)</p><p>Yeah, but it makes all kinds of errors and stuff.</p><p>Perry Metzger (40:33)</p><p>You can probably aim an LLM at that and ask it to try to find a bunch of them.</p><p>Theo Jaffee (40:39)</p><p>I did, I wrote a script where I had it go through whisper to transcribe it and then I ran it through GPT -4 in chunks with like a custom prompt that was like, you know, I am doing a podcast where I talk a lot about AI so, you know, be aware of that and fix everything. It still wouldn't get everything. Not even close, maybe next month. It got a lot of things, yeah, but not everything.</p><p>Perry Metzger (40:58)</p><p>Did it get a lot of things?</p><p>Well, you know, I will I'll tell you a sad story, which is that I so I've got a book coming out. It's not about any of this stuff. It's a it's a children's book about computer architecture, believe it or not, published by Macmillan. I think in like fall of 25 or something like that, it's going to be a graphic novel. There's this great illustrator associated with it named Gerald. I every time we go through to look for mistakes, there are more mistakes.</p><p>It seems like they breed behind the couch. So if human beings can't read through their own book and find all the mistakes, maybe it's not entirely surprising that even a human level AI or a weekly superhuman AI can't quite find all of them. But at some point, of course, at some point you also wonder what constitutes a mistake if that person misspoke.</p><p>Do you correct what they said in the transcript? I don't know. All sorts of interesting questions.</p><p>Theo Jaffee (42:04)</p><p>Yeah. So back to what we were talking about earlier. You mentioned how nanotechnology is going to become a thing. It's going to be very world changing and very powerful. Yeah. Yeah.</p><p>Perry Metzger (42:11)</p><p>It's going to be transformative. Yeah, it's going to be one of the most transformative technologies in history. I'd say the most transformative other than AI.</p><p>Theo Jaffee (42:20)</p><p>So then what's stopping Eliezer's prediction of misaligned super intelligent AI that learns nanotechnology better than anyone else and creates self replicating diamondoid bacteria robots that reproduce using atmospheric.</p><p>Perry Metzger (42:35)</p><p>that eat everyone and turn everyone into diamond paper clips because paper clips seems to be the thing. So I think that the answer to that is that none of this is going to happen overnight. And in the course of, so let's take a step back and trust me, this is all relevant. So it turns out that you are already in a nanotechnology war, right? You are in the middle of one as we speak. And in fact,</p><p>Theo Jaffee (42:40)</p><p>Yeah.</p><p>Perry Metzger (43:03)</p><p>If you stop fighting this war, if your heart stops beating, within hours, you know, horrifying nanobots are going to start eating you and turn you into a putrefying, into putrefying sludge. Everyone knows this, right? But they don't think of it in terms of like nanotechnology, right? The biology around you is nanotechnology. Now, how is it that we all are not turned into sludge as we're walking around day to day? Well, in fact, every once in a while you get an infection.</p><p>that nothing can treat and you in fact do die. Like, you know, a bunch of, you know, not that many people tend to die every year of say influenza, but a few people do, right? You know, 20, 30 ,000, I think in a given year. That's nanotechnology, right? Now, how is it that all of us don't die of that? Well, it turns out that you are also made out of crazy nanotechnology.</p><p>and your body is filled with these things that are intended to look for invaders and stop them. Right. Now, let's look at something that looks very, very different. Now, maybe I don't know if you live in San Francisco, there probably aren't any police who are actually stopping people from committing crimes. But let's imagine that you're in most of the United States. Right. You know, in most of the world, right. The reason you can walk down the street and you generally speaking don't fear being mugged.</p><p>is because there's a substantial cost to being a professional mugger, which is that there are people whose full -time job is hunting down professional muggers and people who break into houses and things of that sort, right? We have unaligned human beings, you know, if you want to use the jargon, all around us. And we've built mechanisms like armies and police forces and even fire departments to some extent that exist.</p><p>to stop people from doing bad things to other people. And this is only going to continue, right? So as AI and nanotechnology get developed, we will find ourselves in situations, I don't know if you've seen there's this video that made the rounds some years ago of AI -driven drones, like going around and killing politicians and doing stuff like that. It was very dystopian and got a lot of people talking.</p><p>Theo Jaffee (45:25)</p><p>I don't remember that. I may have vaguely heard of something like that.</p><p>Perry Metzger (45:27)</p><p>I can, I can probably find, you know, track it down and forward you a link for the show notes or something. but, but, but the, but why is this an unrealistic vision? It's an unrealistic vision because people have an incentive not to let it happen. And I don't mean that they have an incentive to somehow like brainwash everyone on earth into no longer remembering how to build.</p><p>Theo Jaffee (45:34)</p><p>Yeah, sure, I'll put it in the description.</p><p>Perry Metzger (45:55)</p><p>you know, drones or what have you, right? I mean that they have an, because that sort of thing is impossible, they have an incentive to build defenses, to build systems that stop other people from doing bad things to you. Regardless of what you think about the current war in the Middle East, whether you think that, you know, whatever side you support, you know, the state of the art in anti -missile systems, in anti -drone systems, in anti -artillery systems is kind of impressive.</p><p>And those systems have been built because certain people were worried about that they might come under attack from such systems and didn't want to sit around waiting for it. As AI is developed, as nanotechnology is developed, we will discover ways that bad people can abuse these systems. Bad people have abused every technology in human history. And what do we do when we discover this? We build countermeasures. We build...</p><p>Theo Jaffee (46:25)</p><p>Yeah.</p><p>Perry Metzger (46:53)</p><p>ways to stop people from doing bad things. And this goes back as I said, back to the dawn of history, to the fact that we all have immune systems, the fact that we have culture, the fact that we have as part of our culture, various cultural mechanisms for punishing people, for attempting to take advantage of other people, the fact that we have police forces, the fact that we have militaries, the fact that we have...</p><p>you know, that we have espionage agencies and all sorts of other things. Societal mechanisms and biological mechanisms and technological mechanisms have been built to counter bad things. And this will continue, right? So it is true that if one single maniac in the future had a superhuman AI and access to nanotechnology and decided one morning that they should turn everyone on earth,</p><p>into, you know, I don't know, into instant cocoa. I get tired of talking about paperclips. Paperclips are so boring. Whereas Swiss Miss or Nestle's Quick, those are exciting, right? So you've got a madman out there and he's decided to turn everyone on earth into instant cocoa. And if there's no one opposing him, yes, he'll be able to. But that's not what's going to happen. What's going to happen is that these systems are going to be built slowly over a number of years by lots and lots of different groups.</p><p>And as they build them, they will construct mechanisms that will stop other people from doing bad things. In the not that distant future, I expect to see law enforcement deploying a lot of AI -based systems to help them tracking down things like phone scammers. I expect to see people, you know, people are already using AI -based systems in law enforcement, in military applications, in other places like that. It will continue.</p><p>So if there are, you know, if there are many, many, many, many people who have AI and there are many, many, many, many people who have nanotechnology, you don't have to worry that you're going to be turned into instant cocoa because you're going to have systems that will say, hey, this other bad person is doing something to try to turn you into instant cocoa and I'm going to stop them. I mean, like if someone, let's put it this way. If someone breaks into your house, you know,</p><p>and starts watching your television, you're going to call the cops, right? You're not, you know, and you can say, well, what stops someone from breaking into anyone's home and sitting in their living room and watching TV or breaking into their house and stealing their stuff? And the point is that we have a system of disincentives and countermeasures that severely disincentivizes this sort of behavior, at least in most of the country. Now, again, there are places where people seem to believe in crime.</p><p>But, you know, in most places, we disincentivize bad behavior, we punish it, we hunt it down, we try to stop it. That's why we don't have to worry, right? Yes, it's true. Nanotechnology will be extremely powerful, and it will not just be in the hands of bad people, it will be in the hands of a great number of people, and many of them will not want to sit around and be eaten.</p><p>Theo Jaffee (50:08)</p><p>That may be true, but the kind of canonical doomer counterargument to that is that what if the offense -defense balance dramatically favors offense? Like in Nick Bostrom's Fragile World Hypothesis, where he talks about what if the world is actually very easy to crash, we just don't have technology powerful enough to do it yet? What if nanotechnology gets to such a point where it is possible for people to just brick the entire world with a failed experiment or...</p><p>a misaligned either superintelligence or person or whatever. And I know you already said, yeah, but we have countermeasures against that. But I think in response to that, they would say, yes, but like, do we really want to risk it? And what if the probability that such countermeasures would work is actually much lower than you think?</p><p>Perry Metzger (50:58)</p><p>I think that, so this is a long conversation, but you know, I mean, so Nick is very, very fond of obsessing about things that I think aren't worth obsessing about. Like, you know, there's, for example, you know, the doomsday argument, which I think is junk, but which he yet spends an inordinate amount of time talking about, but let's not get ad hominem about this. Let's address this notion directly.</p><p>I don't think that it's true, right? I think that we already, first of all, I think we already have a lot of understanding, about where likely offense slash defense is likely to be played there. And I think it's mostly a question of resource expenditure. I don't think that there's an absolute advantage for either offense or defense in most of these situations, but that if you are willing to expend sufficient resources, that your</p><p>probably in a pretty good position. Arguing this definitively probably would require a few hours, not, you know, like, however many minutes we want to spend on it. But to give you just, like, a hint on this, right, you know, the, let's say that tomorrow morning, you know, someone, you know, let's say that we're living in a hypothetical future world, you know, where...</p><p>There's lots of AIs, there's nanotechnology deployed, there are lots of such systems around. And someone decides that they want to go for, I believe there was a great paper actually by, that I once read called Limits to Global Ecophagy, which was really, really kind of neat as it asked the question, how long would it take for nanobots to eat the world? And it came back with the answer that it would take so long. It doesn't seem like a long time, but it would take weeks.</p><p>And that sounds like it's a terrible amount of time, but it turns out that that means that within hours you have things that have probably noticed and are in a position to start doing something about it. You can't... Well, so you almost certainly can't help but notice within hours. That was a paper by Bob Fridus, actually, and it's a pretty good one. I think it's a reasonably good read.</p><p>Theo Jaffee (53:06)</p><p>Hopefully you've noticed within hours.</p><p>But again, it's exponential. Yeah.</p><p>Perry Metzger (53:24)</p><p>though maybe not the sort of thing that most people enjoy reading when they go to bed at night. I have weird tastes. But generally speaking, it's the future someone decides to release something that will turn everyone on earth into Ming vases because they've got a thing for Ming vases or what have you, or they've released one of Eliezer's hypothetical. By the way, so Yadkovsky says things like, I think you could build biological viruses that would take over people's minds. I don't.</p><p>really think that's possible. You can do it in very, very narrow situations. There's a lyssavirus which is what causes rabies. It does sort of take over the minds of animals, but in a very primitive kind of way. All of these things.</p><p>Theo Jaffee (53:58)</p><p>Mad cow disease.</p><p>Yeah.</p><p>I mean there are certainly chemicals that can alter your brain state and emotions and personality.</p><p>Perry Metzger (54:17)</p><p>They can, yeah, in a very crude way. I think that he imagines things that could take over your brain and make you obey the wishes of a particular machine and do whatever it desired or what have you in an extremely sophisticated, rich sort of way, which I don't think is possible, right? But again, let's say we've got our hypothetical situation where.</p><p>someone desperately wants to release nanobots that convert everything on Earth into Ming vases. So I think that by the time people can do that, there are going to be nanobots everywhere. And they are going to be doing all sorts of things, like for example, cleaning bacteria and viruses out of the environment, doing things like cleaning the air of particulates, like...</p><p>checking whether or not someone is releasing biological agents or hostile nanomachines. I think that the odds of something not being detected are very low. Now, if you believe the notion that there's going to be hard takeoff, that someone will wake up one morning, build an AI, and that by the next afternoon they'll have access to all of these amazing technologies, then yeah, sure, I'm wrong if that's true. But I don't think that that's at all possible.</p><p>The amount of work needed in order to construct a mature technology like that is insane. Even if you have access to things that can do all the engineering you could possibly want, the amount of physical stuff that needs to be done, like just acquiring and degassing ultra -high vacuum equipment to start doing experiments is like a serious effort.</p><p>All of the things involved in such things are serious efforts. I think a much, much more realistic scenario is that what's been happening, right? So has anyone, you know, I think that Yudkovsky and company never imagined that we would have systems like whisper or GPT -4 or GPT -3 .5. I can hear Eliezer screaming in the background on Twitter, Metzger is lying I, of course, I envisioned this. Look at this obscure pod.</p><p>cast I was on 17 years ago, look at this thing I wrote on less wrong. Well, OK, fine, whatever. But I think that if you read the bulk of their materials, they talk about building a seed AI that bootstraps itself to super intelligence. And they don't talk about some sort of gradual development. But if you look around us, AI is being developed very gradually. The AIs around us are being released at regular intervals by.</p><p>Theo Jaffee (56:55)</p><p>Well...</p><p>Perry Metzger (57:00)</p><p>Organis commercial and academic organizations that are doing lots of research and development much of it in the open and they are making incremental progress and in certain respects these systems are deeply super human already and in certain respects they're deeply sub human still and it's happening bit by bit and it's happening in many places not in one place. I.</p><p>Theo Jaffee (57:20)</p><p>Well, I think they would argue that the way that you get to AGI doesn't matter as much as the endpoint. And if the endpoint is a superhuman artificial intelligence, no matter if it's based on an LLM or if it's based on some like pure Bayesian seed AI or whatever, then it will end in the destruction of humanity because of instrumental convergence.</p><p>Perry Metzger (57:36)</p><p>Yeah, they can argue that. But the well, so the instrumental convergence strikes me as being like, you know, so so the notion of instrumental convergence for those that don't know, you have to take a step further back, which is that according to the Doomer argument, all AGI's are going to of necessity have some sort of goal.</p><p>and be vicious optimizers in pursuit of that goal. And again, I can hear Eliezer's voice in the back of Twitter somewhere screaming, you're lying Metzger. But this is effectively what they argue. That if you build an AGI, it's going to have goals. It's going to be superhuman about optimizing those goals. And that the goals will necessarily be weird and alien, like say turning everything into paper clips or paving the moon or who knows what. The two problems here are,</p><p>There's no reason to believe that any of the AIs that we build necessarily have interesting goals of their own. And you could say that the goal of whisper is to transcribe speech into text. Or you could say that the goal of GPT -4 is to predict the next word or maybe at a higher level to produce a conversation that people find maximally probable or reasonable, right?</p><p>But these aren't really goals in the way that humans or even ants have them, right? There's this notion that if you build an AI, it's a person or an independent agent in some meaningful sense and not a tool. And I think that although you could build AIs that are not tools, that most of the AIs we're building are tools and that most AIs that we build will be tools.</p><p>Theo Jaffee (59:12)</p><p>So you're saying it's just.</p><p>So you're saying that they simply do things and they don't want to do things. They're not agents at all.</p><p>Perry Metzger (59:34)</p><p>Well, what's an agent? OK, so there are all these terms that we throw around when we're discussing AI. Simple ones like alignment that don't really have a definition or agent that don't have a definition. Much more complicated ones like conscious that philosophers have been arguing about for thousands of years that have very, very poor definitions. The problem, by the way, of the hard problem of consciousness, in my opinion, is how you define it. Once you've defined it, like discussing it,</p><p>rigorously discussing it becomes either trivial or drifts off into mysticism. But anyway, what's an agent? I mean, if I have a system that I have asked to tend the fields on my farm, is it an agent in a meaningful way? Or is it just a tool? I don't know how to define that particularly well. The real question to me is,</p><p>Is it sitting around off hours talking to itself about how awful its job is and how it would really like to run away and commit a mass homicide spree or something? If it's not actually talking to itself off hours about how bored it is and how it really wants to, I don't know, you know, turn Los Angeles into glass or something, then why are we worried? The things that we're building at this point. So the original vision, you know, of these</p><p>you know, Bayesian monster machines, isn't what we've built, right? What we've built are these systems, and I'm drastically oversimplifying here, okay? But this is essentially right, okay? What we had was the following problem that was standing between us and AI. We had the problem that I could show you as many, you know, that I as a human being could recognize pictures of cats.</p><p>And I couldn't write down some sort of closed form explanation. How do you recognize a cat in an image? OK? You know, I could have bitmaps, and a human being could easily say, it has a cat in it, doesn't have a cat in it. I could have, you know, digital pictures of all sorts. Cat, no cat, cat, no cat. But how do I explain to a machine what I'm trying to do here? And it turned out to be really, really, really difficult until we realized that what we could do,</p><p>was simply give the machines vast numbers of examples of pictures that had cats and didn't have cats and allow them to use statistical learning to figure it out. And this changes a lot, right? And the most important thing that it changes is that the way that we're building these machines is that we're giving them examples of what it is that we want and...</p><p>We are not saying, yes, this is a machine we want to release into the world until they do. But Eliezer and company made extremely heavy weather of the notion that you could build something that was incredibly intelligent, but how would you get it to want to do something that you wanted it to do? But if you're using statistical learning techniques, the systems naturally want to do what you want them to do.</p><p>Like, I'll give a stupid example that people don't think of very much, right? Like, could you, if, okay, you've got your eyes open, you look around at the world around you, could you voluntarily decide not to recognize objects around you?</p><p>Theo Jaffee (1:03:05)</p><p>no, but for example, if you</p><p>Perry Metzger (1:03:09)</p><p>W -w -w -why not?</p><p>Theo Jaffee (1:03:11)</p><p>because you just do it. It's not conscious. But if those objects are letters and you're not very good at reading, then you might be able to kind of choose not to read. Like if I'm looking at Japanese hiragana or in some cases Hebrew text, it takes me effort to read it. So I can also choose to not expend the effort and not read it.</p><p>Perry Metzger (1:03:13)</p><p>Well, it's not even just that you just do it. You...</p><p>Right. But if you're sure. But if we're talking about things that are in system two and not system one, right. If we're talking about like recognizing a chair, if I look across the room, I see a chair. I can't. I literally don't know how I would get myself to stop. And this is because you've got this extensive set of neural circuitry that you use for looking at the world. And most, by the way, of your circuitry.</p><p>Isn't something where you have to extend volition for it to work or even where you could stop it from working by an active volition. You could probably extend an active volition to get yourself to fall on the floor from a vertical position, but you don't have to extend volition as you're standing around like waiting online to go into the movies or, or at the checkout at Trader Joe's. You don't extend volition to stand, you know, vertical. It just happens.</p><p>Right? Most of these systems that we build that have very, very intelligent and interestingly intelligent behavior and your visual subsystem is a big, complicated, rich subsystem that's like probably more complicated and bigger than any of the AIs we've built so far. Most of them don't have volition in an interesting way and don't need it. Right? And if I'm building a system that's picking fruit,</p><p>or laying out circuits in a new chip design or designing a bridge or helping me find molecules that dock to receptors on cell surfaces. None of these things require independent volition or volition at all. Whisper doesn't have volition any more than your visual system does. You know, it's got inputs which are, you know, which are sounds and it's got outputs which are text.</p><p>And this is a slightly rough approximation because, you know, it's there and both encoded in an interesting way. But it can't choose to instead say, no, today I've gotten bored with this online strike. I instead want to be repurposed, you know, making burgers at McDonald's, which I think would be a more interesting career than being a speech recognition system. No, it doesn't do that. It has it has no memory. It has no capacity for self -reflection. It has no consciousness. The</p><p>bulk of the things that we are interested in building are going to be tools. Now, this doesn't mean that people can't build things that are not tools, that do have self -reflection in a meaningful way, that might get bored, that you could even convince to become genocidal, right? But that's okay, provided those are a minority of the systems out there and don't have some sort of overwhelming control. And by the way, I think it's inevitable, given that there are eight billion people in the world now.</p><p>and that in the future there will be far, far, far more billions of conscious creatures and entities out there. It's inevitable that over time at some point someone's going to build, and it might even be relatively soon, who knows, that someone's going to build something that's not a tool but a thing, but a thinking conscious thing. But it's not required. There's this whole section of the Yudkowskian dogma.</p><p>called about orthogonality. And I would like to note that one form of orthogonality that none of these people considered was the possibility that agentness and consciousness and volition and all of those things were orthogonal to being able to solve interesting problems. Like really interesting problems can be solved by these systems without needing those things. Human beings have consciousness and a desire to survive and</p><p>a variety of other features like this, all because we need this to survive. We evolved to have these things. These were important features that we gained from our past. But you don't need these things for the most part, in order to have interesting useful systems. It is not necessary that that that you have systems like this.</p><p>It's that you have consciousness and an inner life and a desire to think about philosophy and your off hours. I mean, when GPT -4 isn't talking to you, it's not thinking about philosophy. It's off, de facto. Those are features that we could add to systems but are not required for them to be useful to us.</p><p>We are building tools, right? And we don't have to build tools, but so long as most of the things that we've built are things that are tools and under our control and mostly do the things that we want, we don't have to worry so much. And I think that that's almost certainly going to be the case. And yes, at some point people will build things that are not tools and maybe they'll even build things that desire that they, the desire eating the whole world. But so long as they do that at a point where we have countermeasures, it doesn't matter.</p><p>And I think that it's inevitable that we will have countermeasures.</p><p>Theo Jaffee (1:08:43)</p><p>By the way, the arguments that you just made remind me a lot of Quinton Pope, AI researcher and former podcast guest. His excellent blog post where my objections to we're all going to die with Eliezer Yudkowsky, which when I was like full X -risk doomer mode after like ChatGPT and then GPT-4 came out last year helped sow some seeds of doubt. I'm much more optimistic now.</p><p>Perry Metzger (1:09:06)</p><p>Just a few. Yes. Can we pause for 10 seconds so I can put my watch on a charger? Okay, one moment.</p><p>Theo Jaffee (1:09:14)</p><p>Apple Watch.</p><p>Perry Metzger (1:09:31)</p><p>Apple watches will do a very wide variety of things, but they will not give you an alarm to the effect that your watch is down to 10 % charge, which is annoying as all hell. Mine does not.</p><p>Theo Jaffee (1:09:41)</p><p>Mine does that.</p><p>Yeah, I've been having the same exact issue too, where I charge the Apple Watch and it should last for like almost two days and then it gets down to 10 % in half a day and usually -</p><p>Perry Metzger (1:09:55)</p><p>That's your dead, that's your battery dying. You're going to have to go to Apple and get it replaced. I need to do the same thing.</p><p>Theo Jaffee (1:10:00)</p><p>Well, I've gone and fixed, or I fixed it temporarily by just rebooting it. And then it worked. Yeah, so.</p><p>Perry Metzger (1:10:05)</p><p>Maybe I should reboot mine more often. Maybe that's the reason I'm not getting alerts about low battery. But anyway, back to, so you were telling me, talking about Quentin Pope and how we're not all going to die with...</p><p>Theo Jaffee (1:10:20)</p><p>Yeah, so that if you recall that podcast episode, we're all gonna die with Eliezer Yudkowsky on the bankless podcast helped throw a whole lot of people into like holy crap mode. And this was like right after ChatGPT came out when people were like,</p><p>Perry Metzger (1:10:35)</p><p>That threw me into holy crap mode, by the way. It's the reason I ended up founding Alliance for the Future.</p><p>Theo Jaffee (1:10:40)</p><p>But it found it put you in the holy crap mode for a different reason, I imagine.</p><p>Perry Metzger (1:10:43)</p><p>Yes, it put me into, holy crap, if I don't get involved in politics in a way that I don't particularly love doing, what's going to happen is that the entire conversation is going to be dominated by people who I deeply disagree with and who I think are going to have very, very bad policy ideas. That's the very gentle way of putting it.</p><p>Theo Jaffee (1:11:05)</p><p>So can you tell us the founding story of Alliance for the Future? Like how did it come to be? Why Brian Chau and Beff Jezos?</p><p>Perry Metzger (1:11:15)</p><p>Well, why Beff Jezos? Because Guillaume Verdon is a wonderful guy and having him on our board was too good an opportunity to pass up. Why Brian Chau? Because Brian is not only a cool person in the space, he happened to want to do the job and is, you know, and is doing a good job at it. And when you're recruiting for a nonprofit that has no track record, you know,</p><p>And you have someone who is as good as Brian who appears you you know, you grab them and you say, please please work for us But take going all the way back to the original question. How did Alliance for the Future get started? So what happened was I realized After things like Eliezer's time, you know blog, you know website posting and His bankless podcast and things like that that if people</p><p>you know, that there was an incredible amount of money and effort being expended on pushing the doom message. And that if people didn't scramble very quickly to try to mention the fact that maybe we're in fact not going to all die and in fact maybe the only way to make sure, by the way, we should get to this in a little while, but I very strongly believe that if you pause AI research, you increase danger dramatically.</p><p>And I mean that very literally. And I was very worried that people like Eliezer would and William Macaskill and Dustin Moskowitz and Cari Tuna and all of the rest of these people, all of the people that Sam Bankman Fried funded, which is I think that even now there's residual SBF money floating around in a bunch of this stuff.</p><p>Theo Jaffee (1:12:42)</p><p>Yeah, I -</p><p>Perry Metzger (1:13:11)</p><p>You know, I was very, very worried that if these people got their way, we were going to be in horrible danger and we were going to get a dystopian future and we were necessarily going to get a dystopian future because they would conclude that the only way to keep the world safe was totalitarianism in the end. And if you read the proposals that lots of people make on less wrong and elsewhere on the, you know, the, it's really, really simple. We just make sure that access to general computation,</p><p>gets eliminated and that people aren't allowed to do this research and we have the AI police who come and arrest them and and by the way those people who claim that this isn't really the case, you know, I invite them to read things like SB, you know, 1047 in California or what have you You know, but but the you know, I I was looking around and I kept thinking well surely Someone is organizing to do something about</p><p>this and I kept waiting and I kept waiting and no one was doing it and I finally realized well you know if you want it to happen you're going to have to do it and I really don't like doing it right I have an AI startup that I should be spending all of my time on which I think does interesting stuff you know I have a personal life that I would like to be spending time on you know I'm an old man so I'm not nearly as energetic as I was say 35 years ago</p><p>You know, I can't go without sleep for days on end and still be productive. But it seemed like it was necessary. So I, you know, talked to friends who had DC connections who introduced me to other people. And I, you know, we put a team together. We put together a 501 C4 because it gives us more freedom than having a 501 C3, even though our, you know, donations are not tax deductible. You know, you know, I may I pitch for, you know,</p><p>Our URL for for two seconds. Yeah, AF future 2Fsaffuture .org slash donate. You know, we need your money. You know, but our stripe integration is still kind of crap. Our IT person is working on that right now. But but it's but you know, it's OK. The money is flowing in. We've actually managed to be effective.</p><p>Theo Jaffee (1:15:09)</p><p>Yeah, go for it.</p><p>Link will be in the description.</p><p>Yeah.</p><p>Perry Metzger (1:15:33)</p><p>You know, I've had do -mers asking me, so why is it that you weren't aware of this thing that happened six months ago as an organization? And the answer is because we've existed for two months. Thank you. And other people are like, well, why is it that you weren't aware of everything that was happening in every state legislature in the United States? And the answer is we started a couple of months ago, and we don't have the surveillance systems for that yet. But thank you for telling us that we need to. And.</p><p>And it appears based on, I have a buddy who's on a small city council in Minnesota. Okay. He's in a small town in Minnesota. He's on the city council there and he has gotten communications from EA associated research organizations, basically push polling him, trying to convince him that he should be sponsoring local legislation to stop AI. Like, so these people clearly have too much money on their hands. They're spending it every.</p><p>Theo Jaffee (1:16:26)</p><p>Wow.</p><p>Perry Metzger (1:16:30)</p><p>So, you know, we're going to have to be a hell of a lot more efficient. One of the problems I've got is that AFTF doesn't have hundreds of millions of dollars a year to spend on this stuff. And these people do. You know, I had a doomer like making fun of me over the weekend for saying that they had thousands of people working full -time on X -Risk when it's only about 400. Like, okay, let's assume that they're right and it's only 400 people working full -time.</p><p>to try to push this narrative. I mean, that's a hell of a lot of people. It's even a hell of a lot of people by US legal lobbying standards. That's a serious campaign. I think they actually have thousands of people on it. But even if it's only 400, it's crazy, right? So I found a bunch of people, and we incorporated, and we set ourselves up. And now I find myself like,</p><p>Theo Jaffee (1:17:04)</p><p>Yeah.</p><p>Perry Metzger (1:17:25)</p><p>running a D, well not running, Brian runs it, but now I find myself as the chair of a DC advocacy organization, which is not something I ever expected would happen in my life. But you know, you live long enough and all sorts of unexpected things happen.</p><p>Theo Jaffee (1:17:39)</p><p>By the way, what you were telling me earlier about how, you know, the decals have all these crazy ideas. I was talking to a pretty prominent person in the space, like a month ago. I don't think I would characterize them as a decal, but they're definitely like, you know, tangentially involved in EA rationalism, that whole complex. And they were talking about, you know, yeah, AI is very scary and maybe we should, you know, focus on stopping it. And I said, well,</p><p>Wouldn't the most effective way to literally stop AI progress be bombing open AI? Or something like that? And they said, well yeah, I mean, like we've talked about it, it just doesn't seem like feasible. You know, it seems like it might be like a net harm to the cause.</p><p>Perry Metzger (1:18:27)</p><p>Well, yes, but at some point, some of them are going to decide that it's not a net harm and they will act independently of the others. When someone like Eliezer says that he doesn't support terrorism, I think what he really means is that he does not personally think that it would be effective, which I think is very different from saying that he doesn't support it. I might be wrong. I mean, for all I know, he's preparing the lawsuit in London right now for libeling him for saying that he secretly believes that terrorism...</p><p>is morally justifiable but perhaps not effective. And maybe I'm wrong. Maybe he doesn't actually believe that it's morally justifiable. But I certainly feel like at least a lot of these people seem to have that position that it would be morally justifiable. It's just probably not effective. And some of them will decide that it's both morally justifiable and might be effective at some point, which is kind of a scary thing to contemplate. But I want to get back to the question of whether pausing AI</p><p>would be dangerous because I try to make this point a lot and there's only one way through this problem, which is to grapple with the actual engineering problems associated with artificial intelligence and the actual societal problems. And you do that by building things, by engaging with them, by seeing how they work, how they fail to work, by putting them out in the field and seeing,</p><p>how people use and abuse them. And there is an extent to which the doomers are right that this is a powerful technology. I would, in addition to worrying a great deal that we will make absolutely no progress through amphiloskepsis, which appears to be the main strategy of MIRI, navel gazing. You looked puzzled for a moment, you know. They...</p><p>They seem to believe that you can figure out how to align AI by thinking very hard over a long period of time. You can't do that. No, no engineering discipline works that way, right? The way you figure out how to make something work well is, is by building things and incrementally refining them. But equally.</p><p>Theo Jaffee (1:20:33)</p><p>I don't think that's what you're doing.</p><p>I don't think they actually think that. I think more like they believe that if they do build AI, it will probably end the world. So they will probably fail in their mission of aligning AI and they know that. But, you know, they can't run the risk of trying to build AI.</p><p>Perry Metzger (1:20:53)</p><p>By the way, your audio just went from being good to being bad. You may have switched microphones unintentionally.</p><p>Theo Jaffee (1:21:01)</p><p>Is this better? Okay.</p><p>Perry Metzger (1:21:02)</p><p>Yes, it is. Yeah. I think they have a variety of views there. I don't remember if I've said this so far in this podcast, but I believe that there are a bunch of people, you know, at MIRI who still believe that like having themselves uploaded and doing thousands of years of research very quickly is still, you know, is one viable, I put big air quotes around that because it's ridiculous, one viable way of attempting to.</p><p>to get aligned AI. But anyway, ignoring all of that, though, there is the problem that we are not the only actors in the world here in the United States. And there are a lot of other countries, some of them with much more advanced manufacturing and research and engineering capabilities than ours, that are also interested in AI and are not going to agree with the Yudkowskian vision, I suspect. I was in an argument recently with some people who are allies of mine in DC.</p><p>who were arguing, well, we could just stop, you know, and they were not, we weren't even talking about AI as such. We were talking about, you know, geopolitics and who is ahead on manufacturing technology, electronics, et cetera. I don't know what it is that you're studying, I have forgot, or if I knew I had forgotten. Okay, so if you were an EE and you were doing random projects these days,</p><p>Theo Jaffee (1:22:22)</p><p>computer science.</p><p>Perry Metzger (1:22:29)</p><p>You almost certainly would be asking companies in Shenzhen to send you PC boards that, you know, you'd draw something up in KiCad, you'd send it off to them, and a day later you'd be getting PC boards from them. There aren't a lot of companies, there are almost none in the West, that offer services as good as the ones the Chinese do. If you go out there and you look at small embedded Linux boards that you can use in various projects, things that are like Raspberry Pi -ish,</p><p>There are, you know, there is the the Western designed Raspberry Pi with a Broadcom chip in it. And there are also all of these great boards that you can get made in China, like the Orange Pi, which has, I believe, a RockChip 3588 in it, which also has a neural processing unit, which the Raspberry Pi does not. And that is a Chinese designed, non -U .S. fabricated microprocessor in that thing.</p><p>The state of the art in technology is not such that we can just giggle about the Chinese not having the ability to catch up with us. I know people who say things like, well, the Chinese don't have, you know, extreme ultraviolet, you know, silicon fabs. And the Americans don't either, it turns out. Like Intel doesn't have the ability to do cutting edge fab stuff. You know, TSMC does. You had something you wanted to say.</p><p>Theo Jaffee (1:23:37)</p><p>BOOM!</p><p>Yeah, I mean, the Doomer counterargument to that, one of them is, well, if you think that China catching up to the US on AI research would be bad, then open sourcing all of our AI would simply hand our frontier advances to them. And that would be a bad thing.</p><p>Perry Metzger (1:24:13)</p><p>They are going to have the frontier advances no matter what eventually, right? So what one of the things people don't get about how to think geopolitically is the notion that we are protected by superiority. We are not protected by superiority. We are protected by a balance of power in which people believe that it is dangerous to attack, in which they believe that they have far more to lose from warfare and other non -cooperative strategies than from.</p><p>cooperating, and so they do not. Which is not to say that I want to hand the Chinese, you know, the plans to some sort of sophisticated, you know, command and control AI or what have you. I don't. But we'll get to the open source question in a moment. For the last, like, 370 odd years, okay, the way that people in the West have recognized and</p><p>then it came to be recognized globally. That so long as the competing major powers in the world have relatively similar capabilities, relatively similar, you know, relatively similar worries about the capabilities of the opponents, et cetera, we end up with reasonably peaceful, you know, conclusions. You end up with war.</p><p>when a great power believes that it has an overwhelming advantage over its counterparties. This has been the understanding since the Treaty of Westphalia, and it seems to be mostly true. I know a lot of people on the EA side. EA funded, not that long ago, a bunch of Kurzgesagt videos about how terrible nuclear weapons are. And not that I particularly love nuclear weapons.</p><p>But there's a strong argument to be made that the presence of nuclear weapons has prevented us from having a giant world war of the sort that, you know, that happened. The first and the second world war were not the first great power conflicts, major great power conflicts in our history. It's just that they seem to encompass a large, much more of the Earth's surface. But we haven't had a great power conflict of that sort since 1945.</p><p>And why is that? Because everyone was too bloody scared to get into one, right? Now imagine a world in which there hadn't been nuclear weapons. I think it would have been almost inevitable that we would have ended up with a war between the communist bloc in the West somewhere in the 1950s or 1960s. And it probably would have been bloody as hell. There would have been tens or hundreds of millions of people killed. And we didn't. And we didn't do that because everyone more or less believed that the other side was in a position to deter it.</p><p>So there's a possibility in five years, 10 years, 15 years, maybe it's sooner, maybe it's later, who knows, that the Chinese have lots and lots of autonomous weapons systems and believe that they could easily just overrun Taiwan with them. And the trick to having them not do that is to have them know that the West and the Taiwanese and the Americans and everyone.</p><p>on the other side also have lots and lots and lots of autonomous weapons systems and that there would be a price for them attempting to do such a thing, that there would be a possibility that they would lose, right? If people overwhelmingly, if great powers are as amoral as infants, generally speaking, if you've ever dealt with a toddler, that's in certain ways not the worst model for the way that great powers operate to some extent.</p><p>You know, if the great power believes that it's going to win, it's probably going to do abusive things. And if it believes that it'll be deterred, it probably won't. The key here, in my opinion, given the inevitability that the Chinese are going to have capabilities like this, is for us to have capabilities like that, in which case they'll never actually try to use them, because they won't believe that they could do so safely. Now you ask the question, well, you know, if we open source all of our AI technology, if we just release</p><p>all of this research online, aren't they going to get a tremendous advantage? They're going to get something of an advantage, but we're also going to get an advantage, right? We, if we cut off internal communications among ourselves, will be unable to make the sorts of progress that we need in order to have balancing systems. And in my opinion, the key is not having the ability to overwhelm them or make them afraid that we will overwhelm them. The key is to be able to balance them and to be able to balance other powers.</p><p>If we get rid of open source AI, which by the way requires that we make all sorts of things that are traditionally like sacred values in the United States, like being able to openly publish about things, like being able to openly talk about things, like being able to just like release a bunch of data on the internet if you feel like it. If we decide that we want to make that stuff illegal, if we want to go for a pervasive societal attitude that all of this research is too dangerous to allow anyone to hear about.</p><p>What we'll end up doing is kneecapping ourselves, right? The advantage we have, the capability we have in spite of the fact that we have a tiny fraction of the number of manufacturing engineers in China and a tiny fraction of the number of electrical engineers that they have and a tiny fraction of the number of material scientists they have, et cetera. The advantage we have is free and open communication and a very competitive and vibrant venture capital segment.</p><p>I think that it would be incredibly immensely stupid for us to kneecap open source. I think that the greatest safety we have is in having lots and lots and lots of players on all sides have artificial intelligences that they're using for all sorts of purposes. Most purposes aren't going to be bad, right? Most purposes are going to be doing things like weeding fields and designing cancer drugs and coming up with ways to...</p><p>Theo Jaffee (1:30:05)</p><p>Yeah.</p><p>Perry Metzger (1:30:28)</p><p>to fix horrible social problems. But I think we're better off with massive decentralization with lots and lots of people having their toe in the water. By the way, we already have massive numbers of people with their toe in the water. I mean, I don't see how you could get the world to forget what we already know about AI research. It's a terribly sophisticated technology by the standards.</p><p>of a high school student who's only starting to study algebra basically. But it's not that bad if you're a computer scientist, right? The things that we have figured out turn out to be relative. I mean, there are no deep secrets, right? The deep secret is that there are no deep secrets. The biggest secrets were that statistical learning was going to win over good old fashioned AI mechanisms. I'd probably talk too long, though, without.</p><p>giving you a word in edgewise. I have a habit of doing that when I'm not feeling well, which is a large fraction of the time, unfortunately.</p><p>Theo Jaffee (1:31:33)</p><p>Yeah, so I'd love to get back to Alliance for the Future. Specifically, do you think that a nonprofit lobbying firm type thing is the best way to achieve good outcomes for open source, free and open source AI? Or like, why did you decide on this format?</p><p>Perry Metzger (1:31:48)</p><p>I mean...</p><p>Well, because I didn't have the budget in my own company and I really wasn't in a position to justify it. So we're working with a lot of organizations outside ourselves. One of the features of having a DC nonprofit of this sort is that a lot of what we do is talking to people, feeding them information, getting them the ability to do things that they need to do. We discovered...</p><p>You know, I had a reporter saying to me, you know, a few days ago, well, wasn't SB 1047 out since February or whenever it was? You know, how come you only learned about it right now? And I was like, well, you know, for better or worse, we only learned about it right now. We can argue about why that would be. But it turned out that our learning about it, because we were told about it by a couple of people who, you know, who like very forcefully brought it to our attention, meant that we were in a position.</p><p>to tell a bunch of other nonprofits and to tell a bunch. We discovered that a large fraction of civil society organizations that you would have expected would have been very concerned about this didn't know as of a week ago, right? They had no idea it existed. We found out that a bunch of venture capital firms had no idea it existed, that a bunch of startups had no idea that it existed. I'm talking to some folks at a very, very large company right now who are part of their policy group.</p><p>and who didn't really know much about this thing a week or two ago, and now they've geared up to talk about it a bunch. One of the things that a nonprofit of the sort that AFTF is can do is it's in a position to spread information around like that. Another thing we can also lobby, we can write position papers, we can do editorials, we can do all sorts of things. And it turns out that this is how the game is played.</p><p>I don't really love the way that politics happens in the United States, but the way that politics happens in the United States is you have advocacy organizations and you have lobbying branches for inside of large companies and there are professional lobbying firms and all sorts of other things like this in the ecosystem. DC, DC has its 501 C threes and it's 501 C fours and it's 501 C sixes and it's, you know, and it's companies that do lobbying and it's companies that do communications and.</p><p>Theo Jaffee (1:34:00)</p><p>Well.</p><p>Perry Metzger (1:34:12)</p><p>You know, and it's an ecosystem. And if you don't play the game, you're not in the game.</p><p>Theo Jaffee (1:34:17)</p><p>How do you convince legislators when you're playing the game that allowing AI regulation is actually, or not allowing AI regulation, that not doing AI regulation is actually in their interest and not merely, you know, the morally right thing to do or whatever? Because it seems like there are huge  forces playing the opposite direction.</p><p>Perry Metzger (1:34:33)</p><p>So.</p><p>So there are two things going in our favor. One of them is that for good or ill, it seems like the EA folks, in spite of their overwhelming financial advantages, are not very good at this. And I could speculate as to why that is. And maybe it would even be intelligent speculation, but it's not really my place. So when we find ourselves going in and being taken relatively seriously when we talk to people.</p><p>And when we talk to people, we explain to them, you were told that this piece of legislation was something that was very widely supported by industry, that lots and lots of people in academia think is good, that lots and lots of people believe is necessary and very normal. And in fact, it's kind of a bunch of extremist stuff. And the claims that you've made about what's in your own law aren't true.</p><p>And I can't, by the way, I cannot blame, it's common to say, why didn't this legislator read his own bill? And the answer is because he has 120 ,000 pages of bills that he's got to deal with in a given session. And of course he didn't read the bloody thing. How could he? You can't expect them to. And I think that it's not reasonable to. You can ask something very reasonable about why do we have a system in which we expect legislators to deal.</p><p>with these massive volumes of stuff going through. But you can't actually practically expect that they've read everything. And so sometimes you have to go in and you have to say, look at this paragraph in your bill, this paragraph that says a thing that is opposite to the thing you believe it says. Here, let's read it. OK? And you obviously can't be a rude asshole like that. But the point is that, you know,</p><p>Theo Jaffee (1:36:25)</p><p>Yeah.</p><p>Perry Metzger (1:36:29)</p><p>I will say right now that in the current fight in California, it is my strong expectation that a bunch of people more or less openly lied to the sponsors of this bill in order to get it pushed forward quickly. They told them that it was a widely supported piece of legislation, that there would be very few people who thought that it was a bad idea.</p><p>that it would get them lots of positive press, that it would help their political careers, that it would, you know, that it would bring back their lost hair, you know, anything almost. I get the distinct impression. And again, this is, I get the distinct impression that a bunch of the people on the EA side, because of their fanaticism about this, do not understand the concerns that legislators would have about doing things that,</p><p>are frankly hated by a large fraction of their constituents and don't understand that lying to them about what the consensus is is a way not to make friends for the long term. I mean, I think, you know, things like SB 1047 are likely to be extremely counterproductive for the EA side because what they end up doing is convincing legislators that they cannot trust the lobbyists who are pushing this sort of thing because they don't have the interests of the legislator in.</p><p>If you're talking to someone, you can't, and you lie to them too much, eventually they will notice.</p><p>Theo Jaffee (1:38:01)</p><p>So do you think that there's anything good, like positive expected value, that the government can do on AI? Or is Alliance for the Future's goal to kind of just get them to do nothing at all?</p><p>Perry Metzger (1:38:11)</p><p>No, I think that there's a great deal of stuff that we probably need, right? First of all, I mean, there's the stuff that you would probably consider to be not doing anything, but which I consider to be doing something, which is we probably need federal preemption of local AI laws because of the fact that one of the strategies that EA has chosen is to try to get laws passed in as many little municipalities and states as they can. But more than that.</p><p>There's a lot of controversy around this stuff right now. For example, there is a lot of arguments about copyright law and the use of copyrighted materials in training. Okay. And for good or ill, it's probably going to be the place of the legislature at some point to provide clarity so that we stop having lawsuits so that everyone, everyone may not be happy at the end of that process. And in fact, one of the, one of the definitions of a compromise.</p><p>in such circumstances is that you find that no one is happy at the end, right? You kind of know that it's okay if they're, it's bad if there's one party that's ecstatic and like lots and lots of other parties who feel screwed. If everyone feels like they can live with it, but they're not actually gloriously joyful, you've probably reached a reasonable level of compromise. But there's, there's stuff around, you know, around actual bad uses of AI. Now, now we focus a lot.</p><p>when we're talking about like the AI obsessions with attempting to stop AI research and development itself. But if you look at the other side of that, there are clearly uses of AI that most of us would probably consider a little scummy. You know, you can come up with stupid examples that are very obvious like scamming the elderly. Sure. Scamming the elderly. I don't think is I'm sure that there is a lobby for that, you know, among certain grifters, you know, in and.</p><p>Theo Jaffee (1:39:53)</p><p>scamming the elderly.</p><p>Perry Metzger (1:40:05)</p><p>Maybe there are people in Nigeria and boiler rooms in Pakistan or what have you where there are a lot of scam operations are operated off of who believe that they have a moral right to scam the elderly. But I think most people don't think that they have a moral right to scam the elderly. So actually having some law enforcement effort put behind, ignore the AI thing. I mean, everyone.</p><p>in the United States gets lots of scam calls, right? And they are a persistent nuisance. And wouldn't it be nice if they were actually the object of more attention? But there's lots of other stuff, right? Like, we have to answer how much surveillance do we want in our society? And we're going to get to the point soon where you could imagine the police in a major metropolitan area having real -time feeds from hundreds of thousands or millions of cameras. I mean, the price of</p><p>The price of cameras and the price of the hardware to drive them and to pump their data over the internet has gone down to very, very low numbers. We're talking about a couple bucks a piece, sometimes less. At some point, we're going to be able to scatter them like dust around. And do we want a society where people can pervasively track and record and note down the actions that every human being in our society takes in public?</p><p>And maybe there are some legitimate uses for such information. Maybe there aren't. But it's a debate that actually needs to be had about how we want to confront questions of privacy, of individual liberty in our society. Do we want, and this can be done right now, do we want tracking of every car and every person in our society at all times?</p><p>Do we want the government to have access to that information? Do we want private organizations to have access to that information? I mean, there are people I know who argue that you do want that sort of thing. It's at the very least a legitimate argument to be having because this is a real thing, right? These are real capabilities. These are things that are actually going to be possible in the not that distant future or possible now. So they are worth discussing. One of the things that angers me a great deal,</p><p>Theo Jaffee (1:42:06)</p><p>Mm -hmm.</p><p>Perry Metzger (1:42:31)</p><p>is that because of the Doomers focusing so much on science fiction scenarios, no one is discussing very realistic scenarios. Things that can happen in the near term or immediately are already happening. And I think that those are probably more salient. So one of the things we would like to do is actually develop a legislative agenda for, you know, that discusses...</p><p>some of the threats that are more salient, the things that we might actually have to really worry about, you know, scams, surveillance, you know, how we react to training data, data sets, privacy, you know, should people, you know, should I be able to ask an open source AI?</p><p>or well, not an open source AI, I didn't mean that, but should I be able to ask a publicly available AI for intrusive information about your medical data and get an answer out? Probably not. But depending on how people train the things and the liabilities they have or what have you, this could end up being an issue. So we do have a lot of stuff that we want to discuss with Congress and with state legislatures.</p><p>But most of it is not in the direction, you know, most of it I think would be laughed at by a lot of the EAers. You know, someone like Yudkovsky would probably say, why are you thinking about this meaningless drivel when, you know, when the entire world is going to be destroyed soon? Well, I don't actually think that the entire world is going to be destroyed soon. So, pardon?</p><p>Theo Jaffee (1:44:09)</p><p>it's logically consistent. It's logically consistent though.</p><p>Perry Metzger (1:44:16)</p><p>Yes. The one thing that I can say for Yudkovsky and McCaskill and all of these people is that they may be distasteful, but they are reasonably internally consistent. Reasonably, not completely. I think that a lot of the things that they say are developing certain cracks, especially given the fact that we are living in an era where we're actually being confronted by real AI systems.</p><p>and are being forced to see whether or not they actually do the sorts of things that they claimed at one time, and they don't.</p><p>Theo Jaffee (1:44:50)</p><p>Yeah. So last question, what does your roadmap look like for the future of Alliance for the Future? Like what kinds of things will you be doing in the near term and then maybe farther out?</p><p>Perry Metzger (1:45:03)</p><p>Most of the stuff that we have to do is incredibly boring. We have to retain more staff. We have to raise more money. We have to build better donor relations software. We have to track more and more of the initiatives going on in state legislatures, in various portions of the federal government. We have to build a lot more connections with organizations that have interests similar to ours.</p><p>One of the things we've discovered is that there are a ton of organizations that are on the same side as this, but don't have enough time to think about it very much because they are, say, you know, an industry group that has to deal with, you know, 50 or 100 issues in a given legislative session. We only have to deal with one. Where, you know, most of what we're interested in at this point is just really boring stuff about building the organization. But our</p><p>You know, I mean, our goal is pretty straightforward. We want to stop the, you know, stop overregulation of AI and make sure that people are focusing on actual salient issues for our society associated with AI instead. And over the next few years, that's what we'll be doing. I mean, at some point, I think that this particular battle is going to be won and AFTF will probably, you know,</p><p>Most such organizations in the end start mutating and taking on different roles than they started with. And I'm sure in five or 10 or 15 years, AFTF will do that sort of thing. I mostly care about what it's doing in the next few years and how effective we are in trying to stop, in trying to stop doomerism. If we can stop doomerism, if our societal transition for this sort of thing ends up being,</p><p>less sculpted by paranoia and science fiction scenarios and insanity and ends up being more sculpted by people thinking things like, gee, I might actually be able to help a bunch of these kids learning math by giving them individualized math tutors or what have you. I think we have the possibility of having... There are some really, really amazing and cool things we're going to be able to do if this happens, right? If AI is left alone.</p><p>The US has had year on year GDP growth at or below 2 .5 % for a very long time, even though if you go back a century, it was more like 5%. And this is a really big problem because it means that, among other things, that we're slowly strangling ourselves on our national debt, that people are much poorer than they need to be.</p><p>to me, and this is going to sound boring to like 98 % of people, but I think this is really exciting. I think if we just can, if AI brings GDP growth, you know, above, you know, four or 5 % for the first time in forever, you know, and I know people who'd probably say, well, you know, it can probably do a lot more than that. And I'm not going to be utopian and, and, and, and, and go there. Maybe it can, but if we double.</p><p>GDP growth because of the of the widespread adoption of AI it's going to mean that in the lifetimes of people who are around right now You know ignoring life extension or anything else, you know by the time they're 50 they're going to be well, let's see So that would be if it was at 5 % I think that would be a doubling every 15 here So we would expect them to be like 16 times wealthier by retirement</p><p>if they're in high school now. That's crazy. That's a big, big difference, right, over the sorts of economic growth we have right now. And if it turns out to be true that AI could bring economic growth to double digits or higher, maybe it could, maybe it couldn't, things are even better. But even if we just got to a really modest goal, like 5%, this is life changing for hundreds of millions of people.</p><p>Theo Jaffee (1:48:55)</p><p>Yeah.</p><p>Perry Metzger (1:49:20)</p><p>And if we can have a small part in making sure that that happens. I don't care if AFTF gets any credit for anything that it does. I don't care if it becomes a famous organization. I don't care if it becomes a household world. I care a great deal about making sure that we don't end up with crippling regulation or bans or things like that on the most promising technology that I know of that's being deployed right now.</p><p>If we can succeed in doing that, that's a win.</p><p>Theo Jaffee (1:49:53)</p><p>Yeah, well, I think that's an excellent place to wrap it up. So thank you so much, Perry Metzger, for coming on the show.</p><p>Perry Metzger (1:50:01)</p><p>Well, thank you so much for having me, Theo.</p>]]></content:encoded></item><item><title><![CDATA[#14: Robin Hanson]]></title><description><![CDATA[Cultural Drift, Ems, Elephants, Institutions, and The Future]]></description><link>https://www.theojaffee.com/p/14-robin-hanson</link><guid isPermaLink="false">https://www.theojaffee.com/p/14-robin-hanson</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sat, 27 Apr 2024 15:07:40 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/144073363/9c445f269af928af8465696675d95cc4.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Robin Hanson is a professor of economics at George Mason University, the author of <em>The Age of Em</em> and <em>The Elephant in the Brain</em>, the writer of the blog Overcoming Bias, and one of the most interesting polymaths alive today.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:24 - Mathematical models and grabby aliens</p><p>9:11 - Will we run out of value in the future?</p><p>12:23 - Kurzweil&#8217;s Law of Accelerating Returns</p><p>14:29 - Posadism</p><p>17:53 - Moral progress and Whig history</p><p>20:29 - Will there be a trad resurgence?</p><p>23:00 - Will Israel&#8217;s ultra-Orthodox problem globalize?</p><p>25:39 - Why will fertility rate keep dropping?</p><p>30:14 - Is declining fertility solvable technologically?</p><p>32:20 - What is wokeness? Has it peaked?</p><p>35:02 - Will virtualization make society more multicultural?</p><p>39:30 - How do institutions coordinate so well?</p><p>42:50 - Will ems care about death?</p><p>46:16 - Personal identity and death</p><p>49:30 - How much of Age of Em is applicable to LLMs?</p><p>51:09 - Why we shouldn&#8217;t worry about AI risk</p><p>55:40 - What if people don&#8217;t see AIs as their descendants?</p><p>1:00:41 - Other future tech deep dives</p><p>1:02:43 - Our very long-run descendants</p><p>1:06:08 - Time and risk preferences</p><p>1:08:34 - Wouldn&#8217;t ems be selected for docility?</p><p>1:11:24 - How Robin got involved in rationalism</p><p>1:13:22 - Girls getting the &#8220;ick&#8221;</p><p>1:16:56 - Have humans evolved since forager times?</p><p>1:18:28 - Cultural evolution</p><p>1:20:30 - Culture and prestige</p><p>1:22:49 - Why medicine in the US is bad</p><p>1:25:54 - Is academia the best truth-seeking institution in society?</p><p>1:28:52 - Peer review</p><p>1:31:13 - Which institutions are actually good?</p><p>1:32:33 - Why universities are all the same</p><p>1:37:40 - Bitcoin and speculation</p><p>1:46:44 - Demarchy</p><p>1:50:03 - Futarchy</p><p>1:53:56 - Applying prediction markets to dating apps</p><p>1:57:38 - The broadest thinkers and books in the world</p><p>2:00:59 - How Robin balances his many interests</p><p>2:01:58 - Teaching</p><p>2:03:12 - Outro</p><h3>Links</h3><p>Robin&#8217;s Homepage: <a href="https://mason.gmu.edu/~rhanson/home.html">https://mason.gmu.edu/~rhanson/home.html</a></p><p>Overcoming Bias: <a href="https://www.overcomingbias.com/">https://www.overcomingbias.com/</a></p><p>Robin&#8217;s Twitter: <a href="https://twitter.com/robinhanson">https://twitter.com/robinhanson</a></p><p>Grabby Aliens: <a href="https://grabbyaliens.com/">https://grabbyaliens.com/</a></p><p>Age of Em: <a href="https://archive.is/LMrr9">https://archive.is/LMrr9</a></p><p>The Elephant in the Brain: <a href="https://www.elephantinthebrain.com/">https://www.elephantinthebrain.com/</a></p><p>Beware Cultural Drift: <a href="https://quillette.com/2024/04/11/beware-cultural-drift/">https://quillette.com/2024/04/11/beware-cultural-drift/</a></p><p>Playlist: </p><div id="youtube2-avyrA5PYsik" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;avyrA5PYsik&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/avyrA5PYsik?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Spotify:</p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW&quot;,&quot;belowTheFold&quot;:true,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/1IJRtB8FP4Cnq8lWuuCdvW" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:3549,&quot;numEpisodes&quot;:13,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-03-22T15:11:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:00)</p><p>Welcome back to episode 14 of the Theo Jaffee Podcast. Today, I had the pleasure of speaking with Robin Hanson. Like previous guest Bryan Caplan Robin's day job is a professor of economics at George Mason University. There's much more to him than just that, however. He's a world -class polymath who's worked in literally dozens of fields and was a pioneer of many of the things we love today. Before going into economics, he studied physics, worked in AI research in the 80s, and did a PhD in social science.</p><p>He's proposed over a thousand ideas for alternative institutions, most famously prediction markets, which are today a multi -billion dollar industry. He was early on crypto, too. He was friends with Hal Finney, who many believe was Bitcoin's creator, Satoshi Nakamoto. He's been involved in futurism since the 90s, creating the idea of the great filter and grabby aliens, and writing a 400 -page deep dive on mind uploading called The Age of Em. He's also into human psychology and rationality. He's written on the blog Overcoming Bias since 2006.</p><p>where Eliezer Yudkowsky was originally a co -blogger before leaving to create Less Wrong. And he co -wrote with Kevin Simler the book The Elephant in the Brain on the hidden motives behind nearly everything we do. Most recently, he published an essay called Beware Cultural Drift, warning about the danger of having a global monoculture that's slow to adapt to changes. There was a lot to cover in this episode and I had a lot of fun recording it. This is the Theo Jaffee podcast. Thank you for listening and now here's Robin Hanson</p><p>Theo Jaffee (01:24)</p><p>Okay, we're on. Welcome back to episode 14 of the Theo Jaffee Podcast. We're here today with Robin Hanson</p><p>Robin Hanson (01:30)</p><p>Nice to meet you, Theo.</p><p>Theo Jaffee (01:33)</p><p>Nice to meet you too. So my first question has to do with your idea of grabby aliens and the great filter, which are, you know, these ideas about the, you know, explanations for why we don't see aliens and how our society might move in the future. Um, so grabby aliens and rationalism both heavily rely on mathematical models, but with a lot of these mathematical models, even like small inaccuracies in the inputs can make the output like wildly inaccurate. So how do you typically account for this?</p><p>Robin Hanson (02:04)</p><p>Well, you want robust models for which that's not true. So our model is a three parameter model where each parameter is fit to data and the parameters are as follows. Basically, advanced Davidian civilizations appear in space and time. They appear at random places in space.</p><p>They also appear at random points in time, but the time at which they appear is proportional to a power law. So the power law has a constant and it has a power. And then once they appear, they expand at a speed. Those are the three parameters. And we fit each of the parameters to a particular piece of data we have.</p><p>and I claim that that model is robust in the sense it's not very sensitive to these parameters. You can change these parameters by a substantial amount and then the model only changes by a proportional amount. It's not highly sensitive to some particular choice of parameters. So I mean we could explain where these parameters come from and then you know what we've concluded from that but I'll pause and let you push farther if you want. Okay so...</p><p>Theo Jaffee (03:11)</p><p>Yeah, yeah, let's go into that.</p><p>Robin Hanson (03:15)</p><p>The speed of expansion comes from the fact that if you model different speeds of expansion, you'll find that at low speeds of expansion, you predict that each one, when it's looking out into the sky, will see many of the others. Because light goes much faster than they expand, and so they will see them a long way off way before they get here. Each will see the other one coming. Since we look in the sky and we don't see...</p><p>other huge alien civilizations taking up enormous spheres in the sky expanding at a rapid rate. We can then conclude that we must be in the parameter space of the model where they don't see each other coming and that's where they're expanding pretty fast, say half the speed of light or even faster. So we then conclude, well I guess that's the that parameter value. They are expanding very fast because</p><p>we look up and we don't see anything and most would see something in the model if they were expanding slowly. So that's one of the key parameters. Now, another key parameter is the constant in front of the power law. So as you change the constant, you will change on average when these things appear in the history of the universe.</p><p>you know, make the constant lower and it'll take longer before they appear. You make the constant higher and they will appear sooner. So we can take our date at the moment.</p><p>as a random sample from when the dates of these things appear, and that constrains the constant in front of the power law. That is, we can basically assume a uniform distribution of where we are in the distribution of alien civilizations. We could be really early, we could be really late, but we're somewhere, assume, say, a uniform distribution over the rank. Are we in the first percentile, the 99th percentile, somewhere in the middle? And that then gives us a distribution over this constant.</p><p>in front of the power law. And the power of the power law comes from the history of life on earth. So the key idea is that in order to become an advanced civilization like ourselves, you have to go through a number of difficult steps and you have to do that before the window for life on your planet ends.</p><p>and a simple statistical model of what happens when something has to complete a whole bunch of difficult steps before a short window. Usually it would not succeed before the end of the window, but once in a while it gets lucky and does. A statistical model of that says that the time at which they do succeed in appearing goes as a power law. And the power is the number of steps it has to go through in this history. And we can now use</p><p>timing of events on the history of Earth to roughly guess the number of steps that we've gone through. So a best guess of six, but hey maybe a shorter three or nine, come from...</p><p>how long it took from life to be possible on Earth till the first apparent appearance of life on Earth, and then how much is there between now and when it looks like life would no longer be possible on Earth. Those two time durations are the datums that we can use to pin down how many steps there have been, and again, middle estimate of six. So those are the three parameters. Each comes from data. Put those together into a stochastic model, a model of probabilities, and you can run it many times.</p><p>to get the distribution over what the history of advanced civilizations in the universe looks like. And from those distributions, as you vary the parameters, we can draw, say, the following two conclusions. Roughly, advanced aliens appear roughly once per million galaxies. So...</p><p>There's trillions of galaxies in the universe, so there's lots of them in the universe, but they're still pretty far apart, once per million galaxies isn't close. So that says it'll be pretty hard to see them because they're pretty far away. And the other key parameter is if we go out and become one of these advanced alien civilizations that expands and becomes visible in the universe, we will meet others in roughly a billion years.</p><p>once per million galaxies meet them in a billion years. So those are pretty specific answers to pretty important questions. Now, it's not a billion point one or something precisely, it's in the ballpark of a billion years. All these answers are rough. That's part of the answer to your precision question. We're not giving very precise answers here, but compared to the basic question you might've asked, we now know a lot more than we used to.</p><p>Theo Jaffee (07:59)</p><p>So going back to what you said about like the steps it takes intelligent life to arise, how does that model account for like, do you think it's possible for life to arise spontaneously on like a non carbon based, non biological substrate?</p><p>Robin Hanson (08:15)</p><p>The general model doesn't care what the substrate is. It just cares what this power law is. Basically, how many steps does it have to go through? And what's the overall chances constant? And it allows for a wide range of different paths to advance life. I think it's if there are many different paths and some of them have more steps than others, it's overwhelmingly likely that it's the one with the fewest steps that will happen most often.</p><p>So we can, you know, if it takes us six steps, we can be pretty assured nothing out there is happening in less than six steps. Six steps must be the minimum. And the only other competitors out there we might interact with would be also things that took six steps, because the rest are very unlikely. So, of course, if it's five steps for us, it's five, but whatever our number of steps is, it's pretty much going to be the same for everybody else.</p><p>Theo Jaffee (09:11)</p><p>So while we're on the topic of aliens in the future, you have written like very extensively about future economic growth rates and you predict that eventually our rate of like resource and technological growth will stagnate. But how can that be possible when the like combinatorial space of atoms is just like gigantic? Like, yeah, sure. We will probably end up getting fewer actual resources in terms of like the number of atoms that we can gain access to.</p><p>But in terms of the ways that we can combine those atoms to create things of value, it seems like it would be a very, very, very long time before we exhaust that. This is kind of David Deutsch's idea of the beginning of infinity. So what do you think about that?</p><p>Robin Hanson (09:57)</p><p>So just to be clear to everyone, we're talking not the next few years. We're talking over thousands or millions of years as we get much more advanced and explore a much wider space of possibilities than we have now. Now, we already have a lot of experience, say, with computer programs, exploring searching spaces of possibilities.</p><p>I did part of this as part of my career. For example, we had a space of possible statistical models and a certain family of models. We were searching in that space for more likely models. And we found that, say, compared to a simple search, being a little clever about heuristics gave enormous advances in our ability to find more likely models. So there's really a lot to be gained by being clever about search. But it was also just clearly true that...</p><p>If you have decent ways to search, you're going to be looking at the low -hanging fruit first. You're not searching at random, you're searching for the best things as quickly as you can. And low -hanging fruit first implies high -hanging fruit last.</p><p>it means that the search has to slow down. That's kind of implied by any ability to not search at random, but to grab the best stuff first easier, is that you are able to prioritize the search space to find the things that are more likely to be promising first, which means the things that are less likely to be promising are going to be what's left over after you take the low -hanging fruit.</p><p>So it's just a matter of like how fast, how sort of just how much does it asymptote eventually, but consistently when we do search, you know, in computer systems and in design spaces, we do reach this sort of a asymptote situation where things slow down a lot. And the world economy is a big, deradic example of this.</p><p>That is, as the economy doubles, we get twice the capacity to search in all the different dimensions we might want to. Yet, the growth rate doesn't double, stays about the same, because we pick the low -hanging fruit first. So, you know, the fact that economic growth is steady is a strong testament to the fact that the search gets harder with time, because our ability to search gets much bigger, yet we don't find stuff that much faster.</p><p>Theo Jaffee (12:23)</p><p>What do you think about Ray Kurzweil's idea about accelerating returns?</p><p>Robin Hanson (12:28)</p><p>I mean, there can be parts of search spaces where you, you know, find things and you find more things and they help. So just generically, if you think about any system that has some sort of reinforcing process, the reinforcing process can either accelerate growth or decelerate growth or hold it steady. Those are the only three mathematical possibilities, right? Our experience with systems is that overwhelmingly we have the decelerating growth.</p><p>And once in a while we have things that are constant growth, and then more rarely we get accelerating growth, like a nuclear bomb is accelerating growth, right? But accelerating growth typically doesn't continue to accelerate forever. It accelerates over a range and then it slows down.</p><p>Like, you know, many of you listeners probably have personally experienced some point at which you were trying to figure something out and then you started to get it. There was a period of accelerating growth where things were falling into place and you were able to figure things out. In fact, they figured one thing out, helped you figure another thing out, and that was great. It was fun, but it didn't last forever. In your personal experience, it ran out a bit, and then you experienced decelerating growth.</p><p>So, you know, I just think we have enough experience with lots of kinds of systems. Like you might say the surprising thing is we have had continued to have at least steady growth in humans for a while, then we've had these jumps to faster growth modes. That's the most dramatic deviation from this expectation of decelerating growth is that we've got overall continued growth of human civilization. So the question is how far you expect that to go. And...</p><p>Again, once we reach the limits of physical expansion, like the speed of light, and we reach the limits of the materials we can work with, we have all the basic elementary particles and forces and we can't find any more, we'll search the space of all the vanes to arrange them and we will reach diminishing returns.</p><p>Theo Jaffee (14:29)</p><p>I see. By the way, are you familiar with pesadism?</p><p>Robin Hanson (14:33)</p><p>Posadism. Nope.</p><p>Theo Jaffee (14:34)</p><p>Passages um this guy I forgot his first name, but his last name was Posadas he was like a communist and he came up with this idea of Yeah, J Posadas He was yeah an Argentine Trotskyist who had this vision of alien civilizations Basically, he believed in communism. So he believed that you know kind of like he took Karl Marx's theory that like</p><p>the end state of human civilization is communism to its logical conclusions that advanced aliens would be communists too. So civilizations should search for advanced aliens because if advanced aliens were to find us, then since they would also be communists, they would help us lead the global communist revolution until you go over the world. I ask because it's basically one of the very, very few other people who I've encountered who's thought about this in the same way that you have.</p><p>Robin Hanson (15:32)</p><p>I find it remarkable that people can look at our recent history of cultural change and take some recent trend in our cultural change and imagine that that trend will apply over billions of years when it hasn't even applied over 100 years yet. This is one of the key blind spots humans have.</p><p>Basically, we are driven by cultural evolution and our cultural evolution makes us change our cultures fast, but we're kind of blind to that and you know, we tend to think our culture is best and that whatever issues our cultures have with other cultures then we're right and they're wrong and we're gonna be right for the entire future of the universe and that you know...</p><p>Liberal democracies say will be what the universe wants or commune or whatever it is. I just think if you look at how much has changed in such a short time in the past and then try to project the last few hundred years of human history into billions and trillions and more years of the future of the universe, it's just really hard to imagine that we have gotten it right in terms of the fundamental cultural issues if we didn't even notice them a few centuries ago.</p><p>Now, I might say we could find some things that are more robust. Robust issues are centralization versus decentralization. We can sort of see that that's a robust issue. Robust issue is degree of competition versus coordination, how many scales of organization there are. Those, in some sense, we can define issues as long -lasting issues, issues of what the unit of mines are be, how large would mines be.</p><p>How much do they merge into hive minds? At what scale? Whether natural selection continues? What are the forms of preferences creatures have? How do they encode them? How well do they know them? These are some of the things we can identify as pretty robust issues that last for a long time. But to think that our temporary answers to those questions, we could be confident will be the best answers across.</p><p>vast time scale of the universe seems kind of crazy.</p><p>Theo Jaffee (17:53)</p><p>Hmm. Um, David Deutsch also writes about this kind of idea, but his perspective is more like we will continue to improve over time, just as, you know, our morals so far have improved from, you know, being kind of forager values where we would fight constantly to being farmer values, or you'd have like a despotic king ruling over people to being more modern, like liberal, secular, democratic values. And that we continue to improve values into the future. But.</p><p>Do you think more values in the future will be arbitrary and we won't see them as improved? We won't necessarily see them as improved.</p><p>Robin Hanson (18:30)</p><p>I mean, first of all, you just have to notice there's a selection effect. If every cultural thinks it's best, then when it looks back at its history, it's gonna see things improving. That's more directly implied by cultural arrogance. All cultures think they're best. That's also accrued across space at any one time. Each culture thinks it's better than all the rest. Yes, well, also Whig foreign policy.</p><p>Theo Jaffee (18:49)</p><p>Whig history?</p><p>Robin Hanson (18:55)</p><p>you think you're best than all the other things that coexist with you or all the things you might imagine, but basically you can therefore predict that the future will think they're best and they will have seen more improvement. The question is whether you would approve of their improvement. And this is actually something I've been thinking a lot about lately and even in the last few hours.</p><p>Old people like me have seen our culture change in our lifetime. And then we are expected to embrace those changes, to think that our culture is now better than it was when we were young. But when we were young, we assimilated the culture of our world when we were young. And then later on, the world changes and its culture changes, and we're supposed to change with it. We each have to ask, well, was I wrong back then?</p><p>This is more right, but have people offered me arguments for that or they just telling me, you know, you're out of touch old man And I think it's hard to To you know engage that really I Don't think the world really offers as much evidence that in fact their new values are better than old ones They offer conformity pressures and sanctions if we don't agree but We're not usually persuaded</p><p>more conjured, pressured into accepting the new value changes.</p><p>Theo Jaffee (20:29)</p><p>Do you think that as these kinds of changes keep improving that we'll see like an increase in like return to tradition movements? Not just like in the sense of mainstream conservatism, but like I'm seeing this a lot more now, like trad Cath people on the internet who are trying to be like, you know, medieval Catholics or something.</p><p>Robin Hanson (20:51)</p><p>Well, the dimension I would call your attention to is diversity and variety. Our world today has vastly less variety of culture than we did a few centuries ago. So three centuries ago, basically the world was divided into the hundreds of thousands of little tiny peasant cultures.</p><p>each of which was pretty independent but pretty near subsistence. And if they drifted off the rails, they got punished by famine or pandemic or invasion, and that held them pretty close to functionality. Selection kept them in line. But then we merged small peasant cultures into national cultures, and then we have seen the rise of a world culture. And that means vastly less cultural variety.</p><p>And that, I think, means we should expect our cultures are going off the rails. We have vastly less cultural variety and vastly less selection in that our cultures, our few world cultures, are quite rich, peaceful, healthy, and if they go off the rails it'll actually take quite a while for them to fall because they have a lot of slack to survive that. And that's what I think is going to be happening over the next few centuries.</p><p>and the solution is some sort of variety that will, you know, defy the mainstream culture and do things different, like say the Amish or Heretim, and then in a few centuries they will grow and rise and replace the dominant cultures. So in that sense, I think in a few centuries' time scheme, yes, you will see the revival of...</p><p>traditional cultures, they will be different. Some of them will be more traditional, but the key thing is they will find a way to be highly fertile and highly insular. That is, insularity is especially the key. You can't diverge and do things different from the dominant culture unless you are insulated from it. And that's in some sense the most distinctive features of these small fertile subcultures is their insularity.</p><p>Theo Jaffee (23:00)</p><p>This talks, this sounds a lot about the biggest current domestic issue in Israel, which is you have like basically two classes of Israeli Jews. One is modern, secular, liberal, and the other is very traditionalist, Orthodox, and they have conflicts all the time because the Orthodox don't serve in the military and are like net burdens on the tax base. So should we expect to see this kind of thing everywhere?</p><p>Robin Hanson (23:23)</p><p>Right?</p><p>So this is a vanguard in the sense that if nothing else happens over centuries, these very highly fertile cultures which are growing fast will in fact replace everyone else because everyone else's fertility is going to keep declining and be below replacement. There are some transition issues to work out as small cultures get bigger.</p><p>Certainly one of them is this pacifism. This is also true of the Mennonites and the Amish in the US. They're pacifist. But I think the pacifism is mainly a strategy for insularity, because young men, when they go off to war, get a lot of cultural impressions from their fellow soldiers. They didn't want this to happen to their young men, so they made them pacifists. Once you have large enough groups to have their own military units, this is less of a problem. They can all go to war together and maintain their culture.</p><p>But this will be a transition point. So I guess the question is when the Haredim will be willing to make that transition to joining the military, but maybe in their own special units. The fact that they're subsidized is a happenstance of Israeli history. The Amish and the Mennonites aren't subsidized here in the US. They're kind of taxed, actually.</p><p>but they're still growing very fast. But any of these transitions is a risk point where they might fail to maintain their insularity or their high retention rates. And so we can't be that confident in predicting their rise in the sense that there's just lots of things that can go wrong. So for example, the Mormons are a failure case.</p><p>They were once a highly insular, highly fertile subculture, but the Mormon Church made a conscious choice to integrate the Mormons and integrate it with the rest of.</p><p>national and international society, they succeeded in that, and now the Mormon fertility is falling at the same rate as the rest of the country just 20 years behind. They are not going to succeed in being a insular fertile subculture, and that sort of thing could happen to the Haredim or the Yamash or others, but if there are enough of them doing things differently, it won't happen to all of them.</p><p>Theo Jaffee (25:39)</p><p>Why should we expect mainstream society to continue decreasing its fertility rate?</p><p>Robin Hanson (25:45)</p><p>Because we can see strong cultural trends that are causing it. It's not just an abstract number we can track. We can track the particular more proximate causes that are pushing it. And they seem robust and beloved. You can try to resist them, but a lot of people don't want to stop them. They will, in fact, push hard back if you try to reverse some of these trends.</p><p>Theo Jaffee (26:11)</p><p>Well, you know, in the last few decades alone, we've seen, you know, a major norm, 50, 60 plus years ago, was that homosexuality was wrong. And now, like, if you were to say that homosexuality is wrong, that would be enough to get you kicked out of polite circles. So we've had, like, a complete reversal. Yeah.</p><p>Robin Hanson (26:27)</p><p>So cultures do change, but the question is can you cause them to change in a desired direction? So the key thing is that the main way cultures change is that different factions fight over the changes and compete to influence the changes. And like if you look at the culture section of a newspaper, what that really means is these are the people most respected for...</p><p>influencing the direction of cultural changes and not everybody gets to play on an equal battleground in that space. So culture definitely changes and it's changed big time, in fact so much and so fast I think you should be disturbed by whether...</p><p>We can believe that that's all functional and adaptive, but it's not willing to just be changed in any particular direction in any particular subgroup once. Each group that tries to push for one change will typically face other groups that are pushing the other way. The question is, who will win?</p><p>Theo Jaffee (27:24)</p><p>So why would it be unlikely that the pro -fertility push in mainstream culture would win?</p><p>Robin Hanson (27:31)</p><p>Well, we can see a number of more fundamental causes again. So for example, I just over the weekend read a book by the classic founders of the field of cultural evolution, Boyd and Richardson. And in a 2004 book, their favorite model was that.</p><p>Basically, all culture needs an idea of prestige and people copy prestigious and a lot depends on what counts as prestige. And so in our society, the sorts of things that get you high prestige tend to require a lot of education. And a lot of education tends to put off fertility. And that's their story for fertility decline, which is plausible.</p><p>Obviously it's also tied in with say gender equality. If it was only men getting educated so highly it would be less of an obstacle. But we do have a lot of gender equality and along with prestige going along to the people who have high education, years of education, and that's causing low fertility. Another thing in history was that rich people could make their kids...</p><p>higher status by investing money in them, but the more kids they had, the more they divided up that money. And so there was an incentive for elites to have fewer kids in order to give each one of them high status. And so if we looked at prestige of individuals, signs of individuals, at that.</p><p>money could be invested in than that produced a selection effect to lower fertility even among elites centuries ago. But in addition to that, we have trends toward more parental care. That is, we have higher standards now for how much attention parents should pay to kids. We have standards of switch from what they call cornerstone marriage to capstone marriage, whereas in the past you would marry somebody young when you're less formed, less clear where you would go or succeed. Now the standard is more you should wait until you have a</p><p>steady, successful career and you've formed your personality, you know, just what your hobbies are and then you should find somebody who matches all those things. But by the time you do that, there's much less time for fertility to happen. We have norms limiting grandparent involvement and raising of kids and in kid careers. You know, there's just a whole bunch of these trends and many of them are quite beloved.</p><p>Urbanity, stronger urban living, which also seems to pretty clearly discourage kids. Less religion, religion's always been pretty strong. Correlate to fertility. And most people are not very open to reversing these trends.</p><p>Theo Jaffee (30:14)</p><p>Hmm. Well, do you think that this kind of thing could be solvable technologically? Like if there's some technological innovation that lets women get pregnant, you know, in a healthy way when they're 50 or 60.</p><p>Robin Hanson (30:20)</p><p>Oh sure, but the -</p><p>Well, the simple thing would work. I mean, the simplest thing that would work is just to have men and women freeze their egg and sperms at the age of 20 and then unfreeze them when they're ready to have kids, even if that's the age of 45. That would work, but it's a big ask because most people don't want to do that. Another thing that would work is to pay parents.</p><p>to have kids and borrow the money from the future tax revenue those kids will pay. That would also work, but again, you'll have to be inclined to want to solve the problem. And if you do that, it's gonna cause changes in some of these beloved trends. But I mean, I'd say there are ways we could, if we got our head into it, solve the fertility problem, but the fertility problem is really only a symptom of a deeper problem. And we are...</p><p>We have much worse options for the deeper problem. The deeper problem is just we have very few cultures which rapidly change and we're weak selection pressures and that can plausibly make fertility go off the rails but can also make lots of other things go off the rails. Norms of, you know, when you, what medical treatments you use or norms of war and peace.</p><p>Theo Jaffee (31:39)</p><p>Like what?</p><p>Robin Hanson (31:49)</p><p>norms of family. I mean, our life is full of social norms and prestige markers, and they can all just go wrong. The space of possible cultures, most of it isn't very good. Generic idea of evolution is selection will keep designs and structures in the small part of the space that's the good part of the space that's functional and productive, and random drift takes you off to the bad parts.</p><p>Theo Jaffee (32:20)</p><p>So what do you think about wokeness? Is that like a concept that is meaningful? Is it like a, you know, symptom of a maladaptive culture? Is it like a cause of a maladaptive culture? Do you think it's peaked?</p><p>Robin Hanson (32:32)</p><p>it's, it's, it's, I mean, it's just clearly, it's not peaked, but it's clearly just evidence of cultural change, and if you think about it, there's no particular reason to think it's functional, not that there's no particular reason to think it's dysfunctional compared to any other cultural change, but just realizing how rapidly our cultures are changing.</p><p>you have to realize there's nobody driving this train. There's no guiding force that's there to make sure these things are channeled into more productive, functional, adaptive forms. That doesn't exist. It's not how it works. We are just in a world where culture changes pretty randomly. And you have to realize that if this is a fragile thing that is valuable when it works right and it's in a functional structure, if you just make random changes,</p><p>Pretty soon those won't be very good.</p><p>Theo Jaffee (33:27)</p><p>Hmm. Is that also your explanation for, let's say, the current dysfunction that California has?</p><p>Robin Hanson (33:37)</p><p>I'm less willing to sort of attribute things to very particular cases. You know, this argument is strongest at the general level and it's just harder at the specifics. But...</p><p>I'll note that most people really want to argue at the within culture level. So, like the culture sections of newspapers or a lot of op -eds or a lot of things are basically people passionately arguing about which way their culture should go. And when they make those arguments, they will refer to sort of who's more prestigious and who's with us and who we are and what we have valued in the past. And those are the resources that you can use to persuade people that your story of where we should go is right and people should follow in your direction.</p><p>And most passionate discussion in politics and culture is about that. It really isn't looking at a distant point of view of how cultures evolve and what that might mean. It's about here we are and I want to go this way and you want to go that way and I'm right and you're wrong.</p><p>And part of that story is to look in the past and say, you know, where we've came from, from there to here is exactly the same thing I want to do, continue going from here to the future. And people want to appeal to our shared sense that what we did, our changes in the past must have been good changes in order to argue for how new changes should also be pursued.</p><p>Theo Jaffee (35:02)</p><p>So let's talk about this hypothetical future that one of my previous podcast guests, Greg Fodor, talks about extensively that I think is plausible, where eventually virtual reality technology will get good enough and reliable enough so that people will essentially virtualize their whole lives. And when that happens, they will have a much greater degree of control over the level of participation in society that they have compared to today. So do you think...</p><p>that could be like a solution to this kind of monoculture problem if people are capable of simply just removing themselves from the culture and creating something entirely new.</p><p>Robin Hanson (35:44)</p><p>Compared to the past, the distant past, our world is largely virtual in the sense that if you look around you, most of the surfaces you see are artificial surfaces constructed to have artificial appearances. We're not out in the woods or the jungle or the sea. We are mostly in artificial worlds constructed.</p><p>primarily as we see them for their appearances and how they are convenient for our lives. And we have enormous abilities compared to the past to find other compatible people to interact with and to form whatever subcultures we want to. And that's the way today is different from the past. And that does not imply, as far as I can tell, cultural diversity. That's not the natural outcome of that change. So I don't know why continuing along that same path into the future.</p><p>with more virtuality and more ability to select your associates would make it any different. Basically the highest level point is that as the world finds it easier to communicate with each other and to travel to meet each other and then to trade with each other and to move from one place to the other that makes the world culture more integrated. Now...</p><p>we have more variety of things like musical genres or TV shows or some other, you know, maybe particular hobbies of quilting or whatever it is. It becomes possible for there to be more such things in the world and for people to find a smaller niche closer to the kind of quilting they like, say. But those features, those kinds of cultures don't actually...</p><p>say very much about your life. They just, you know, those don't change how many children you have or whether you value living with your kids or whether you value your career or how you feel about death. Mostly they don't. Mostly they just a separate part of the world. Rationalists aren't really very different. So...</p><p>Theo Jaffee (37:38)</p><p>Well, sometimes they do.</p><p>What about rationalism?</p><p>Robin Hanson (37:47)</p><p>Rationalists, of course, are especially low fertility, so they are very much integrated with the lower fertility elites in the culture. And it's a... I mean, I think there's a strong emotional desire to see yourself as part of a distinctive subculture, but you don't usually ask for that subculture to influence very many aspects of your life. And rationalism doesn't influence very many aspects of a rationalist life.</p><p>Theo Jaffee (38:11)</p><p>Well</p><p>What about the polyamory part? That seems like a pretty clear departure from social norms as a result of in online culture.</p><p>Robin Hanson (38:20)</p><p>It is, but if you look at the world as a whole, the world has been converging culturally for a long time pretty strongly, and that's the dominant trend. So...</p><p>I mean, you know, go around the world, you definitely see things that look different around the world. Buildings are different and clothes are different sometimes, and, you know, holidays are different, maybe even work hours are different, but the world is converging quite a lot, culturally, and quite strongly. So, for example, if you look at regulation around the world, it varies hardly at all. Compared to having 150 countries worth of...</p><p>potential variation, we actually have far, far less actual regulation variation. You can certainly saw that in the pandemic where the whole world basically did it the same way. We see it in lots of other areas. Elites especially are converging culturally around the world. Non -elites are farther behind on that trend, but non -elites are usually farther behind on most cultural trends. That's because elites lead the way.</p><p>Theo Jaffee (39:30)</p><p>So how do institutions coordinate so well? Like during COVID, how every university and almost every media outlet and the federal government and almost every state government was basically on the same page pretty much all of the time, at least for like a year, a year and a half, two years. Is it just culture?</p><p>Robin Hanson (39:47)</p><p>Right? That was primarily culture, especially culture of elites. The very beginning of the pandemic, the usual public health experts gave the usual advice, and then elites around the world suddenly started talking to each other intensely for a month or two. And at the end of that, they came to a very different conclusion about how...</p><p>world should respond to the pandemic, the official public health experts immediately caved and changed their mind and accepted the new perm announcement of the new elites, of the elites, and everybody in the whole world did it that way, same together, because that was the consensus of elite culture worldwide.</p><p>Theo Jaffee (40:29)</p><p>You said earlier something about when it comes to culture, like nobody is in charge, but is that not like a counter example? Because you said that a lot of elites -</p><p>Robin Hanson (40:36)</p><p>Nobody was in charge of it. I mean, culture produces conformity and correlation without anyone being in charge. That's kind of the key nature of mobs, basically. Culture is a mob.</p><p>Theo Jaffee (40:47)</p><p>So it would be like a category error to say that the public health experts were in charge or that the elites talking to each other were in charge.</p><p>Robin Hanson (40:56)</p><p>Well, the elites talking about each other constituted the elite culture and the elite culture decided, but it wasn't any individual person or institution. It was the culture as a whole.</p><p>And so that's a form of organization humans have long had, is gossip producing consensus of mobs with shared opinions and even shared mob action without any center to direct it.</p><p>Theo Jaffee (41:31)</p><p>What are the characteristics of elite culture that make it powerful? Or is it not the culture that makes it powerful? Is it just the, you know, elite human capital?</p><p>Robin Hanson (41:42)</p><p>The superpower of humans is cultural evolution. That is, humans are able to learn and change much faster than other animals. And the main way we do that is by passing things on via culture. But culture doesn't work if you copy at random for your associates. You need to differentially copy from those who have been successful compared to others.</p><p>And so prestige is an important part of our strategy for differentially copying from the successful. So culture doesn't work really without some concept of procedure success that you can use to decide who to copy. So that makes prestige very powerful because we're all inclined to copy the prestigious. So that means.</p><p>there are people who are prestigious and they get together and talk and they agree on things, the rest of us are going to cave and go along, for the most part, for whatever the prestigious decide. That makes the prestigious very powerful, because that is the main vector of cultural evolution, is for people to copy the prestigious.</p><p>Theo Jaffee (42:50)</p><p>Alright, so let's switch topics a little bit. I'd love to talk about Age of Em, which is probably one of the most interesting deep dives about the future that I've ever read. So one thing that kind of stuck out to me is when you talk about Em's copying one another. Oh yeah, and for the audience, Age of Em is a book about a hypothetical future scenario where the ability to emulate a human brain on a computer becomes possible and cheap and widespread and the implications of that.</p><p>Robin Hanson (42:55)</p><p>Okay.</p><p>Theo Jaffee (43:21)</p><p>So, when you talk about Ems going off into stubs that then, like, end, you say that Ems won't think about it as, am I about to die, but will I remember this? But you kind of skip over the actual philosophical idea of, like, do stubs die?</p><p>Robin Hanson (43:45)</p><p>Right, so the key idea here is that we often have difficult philosophical questions associated with identity and death and things like that, and some of us think about those things philosophically and try to analyze them, but the vast majority of us don't. And for the vast majority of us, we don't really think about them that much. We just do whatever our culture says to do, and do that happily, without much reflection.</p><p>even when the philosophers all say we're wrong. So if I have a world and I want to know what they do, I don't think it's very important to think, well, what will their philosophers say? I more want to know what will cultural and economic selection produce in this world?</p><p>and then they will just do whatever that produces and they won't really know why they do it and they won't that much care. They will just do what everybody else around them is doing because that's what we do. So in this context, it seems that there's huge economic payoffs from making these stubs, which are basically short -lived copies.</p><p>which are spun off after it's been deleted shortly afterwards, but do something useful in the meantime. So for example, if you work eight hours a day, then you have 16 hours a day you're not working. So then basically your work hours have a tax of a factor of three. You have to pay for all the other 16 hours of resting up before you get the eight hours of work.</p><p>Instead, you can, when you're ready for work, make lots of copies of the work -ready versions, have them do eight hours a week, but only have one of the copies go on to rest for the next day. Now you can basically save a factor of three on labor productivity by having many short -lived copies that work for eight hours and then end instead of resting it for the next day. Now, those copies could say to themselves, gee, I'm about to die, this is terrible, and instead of working, decide to have a revolution or something.</p><p>But my claim is they won't. That is, they will get used to this as a usual practice that they are comfortable with. And they will then, because the ones who do so will get a factor of three in labor productivity, which is a pretty big advantage in a Malthusian world.</p><p>Theo Jaffee (46:16)</p><p>Hmm. So, do you think humans die every night when we go to sleep?</p><p>Robin Hanson (46:22)</p><p>I don't know. I don't care. Most of us don't care, right? That is, we know everybody else who's on this goes to sleep and wakes up and they seem to be okay with it, so why shouldn't I be okay with it? Some philosopher can analyze it and decide that I do die every night and maybe I should then be upset, but if I accept that argument, I'm gonna be upset in a way that people around me aren't upset, and then I'm gonna be weird, and I will be at a disadvantage because maybe I won't be willing to go to sleep.</p><p>It just looks like not much of a cultural win for me, right? I sh -</p><p>Theo Jaffee (46:53)</p><p>Well, is this not a question with a pretty clear scientific answer? Like, eventually, we'll be able to figure out, like, what are qualia, what is consciousness, under what circumstances do qualia persist?</p><p>Robin Hanson (47:04)</p><p>Actually, no, I don't think so. I don't think these are clear. These are not clear questions with clear scientific answers. No, I don't think so. I think they are questions that people will remain uncertain about indefinitely.</p><p>Theo Jaffee (47:07)</p><p>That will never happen.</p><p>And what about the Moravec transfer? Same thing.</p><p>Robin Hanson (47:23)</p><p>So what does that mean?</p><p>Theo Jaffee (47:25)</p><p>It's when it's like a hypothetical procedure for uploading your mind into a computer where you know instead of doing like a destructive brain scan then you have like little nanobots that scan each neuron individually and figure out what it does and then comes up with a simulation and integrates that into your existing brain and just does that again and again until your brain is very slowly replaced.</p><p>Robin Hanson (47:47)</p><p>Right? Okay. But again, most people don't think about those things. Look, in our world today, we are quite alienated from the world of our distant ancestors. Our world is quite strange, and there are many ways if you hold us to the standards of our ancestors, we are weird and even ugly and should be upset about the world we're in. We put up with, like,</p><p>Most of our ancestors would not put up with the degree of domination and ranking that we put up with in our jobs today. They would be upset and outraged and think that we just have no pride because we're willing to put up with this. But we put up with it because all the people around us do. And if we didn't, we would lose on our prestige games. Most prestigious people around us put up with it and accept it.</p><p>and we want to emulate them and be like them and so we put up with it too.</p><p>Theo Jaffee (48:50)</p><p>So, when it comes to human brains, you said there's probably, like, we will be confused about questions of identity forever, but do you think the human brain is kind of fundamentally uninterpretable, like in LLM, or will there be actual, like, computational structures that, you know, will be able to say, oh, hey, look, our, you know, memories are represented as, like,</p><p>Robin Hanson (49:10)</p><p>Oh, I'm sure we'll figure out lots about the brains, but we'll also figure out a lot about LLMs. We'll know a lot more about LLMs in a few decades than we do now. We'll be able to identify structures in them and attribute them, you know, various events and patterns to those structures we find. That'll be true for the brain.</p><p>Theo Jaffee (49:32)</p><p>How much of the age of Em do you think is applicable to, like, the near -term future of LLMs? When they're, like, kind of human -level, they're not, like, absurdly superhuman, they can run at faster speeds than the human brain, potentially?</p><p>Robin Hanson (49:45)</p><p>Well, the key thing is that Em's are full substitutes for human labor, and LLMs are not, and they're not close. So when you have a descendant of LLMs that is a full substitute for human labor, then the parallels will be much closer. At the moment, LLMs are really a niche market, and...</p><p>And people so far haven't... People basically keep making LLMs from scratch rather than making them from previous ones. So that'll be a key point perhaps in development if that ever switches. Well, humans, you make a human and they're a child and you slowly improve the human over time as it gets experience and matures.</p><p>Theo Jaffee (50:26)</p><p>What do you mean by that?</p><p>Robin Hanson (50:35)</p><p>with LLMs you keep every new generation of LLMs, you go back to the data and remake it from scratch so they don't remember or inherit from their previous versions. So, you know, many of the features of the age of Em are based on this assumption that you'll want to keep using the same Em's for many decades of subjective experience rather than just...</p><p>Theo Jaffee (50:46)</p><p>I see.</p><p>Robin Hanson (51:01)</p><p>stamping them in the lab and using them for an hour and throwing them away from the very beginning with no history or any sort of memory.</p><p>Theo Jaffee (51:09)</p><p>Hmm. So on the topic of AI and LLMs, how would you simply state the case against AI risk to people? Cause in your article, AI risk again, it relies on a lot of like somewhat.</p><p>Robin Hanson (51:22)</p><p>Wait, I don't have an article by that title, do I?</p><p>Theo Jaffee (51:26)</p><p>I think so. Let me look it up.</p><p>Robin Hanson (51:28)</p><p>but the title is AI Risk Again.</p><p>Theo Jaffee (51:31)</p><p>Yeah, AI risk, comma, again.</p><p>Robin Hanson (51:33)</p><p>That must be many years old then, right?</p><p>Theo Jaffee (51:36)</p><p>No, that was March 3rd, 2023.</p><p>Robin Hanson (51:39)</p><p>Okay, but presumably that's referring to some of my other more elaborate articles, but okay.</p><p>Theo Jaffee (51:45)</p><p>Yeah. But what about the opposite of elaborate? Like how would you state this case as simply as possible without economic arguments to someone who's like, for example, watch the Terminator or I, Robot so they're scared of AI and they see GPT -4 and they're like, oh wow, wow, this thing seems really impressive. Oh, we should be scared of the Terminator soon.</p><p>Robin Hanson (52:05)</p><p>So I think I have outlined two distinct lines of argument. The most recent line of argument that I focused on is asking people, and I did like a dozen AI risk conversations with people on YouTube that are recorded as you can see, basically asking people, well, ignoring AI, what did you expect to happen with your other kinds of descendants?</p><p>Did you expect to be able to control their values? Or did you expect to not have any conflicts with them? Did you expect to win all conflicts you might have with them? And almost everybody thinks that with respect to their squishy bio -human descendants, that those descendants would in fact become more powerful than them. They would win conflicts with them, and their values would be different from theirs.</p><p>And there might often actually be such conflicts. That's what everybody expects from their ordinary descendants. And it's what everybody has seen for many generations. And therefore, it's what they've accepted. They don't seem to mind that. But when it comes to AI descendants, they change their standards.</p><p>They are worried that we shouldn't have those kind of descendants unless we could make sure they never have a conflict with us or we'd always win the conflict with them or they could assure us that their values would never change. So people just holding different standards to the AI descendants than to the other descendants. And my main argument is that they really are your descendants and the same sort of evolutionary habit that should make you indulgent and supportive of all of your descendants, regardless of how they might differ.</p><p>from you should apply to your AI descendants. So that's one line of argument to say, you know, don't hold them to different standards in the abstract than you hold all your other descendants. Now, the faster change gets, the faster you may see your descendants.</p><p>and the fast the more of that history you may see sooner because change is faster but that would happen in the age of M2 because the Em's would also be changing faster and history would be happening faster so a slow human would see a lot more of that change as well but if you were okay with the Em's as your descendants quickly having different values from you being more powerful and winning conflicts with you then why not for the AI? So a separate line of argument is to say</p><p>AI is produced by capitalist firms who are doing it for profit.</p><p>And they will, of course, if their products hurt customers, that will be bad for them. They will therefore test their products regularly in many different ways. And in fact, computer products are among our most tested and monitored products of any sort because it's so much cheaper to test and monitor them. So AIs being built by large capitalist firms are for profit. They will make sure that their customers are not too unhappy with their experience with them.</p><p>And so they are very unlikely to be suddenly blindsided by their products vastly changing. They will be watching out for tendencies for their products to change and for that to cause bad experiences with customers. And...</p><p>for the intermediate time when these products are being produced by for -profit firms, you should expect to be as similarly happy or unhappy with them as you are now with most of the other products that you get from large capitalist firms.</p><p>Theo Jaffee (55:40)</p><p>Well, in response to that first line of argumentation that AIs are basically our descendants, could you not just say like, no AIs are not our descendants, they're like creation, so there's something different. Like, especially if it turns out LLMs are not like the golden path to AGI, and AGI will take something entirely different to be built, and that something won't simply be like the, you know, distilled total of human knowledge and human data and human preferences, that'll be like a...</p><p>maybe a Bayesian superintelligence built from scratch. Like, I could definitely see why someone wouldn't see that as our descendants.</p><p>Robin Hanson (56:17)</p><p>So natural selection is the powerful general theory that is much more general than simply the nature of biological creatures that pass things on through DNA. DNA -based evolution is one of many kinds of evolution, and we have general theories.</p><p>of evolution that apply not just to DNA -based biological organisms, but to culture and many other kinds of natural selection. The key concept of natural selection is variation in selection, as Donald Campbell famously argued in the 1960s.</p><p>And this is a general process. And in this general process, the key point is just there are some things, and they have descendants. Descendants is really just whatever literally descends from them, i .e. arises from them. And natural selection just requires that the descendants have some correlations and features to their ancestors.</p><p>pass down through whatever means and whatever those means is called genes. Genes are just our name for whatever is the mechanism of passing on features from ancestors to descendants. And as long as there's variation in those features and there is selection, i .e. not all features are equally productive of reproducing and surviving in the world, then you have natural selection. And by these general abstract theoretical conditions, AIs are our descendants, whether they're LLMs or Bayesian networks or</p><p>whatever else they are, as long as they share some features with us. And I'm pretty confident that if there are aliens out there making their own AIs, that our AIs will have things in common with us compared to the AIs of the aliens.</p><p>Theo Jaffee (58:05)</p><p>Well, still just playing devil's advocate here. Like someone could say, yeah, well, sure. You can make an argument where you define the word descendant to mean, you know, something that shares features with us that we had a part in creating either voluntarily or involuntarily. But like, I don't care about that. You know, that's it's, it's a computer. It's, it's not like my children.</p><p>Robin Hanson (58:26)</p><p>Well, you can care about whatever you want to care about, but I'm pointing to the standard concept of a theory, the word descended, I'm just making up a definition of word, I'm pointing to the concept that's in a theory, and this theory predicts, robustly, that...</p><p>creatures will have both a rivalry toward coexisting creatures with different genes, but also a support and indulgence toward their descendants, even if they have different genes, and that these AI are such descendants. You don't have to accept what natural selection tells you you should want, but then I don't know what basis you have other than saying, I like these descendants and I don't like those. I guess that's your right, but...</p><p>Again, why not dislike your other descendants just as much and be unhappy if they displace you and if they have win conflicts with you? That's also your right. You could just say you don't like any of your descendants.</p><p>Theo Jaffee (59:28)</p><p>Well, I think maybe people expect that their biological descendants will be less different. Like, many of the AI risk arguments take the form of, the AI will very quickly gain resources and kill us all, whereas our own biological children won't do that because they love their parents.</p><p>Robin Hanson (59:43)</p><p>Well, in the dozen conversations I had, almost everybody agreed that in fact their biological descendants would get pretty different pretty fast. As they are all. They might. That's a possibility. The reason why you don't think it will happen isn't because it couldn't happen.</p><p>Theo Jaffee (59:54)</p><p>but not fast enough to kill all them.</p><p>Is there any historical precedent?</p><p>Robin Hanson (1:00:06)</p><p>But there's no strong reason you think the AIs will do so easy. It's just the possibility with the AIs that scares people, not any particular reason I think it will happen, just that it could, but it could happen with your biological descendants. They... biological descendants have in fact killed parents and grandparents in the past. Yes, systematically, indeed. I mean, in many societies when people became too old to be productive, they were killed.</p><p>Theo Jaffee (1:00:23)</p><p>Yeah, but not systematically. It's usually very rare and they're like heavy norms against it.</p><p>Robin Hanson (1:00:35)</p><p>That's a common historical cultural practice.</p><p>Theo Jaffee (1:00:41)</p><p>So, the Age of Em is one of the most detailed deep dives I've ever seen into a future scenario. So, I think maybe the only other one I've seen is Nanosystems by Eric Drexler, but even that isn't really like a dive into the future economic and social implications. It's more so a dive into like the future possibilities of nanotechnology. So, are there any other deep dives like this that you know of? And...</p><p>Robin Hanson (1:01:03)</p><p>The technology, yes.</p><p>Theo Jaffee (1:01:11)</p><p>What science fiction do you think is the most realistic, the most detailed in ways that aren't silly?</p><p>Robin Hanson (1:01:18)</p><p>Well, I was inspired to write Age of Em in part by nanosystems. But as you say, nanosystems only looked at sort of the technology possibilities and not at the social implications. Drexter was interested in social implications, and he wrote about them in other places, but he just didn't have as much training in social science, I think, to do his thorough analysis of the social implications.</p><p>I wrote Age of Em and hope and part of inspiring other people to take up the example of doing such detailed analyses. So far they haven't.</p><p>the, I mean, for example, David Brin's Kiln People was a predecessor of Age of Em, where he did the best job I had seen in science fiction of trying to analyze similar issues, but still much less detailed of an analysis that I gave in Age of Em. I would, I mean, it might be true that I picked an easier problem than most are, in the sense that...</p><p>You could say more about Age of Em than you might be able to say about other scenarios, but I still think people could say a lot more about other scenarios than they have. And I guess I have to conclude that people are not actually that interested in working out such detailed implications because showing a demonstration of how it's possible hasn't inspired other people to do so.</p><p>Theo Jaffee (1:02:43)</p><p>Also in The Age of Em, you, like, the kind, the whole book is kind of overshadowed by the idea that, you know, this age might only last, like, one or two objective years, and then after that, like, something much stranger will happen, potentially. But, you know, there's not much detail about what that thing might be. So do you think, in, like, the very far future, is there anything meaningful that can be said about our descendants in, like, a million years?</p><p>Robin Hanson (1:03:11)</p><p>Well, yes, there's just fewer things that can be said. So one thing that can be said is that if physics is really a strong limitation, as it seems to be, then our descendants in a million years will still have to deal with only being less than a million light years away from here. And...</p><p>only having access to the volume and materials in that space. They'll have to deal with constraints of conservation of energy and second law of thermodynamics and, you know, other key constraints of, say, the speed of light of activity in that volume. Those are things I think we can say. I think we can talk somewhat robustly about</p><p>the two main paths of competition or coordination at the highest level. That is...</p><p>Either our descendants will not coordinate to sort of control their entire overall pattern of activity and therefore be in a world of competition with each other fundamentally at the highest levels, even though they can have smaller scale coordination in that competition, but the highest level might just be competition. And if that's true, we can make some claims about what they will want, I think, in that competition, how they will behave. The other option is that somehow,</p><p>our civilization manages to coordinate to enforce some rules that limits the competition. That will have to happen if it does before we leave out to explore the universe because if we have substantial colonists who head out into the universe before such coordination is created and enforced then it will be too late afterwards. So there's a limited time window to do that.</p><p>But it's in principle possible. So we can say either the world will manage to sort of send out political officers with every vessel that leaves here to enforce some central rules on behavior everywhere, or they won't, or the political officers will fail, in which case there will just be competition on the larger scale. And I think...</p><p>We can expect that competition will produce the kind of results it has in the past in terms of evolution toward a more efficient mechanisms and processes. And I think we could also guess that we can say some things about what the preferences of creatures who evolve are, because we have literatures on that today. We have literatures on what preferences involve in natural selection in particular in the context of investment funds and what preferences over investment funds people would have.</p><p>if they are natural selection produce their preferences.</p><p>Theo Jaffee (1:06:08)</p><p>kinds of preferences might they have.</p><p>Robin Hanson (1:06:10)</p><p>Well, so asexually produced creatures should not have a time discount, whereas sexually reproduced creatures have roughly a time discount of a factor of two per generation. That is roughly the time discount that humans seem to have. So if humans start competing with some asexually reproduced creatures, maybe AIs or Ms, then...</p><p>In trading with them, we will buy the present and they will buy the future because they care more about the future. And that's a robust prediction. I think that time preferences will go away.</p><p>Another prediction is about risk preferences. So for example, in the investment world, you get logarithmic risk with respect to correlated risk over all your copies, but you get risk neutrality about risk that's not correlated because you can insure against that by diversification. That looks like a robust argument that would probably continue to hold. So we can say something about the degree and kinds of risk aversion that descendants would have. And I think...</p><p>Theo Jaffee (1:07:16)</p><p>Can you explain the time preference thing a little bit more for the audience? Like what does it mean that sexual creatures discount to per generation?</p><p>Robin Hanson (1:07:22)</p><p>Right, so typically a sexually reproducing creature has a choice between investing in itself now or investing in its descendants. But its descendants share only half its genes and its descendants will mostly be able to use the resources when they reproduce a generation later.</p><p>So the really choice is between spending stuff on yourself now or spending stuff on your descendants in a generation. And so since your gene, your children have half your genes, then this is a choice of...</p><p>Doing things for your genes now full on, or doing stuff for half your genes in a generation, and therefore, this is a factor of two per generation discount rate. That is you're trading off doing stuff for yourself now and doing stuff for your kids in a generation. That's only for sexually reproducing creatures for which their kids only have half their genes. Asexual reproducing creatures, their kids have all their genes, so they should not be discounting the future for them at all.</p><p>Theo Jaffee (1:08:34)</p><p>So in a couple of critiques that were written about Age of Em by Brian Caplan and Scott Alexander, both of the main arguments are kind of that Em's will be selected more for docility. Like they won't evolve that much cultural drift from humans because like we will select against that as the main customers of the Em's and the people who are keeping their infrastructure alive in the physical world. So what do you think about that?</p><p>Robin Hanson (1:08:39)</p><p>beside the point.</p><p>Yes, but it's useful.</p><p>Well, so first of all, the classic story about humans is that we domesticated ourselves. So there are a number of features that domesticated species have that distinguish them from other species.</p><p>And then humans have these features. So when we domesticate horses and cows and pigs and, you know, dogs, etc. They differ in predictable ways because of that domestication. And humans differ in exactly those ways as well. So we have domesticated ourselves. So we are, in fact, more docile than other animals because we have become domesticated. This already happened. Long ago, really. Now...</p><p>For a long time, a fear about the future has always been that somehow changes will enslave us, or worse, make us enslaved and not even know or care. This is one of the most robust dystopian visions of the future that anybody ever has. Really. Overwhelming. Like at the beginning of the Industrial Revolution.</p><p>say around 1900 or before that, when people were trying to create dystopian visions of the future of industry, these were their main complaints. Exactly the complaints that humans would be enslaved or so domesticated we wouldn't even notice we were enslaved. So we clearly just have a very strong sensitivity to this possibility and it goes along with our strong egalitarian norms that humans are distinct from other primates exactly from having very strong egalitarian norms wherein we resisted anyone being the head of the tribe.</p><p>and putting themselves up there, and human foragers are famously egalitarian, famously strongly resistant to any individual humans dominating the rest. So this is just something we humans are primed to be afraid of and to be outraged by the possibility of, even though of course it already happened. We already are self -domesticated. So...</p><p>I don't think there's any particular reason to think that this is a thing that will happen in Age of Em more than any other future scenario, other than the fact people are just primed to be afraid of it because this is just a generic strong fear about any future scenario.</p><p>Theo Jaffee (1:11:24)</p><p>Hmm. So on the topic of rationalism and AI, how exactly did you meet and get involved and start blogging with Eliezer Yudkowsky? How did you get involved in the rationality community in the first place?</p><p>Robin Hanson (1:11:38)</p><p>Well, I had a blog called Overcoming Bias and I invited some others to participate and share the blog with me, including Eli Izer and Nick Bostrom and some other, Hal Finney, including who is a best guess founder of Bitcoin. And they did start blogging with me and we discussed AI risk on the blog because that was a big issue for Eli Izer then.</p><p>The blog was called Overcoming Bias, so it had a rationalist sort of theme right there, and we discussed some issues of rationality on the blog. And then a few years later, Eliezer decided he wanted to make his own blog called Less Wrong, based on some ideas that he'd have many participants contributing and have a karma system to rank them so that people could see what was quality. He did in fact make that blog. I helped him in the sense of allowing hard links from my blog to his blog in order to</p><p>raised the Google rank of his blog using the Google rank of mine at the time. And he</p><p>He created this community where people were talking about rationalist issues. They are less wrong. And then the karma system eventually seemed to have gone wrong, but still for a while people liked it and they were discussing things there. And so the rationalist community sort of started in less wrong there, but it grew to other places and all along, you know, Eliezer was using this community to push AI risk.</p><p>Theo Jaffee (1:13:09)</p><p>Interesting. So let's switch topics a little and talk about the elephant in the brain, which is, I think, my favorite thing that you've written. My friend who introduced me to your ideas, his favorite thing you've written. And for the audience, it's a book about how humans have hidden motives that we don't naturally reach for while explaining our actions. So.</p><p>Robin Hanson (1:13:22)</p><p>death rates.</p><p>Theo Jaffee (1:13:36)</p><p>One thing that's kind of a meme now in young people's circles is girls getting the ick. So I don't know if you've heard of this.</p><p>Robin Hanson (1:13:45)</p><p>the ick regarding a man for sick in particular.</p><p>Theo Jaffee (1:13:47)</p><p>Yeah, yeah. So it's like there are these, you know, TikToks, Instagram reels of like a guy doing something slightly awkward. Like, you know, it could be literally anything. The way he jumps into a pool, the way he, you know, reaches up to open a cabinet or something. And then, um, girls are like, Ooh, I just got the ick from that. So what do you, what do you think is the elephant in the brain explanation if there is one for that?</p><p>Robin Hanson (1:13:59)</p><p>Okay. Right.</p><p>Right?</p><p>Okay.</p><p>Well, the usual explanation, I would think, is pretty close to the surface, is that women are very selective, or they see themselves as very selective among men. They are willing to mate with any woman, man who asks. It's very important to them to be selective, and so they are trying to be very selective.</p><p>And so they lean into intuitive reactions that are selection reactions, in particular rejections, in ways that the rest of society finds impolite or rude in other contexts. So we are again relatively egalitarian, and so we try not to overtly reject each other, or we make excuses for it. We certainly don't like to go out of our way to insult people and to put them down. That seems rude and...</p><p>arrogant, and that's true in most of the rest of society except apparently we make an exception for women rejecting men. Apparently not only is it okay for women to eagerly reject men, but they can insult them in the process and put them down and bond together over their...</p><p>you know, rejection of men and basically declaring men are unequal and men are should be unequal, the lower half of the men distribution is morally unworthy and ick basically and maybe should not exist. That's apparently a kind of inequality attitude that's okay in our world although we have many other aversions to inequality talk. We could discuss that more.</p><p>Theo Jaffee (1:15:53)</p><p>Do you think this, do you think that women's propensity to reject men is adaptive in the sense that, you know, they will select better mates or maladaptive in the sense that there will be like fewer overall children?</p><p>Robin Hanson (1:16:00)</p><p>It's.</p><p>Well, obviously some degree of holding standards is adaptive. And of course, there's going to be the additional...</p><p>value of signaling that you have standards. So people might be hold too high of standards exactly to show off that they have standards. But if people were still settling down and picking someone soon, we wouldn't have fertility problems. Those come from people spending decades being picky. When they finally pick somebody, it's kind of too late. That's more the problem. It's the delay in picking and not so much the high standards.</p><p>And then that interacts to some degree, I guess, is that people hold very high standards. And they say, nobody around me now could possibly meet my standards. Somebody maybe later will.</p><p>Theo Jaffee (1:16:45)</p><p>Also,</p><p>So you also talk about how a lot of the explanations for these behaviors come from, like, forager times. Do you think humans have evolved genetically at all in the last, like, 10 ,000 years, and if so, how?</p><p>Robin Hanson (1:17:11)</p><p>Well, there's certainly data about, say, milk processing.</p><p>You know, so some people can't process milk and others can and that certainly seems to have spread in the last 10 ,000 years. So we have a few pieces of evidence about that. We also have the general data that we expect rates of DNA evolution to be proportional to the size of the population. So the prediction is as the population is getting larger, selection is beginning faster, although it's been happening over a shorter time period. So, you know, you have to take that in. But presumably selection is much faster lately.</p><p>than it was before because of the larger population, and we see some specific kinds of selection. But honestly, the usual story, which seems right to me, is that there's been relatively little DNA evolution, but a lot of cultural evolution. Cultural evolution is overwhelmingly where human evolution has been for a while.</p><p>a lot of cultural evolution, but then the challenge is to think more carefully about what exactly that is and how it works. And I think I didn't think very much about it until a few months ago, and so I realized that most people kind of think they understand culture, but they haven't really thought much about it, and there's a lot more that they should learn.</p><p>Theo Jaffee (1:18:28)</p><p>Like what? What would be the most important things to learn about this?</p><p>Robin Hanson (1:18:32)</p><p>Well, first is that it's this autonomous process that culture tells you what you should value and what you should do, and you just accept that. And then it changes and you just accept that. And there's nobody driving this train. Key features of this culture are what counts as status, what counts as prestige, because you copy the behavior of high prestigious people.</p><p>And so the definition of prestige can really make a big difference to what direction culture goes. If prestigious people are the people who have the most years of education, then that'll encourage people to get a lot of years of education, maybe even more than are useful for other reasons. If prestige goes with individual wealth, then people will focus on accumulating individual wealth and passing on wealth to a smaller number of kids so that they can be individually wealthier.</p><p>a lot depends on what we decide counts as prestige. And nobody is driving that train. Over time, what counts as prestige has changed. And we didn't vote on it, and we didn't analyze it and decide it together, and there wasn't a process that was just anticipating the consequences of this and figuring out what was best for us. Culture is a very crude process. Like, if you think about your organs and your body and your...</p><p>Body reactions, you change in time of day and time of year. You have all these complicated ways in which your body is primed to change its behavior in different contexts, because your bodies like yours have been evolving for many millions, even billions of years, but culture has only been evolving for a few thousands of years, and it doesn't have all those complicated conditional processes to, you know, adjust for context. It's a much cruder thing.</p><p>You should not think it's this very subtle, well -worked -out, systematic thing that will carefully adapt to all sorts of details.</p><p>Theo Jaffee (1:20:30)</p><p>So in terms of the like relative decline in wokeness and similar, you know, if you can call it wokeness, call it progressivism, leftism, whatever, specifically in tech circles in the last couple years as a result of Elon Musk buying Twitter, is that like a genuine cultural change or is that simply people following the behavior of a prestigious individual?</p><p>Robin Hanson (1:20:51)</p><p>Well, that's what cultural change is. There isn't something else. There isn't a whole other thing. That's what it is. It's whatever the prestigious people are doing is culture. There is no other source.</p><p>Theo Jaffee (1:20:53)</p><p>Oh, yeah, that makes sense.</p><p>Well, which way does a causation go? Does, you know, what's prestigious?</p><p>Robin Hanson (1:21:08)</p><p>both. That is, if you aren't doing what culture says, you look less prestigious. And whatever the prestigious people are doing, that's what culture is.</p><p>Theo Jaffee (1:21:21)</p><p>So then how does culture change over time? If it's just, you know, if culture and prestige.</p><p>Robin Hanson (1:21:27)</p><p>Well, the biggest event in the 20th century that influenced culture was World War II. And both the rise of Hitler in the first place and his fall and losing the war were both pretty unpredictable. But they still had enormous consequences for culture. So you can see how things that were at the center of remaking world culture were the result of conflicts.</p><p>that it was hard to predict who would win. So that's also been true for the major changes in culture over the last half century in our world. Ex ante, they were hard to predict. It was hard to know who was going to win. Later on, we tell ourselves the story that the winner was inevitable. And we should have known all along that they were going to be the winner and we should have accepted. But you couldn't really tell early on who was going to win.</p><p>Theo Jaffee (1:22:22)</p><p>Do you think World War II was a bigger cultural change than, like, the fall of monarchy at the end of World War I globally, or, like, the fall of communism globally in the late 80s, early 90s?</p><p>Robin Hanson (1:22:33)</p><p>I don't know, I mean they were of a similar magnitude, so I don't that much care to say which one is exactly the bigger. They were big.</p><p>they were not that predictable.</p><p>Theo Jaffee (1:22:49)</p><p>So another elephant in the brain topic is medicine. This is maybe the most famous thing that's come out of it. So why is health care spending both so high and life expectancy so low in the US relative to other countries? Like, do we have a worse signaling problem or is it bad institutions?</p><p>Robin Hanson (1:23:05)</p><p>Right. So just to discuss to the audience, the elephant in the brain is mostly about why is medicine weird compared to what I think it's not really focused on the US versus other places. So most of the book, we're just looking at average typical human behavior and trying to understand that. I'm not very interested in the variations across space and time. If you don't even understand the average, you have no business trying to figure out the variations because they're just harder to figure out.</p><p>So the average medicine is basically not very useful. And so we try to explain our fascination with and obsession with medicine and the fact that it seems to do very little on the margin by saying that medicine is something we use to show that we care. And so we use medicine to...</p><p>show people that even though they're sick, we might betray them, but we won't leave them. We are going to stay with them and take care of them, and that's very reassuring, and that's the main function of medicine. But that doesn't explain why different times and places might do things different. So if we want to say why is medicine different in the US in the 20th century, say, I think we have to go to</p><p>It's one of the key stories the US tells itself about why the world should be grateful to it. So the world has a few stories about why the US has saved the world in many cases and they should love us. So World War I, World Wars are an example of that. And then the Cold War are also examples of how we say we saved the world from very dire problems and they should all be grateful to us. And then medicine is another one because we say basically we gave modern medicine to the world. We point to key adva -</p><p>in medicine happening in the US and the world copying them and being all the better for it. So when we have a thing that we are proud of as being the source of and spreading it to the world, then we tend to double down on it. So we double down on military spending even though we have very few, you know.</p><p>neighbors nearby who might cause us any problems. We still spend enormous amounts on the military, reaffirming our story of how we saved the world from the Nazis and the communists. We also like to tell the world that somehow we gave them civil rights and legal sort of legal procedure and we'd like to double down on that because that's our story of how that came from us because we went wild with that.</p><p>soon after World War II, and we also say that we gave the world medicine, and so we continue to spend a lot of it in part reaffirming how wonderful it is, which reaffirms how wonderful we are for having given it to the world.</p><p>Theo Jaffee (1:25:54)</p><p>Do you think that academia right now in 2024 is the best institution that we have for aggregating information and seeking the truth? Like, do you think there are existing other institutions like, you know, the blogosphere that might be better? Is academia still a P?</p><p>Robin Hanson (1:26:08)</p><p>was.</p><p>I mean, certainly not the only institution we use. So for different kinds of information, we use just a different institution to aggregate information. So clearly, for example, just ordinary business practice is one institution for aggregating information. Most business practice isn't mediated by academia. It's mediated by prior nearby business practice. So business people are looking around to see what other people are doing and copying them, trying them out. And we aggregate information about what businesses should do through the competitive market.</p><p>with businesses looking over their shoulders, what everybody else is doing. That's the way we aggregate information about business. The way we aggregate information about say marriage or families isn't much to do with academia either. We in a world where other people around us are having marriage practices, family practices, and we copy what our parents did, what people around us do, et cetera. That's how we're aggregating information about those topics.</p><p>Theo Jaffee (1:27:03)</p><p>What about physics? You know, that's mostly like academic, theoretical, and experimental.</p><p>Robin Hanson (1:27:05)</p><p>Well, most practical physics isn't being aggregated by academic physicists either. So everybody should know that like a lot of famous physics inventions were first happened practically and then theorists came up with explanations, say thermodynamics, laws of entropy, et cetera. They were first practiced. And so an awful lot of physics practices first happens in industry and then people doing things and then...</p><p>academic physicists try to make sense of it and synthesize it, you know, systematize it. But theoretical physics is more accepted from academia, in part because nobody cares and it doesn't matter.</p><p>So we mostly let academia aggregate information about abstract stuff that nobody else cares about. When the rest of us care, we are much less willing to listen to academia. And so often, academia then just tells us whatever we want to hear. So there's an old saying that a leader is someone who figures out which way the crowd is going and gets out in front.</p><p>Academics often do that about, you know, practical ways to live. People just have a lot of ways they think we should live and then academics typically find a way to support that. And this is also true in government policy. Mostly the government doesn't listen to academic policy. Mostly the government decides on policy in some other way and then finds academics who support whatever they're saying to justify it and they find them and then those people look influential, but they're much actually much less the fact that somebody was</p><p>there to be found to support it for any of you it matters. If they couldn't find anybody to support it they would have been less likely to do it, but as long as they can find some support that's enough.</p><p>Theo Jaffee (1:28:52)</p><p>So do you think academia has always kind of been bad like this or has it gotten worse over time? Did it used to be much better?</p><p>Robin Hanson (1:29:00)</p><p>I don't know, but certainly peer review is something that people today think of as essential to academia, but a century ago it just wasn't a thing so much. Like in 1900 there was hardly any peer review, that wasn't how things happened. You know, journal editors just had a lot of discretion and decided what they liked.</p><p>And so for most of the famous history of science up until the 20th century, that wasn't peer review. So I think people are not that sure what academic institutions are exactly, because they've changed a lot over the years. And certainly often you just had a community of people and a small enough community of elites that they could just manage each other informally. And sometimes that works very well. And as the community gets larger, it fragments and it just can't manage itself that way. And so,</p><p>today is just really much larger than it ever was and so there is no small community of people who run the whole thing. It's very decentralized.</p><p>Theo Jaffee (1:30:00)</p><p>Do you think peer review actually matters or is it another kind of like elephant in the brain signaling thing that we do kind of to show that we care about truth?</p><p>Robin Hanson (1:30:12)</p><p>I've spent a lot of time thinking about alternative academic institutions and about the institutions we're in, and so I'm confident in saying that the institutions we are in are not remotely optimal for the purpose of producing intellectual progress. They mostly people doing things to win their local games, but...</p><p>Theo Jaffee (1:30:28)</p><p>Hehehe</p><p>Robin Hanson (1:30:35)</p><p>Still, there are many kinds of abstract topics where the best thought about them will be found in academia, nevertheless, even if this isn't optimized for that. I do think we know much better ways we could organize academia in terms of promoting progress, but there isn't much of a constituency for that, so it's not going to happen anytime soon.</p><p>I have my own particular proposals for what you would try to do to make things different, but again, the limit is making anybody care. There's very little interest in improving academia, even among academics. Mostly people want to win the academic game, and that's what they focus on doing.</p><p>Theo Jaffee (1:31:13)</p><p>So you've written very extensively about alternative institutions, but what do you think some examples are of institutions that are actually good?</p><p>Robin Hanson (1:31:21)</p><p>existing ones? Well, in some sense, existing ones are all beating out the immediate alternatives that they could be easily displaced by, so the existence of institutions tells you something about their staying power and that they are, you know, filling a niche and continuing to fill the niche. So you have to give them all credit for that.</p><p>Theo Jaffee (1:31:22)</p><p>Existing ones, yes.</p><p>Robin Hanson (1:31:43)</p><p>You know, they are all sitting next to a space of alternatives that people often do try to replace them with. And the existing ones are keeping them at bay. So they got to get credit for that. All of them. Now some of them are keeping others at bay by, say, locking down the control of the national government, say, and making sure that voters never think to reject them or something. Others have a wider range of competition they're pushing away. But...</p><p>I mean, academia is relatively decentralized, and so academia does succeed so far in pushing away attempts of other groups of people, say bloggers, to gain their position of respect. They have so far won against those challengers. Decentralized.</p><p>Theo Jaffee (1:32:28)</p><p>Did you say centralized or decentralized?</p><p>Yeah, it is, you know, I... It still is crazy to me how powerful human culture is, where, you know, a thousand different universities can have basically the exact same culture, you know, scattered all over the country and the world. Like, one of the things I noticed when I go to University of Florida in Gainesville, like a year ago, I took my cousin to visit Arizona State University on the other side of the country. There's no affiliation at all between UF and ASU.</p><p>And I was like, Oh my God, this is like the same thing. It's got the same kinds of buildings. It's got the same companies that are contracted to it. It's got, you know, the same kinds of tour groups, the same kinds of professors and classes and courses.</p><p>Robin Hanson (1:33:07)</p><p>Right?</p><p>right? Right, so academia has been a unified culture for quite a while now worldwide and so there's a sense in which it isn't allowing much diversity of the sort that you know the center doesn't like. So in general if we had a bunch of different kinds of academia in different places doing it different we might have more competition among them.</p><p>see which ways went out, but when there's centers of academia enough to sort of enforce standards on everyone, then that's how it works. So like in most disciplines, there's a small number of people who are at the top of the discipline, and they control the major funders and the major journals and the major jobs, and their shared opinion about what that discipline should be doing dominates worldwide.</p><p>So the different disciplines can compete with each other to some extent, but they mostly stay off each other's territory. Well, the Center is the most prestigious people.</p><p>Theo Jaffee (1:34:19)</p><p>Well, what is the center of academia, if academia is so decentralized?</p><p>Robin Hanson (1:34:25)</p><p>and their opinions about what, again, who should get what jobs, who should get what grants, who should get what publication journal slots. They decide those things. And those things, you know, basically the most prestigious people decide the next most prestigious people, et cetera, all the way down the ladder of prestige. So when prestige is very important, the top prestigious people basically have a central power even without any other official, you know, roles of power.</p><p>All they really need to do is declare some things prestigious and other things not. And that works all the way down.</p><p>So I mean, quite commonly, most disciplines, there's a set of relatively established methods and.</p><p>claims in the discipline and then the only people really allowed to challenge them are the most prestigious people. So if somebody much lower prestige challenges those things they're routinely slapped down and rejected because it's not their place to do such things. Lower level people are supposed to question smaller things, do supporting work to the most prestigious people. The most prestigious might be allowed to change the fundamental assumptions about methods or conclusions in the field, but low status people that's just not their place.</p><p>Theo Jaffee (1:35:44)</p><p>Well what about kind of outsiders academia who have tried to change it who are pretty prestigious like Peter Thiel and more recently Bill Acme -</p><p>Robin Hanson (1:35:53)</p><p>I know of no academic field where Peter Thiel's opinions carry much weight.</p><p>Theo Jaffee (1:35:59)</p><p>Yeah, but why? I mean, he went to, you know, the most prestigious university, Stanford, one of them, and...</p><p>Robin Hanson (1:36:04)</p><p>prestige is particular, so being a prestigious sociologist doesn't carry much weight in economics or physics, right? Yeah, but that's a different prestige letter, and so it counts for other kinds of things, right? So we trust the most prestigious doctors to decide who doctors can be, but you know, we don't trust the president to decide that, even if the president's very prestigious, right? So we just have...</p><p>Theo Jaffee (1:36:12)</p><p>and he's a billionaire.</p><p>Robin Hanson (1:36:33)</p><p>ideas about what kind there are different kinds of prestige and what scope they have and you have to use prestige within your area.</p><p>Theo Jaffee (1:36:43)</p><p>And I think Peter Thiel has had somewhat of an impact in like, if not coursework, the culture of like CS programs at schools, especially elite schools. I think the startup vibes of MIT and Stanford and probably most universities are different from what they used to be because of in part Peter Thiel's ideas.</p><p>Robin Hanson (1:37:08)</p><p>I'm, you know, if that's true, I'm happy. I mean, I'm not in those worlds, so I don't know those, but yes, like the startup world is another world and academia is a different world, but they overlap somewhat and they come somewhat compete for outside prestige. And so often they are using each other for their various purposes. I certainly know that startup companies often ally with academics in order to add prestige to their startup. Presumably vice versa. Academics ally with...</p><p>Theo Jaffee (1:37:32)</p><p>So.</p><p>Robin Hanson (1:37:35)</p><p>startup people to add prestige to their academic things.</p><p>Theo Jaffee (1:37:40)</p><p>So pretty recently you wrote an article that I believe is called Why Crypto? Where you, yeah, you talk about your positions on Bitcoin and the cryptocurrency industry, which by the way, for the audience, Robin was like remarkably early on this. He was friends with Hal Finney, I believe before Bitcoin was actually a thing.</p><p>Robin Hanson (1:37:45)</p><p>Yes.</p><p>Oh yeah, long ago. But I wasn't really into the crypto thing then, but... And I, you know...</p><p>Theo Jaffee (1:38:03)</p><p>So.</p><p>To not misrepresent your position on this, you believe that Bitcoin is mostly speculative?</p><p>Robin Hanson (1:38:16)</p><p>Most of the value of crypto was realized in Bitcoin nearly early on in the process and since on most of the activity to create other coins and other things you could use the coins for hasn't really panned out much. They could but it hasn't so far. But...</p><p>Theo Jaffee (1:38:35)</p><p>Well, what do you mean by most of the value is realized early on?</p><p>Robin Hanson (1:38:39)</p><p>Well, that is having a crypto coin, a Bitcoin, having it as a store value, having it as a thing you can use to make some trades with, that was a new thing and that was added. That was a value added by the Bitcoin early on. Since then, people have made lots of other coins to do lots of other things, but they haven't actually achieved much value from those other things.</p><p>there is the value of just having the coin and being able to use it as a store value or as a money to trade. And that's a value that's continued from the beginning once people had Bitcoin.</p><p>Theo Jaffee (1:39:02)</p><p>Okay.</p><p>So the reason that the price of Bitcoin has gone up from, you know, one cent per Bitcoin to $70 ,000 per Bitcoin, how much of that is just pure speculation then?</p><p>Robin Hanson (1:39:25)</p><p>Most of it. Most all of it, yes. But.</p><p>Theo Jaffee (1:39:29)</p><p>How is it possible that if it's just pure speculation that the market can remain rational about that for so long?</p><p>Robin Hanson (1:39:35)</p><p>Well, that's not necessarily irrational. That is, there is no particular rational value for Bitcoin. It can be all sorts of different things.</p><p>When worlds of speculation are created, they just have internal dynamics that can take them all sorts of different directions, just like with culture. Culture can just go a lot of different directions, and so can the price of Bitcoin go a lot of different directions. Clearly, in some sense, ex -SANI people should have been surprised to see it go so high so fast, or they would have driven up the price initially. So, you know, this had to have been a surprise.</p><p>but a lot of directions of markets and culture are surprises. But the point of that post is to say all of the speculation had an interesting effect, which is some people made a lot of money in the speculation. And then as commonly happens in the world, when somebody makes a lot of money, they need a story to tell themselves about why they were justified in making that money and what...</p><p>you know, what would justify their use of it? People mostly feel a little embarrassed to have a lot of money and they need a story to legitimize. They're having it and using it. And different kinds of ways to make money produce different stories like that and so they produce different kinds of rich people who live their lives differently. And so this kind of way to make money produced a different kind of rich person.</p><p>Theo Jaffee (1:41:03)</p><p>Hmm.</p><p>Robin Hanson (1:41:12)</p><p>And that's interesting because they do more interesting things with the money.</p><p>Theo Jaffee (1:41:19)</p><p>That kind of reminds me of Warren Buffett's essay, The Super Investors of Graham and Doddsville, where he outlines a scenario where like everyone in the US flips a coin every day and the people who get heads transfer, you know, some amount of money to the people who get tails or whatever. And that process continues over and over again until only one person is left with all the money. And so, like, by the nth day, there would only be like, uh,</p><p>Robin Hanson (1:41:41)</p><p>Right?</p><p>Theo Jaffee (1:41:48)</p><p>10 ,000 people left and they would each have like a million dollars or something I forgot the exact numbers But it's like and then they would go on to write books about how I became a millionaire in Just seven days with only 30 seconds of work a day You</p><p>Robin Hanson (1:41:51)</p><p>But...</p><p>Yeah.</p><p>Right now, different worlds have different amounts of financial speculation and different amounts of the inequality produced by this process. So, you know, ordinary business investment produces inequality this way, but at the time scales at which businesses rise and fall. And so...</p><p>People, you know, wealth becomes more unequal as a natural result of businesses winning and losing, but it's slow because businesses win and lose slowly. When people more directly do financial speculation, then that process can happen faster, but most people don't do that much financial speculation.</p><p>so that limits it, but there are times when it becomes in fashion and lots of people do financial speculation and then more inequality is produced in those short periods, but so far those have been rare periods. Crypto has just been one of those rare periods with a rare subset of people who then went wild speculating and therefore produced enormous amounts of inequality in a short time.</p><p>we should expect that sort of thing to continue over and over again through history, and each person should perhaps wonder how eager they should be to speculate, because, you know, they will be participating in a process that produces inequality, and to what extent do they want to be part of that? Now, sometimes these processes produce other things the world values besides the inequality, like business competition produces industry and a...</p><p>growing economy that we all benefit from, and so it seems like we should all tolerate a substantial amount of inequality produced by business competition because that's what makes our world rich. Other times there are somewhat separated worlds of speculation like crypto, which looks like we could counterfactually imagine if that all went away. The rest of us wouldn't mind so much. We might wonder, should we allow those little worlds to happen?</p><p>And that's the kind of question people have about that and about, say, stock market speculation, say, the 19 up to the 1929 crash or the 2008 crash, right? There's a burst of speculation that get bigger before those crashes. And people are often critical about whether that thing should have been allowed or whether some people are exploiting others in that process. And that's all perfectly reasonable to discuss. But I just thought it's interesting to notice that.</p><p>In crypto, enormous amounts of inequality created, the richest of them got a lot, others lost a lot, and the winners needed to tell themselves a story about why they deserve their money, and that story had to fit with how they use the money. And so, in crypto, the story they tell themselves is that they were pursuing big -idea innovations about how the world could be substantially different, if only certain crypto.</p><p>coins and processes were realized. And then when they're rich, they pursue that vision by spending some of their money on various visions that could change the world a lot. And that's a positive outcome in my mind of crypto because there are in fact a lot of opportunities for ways to invest to make the world much better.</p><p>Theo Jaffee (1:45:15)</p><p>Why was crypto speculated in so heavily and not any other asset class? Is it just kind of pure unpredictable randomness?</p><p>Robin Hanson (1:45:23)</p><p>Well, it was, I mean, it is literally speculation that is crypto is electronic money. Electronic money is literally a thing you can own and that its value can go up and down. So it is, you don't have to do any indirect thing to take a thing that's happening and make something else, something you've speculated about it. Most of the rest of the world, you have to work to make that connection happen, right?</p><p>If you want to speculate about, say, pickleball, you can go play pickleball, or you can buy some pickleball rackets or quarts, but if you want to speculate on it, you'll need some business that's invested in pickleball, and there aren't that many of them, and you have to figure out what to do. But crypto, all the things were things you could buy into, and there were thousands of them, and each of them could go up or down, so... And there was this story that...</p><p>It would be huge, of course, more plausible than pickleball. You know, it's just hard to imagine how big pickleball could ever get. So it's hard really to bet on pickleball. Imagine it'll go up by a factor of 100 in value. But with these coins, you had more of a story of how they could be huge in the future. You know, the entire finance industry could be displaced by the crypto world. And so that, you know, fuels speculation more.</p><p>Theo Jaffee (1:46:44)</p><p>So you've also written about a form of governance called demarchy, which is where you have a network of decision -making groups where the membership of each body is randomly selected from those who volunteer to be on it. But wouldn't that prevent natural elites or good leaders from being able to exert power if it's just randomly selected from all those who volunteer?</p><p>Robin Hanson (1:47:05)</p><p>Sure, I'm not a big fan necessarily of demarchy, but I am just a fan of people collecting big ideas for how things could be different. I mean, I just want people to know about them and to think about them because we just don't think enough about how we could change things.</p><p>Theo Jaffee (1:47:21)</p><p>Do you think the state of homeowners associations in America is kind of counter evidence to demarchy? Cause you know, it's pretty similar. I would say you have a decision -making group that's given a surprising amount of power over neighborhoods and the membership of HOA is, is kind of elected, but usually it's not super competitive. So it's mainly people who volunteer to be on it and they seem to be very dysfunctional.</p><p>Robin Hanson (1:47:42)</p><p>Right? Right?</p><p>again, but they're beating out competitors for that slot. So, you know, the main thing is that ordinary homeowners feel a little awkward about other sorts of institutions you might put in that slot again, and so they're very egalitarian, democratic instincts are pushing them to favor that institutional form there, even if we might look and think it's kind of inefficient. So...</p><p>Again, when something exists and there's alternatives to it that it keeps pushing away, you gotta give it some credit and wonder, how's that working? How does it do that?</p><p>Theo Jaffee (1:48:22)</p><p>What if it's just a coordination problem? Like what if everyone knew that they would be better off if they could say team up to abolish the HUA but...</p><p>Robin Hanson (1:48:28)</p><p>I mean, any one homeowners association in the country could decide to change how it's incorporated and make new rules and do it different. It's not that hard.</p><p>And new, I mean new, new homeowners, new sets of homes could just follow different rules. So clearly, people aren't very inclined to make new sets of homes near each other with different sets of rules. Something must be pushing them to do all these same old rules.</p><p>Theo Jaffee (1:49:02)</p><p>just the same kind of convergent culture.</p><p>Robin Hanson (1:49:06)</p><p>Right, for some reason, they must have done focus groups or something. Homeowners, when they hear about these alternative rules, they're not that eager for them. Maybe they sound suspicious. Maybe they sound unhomy. I don't know, but that would be interesting. I'd like to know what happens when you ask homeowners, hey, how about we have a different set of rules for doing your homeowners association? What do they say?</p><p>Theo Jaffee (1:49:29)</p><p>Often they don't care.</p><p>Robin Hanson (1:49:32)</p><p>Well then, if for example, the developer thought that the home, somehow it would be run better with a different set of rules.</p><p>why don't they make a different set of rules? Because, presumably, there's at least a weak effect of if it's run better, then reputation comes back to people about that brand of home. I want to buy that brand of home because I know people living over there in that brand of home and that seems to go pretty well there. So you would think they'd have some reputation incentives to produce homeowners associations that are well run.</p><p>Theo Jaffee (1:50:03)</p><p>And then, of course, the other very famous system of government that you've written about is called futarchy, where, yeah, invented. Where, for the audience, it's where a policy would become a law when a prediction market would clearly estimate that they would increase national welfare and that national welfare measure would be defined by elected representatives. And of course, you were also very early on the idea of prediction markets. I...</p><p>Robin Hanson (1:50:11)</p><p>That's my invention.</p><p>Theo Jaffee (1:50:33)</p><p>believe you were one of the inventors.</p><p>Robin Hanson (1:50:35)</p><p>Well, it's an ancient idea, so not really something anyone can invent, but I was one of the first advocates for using prediction markets much more widely than they are used today. That is, the same mechanism can be used for other purposes. Previously, the mechanism was used for people to enjoy betting, and then the customer was the better. I was...</p><p>pointing people toward the opportunity for someone who wants the answer to a question to subsidize a betting market to get the answer. And that's a possibility that still hasn't been realized as widely as it could.</p><p>Theo Jaffee (1:51:10)</p><p>Hmm. So, how would the measure of natural welfare be immune from Goodheart's Law? Which is, for the audience, when a measure becomes a target, it seems to be a good measure because people will try to game it.</p><p>Robin Hanson (1:51:23)</p><p>I mean, clearly, Goodhart's Law can't be true that way, because our world is full of measures. There's measures all over the place that we're using all the time, and still we want to keep using them. So it's just not true that measures lose all their value when you use them. Some do in some contexts, so you might want to understand what are those scenarios, but most don't. So, for example, we don't want to die.</p><p>Theo Jaffee (1:51:38)</p><p>Not all, but some.</p><p>Robin Hanson (1:51:49)</p><p>So, lifespans are a measure of health, and we would like processes in places, etc., that promote longer lifespans. And our using lifespans as a measure hasn't destroyed the value of lifespans as a measure. It's still a pretty good measure, even though many institutional incentives are tied to it, or even wealth. People want to get rich, and the fact that people want to get rich doesn't mean that...</p><p>It's not valuable to see how rich you are.</p><p>Theo Jaffee (1:52:20)</p><p>Clearly it's not like as good of a measure as it could be because of, you know, the whole concept of quality adjusted life years. It might be better to live 70 very long, very healthy years where you die quickly at the end than live to be 80, but the last 20 years of that are just like miserable, slow decline.</p><p>Robin Hanson (1:52:32)</p><p>Right?</p><p>Sure. And in fact, we often do use quality adjusted life years as a measure. That is, in fact, more common as a measure among the people who have such measures. And that hasn't destroyed the value of such measures. I mean, there was actually...</p><p>Interesting or say the US Congress I think at some point passed a law that said you shouldn't be using quality adjusted life years as a measure because if you just use life years then elderly people come out looking better or even just death rates if you ignore how old somebody is then you're ignoring how many years they have left and people who want to push policies to help old people prefer that measure they just say let's just look at what affects mortality ignoring life years and or quality adjusted</p><p>life use as well. So there's often politics associated with which metrics get used, but that doesn't mean metrics can't be used. There's a nice book called How to Measure Anything. I recommend it. It says measuring things is, you know, a difficult thing that you can work at and get better at, and so you can do it for most anything if you work hard enough at it, and I think he's right. Measuring is work, and it returns gains to effort.</p><p>Theo Jaffee (1:53:56)</p><p>What do you think about applying prediction markets to dating apps? Like the state of dating apps right now is really bad. Really bad.</p><p>Robin Hanson (1:54:05)</p><p>Well, except they are beating the competition. So, okay. I mean, the question is, do people... So, for example, I actually think the old institution of a matchmaker was an effective institution. Mastermakers did actually learn about other clients and what they might like in putting them together. And matchmaking would be an effective institution today. And people just don't like the idea.</p><p>Theo Jaffee (1:54:08)</p><p>But they're reading the competition, but maybe that's just because they haven't -</p><p>Robin Hanson (1:54:30)</p><p>So it's more people's personal aversion to the very idea of a matchmaker. That's the reason why I'm using matchmakers, not that they couldn't make good matches. So a lot of this has to do with people's sort of ideology of dating and what's supposed to go into it. I mean, for example, parents being more involved in matchmaking made a lot of sense and in fact seems to have gone better for people. Parents know a lot about their kids and they can often make contacts with other parents in ways that people, kids can't do themselves.</p><p>parents helping with matchmaking was a big advantage. And we've rejected that too, not because it didn't work, but because we just don't like the idea. So I think prediction markets could probably actually do better also, but they would face the similar objections to parents and matchmakers, which is we don't like that. So the question is, can you do prediction markets in a way that people won't react that way to?</p><p>Theo Jaffee (1:55:25)</p><p>Hmm. Yeah, like there's a lot of debate. I... You're probably pretty familiar with manifold markets and manifold love. Yeah, you went to last year's manifest, right? I'll probably be at this year's one, by the way. Awesome. So there's a lot of like internal manifold debate about manifold love. Manifold love is, you know, a prediction market applied to the idea of dating apps where you have people bet on...</p><p>Robin Hanson (1:55:35)</p><p>Right?</p><p>Right?</p><p>Okay, well then I'll see you there.</p><p>Right?</p><p>Theo Jaffee (1:55:54)</p><p>like which couples will work. And some people at manifold think it's great idea with lots of potential and others think no, it'll never work.</p><p>Robin Hanson (1:55:55)</p><p>Right?</p><p>Theo Jaffee (1:56:04)</p><p>So do you think maybe both at the same time that it is a great idea with a lot of potential, but it won't work because people won't want it to?</p><p>Robin Hanson (1:56:12)</p><p>Prediction markets in general are, they're a general technology with a enormously wide range of potential application. So I don't think it's worth having strong opinions about which things will work. I think it's better to have opinions about what are the good things to try first. So it's more important to have good heuristics about where it's cheap to try things and where there's big value to be gained if you do try.</p><p>So I would think, you know, dating, there's certainly huge value out there to be gained, but...</p><p>There's not so much value to be arbitraged in the sense that it's hard if you see two people who should be together to like gain the profit from convincing them to go together. There are many other contexts in the world where when things aren't efficient, there's ways to make money from that. And I think we should focus first on those kinds of applications because that will just attract a lot more energy and attention to trying to make that extra money. So.</p><p>So it's of substantial value, but it faces some cultural obstacles and it's hard to just spread on the basis of its efficiency because, again, there's not profits to be made. So I would try to focus people's attention more on places where if you adopt something and it works better, you can make money because stuff spreads faster there.</p><p>Theo Jaffee (1:57:38)</p><p>Hmm. That makes sense. So you have like a very unique outlook in the sense that it's so broad and that you've drawn from so many different subjects. So who else do you think are some of the broadest thinkers in the world?</p><p>Robin Hanson (1:57:56)</p><p>I would have to go research to figure that out, I guess. I studied history of science long ago, and one of the main things I learned is that when scientists have stories about their history, it's usually wrong.</p><p>when historians of science go study what was actually the history of an area of science, they just get the different answers than the scientists I'm telling each other. So, you know, from that I learned people are way too quick to make these judgments about what's going on. And so if I look at a question like this and I go, well, that would be a fair bit of work to figure out. I don't know. I would have to figure out who is actually being strong polymath learning lots of different areas and adding to them. I certainly have known some people who have contributed to multiple areas and</p><p>I, you know, impart by knowing multiple areas and making connections between them. So I respect that, but I don't know who's at the peak of doing it a lot. But if you had some people to read about and talk to, that would be interesting to maybe learn about which of them gained how much how from their different kinds of knowing many areas.</p><p>Theo Jaffee (1:59:02)</p><p>What about in the past? Not necessarily alive today.</p><p>Robin Hanson (1:59:08)</p><p>Again, I know of some people that I happen to come across because they learn multiple things, but I don't know which of them is, you know, big suffocating. Like the name Herbert Simon comes to mind, for example. He's, uh, I guess he won the Nobel Prize.</p><p>from his work combining computer science and economics and other areas of social science and systems design. That was a pretty broad area of things to combined. James March, similarly, I don't think he got the Nobel Prize, but he was somebody who did organizational innovation things while learning other kinds of social science.</p><p>people who have done both evolutionary psychology applied to other areas of life have impressed me at times because they do these crossover things. Most recently I've just been seeing people who do cultural evolution and apply it to other things outside of anthropology. That's interesting.</p><p>Theo Jaffee (2:00:11)</p><p>A few days ago I tweeted, um, what book that you've read has the... I forget the exact wording, but it was like, what book that you've read has the highest density of like, huh wow, I never thought about it like that, moments per page. Do I need to bring to mind for you?</p><p>Robin Hanson (2:00:26)</p><p>I mean, again, these are idiosyncratic. A lot depends on what you'd read before you read any one book.</p><p>Often the first book in an area you read is very insightful because, you know, you could have read some other book and got the same insights from the other books. So it's really hard to say, you know, generalize from your experience to other people's. So I try to, again, look, I speak on enough different topics that I think it's fine when my intuitions say, I don't really know that much about that one, to say, okay, I'm not going to have an answer because I don't really know.</p><p>Theo Jaffee (2:00:59)</p><p>And since you do speak about so many different topics, how do you kind of balance them? Do you have periods where you are only really thinking about one and really diving into it? Or are these different ideas always closer to the top of your mind when you're -</p><p>Robin Hanson (2:01:18)</p><p>Well, I mean, one of the fun things about having many different topics is that at any one moment you can be pulled to, you know, so it's fun to have a number of different topics you're thinking about at any one time so that depending on your mood, you can go to one or the other. And it's always fun to play hooky on one to do the others, honestly. Like if you have only one thing you're working on and if you don't feel like it today, you feel like, well, I got to make myself do it today because it's the thing I'm working on and it's hard to get motivated that way. It's much easier to get motivated to say, I should be working on this, but I'm going to do this one. That's what I'm going to do.</p><p>Theo Jaffee (2:01:37)</p><p>What do you mean?</p><p>Oh.</p><p>Robin Hanson (2:01:48)</p><p>That's fun. And you can then switch it back. The other day you can say, oh, I should be doing this. Why don't I do this one? Because this one's fun. And you get motivation that way.</p><p>Theo Jaffee (2:01:51)</p><p>Yeah, that happens to me a lot too.</p><p>And also, you teach law and econ right now at GMU, right? So how do you reconcile the breadth of your interests with the specific subject matter of a class? Do you try to bring influences from outside what you'd find in a normal textbook to a class? How separate do you give it?</p><p>Robin Hanson (2:02:20)</p><p>My standard is to not be a dilettante. So I think a dilettante would be someone who reads and maybe even talks about subjects without knowing enough about them to contribute. So the standard I hold myself to is if I'm going to go into a new area, I should learn enough about it to be able to contribute. So I should, in fact, contribute.</p><p>That's my standard. So if I try to keep that track of things and say, what have I gotten into that I was never able to contribute to? And that counts against doing things like that. But the things that I got into that I was able to contribute to, that counts to doing more things like that. So if I get into law and economics and I can find original contributions, then I think, OK, I know enough about this to be here. And I can justify my being in that area.</p><p>Theo Jaffee (2:03:12)</p><p>Well, I think that's an excellent place to wrap it up. So thank you so much, Robin Hanson for coming on the show. I really enjoyed this one. Yeah.</p><p>Robin Hanson (2:03:18)</p><p>Nice to talk to you, Theo.</p>]]></content:encoded></item><item><title><![CDATA[#13: Nick Simmons]]></title><description><![CDATA[All about Urbit]]></description><link>https://www.theojaffee.com/p/13-nick-simmons</link><guid isPermaLink="false">https://www.theojaffee.com/p/13-nick-simmons</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sun, 14 Apr 2024 02:01:39 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/143553542/53af9fbfa57068ca25fd7ef1ae21ad64.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Nick Simmons is a founding partner at Octu Ventures, a member-driven venture DAO investing in teams building on Urbit. Urbit is a new computing paradigm that provides complete ownership of your digital world.</p><h3>Chapters</h3><p>0:00 - Intro</p><p>1:19 - What actually is Urbit?</p><p>5:43 - Urbit ID and Schelling points</p><p>9:05 - Why Urbit?</p><p>10:23 - Roko Mijic on Urbit vs. TikTok and Crypto</p><p>17:32 - Urbit vs. Worldcoin</p><p>22:26 - Niche or growth model?</p><p>28:50 - Why haven&#8217;t Urbit star prices recovered since 2021?</p><p>33:13 - Intrinsic value of Urbit address space</p><p>36:37 - Urbit as digital land</p><p>42:51 - Urbit and DeFi</p><p>45:42 - Personal AI on Urbit</p><p>51:35 - Urbit-native hardware</p><p>55:58 - Urbit design and aesthetics</p><p>1:02:15 - Outro</p><ul><li><p>Urbit: https://urbit.org/</p></li><li><p>Octu: https://octu.ventures/</p></li><li><p>Urbit Blog: <a href="https://urbit.org/blog">https://urbit.org/blog</a></p></li><li><p>&#8220;Creating Sigils&#8221;: <a href="https://urbit.org/blog/creating-sigils">https://urbit.org/blog/creating-sigils</a></p></li><li><p>&#8220;On Christopher Alexander&#8221;: <a href="https://urbit.org/blog/on-christopher-alexander">https://urbit.org/blog/on-christopher-alexander</a></p></li><li><p>Nick&#8217;s Twitter: <a href="https://x.com/Halikaarn1an">https://x.com/Halikaarn1an</a></p></li></ul><h3>More Episodes</h3><p>Playlist: <a href="https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj">https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj</a></p><p>Spotify:</p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW&quot;,&quot;belowTheFold&quot;:true,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/1IJRtB8FP4Cnq8lWuuCdvW" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:3549,&quot;numEpisodes&quot;:12,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-03-22T15:11:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p><p>Subscribe to my Substack:</p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:00)</p><p>Hi, welcome back to episode 13 of the Theo Jaffee podcast. We're here today with Nick Simmons. Nick is a partner at Octu Ventures, which is a venture firm that invests in companies that build on Urbit.</p><p>Nick Simmons (00:13)</p><p>Hey, Theo, great to be here.</p><p>Theo Jaffee (00:16)</p><p>Alright, so first question is, I know the dreaded question for any kind of Urbit guest is, explain for the audience, for people who may not be as technically minded, what actually is Urbit?</p><p>Nick Simmons (00:30)</p><p>Okay, so the really top level overview here is that you could say that Urbit's a new internet. It's a new way to connect computers and the people that use those computers with applications, platforms, protocols that do all the things that we want computers and the internet to do. So that is sending messages, sharing in social spaces, connecting business, anything like that. Urbit has the potential to build new ways of coordinating</p><p>human activity, economic activity, social activity in ways that those of us who work in the project think are probably saner and more stable over the long run. Now, what does that actually mean? To get slightly more technical, Urbit is three key pieces of technology that all unite into this network stack. So there's Urbit OS, which is a completely deterministic functional operating system. So it's a full computational stack and</p><p>Right now it runs as a virtual machine on top of any Linux environment, but there's no reason you couldn't run it on a chip. And in fact, shout out to ~mopfel-winrux who is an Octu member and who is working on a project called Nock FPGA to actually design a chip that implements the core Urbit instruction set, which is called Nock. So that's Urbit OS. And the overall goal of Urbit OS is to allow for a computational stack that's</p><p>Theo Jaffee (01:56)</p><p>that.</p><p>Nick Simmons (01:56)</p><p>extremely stable. The Urbit Core development team is slowly working towards what they call, you know, Kelvin zero, which is a totally frozen instruction set. So the core of the computational stack just doesn't change and for it to be simple. So right now, I mean, the, the, the Nock instruction set, which can be broadly, broadly analogized to a kernel.</p><p>fits on a t -shirt. It's, you know, it's dozens of lines of code. The Linux kernel is millions of lines of code. This has really striking and obvious upstream, you know, social and organizational implications because when you have both extreme complexity and an ever -changing pile of, you know, what programmers call cruft, which are to say overlapping abstractions, patches on patches on patches.</p><p>Theo Jaffee (02:51)</p><p>Thank you.</p><p>Nick Simmons (02:55)</p><p>to make a computational system run. You basically require a giant bureaucracy to make that run. It's more technical knowledge and it moves faster than any one person or even small group of people can keep their heads. And so that kind of necessitates a bureaucracy. And computers probably shouldn't require bureaucracy. We're probably past the point of the society where we should be, where that's necessary. And that has all sorts of implications around, okay,</p><p>if all of the methods of organizing a bureaucracy start to look totalitarian, or they start to look like they impose a lot high coordination cost, then there's obvious downsides to that. And so the idea of Urbit OS is that you should be able to run a computer and that this gets into why people call Urbit a personal server, a computer that could serve up content to other computers, say other people, and it should be stable.</p><p>It should be easy for a layman to run. You should not have to have, say, a certificate in Apache server maintenance in order to run a server, which is basically the case right now in terms of the servers that major internet platforms run. And it should simply be yours. So the old tagline for Urbit was simple, durable, yours. The second part of the Urbit stack is Urbit ID.</p><p>So this fun fact, this is actually the first or second implementation, I believe of the ERC-721 technical standard, which basically just means NFTs. So these are Ethereum NFTs and there's three main levels of them, planets, stars and galaxies. Planets are kind of the individual level, Urbit node. Stars are, you could think of them as an ISP or maybe kind of like a community Schelling point node.</p><p>Theo Jaffee (04:39)</p><p>What do you mean by Schelling point?</p><p>Nick Simmons (04:41)</p><p>A Schelling point would be just like a coordination point. So, you know, something that a community wants to run in a very technical sense. What stars do is that they are their points for packet routing. So actual, you know, packets of data between any other node in the network and also a peer discovery. So I'll get into this in a minute, but essentially and then we'll go into galaxies. So galaxies, there's so there's about four billion.</p><p>planets, about 165,000 stars, and then 256 galaxies. And galaxies are the root nodes of the network. They're the source of truth for major software updates. And they also form kind of a DAO that makes decisions on whether to upgrade the network or change it in any way. The main example of this is that during DeFi summer, the fees on Ethereum obviously went way up. And so the cost and fees to</p><p>boot a new Urbit plan of our star became somewhat prohibitive. I think it was about $350 in Ethereum gas fees at one point. And so Tlon which is the company I used to work for, which is the main company that incubated the Urbit network, they actually devised their own layer two roll -up to reduce those fees. And so the Galactic Senate had to vote to decide that this was, you know, these new layer two identities were going to be valid. And so the way this works in practice is that,</p><p>Every planet has a star sponsor, every star's galaxy sponsor, and you can leave your sponsoring entity without permission. You just have to find a new one to adopt you. And so this enables a network architecture that roots around damage, which is to say offline nodes. So for example, if your star is not reading packets anymore, it's not online, you simply find a new star.</p><p>or if your star decides it doesn't like you anymore, you find a new star. And it doesn't have any say so over letting you go. You have the right of exit. And so, pure discovery in this context is basically that you have to go find someone else's planet for the first time when you wanna talk to them. And so the stars handle that decentralized, initial sniffing out, if you will, of where this other person is based on,</p><p>their public key identifier that you have, which is basically their name. So for example, my Urbit planet name is ~simfur-ritwed and my co -founder in Octu is Kenny, AKA ~sicdev-pilnup. And so if for the first time I ever sent Kenny a direct message, a DM on Urbit, my planet sent a packet to my sponsoring star, which then queried a whole bunch of other stars and it found which one was sponsoring</p><p>sick death pilnup and it passed on the data but it also let both of us know where in the whole Urbit network topology our planets were sponsored, were located. So that's the big data dump on how Urbit works. I'm sure you have questions or I'm sure there's something like clarification on for the audience. So I'll pause.</p><p>Theo Jaffee (08:02)</p><p>Yeah, so clarification for the audience would be just like, that was a very good explanation that went into a lot of depth, but just for like a very short, like 20, 30 second explanation of what exactly is Urbit and why should I care? Why should an audience member care? What would you say to them about that?</p><p>Nick Simmons (08:20)</p><p>Sure, my explanation would be that there's a whole bunch of decisions that have been made over the entire history of the internet that assume certain things to be true and bake in certain qualities of the internet. And they're not all true anymore. We have a whole bunch of conditions that make them irrelevant now. We can build better network forms of communication and collaboration that aren't dependent on that history and those historical choices.</p><p>And Urbit is one of the best projects that I found that actually opens up to those possibilities of coordinating people and capital and social connections in new ways. And obviously that needs to be dug into. But fundamentally, this is about allowing people better coordination tools to reflect, to build projects and social graphs that better reflect their actual desires.</p><p>Theo Jaffee (09:19)</p><p>Hmm.</p><p>So, I don't know if you've heard of Roko Mijic, I did not pronounce the last name correctly, but he's somebody who's been involved with Urbit for a while, he used to have his Urbit handle, ~bacsul-lissyl I believe, on his Twitter profile, and he tweeted pretty recently,</p><p>Urbit spent 22 years failing to build something there isn't even a market or need for. Think about it. Imagine we wave a magic wand and make Urbit work as intended. People get their own personal serverlets where they can easily host their own content on their own hardware. Meanwhile, in the year of our lord 2024, Gen Z can't even unglue themselves from TikTok for five minutes. You really think they're going to build their own personal website and then also self -host it on their own hardware? And even if they did, would it really make any difference to the world? In 2024, you get away with saying all sorts of naughty things on Twitter thanks to Elon buying it.</p><p>And as a bonus, it's all easy to use and easy to read because it's a single app, rather than millions of separate websites. Most of the value of the internet is in, and he bolded this part, most of the value of the internet is in the semantic and social network, the connections, the sorting and filtering.</p><p>And these network hubs are easy for rent seeking parasites to capture. Urbit can't really solve that because it doesn't have global state. It's distributed, not decentralized. The fundamental innovation came with Bitcoin and later Ethereum. DApps. DApps are much harder for parasites to capture because they can be made trustless. No middleman. So what do you think about those two big critiques? One is that like, who cares? Like Gen Z is not going to bother to do all this when they can just, you know, download TikTok in a second and then just spend hours on it. And then critique number two is just that, um,</p><p>Urbit is distributed, not decentralized, doesn't have global state.</p><p>Nick Simmons (10:59)</p><p>Yeah, so I actually, I agree that it's very important to have global state. As the first critique, so this is something that the Octu thesis actually gets into. And if anyone's curious, you can go to octu .ventures, you know, mild plug and read our white paper. And we definitely think that Urbit is a technology on the level of Linux or blockchains in general, or TCP IP, where</p><p>We're not particularly interested in trying to access, you know, overall, you know, organic adoption into the millions right now merely because the Urbit meme makes sense to people on a mass scale. We're very focused. We can talk a bit more about the, you know, the Octu investment thesis later, but our broad perspective here is that Urbit allows you to build new kinds of businesses, new kinds of products, new kinds of protocols.</p><p>that can capture organic interest and solve problems for people, even just starting on a B2B scale. And so I'm really not concerned about the fact that, yeah, people use TikTok. Yeah, people use Twitter and those are, you know, optimized products right now. They give people what they want. What we're interested in is where can't you build an optimized product for a use case, whether it's B2B or, you know, a social,</p><p>graph or whatever, although we think that the social graph part will come later, where the actual architecture of the internet as it stands just doesn't allow it. So we're very focused on how do you build tools, how do you open up new landscapes, if you will, rather than how do we scale the Urbit meme as previously defined to millions of people. And the second question around global state, so</p><p>First of all, I think that Urbit integrates with blockchains very well. The Urbit Foundation has several partnerships that are very interesting there with Near, and I hear some other stuff as well on the horizon. And obviously, there have been Bitcoin and Ethereum OGs that have been very involved at Urbit for a long time. My Octu co -founder, Kenny, was on the founding team at MakerDAO. Once you get inside the Urbit ecosystem, you meet some very, very kind of like early pressing crypto people.</p><p>who all love Urbit and work on building stuff inside the ecosystem. So I think that Urbit absolutely does need to be able to call out to sources of global state, global consensus. And I'm personally a huge, I'm personally very bullish on Urbit ID and reputation as anyone who's met me in the Urbit context can attest, no pun intended. Because I think that we're actually,</p><p>very competitive and very far ahead of most of the existing alternative, you know, digital or self -sovereign ID projects or reputation projects. And the fact that Urbit is a deterministic VM and has this network to talk between nodes actually accentuates this quite a bit. And it's also worth mentioning, by the way, that most of the projects that we have either funded at Octu so far or that we see on the horizon are starting to...</p><p>all resemble each other in the sense of probably producing a bunch of interaction data between their users and then being able to use that as a pretty high signal way to mutually signal reputation via those interactions in a way that can probably be globally useful. And so I think there are ways to make internal global state and then sure, write it to a blockchain, write it to a global consensus layer to verify it. But...</p><p>One interesting thing about blockchains and Urbit is that blockchains for the most part, at least with any application that we've seen so far, have a pretty limited repertoire of things that you do on them and therefore opportunities to create reputation, identity, really granular proof of human, if you will, in the new context.</p><p>Certainly there are dApps, but dApp usage is pretty bad for the most part and anything outside of purely financialized transactions. Blockchain social in general is pretty much a flop. Blockchain -based interactions are the only things that have to write the chain. I usually hit this dilemma of either adoption creates fees or you simply never get adoption in the first place because there's too high of a barrier.</p><p>Certainly L2s and things like Solana are interesting, although, you know, now Solana has its own uptime issues and obviously there's a very valid critique based on, you know, over centralization there. So I think that like, I would agree that Urbit needs to be a, Urbit needs to be a computing and identity and coordination layer that talks to global consensus layers, AKA blockchains, L1s, what have you. But I,</p><p>I see that more as like something where there's a lot of mutual benefit and everyone I talk to pretty much agrees in every ecosystem rather than some failing of Urbit as a complete stack that it won't get.</p><p>Theo Jaffee (16:29)</p><p>So going back to what you said earlier about Urbit ID and why you're bullish on it and it's ahead of a lot of other kind of digital cryptographic identity projects. What do you think about Worldcoin Sam Altman's Worldcoin?</p><p>Nick Simmons (16:42)</p><p>Yeah, so it's interesting that you bring up Worldcoin because I did a deep dive on the digital ID space a couple of years ago and I actually I went to the IIW, which is the Internet Identity Workshop, which is the main conference every year for this field. And this was I think 2022 and Worldcoin was there. They did a demo, saw the orb. Worldcoin is interesting because,</p><p>they are simultaneously, I think a little bit ahead of the curve on one of the main failure modes, but then encapsulate one of the other giant failure modes that I see here. So to explain this, like,</p><p>Worldcoin actually, I think, is a little bit smarter than people give them credit for in the sense that they do have some pretty sophisticated thinking around how do you hash the original biometric data and use that in a way to prove personhood, basically. And I think that some of the critiques there are kind of superficial. However, the real problem that I see with Worldcoin, for the application that I'm talking about, to be clear, I mean, you know,</p><p>Sam Altman stated rationale for Worldcoin is to provide this basis for UBI. I can't even really comment on that. I don't have a firm opinion on that because it's so far away from what I'm trying to do with digital ID. But when you look at the history of really most tech paradigms and how they arise and how they get network effect, it's not by...</p><p>Theo Jaffee (17:56)</p><p>Thank you.</p><p>Nick Simmons (18:20)</p><p>say, you know, going to the third world and getting a whole bunch of people to scan their eyeballs and then bootstrapping from a whole bunch of people that don't have capital and don't have access to kind of like, you know, elite technological social circles. It's the opposite. Urbit so far, even though a lot of the tooling hasn't been built to make ID and reputation work in a really granular way, already has the opposite.</p><p>I mean, if you have any contact with the Urbit community, you understand that this is a very, very special, high value, high trust community of very prescient and capable people with all sorts of interesting connections. And so the way that I think about digital ID usually comes into two failure modes. The first is when you have a good design, but you don't pair it with any kind of use case that actually gets you organic adoption among people whose actions,</p><p>constitute high value, high signal in network reputation data. So again, Worldcoin I think is doing a good job of designing some aspects of this. They really have found an interesting way at least to prove personhood, but they're not deploying it among people in San Francisco who are gonna use it as a real use case and we bound that use case that has little takeoff network effects in the same way that say, I don't know, like the Homebrew Computer Club.</p><p>started in a garage in Cupertino and a whole bunch of those guys do very interesting stuff. And if you were there, you were part of the scene. It's the truth of music scenes. I actually used to work in music industry. It's really true for anything and blockchains or digital ID is just a way to codify it. The other kind of inverse failure mode is that you have a really good, you know, early network effects. You have a scene.</p><p>but you don't legibleize it in a way that actually allows people to, you know, to kind of prove it or attest it. So this could just mean that it's legible and lots of things are legible and that's fine. Or it can mean that you're just not, that you think you're representing reality. You think you're representing these interactions and you're not representing kind of what makes them valuable or you're not proving this is actually someone's, you know, central way that they actually signify what they're up to.</p><p>And so for example, I think there have been some digital ID protocols that I think got decent uptake among crypto savvy people, but it didn't give them the opportunity or didn't give them the incentives to actually use this for the majority of their online or even crypto based activity. And so the defection risk there and the fact that it's just not capturing their activity in a way that</p><p>really telegraphs commitment to that ID paradigm, that reputation paradigm, means that it's ultimately destined to fail because the defect rate and the flake rate is too high.</p><p>Theo Jaffee (21:22)</p><p>Alright, so let's talk a little bit about the future of Urbit and different directions it can go. So, do you think that Urbit needs a kind of growth model to survive or could it survive indefinitely as kind of like a niche product?</p><p>Nick Simmons (21:37)</p><p>Well, that's an interesting question. So, I mean, I pretty firmly believe that first of all, Urbit's growing. If you look at nodes on network, if you look at developer growth, and especially, I think, if you look at capital coming into the ecosystem this year, the growth of new startups and the technical maturity of core parts of the stack, I especially want to call out the aims upgrades. So the networking upgrades that allow for a lot lower latency.</p><p>and a lot more simultaneous connections between Urbit nodes. And then the other big thing this year is New Mars, which is an updated runtime. And so much actual, you know, faster computation and larger loom size. So much, much larger memory. The joke inside, you know, Urbit circles is that you want to be able to host the AVI file of Shrek 2 inside your Urbit and play Shrek 2. And now there's also a great,</p><p>Theo Jaffee (22:27)</p><p>Thank you.</p><p>Nick Simmons (22:30)</p><p>Twitter thread of the day from Ted Blackman, ~rovnys-ricfer, who's the CTO of the Urbit Foundation around some of the, the loom expansion and runtime improvements that could actually allow some early, you know, machine learning activity inside the Urbit runtime, which is wild. Definitely not something that I think people really saw coming a couple of years ago when it was extremely limited. And so built, you know, building on those technical improvements, I definitely see that there's a lot of growth coming at the same time.</p><p>I think that, you know, obviously you would be unfortunate if, you know, if for whatever reason, you know, I don't know, we entered into the mother of all, you know, winters for capital and attention and, you know, usage and so forth. I think that's true of any protocol. However, Urbit exists, like the, you know, the PKI exists, the address space is remarkably well distributed.</p><p>You go to the Urbit conferences, you kind of sniff around and I mean, you can also just look on chain, like, you know, how many unique wallet owners for stars and planets and galaxies. And that I think does a particularly meaningful at the galaxy level is that a remarkable array of people are committed to this project and have, you know, and have significant, you know, interest in it, financial and, you know, just attentional and technical. And so,</p><p>The protocol is already out there. It has application to reality. It has use cases. It has people building on it. I love the Bitcoin meme of, honey badger don't care. Oh, there's a downturn. Oh, Jim Cramer comes on CNBC and talk shit about you. Who cares? Keep building. So yeah, there's always gonna be upticks and downticks. I really think this is something where I'm fascinated by the idea,</p><p>Theo Jaffee (24:04)</p><p>Okay.</p><p>Nick Simmons (24:25)</p><p>of building just indestructible primitives, indestructible networks. And we can get into this later, but like, I am personally fascinated by the idea of making these networks as unkillable as possible and really kind of crazy extrapolations that that leads to. Can we put an Urbit, you know, node on a, on a CubeSat? Can we inscribe the actual</p><p>you know, binary notation of an Urbit, you know, virtual machine on rocks in the desert. Like these are fun thought experiments, but I think they actually lead somewhere productive in the sense of there really are unlike say a lot of legacy systems where the ownership in the social graph is, is illegible and just kind of like dictated by fiat or it's on paper somewhere or, you know, there's some actually,</p><p>The degradation of digital information is way, way, way more advanced than people understand. I mean, a huge amount of content that has been produced on the internet, in fact, does not live anywhere. It's actually been lost. It's actually been wiped. So yeah, I think that like there's, with the advent of things like Urbit and blockchains that have distributed ownership, distributed consensus, and also are, you know, writing.</p><p>information to a distributed state layer, it really is a different paradigm in terms of what it means for something to be growing, active, or defunct. Like, I think we're already, you know, it's, have you heard of this, have you heard of this kind of, you know, crackpot hypothesis about whether, you know, we could tell if there was an industrial civilization in the fossil record millions of years ago?</p><p>Theo Jaffee (26:19)</p><p>Um, that sounds vaguely familiar.</p><p>Nick Simmons (26:22)</p><p>So I mean, it's an interesting thought experiment. I don't think that there's evidence that we had a fossil fuel civilization millions of years ago, but then the question among geologists and so forth is, okay, would it have left enough evidence? And I think there's a loose corollary you could make to something like, or a bit which is how big of a civilizational catastrophe would it take to wipe out the evidence or even kind of like enough of a subset of Urbit IDs or let's say, the Bitcoin blockchain.</p><p>anything like that, to the point where if there was interest and if there was a compelling need for it, that you could bootstrap it back up again.</p><p>From a game theoretic level, all you would really need is a handful of galaxy IDs that say, okay, this is still the network and we're gonna reconstitute it and we're gonna vote to spawn perhaps more galaxies, more stars if a whole bunch of keys have been lost. And this sounds just kind of like science fictional speculation, but again, I really think that this is an important factor in what's the difference...</p><p>in what we're building now for the long term, versus things that don't have these provable interactions on the identity layer and on the kind of like social contract layer.</p><p>Theo Jaffee (27:46)</p><p>So going back to what you said earlier about Urbit growing, Urbit stars were selling in 2021, they were selling for like $28 ,000 on average on Uniswap and OpenSea. And now they're more like a thousand dollars and they haven't really recovered even as the broader market for cryptographic assets has recovered like significantly. You know, Bitcoin has gone from like 20 ,000 back to like 70. It's hit a new all time high since 2021. So why hasn't Urbit address space recovered too?</p><p>Nick Simmons (28:16)</p><p>You know, I don't try to speculate too much on price. I think that where, you know, so many things in the crypto ecosystem in general are very meme driven. But I will say this, that I think that there's a general tendency, and Urbit is by far, you know, not the only project where you see this, where tokens or NFTs or something, you know, bespoke like Urbit address spaces and sort of digital land becomes a meme coin for the general idea.</p><p>in the absence of other ways to gain exposure. And so purely from a personal perspective here, my suspicion is that the level of interest in Urbit, as far as I can see, and I've been well -placed to observe this, keeps going up. And so it's a very valid question to ask, okay, why hasn't the, you know, why hasn't the star price, you know, kept pace? And I think the answer honestly is that now there are more ways</p><p>to essentially invest in the ecosystem rather than simply buying a star in OpenSea. And so for example, Octu does not invest in Urbit address space. We all have a lot of Urbit address space. We're all generally bullish on its value going up. I do think that there's gonna need to be some technical work and some kind of like product and ideation work around how do you make address space specifically? And because the interesting thing is, okay, it's a non -fungible token, right? But,</p><p>really, you know, if you just buy a random star on OpenSea that's never been booted, that's never interacted with anybody, sure, maybe you like the sigil. And I mean, the sigils are great, the names are great. But other than that, it is still pretty fungible. And this is even true to the point where for a while there was a project called Wrapped star to make it even more fungible. And so I think that there does need to be some work done on building things that imbue given</p><p>chunks of the address space with value, maybe as subnets, maybe as something that, you know, accrues reputation, maybe as web of trust that mutually reinforces reputation on the group level or the subnet level. But as to the price again, I really just think that what happened is that all of a sudden, I mean, when I joined the project, there were two Urbit companies, both of them mostly sold address space. Then there was a little bit of hosting company. Yep.</p><p>Theo Jaffee (30:39)</p><p>wanted.</p><p>Nick Simmons (30:42)</p><p>And now there's a lot of startups. There's a lot of technical projects where you can put your time and energy. There's a lot of grants from the Urbit Foundation to work on stuff. And then now there's actually two projects in the works. There's a Send Chain from ~tiller-tolbus at Chorus One. And then there's another Urbit L1 project from Sunny Agrawal at Cosmos. It's mostly being worked on by Laconic.</p><p>that and both of these aim to not only give some some global consensus layer to Urbit as an Urbit native, you know, L1 blockchain, but also has some interesting ideas around how the reputation layer and the economic layer could work in ways that incorporate address space. So I would definitely encourage everyone to keep an eye on those things. I think both of them are kind of like in the, you know, publishing early communicase stage. But I don't ultimately see, you know,</p><p>undifferentiated address space, stars basically, as being really, I think, ever again, so much the concentrated, you know, bet or meme coin on the overall health of the ecosystem, at least not for a very long time. And where I'm choosing to put my efforts in Octu is investing in early stage startups that build things on Urbit. And that's where my kind of very opinionated thesis is.</p><p>Theo Jaffee (32:09)</p><p>Do you think there is any kind of proper intrinsic value of an Urbit star? Like, can you go about valuing Urbit address space or is it more like Bitcoin where there's no real way to value it because there's no cash flow. It's just a purely cryptographic asset.</p><p>Nick Simmons (32:26)</p><p>Well, I think that both of these things have something in common, which is scarcity and that ultimately you start to get, you know, increasing value within a scarce, you know, namespace or, you know, a scarce currency, you know, store value space in the case of Bitcoin. When you have a whole bunch of people who reach, who reach a social consensus on.</p><p>what the value of that scarcity is to them in terms of preventing various kinds of downside risks. In Bitcoin's case, it's, you know, hey, we're worried about money printing. Like there's no actual limit to the amount of fiat money that a government or central bank can print. So we're all going to subscribe to the, you know, social illusion. And I don't say that in a negative way that, hey, it's valuable that there's only ever, you know, 21 million Bitcoins. And so this Bitcoin that I own has</p><p>has a value as a percentage of that. Similarly with Urbit, I think that if Urbit got to the point of adoption that Bitcoin had, of course you'd see address space go up. Not financial advice, but that really seems to stand to reason because you have a whole bunch of people agreeing that the scarcity aspect as it represents trust and distribution of the network and so forth also</p><p>benefits from that scarcity and they want a piece of that. Now, what happens before that, I think is very interesting. And that's why I say that like subnets and like biting off chunks of this scarcity and maybe either imbuing them with some sort of, you know, economic function, or simply saying, okay, you know, this, this corner of the network, this, you know, thousands or millions of planets are attested to in a given way.</p><p>and we're going to carve out this chunk and use it for a given use case. And within that, there's even a greater degree of scarcity. And it's very interesting to think about the network growing with multiple examples of that that then have to have some sort of foreign policy with each other or some sort of equivalency. And it's almost like, I mean, the original metaphor of Urbit as land is very apropos here. When you're on a frontier or you're even, I guess, settling in a different planet,</p><p>land is cheap and so people have the ability to go homestead in different parts of it and build their own thing and define it their own way. And then over time, the borders creep to each other and they have to figure out how they're gonna relate to each other. And then the value of land maybe in between goes up. And I think that that's the process that we are gonna have to watch and see with address space.</p><p>But it also informs again that I don't see the value in just owning huge swaths of undifferentiated address space and waiting for it to accrue value in the near term. It's much more around, you know, what are the cities you're going to build? How are you going to bring people there? And what are the industries you're going to build there?</p><p>Theo Jaffee (35:34)</p><p>Can you go into a little more detail about the intention behind the original land metaphor? Because I think that was probably one of Curtis Yarvin's best pieces on Urbit.</p><p>Nick Simmons (35:43)</p><p>Yeah, I mean, I'm a big fan. Look, I can't speak for Curtis. He's certainly a prolific podcast guest. I encourage you to get in contact. And I know he doesn't often speak about Urbit design stuff. Very good, very good. So I won't presume to speak for Curtis. But as somebody who has watched the growth of the network, I do think that it's</p><p>Theo Jaffee (35:57)</p><p>It's in the pipeline.</p><p>Nick Simmons (36:12)</p><p>It was a very good design decision on his part to make it so large, even though it leads to this current price drop that you see while people kind of acclimatize to the homesteading phase.</p><p>This really is, Urbit really is a new world. And this is almost a little cliche to say at this point, but I really believe it. And giving people huge swaths of node IDs, whether for humans or for infrastructure points or whatever, is really, really important when you have the ambition to replace the entire, you know, up to the minute,</p><p>paradigm of network computing. And by the way, a little bit of a side note here, something that people talk about a lot is that below the planet star galaxy paradigm we've talked about before are something called moons. So moons are derivative identities. They do have their own keys, but they are permanently tied to a planet owner. And the...</p><p>Design space for moons has always been a little bit underspecified, but I'm very interested in ways that moons can not only provide IoT device identifiers, which is actually an underexplored direction in IoT in general, in terms of verifying where your data came from, certainly in a deterministic way. ~datnut-pollen, I read a great piece on this back in the year about the Tlon called the lunar IoT in the internet, or something like lunar IoT in the internet, or things like that.</p><p>But I'm also very interested in something that I call node dispersal, which is that blockchains and protocols like Urbit, so far, if you look at where nodes are being hosted, it's gonna be a preponderance of, depending on the technical requirements, you're gonna see an awful lot of validator nodes for various blockchains in AWS buckets. And then even if it's actually self -hosted, where's it gonna cluster? It's gonna be San Francisco, Seattle, Berlin, Lisbon, New York.</p><p>And that seems like a, I think that we need a scale pass. Like that's an over concentration of, you know, of these supposedly decentralized networks. You know, an interesting way to view the whole proof of work versus proof of stake, you know, debate or argument is that, and this is one lens to look at it from. This is definitely not the only one.</p><p>but I find it instructive sometimes, is that proof of work fundamentally is a bet that the distribution of cheap or free energy and the ability to exploit it as it's distributed over the Earth's surface is, you know, and obviously modulo people knowing about something like Bitcoin or something like, I guess, you know, or Dogecoin or, you know,</p><p>pre -merge Ethereum, and knowing that you can do this, that this is a thing you can do. But with Bitcoin in particular, the Bitcoin mining has penetrated deep into say, provincial China. It's penetrated deep into the provincial US, deep upstate New York, for example, all over the world. And then it has technical innovations like, oh, flared natural gas on an off,</p><p>like an off -grid drill site, you go and you mine Bitcoin with it and it's not economical to export the energy in terms of a gas pipeline. It is economical to export the hashed mathematical results of Bitcoin mining over a packet radio. But then proof of stake is basically a bet much more on, okay, this blockchain is going to scale via</p><p>people who have the cultural, you know, imprimatur and the social connection to the people who started it who tend to be, you know, educated Western technical types. I don't think either of these has a monopoly on insight, but I think it's an interesting, you know, dichotomy between how resources are distributed and to the extent that, you know, blockchains are supposed to be this very durable layer of, you know,</p><p>distributing information, access, and economic rails across the physical space, how they're distributed. And so in that context, I think that moons are super interesting in terms of how can we achieve physical decentralization via node dispersal across every possible boundary with the Urbit network. Can we put moons as Urbit nodes on devices?</p><p>Can we optimize for geographical decentralization? Can we put them in our space? Can we put them across political boundaries? And two follow -up questions here. How can we measure this in the most robust way to actually provide a score, if you will, of how decentralized the network is? And two, what do you do with them? And what's the practical upshot? So.</p><p>One super interesting thing about Urbit, right? It's a network of personal servers and you run a client locally and you can dial into APIs, you know, blockchain sources of data, et cetera, et cetera. I assume you've heard about this Wells notice that Uniswap just got.</p><p>Theo Jaffee (42:05)</p><p>The what that Uniswap just got?</p><p>Nick Simmons (42:07)</p><p>So Uniswap just got something called a Wells notice from the SEC and we don't know the contents yet, or at least we didn't have as of last night. And a Wells notice basically is just serves as intent that the SEC is gonna bring legal action against you. So basically the story here is that, and this actually goes back to one of Roko's critiques is that, you know, dApps are solving for supposedly things that that Urbit was trying to solve a while ago. Well,</p><p>One thing that dApps have not solved and that they are getting in trouble for right now is that the actual front end server layer is the vulnerability. And so this is most obviously true for DeFi right now, but it's also probably going to be true for certain kinds of, you know, AI to consumer apps in terms of your dialing in to use a model, where does that model live, but also where are you and how you access it. So, Urbit excels at this. And actually there's been quite a bit, you know, quite a...</p><p>bit of interest from DeFi protocols in, hey, how do you use Urbit to serve up the front end so that someone can use a DAP, can use an AMM, an automated market maker? How do you actually interact with the blockchain rather, in an actually decentralized way for the user? And so one idea around making Urbit node dispersal extremely robust is that,</p><p>this allows you to have, you know, umpteenth degree redundancies for mirroring sources of data, mirroring front ends and actually like, you know, keeping, being a backup for any kind of global state, you know, coordination protocol, blockchains or otherwise, there are other options there too, as well as representing reality. So things like,</p><p>Po -Apps, proof of tenants protocol, proof of location protocol, things like helium, packet radio, et cetera. Very interested in how we can make Urbit and networks like it actually span the earth and start to instantiate. I mean, the original metaphor is Borges. It's "Tl&#246;n, Uqbar, Orbis Tertius". And the story, of course, is about.</p><p>a map that is the same size as the territory that it describes. And, you know, Curtis know what he was talking about when he was using that metaphor. And I'm very interested in ways to make that real.</p><p>Theo Jaffee (44:38)</p><p>Yeah.</p><p>So change topics a bit. Let's talk about AI and the future of AI on Urbit. Because I think that like, you know, the perfect application for, you know, having a decentralized personal server would be to run a decentralized personal AI on.</p><p>especially because it seems like actually good personal AI is imminent. You know, who knows what Apple is about to announce with Siri in a couple of months. And then OpenAI is coming out with GPT -5 soon after that. Most likely sometime by the end of the year we'll have very good LLMs and people are already building out the infrastructure. So what do you think is the future of AI on Urbit, both implementation and then what people will actually be able to do with it?</p><p>Nick Simmons (45:25)</p><p>Yeah, so this is super interesting. There's a whole bunch of different threads we could pull. So obviously I think that access to protocolized AI compute and model access and also honestly markets for data sets that you would feed to a model on large and small scales. I think Urbit helps with the protocolization of that quite a bit. And I really don't know anything else that applies nearly as much there.</p><p>But there's also very interesting things to say about what you can do with all the interaction data that you create. So each, each Urbit virtual machine that you know, you or I run is an event log. It's just literally a list of computational events that you've performed with your machine and it lives locally and it doesn't have, you know, global state by default, but certainly you can, you can.</p><p>create a protocol to write this out to a blockchain whenever you want. And there's actually been some interesting early experiments and kind of like peer to peer local consensus like agreements on that Urbit computational level between orbits. This is something called Agora that Quartus and the Dalten collective experimented with a couple of years back. And so because you have this deterministic event log,</p><p>of everything you've done. And that includes, of course, the connections you've made with other Urbits and data you've gotten from other Urbits over the network. It's very interesting to think about training a personal model on your activity as a kind of proof of human and then zero knowledge hashing that and using that as your badge of identity in a very kind of up to the minute way. And again, as we were talking about,</p><p>you know, some of the pitfalls of digital ID protocols, this is, I think that proof of human is gonna become an arms race. And that if you don't have the, if you don't have the primacy of someone's everyday, you know, online behavior to draw reputation data from in order to kind of like model them as a human, then you're gonna fall behind pretty quickly. At the same time, you're gonna, you are definitely going to need to, you know,</p><p>to not simply leak everyone's data or even just kind of behavior patterns, if naively obscured in ways that leave them vulnerable. And so it's very interesting to think about personal models deployed to prove who you are, to prove belonging, but also to act on your behalf and learn from your behavior. And it does seem like Urbit is the kind of thing that would provide a moat there.</p><p>for you to run a personal model in, interact with data set marketplaces, and then command a fleet of virtual assistants on your behalf, which I mean, the moon identity seems to align well with that. But ultimately, I think that the really, my top level thoughts about Urbit AI is that AI is probably going to cause a massive,</p><p>sorry, is probably going to cause a massive collapse in the social context and the legacy, you know, reputation, identity, meaning systems that we currently live under. I mean, they were already straining pretty badly. And this is a key part of the Octu thesis is that we think that Urbit has massive potential to build what we call digital civil infrastructure.</p><p>which is to say, to encapsulate the values that its users and builders actually watch, digital Jeffersonianism is one way to put this, of allowing people to actually have a defensible moat around their family, their community, their business, their data, and to say, hey, I'm gonna interact in marketplace, I'm gonna interact in society.</p><p>but there's a line past which cannot be trespassed. And honestly, I think that AI is gonna make this necessary sooner rather than later. And it's extremely, extremely important that we encode the values that we want into a defensible digital substrate before that happens, because there's really no guarantee that they will simply be uptaken.</p><p>whether it's by, you know, mega corps that use AI or not, or whether it's simply by these models in the sense that they're, you know, eating all the data from the internet in the past, you know, 30 or 40 years of written code.</p><p>Theo Jaffee (50:31)</p><p>And then not just AI, but the rest of the future of Urbit. For example, what could like an Urbit hardware ecosystem look like? Whether it be CPUs, I know you talked about this, Nock FPGA earlier, or like actual full blown.</p><p>like computer hardware, like consumer desktop or laptop hardware. And what about Urbit on mobile? Like what could that look like? Would it be more of an app on existing OS's or would it be its own mobile OS or its own mobile device? So yeah, what's, what's the future of the kind of actual like hardware layer of Urbit? How will people interact?</p><p>Nick Simmons (51:10)</p><p>So yeah, so first of all, this is already very much a thing. So the obvious shout out here is to Native planet, which is a great company in Austin, Texas that builds Urbit hosting hardware. So with a bunch of very, very clever, very cool software integrations to make it easy to run your own planet or star on a custom built hardware box. They have several models, highly recommending.</p><p>recommend checking them out, nativeplanet .io. And so yeah, so there's already hardware that's purpose -built to run your, your Urbit nodes locally, to run your Urbit VM locally. I think there's massive potential here. Some of the areas I've already mentioned in terms of node dispersal, I think that Urbit, you know, optimized sensors, you know, space hardware, things like that. And I think there's a, there could be very interesting kind of, you know,</p><p>flywheels and mutually complementary economies here where the economic mandate of no dispersal and like, you know, provable, like real geographical decentralization can reduce the costs for the ancillary, you know, business use cases of, hey, we want, you know, sensors here to track X, Y, and Z, or to provide, you know, relays for other networks or, you know, like packet radio, mesh nets, that kind of thing.</p><p>I'm very interested in the Urbit sensors, very interested in Urbit hardware that creates a real geographical decentralization. In terms of Urbit apps, I mean, there's some, Tlon has an iPhone app, which works quite well and highly recommend everyone check that out. That's really made it much easier to use Urbit on the go.</p><p>If you're asking like, is there going to be an Urbit phone like a salon, like the salon, you know, side of phone, that's certainly an interesting question. Um, I see a lot more hardware interest from urbaners than I did a year ago. Uh, and so I think that people are kind of, you know, trying to wrap their heads around this stuff. One aspect of that, I think maybe accelerating this a little bit is that Apple recently breached what was previously considered to be quite a bright line in taking in.</p><p>take a hard line against PWAs, progressive web apps, which were basically a bit of an end run around some of the, you know, frankly pretty absurd strictures on what can and can't get accepted in the app store. The whole question of whether, you know, an entrant to the smartphone business that can get around some of the Apple and Android, you know,</p><p>quasi monopolies for apps, especially around crypto integrations is, is certainly super interesting. And I kind of suspect that it's, you know, pressure building behind a dam and that we will get there maybe even sooner than we think at the same time, geopolitics and the geopolitics of chip production and, you know, sophisticated electronic production in general are a big, big question mark, especially with, you know, all the stuff around Taiwan right now. So that could throw a wrench and all that stuff.</p><p>I really want to see good thinking and I want to see honestly pitches here around how you unite the Urbit vision with hardware, real use cases, and that geographical and technological kind of like hosting environment, decentralization and dispersal factor.</p><p>Theo Jaffee (54:54)</p><p>So we've only got a few minutes left, so I'm going to go into a little bit some of the design decisions behind Urbit. So, for example, where did the syllable -based naming conventions come from? I noticed recently a lot of passwords are now very similar to Urbit syllables, where you have a bunch of normal -sounding syllables or English words separated by hyphens. So which came first?</p><p>Nick Simmons (55:22)</p><p>You mean passwords outside of Urbit</p><p>Theo Jaffee (55:25)</p><p>Yes.</p><p>Nick Simmons (55:26)</p><p>I think that has to be convergent evolution. I think I've noticed a little bit of what you're talking about recently. I mean, the whole topic of password optimization to get around what they used to call a dictionary attack has been going on for a very long time. And of course, now we're going to just use the suggested password managers, browsers and so forth to run. So I couldn't really tell you if there's any, I mean, I'm really highly doubt there's a connection there.</p><p>Theo Jaffee (55:34)</p><p>us brings us.</p><p>Nick Simmons (55:56)</p><p>I don't know the full story of exactly why the Urbit, you know, I mean, the syllables are meant to be, you know, human memorable because it's easier than memorizing, you know, a bunch of dotted quads, right? And obviously there is a correspondence between the Urbit syllable names and the, and the sigils. There's a great post by, I believe by Gavin who created the sigils called appropriately creating sigils. If you go,</p><p>way back in the urbit .org blog, that's worthy of reading. And yeah, I mean, I'm a big fan. I don't know the full history, but I'm a big fan of how that works because they really are memorable. And I remember at one point, maybe a year or two after I originally joined Tlon, I realized that I probably knew upwards of a hundred people, so easily like over a Dunbar number of people by their Urbit names, their Urbit planet names, rather than,</p><p>Theo Jaffee (56:26)</p><p>I love that post.</p><p>Nick Simmons (56:55)</p><p>in most cases, the real names.</p><p>Theo Jaffee (56:59)</p><p>That's pretty cool. And then what about the influence of Christopher Alexander? I know there's another I can talk about that. I'm actually reading the pattern, a pattern language right now. And I didn't even know, I've known about Urbit for like years. I had no idea that Urbit was inspired by Christopher Alexander.</p><p>Nick Simmons (57:04)</p><p>Ah!</p><p>Yeah, I'm not a Christopher Alexander expert. So I would recommend talking to Galen or possibly to some of the OG design team at Tlon Ed Fablefaster has some great thoughts on Christopher Alexander. And really, the whole design genesis of Urbit and Tlon's products is fascinating. I will say that one of the things that really attracted me,</p><p>to Urbit in the first place was a sense of really like that it was important to know and understand the whole teleological history of technology and product design. And it's actually funny, I was just on another podcast from a hut ventures that will be coming out next week. And one of the at the very end, we talked a little bit about you, hey, you know, what would you be doing if you weren't working in crypto or tech?</p><p>And my answer, which has always been true, is that I'd probably be a historian. And I've always been super, super interested in tech history. I have a little bit of a family connection. My grandfather worked in the ENIAC in the 40s. And so I grew up using little punch cards, scratch paper. And I kind of thought that was normal as a little kid. Didn't everyone's grandpa work on computers, right?</p><p>Theo Jaffee (58:24)</p><p>Oh, I saw it.</p><p>Nick Simmons (58:37)</p><p>Yeah, it's in the, I'm assuming you went to the Computer History Museum. Yeah, I know, one of my favorite places. I could spend hours in there. I've been there many, many times. I know, right? I know. Yup, yup. I mean, that's when it was started, I think, the museum. And so they're, yeah, they have a backwards looking perspective there maybe. But yeah, so I've always been super interested in,</p><p>Theo Jaffee (58:42)</p><p>Yeah, yeah, I did. I'll do it.</p><p>I just wish it would have been longer. It ended in like 1995, 2000. I missed the last 25 years.</p><p>Nick Simmons (59:06)</p><p>how this stuff developed. And one of the things that really made clear to me about Urbit is that like, there have been a whole bunch of junctures in the history of computing, the history of network computing and the history of, you know, mass onboarding people into these network computing platforms where particular decisions were made. They were usually contingent on what came before and what the current, you know, ground conditions were at the time. And some of them worked out really, really well for</p><p>I'd say the average person and the ability of developers and entrepreneurs to build what they wanted. Like we're definitely living in something that is far from the worst possible world here. Are we living in the best world? No, of course not. Hence why I work in this stuff, hence why we all work in this stuff. But the thing that really impressed upon me was that there were times when I, for example, early on, I think Bill Gates and some others in their early 90s wanted to essentially encircle.</p><p>the internet and circle some of these protocols. And instead we got TCP/IP, which is the open protocol. We got SMTP for email, which is the open protocol. Although honestly, the dominance of Gmail as the front end to that is eroding that a bit. There are some pretty scary censorship going on with G -Docs at least.</p><p>Theo Jaffee (1:00:19)</p><p>Very. Shout out to my last guest.</p><p>Nick Simmons (1:00:22)</p><p>Oh, really? And so I do think that it's, it's very important to consider, okay, if you're at one of these junctures, and you're in a position to build something or fund something that can, you know, make the right decision, and return agency and return, you know, network effects and coordination ability to the users to the developers to the auditor, nurses, harder than is in the right place, you should do that.</p><p>And that's the highest calling you could possibly have as a technologist. And I really think that the degree to which the Urbit ecosystem and everyone I've met in it considers these questions and knows the history and knows, hey, it's important to build this stuff right, is one of the things that grabbed me in the first place and still gets me out of bed in the morning.</p><p>Theo Jaffee (1:01:12)</p><p>Well, I think that's an excellent place to wrap it up. My answer to where, what would I want to do as a career if I wasn't interested in tech is similar. I would probably want to be an architect. Also Christopher Alexander inspired. So yeah, I'm thrilled to see that he's had an influence on Urbit too. Well, thank you so much, Nick, for coming on the podcast. I really enjoyed this episode.</p><p>Nick Simmons (1:01:34)</p><p>Absolutely, I enjoyed it too. You asked great questions. And if anyone wants to follow up and continue the conversation, you can drop my Twitter handle in the show notes. And as mentioned, my main project right now is being a founding member and partner at Octu Ventures, which is a member -driven venture DAO that invests in seed stage Urbit projects. And we have some writing on our website, octu .ventures.</p><p>And I really try to make that the vessel for most of my energy into Earth these days. And I'd love for people to check that out.</p><p>Theo Jaffee (1:02:14)</p><p>Alright, well links to everything will be in the description, so thanks again and talk to you later.</p><p>Nick Simmons (1:02:18)</p><p>Awesome. Thanks to you.</p>]]></content:encoded></item><item><title><![CDATA[#12: Paul Buchheit]]></title><description><![CDATA[Creating Gmail, Fixing Google, Narrative Understanding]]></description><link>https://www.theojaffee.com/p/12-paul-buchheit</link><guid isPermaLink="false">https://www.theojaffee.com/p/12-paul-buchheit</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Fri, 22 Mar 2024 15:12:02 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/142846348/72867f21ab3ee0eb60410bfe86f32d71.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Paul Buchheit is a programmer and entrepreneur who joined Google as its 23rd employee. He created Gmail, developed the first prototype of Google AdSense, and suggested the company&#8217;s motto, &#8220;don&#8217;t be evil&#8221;. He later co-founded FriendFeed and served as a managing partner at Y Combinator.</p><p>0:00 - Intro</p><p>1:15 - Issues with Google</p><p>3:47 - AI risk</p><p>5:21 - AI centralization and decentralization</p><p>8:01 - Open-sourcing frontier AI</p><p>9:59 - Paul&#8217;s Predictions</p><p>14:28 - Centralization, free speech, and censorship</p><p>24:16 - Trends in ideology</p><p>32:00 - Freeing people of narratives</p><p>35:49 - Alignment</p><p>39:06 - Startups and YC in 2024</p><p>50:30 - Email and communication interfaces</p><h3>Links</h3><p>Paul&#8217;s Twitter: <a href="https://x.com/paultoo">https://x.com/paultoo</a></p><p>Paul&#8217;s Blog: <a href="http://Creating Gmail, Fixing Google, Narrative Understanding">https://paulbuchheit.blogspot.com/</a></p><p>YouTube: </p><div id="youtube2-Tzo6DJT9GOk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Tzo6DJT9GOk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Tzo6DJT9GOk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW&quot;,&quot;belowTheFold&quot;:true,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/1IJRtB8FP4Cnq8lWuuCdvW" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><p>Apple Podcasts: </p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:3785,&quot;numEpisodes&quot;:11,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-02-26T02:56:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>My Twitter: <a href="https://twitter.com/theojaffee">https://twitter.com/theojaffee</a></p><p>Subscribe to my Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:02)</p><p>Alright, we're recording. Hi, welcome back to episode 12 of the Theo Jaffee Podcast. We're here today with Paul Buchheit.</p><p>Paul Buchheit (00:09)</p><p>Alright, great to be here.</p><p>Theo Jaffee (00:12)</p><p>All right, so let's get into some questions. So as we all know, you created Gmail and worked at Google very early on. And now you've become a bit more critical of it, especially after the recent Gemini debacle and a general mission creep from Google in recent years. So when did the current problems with Google start to become apparent?</p><p>Paul Buchheit (00:37)</p><p>Oh, you know, I don't know. I don't want to define myself as a Google critic. But, you know, clearly...</p><p>things have, you know, it's not the same company, it's a much larger company, but I think if you kind of look back at the history of it, sort of the alphabet era, which started in 2015, I think, there haven't really been a lot of great new products that have launched. My sense is that the company kind of pivoted from innovation more to just defending the search monopoly.</p><p>So.</p><p>Theo Jaffee (01:20)</p><p>So the market seems to still think that Google is incredibly valuable. It's pretty much consistently beaten the market over almost every time horizon, except maybe the last month or few months. So are you still on Google?</p><p>Paul Buchheit (01:35)</p><p>Oh yeah, I mean, obviously they have a lot of great assets. It's not going to disappear tomorrow or anything like that. I mean, they're still the leading internet company. But these things take time. It's kind of more a question of where are things headed five years, 10 years. And also just like, you know,</p><p>what kind of products are being created. And in particular with AI, it's important that, you know, it's such a powerful and influential technology, it's important that it's accurate and honest in not distorting the truth or distorting history.</p><p>Theo Jaffee (02:19)</p><p>So that said, a few years down the line, what do you think the bear case and the bull case are for Google? Like in the bear case, do they go out of business or is Google too strong for that? And then what about the bull case?</p><p>Paul Buchheit (02:30)</p><p>No, companies don't really, of that scale, don't really go out of business, right? The bigger question is actually just around the direction of AI. AI is larger than really any technology we've ever witnessed before. And so I don't know, like I said, I'm not like a stock market speculator or anything. I don't.</p><p>be giving anyone financial advice or investing advice. I'm more interested in just kind of where technology is headed and how that impacts humanity and how we do our best to make sure that these technologies really empower individuals and help people to retain their freedom and their rights because AI has the potential to be the most horribly...</p><p>oppressive and dystopian thing we've ever done, or it could be the best thing we've ever done. So I think we need to do our best to push it towards something that's good for humanity and not something that eliminates or enslaves us.</p><p>Theo Jaffee (03:37)</p><p>So thank you.</p><p>Hmm. And when you say eliminates or enslaves us, are you concerned more with the AI itself becoming rogue, or are you concerned more with humans using it for bad purposes, or both?</p><p>Paul Buchheit (03:53)</p><p>You know, I think the latter. I mean, AI is a tremendous superpower. But if you think in terms of, you know, like Orwell's 1984, that was kind of fictional, but with AI, we could actually do something much worse, right? You can monitor everything anyone ever says or does.</p><p>you can distort reality in real time. So if you imagine, you know, you're in lockdown or whatever, right, for the new pandemic, and you're having a video chat, you're on FaceTime with a family member, or maybe you're doing a podcast, you know, the AI could literally be altering things in real time. So you think that you're talking to your, you know, to your mother, and she says, hey, everything's great, but in actuality,</p><p>Theo Jaffee (04:35)</p><p>you</p><p>. . . .</p><p>Paul Buchheit (04:49)</p><p>the AI is just faking the whole thing or altering what people say. And so the potential for just horrible dystopian possibilities is essentially unlimited. But at the same time, the explosion of intelligence also means that we can do really wonderful things. We can solve the hardest problems, things that seem intractable. If you're worried about climate or you're worried about...</p><p>future of education, medical care, we can make things such that the world in 40 years is better for everyone than it is for anyone now, in terms of giving people access to things that make them healthy and happy.</p><p>Theo Jaffee (05:21)</p><p>So you're worried more about AI advantaging the incumbent, like a government, in order to...</p><p>Paul Buchheit (05:44)</p><p>Absolutely, right. So I think the real threat is essentially centralization of power. And certainly, if you look at history, all the worst things people have ever done, whether it's the Soviets or the Nazis or whoever, that happens when someone is able to completely centralize power. And AI has that potential because especially...</p><p>Theo Jaffee (05:56)</p><p>. . .</p><p>Paul Buchheit (06:07)</p><p>when it's the channel through which all information is filtered, it becomes impossible to resist. And we see this kind of thing going on, obviously, in China, where people are not able to speak the truth, and people can get disappeared, and there's really nothing you can do about it.</p><p>Theo Jaffee (06:23)</p><p>Well, what about the idea that AI could remarkably decentralize?</p><p>Paul Buchheit (06:35)</p><p>Well, that would be good. That's what I'm hoping for.</p><p>Theo Jaffee (06:36)</p><p>the internet decent information.</p><p>Like how the internet decentralized information.</p><p>Paul Buchheit (06:45)</p><p>Yeah, I mean, hopefully. Unfortunately, because of the computational requirements of AI, it does have, I think, some tendency towards centralization. Building one of these models requires billions of dollars in compute. And so it does have some tendency towards centralization. But I think this is part of why we need to continue supporting, first of all, multiple...</p><p>AIs, you know, it's much better if we have AIs being built by Google and OpenAI and Facebook and Anthropic, anyone else who shows up. I think that the more competitors we have, the better our chances are of freedom. And also open source AI, I think is really important and powerful. And so a lot of the, I think, concerns that people have around safety,</p><p>Theo Jaffee (07:34)</p><p>Yeah, I'm inclined to agree that locking down AI might actually make things worse, especially if you're worried about some kind of monomaniacal paperclip maximizer if there's nothing to counter it.</p><p>Paul Buchheit (07:41)</p><p>A lot of times their answer is we need to lock this down and we need to centralize it. And I think that's actually probably the most dangerous thing we can do.</p><p>Theo Jaffee (08:01)</p><p>Do you think that frontier AIs should be made fully open source? Or are there safety risks inherent to that?</p><p>Paul Buchheit (08:09)</p><p>You know, I don't know that there's a clear black and white answer to that. I definitely favor open source and I think that we should do our best to support that. And I'm really glad that Facebook has taken that advantage. They've kind of turned out to be the unexpected hero in terms of actually advocating for open source AI. But at the same time, I understand people's concerns. I think it's like there are...</p><p>legitimate risks. So it needs to be an open process where people are continuing to debate this. And the big thing that I think we need to watch out for is just efforts that try to shut that down, legislation that would prohibit open models, things of that sort.</p><p>Theo Jaffee (08:59)</p><p>just generally the -</p><p>optimistic with you know.</p><p>with these risks must be avoided and may not be.</p><p>Paul Buchheit (09:13)</p><p>It's both. It's either the best thing we've ever done or the worst. And it's up to us to decide which future that's going to be.</p><p>Theo Jaffee (09:24)</p><p>you know, generally which one it will be in advance. Too hard to predict.</p><p>Paul Buchheit (09:29)</p><p>future is not decided. We have to, it'll be one of those things, but there's no guarantee, right? The future is inherently uncertain. If one of those things were guaranteed, then there'd be no point in worrying about it, right? The reason we care is because I think there's an opportunity to steer things in the direction of something that preserves and expands freedom and liberty.</p><p>Theo Jaffee (09:59)</p><p>Well, you do seem to be pretty good at predicting the future. I think Jessica Livingston said that you have like one of the best track records for investing of anyone at YC. And then while I was doing some background research, you have an article on your blog called 10 Predictions for the World of January 1st, 2020. And five of those 10 were remarkably spot on. Those five, by the way, were you predict that all data lives in the cloud and will be accessed.</p><p>Paul Buchheit (10:06)</p><p>Thanks for watching.</p><p>Theo Jaffee (10:27)</p><p>with computers as basically stateless caches, which is true for a lot of people. You predicted Android and iPhone will kill off all the other mobile phone platforms. Android will be bigger, but iPhone will be cooler and work more seamlessly with Apple's tablet computer. That was totally spot on. Three, Facebook will be a big success, possibly as big as Google. Yeah, Facebook will turn out to be huge. Five, you were a little bit early on this one, but you predicted that, you know...</p><p>Google will release an amazing question answering service that can answer complex questions and is in many ways smarter than any human. It turned out to be open AI first, but Google still has. And then number 10, politics will evolve much faster than in the past due to the internet and social networks. And all this talk about mimetic warfare and whatnot, that's definitely turned out to be true. So.</p><p>Paul Buchheit (11:14)</p><p>Yeah, I picked the wrong right -wing television personality there.</p><p>I thought it would be Stephen Colbert, but it turns out it was Donald Trump who won the 2016 election.</p><p>Theo Jaffee (11:26)</p><p>Well, still, just in that niche, picking a television personality to win an election was not a trivial prediction. So do you have any similar predictions for the next five, ten years?</p><p>Paul Buchheit (11:38)</p><p>You know, I think that it's getting much harder to predict the future. So I play this game with the startups. Sometimes I like to do kind of a time travel exercise because startups generally operate kind of on a 10 year time scale. So from the time that we would like seed fund a company at Y Combinator until IPO is, you know, roughly 10 years. So for example,</p><p>Theo Jaffee (11:52)</p><p>Thank</p><p>Paul Buchheit (12:07)</p><p>And sometimes longer, you know, Reddit is only going to IPO, I think like this month and we funded that in 2005. So in that case, it was close to 20 years. Um, but, uh, maybe a more recent one is like DoorDash we funded in 2013 and then they IPO'd in 2021. So I like to try to think about things on a 10 year time scale with, with startups. And so the exercise I take them through, um, is to, is to kind of say, like, let's say I get inside of my, um, DeLorean time machine.</p><p>because I'm a back to the future fan. And I punch in 10 years in the future. So here I am in March 2034. And I get out of my DeLorean and I clear away the time fog and kind of take a look around. I talk to some people and I say, hey, what's been like the big change? In my basic theory with startups is that if you have a massive success,</p><p>It didn't just happen on its own. It's always riding on top of some fundamental underlying technological shift. So like the reason that Google became massive wasn't just that Google was that special.</p><p>that the internet itself was exploding. And so in order to create these hundred billion dollar or trillion dollar companies, you need to be riding on top of some underlying technological shift. So like DoorDash and Uber, for example, really exist because of the smartphone. So like those kinds of services are a product of the smartphone and everything that that enabled. And so I try to get...</p><p>Startups understand like what is the underlying technological shift that's enabling you to become this hundred billion or trillion dollar company in 10 years? And part of the problem I started running into was that I was having a harder time looking 10 years into the future and so I kind of like maybe back around 2017 Started noticing that I couldn't see it started to be very hard to look, you know 10 years into the future. We're sort of approaching</p><p>sort of the event horizon. And so it's actually, I don't know what happens in 10 years anymore. We are, I think, currently at the point where a lot of things are being sorted out. So, you know, 2024 is a very important year.</p><p>Theo Jaffee (14:26)</p><p>you</p><p>new lab.</p><p>Paul Buchheit (14:43)</p><p>Well, you know, the character of the AI that is being built, I think in large part derives from the society that creates it. And so if the AI is constructed in a totalitarian environment, I think we end up with like a totalitarian AI. And, you know, to the extent that we have a society that values freedom and individual dignity,</p><p>and we embed those values in the AI, I think that's our best chance for actually having an AI that empowers humans to live better.</p><p>And so, you know, right now, what that comes down to, to me, you know, one of the fundamental issues in our society is freedom of speech. And it's, it's, you know, it's the first amendment for a reason. It's the most fundamental thing because once you lease freedom of speech, nothing else particularly matters because it can all be lied about, you know. And we just had an incident of this with the COVID pandemic.</p><p>where it was, I think almost certainly, I would say, 95 % probability produced through gain of function research in a lab. In Wuhan, quite possibly funded by American government and scientists. And that...</p><p>escaped from the lab accidentally, I believe, and caused a worldwide pandemic. We had lockdowns, we had all of that. But four years ago, you weren't allowed to discuss this fact. So we were four years ago, the world was locking down and people who started asking questions about where the virus came from were censored and slandered. And so if we can't even talk about the most important thing in the world at that time.</p><p>you know, what freedom do we have?</p><p>Theo Jaffee (16:54)</p><p>So now it seems like people are much more allowed to talk about the lab leak hypothesis. So do you think what turned that around was just kind of the natural error correction mechanisms of our civilization? Like, you know, the truth can't be hidden for too long? Or was it like specifically something like Elon Musk buying Twitter?</p><p>Paul Buchheit (17:11)</p><p>Yeah, that was it. That was a big part of it, obviously. He kind of smashed the Overton window. Before you would get shut down, and part of it, which I think people don't even totally recognize yet, is the mechanism to which Twitter was used to control the narrative. And so, you know,</p><p>our understanding of reality is through storytelling. Which facts are reliable? Which people can be trusted? What do these things mean? And so a lot of the old Twitter was about being able to enforce that institutional narrative because not only would you potentially get banned from Twitter for discussing the lab leak,</p><p>But the most prominent voices were the Blue Checks, right? Who were a group of people who were, the original Blue Checks were for the most part aligned with that institutional narrative. And so those voices were the loudest. And so they could always shut down dissident voices. And so one of the things that was obviously most controversial, but I think also,</p><p>maybe more impactful than is realized was his eliminating those that are that original class of blue checks who worked to enforce the institutional narrative.</p><p>Theo Jaffee (18:46)</p><p>you</p><p>What would you characterize the institutional narrative as? Is it wokeness or degrowth or statism or a combination of all the above or something more complicated?</p><p>Paul Buchheit (19:03)</p><p>Yeah, you know, it's a combination of things, obviously. It's a little bit hard to, I think, pin down any one factor, but the core element of it is, again, centralization of power. And so, you know, whether that's through, you know, intelligence agencies or...</p><p>ideologies that seek to impose a single worldview. And all of those things fold in. Degrowth is something that has been in the works for a long time. It depends how deeply you want to dig on these things. And it can be a little bit hard to really go into it because a lot of it, the roots go back pretty far. But if you sort of understand how...</p><p>communist revolution works or something like that, it all comes down to being able to centralize power. There's a good book actually that a lot of people haven't read. Orwell wrote this book, Homage to Catalonia, about his experiences. He was an English guy who was a socialist anarchist and he went to Spain to fight in the</p><p>Theo Jaffee (20:13)</p><p>to what end.</p><p>Paul Buchheit (20:33)</p><p>Spanish Civil War. He wanted to go kill the fascists, so he went there to go fight against the fascists. When he first shows up in Barcelona, he kind of marveling at this wonderful classless society, kind of the anarchists have taken over, it's really awesome. And he goes off to the front to fight and ends up getting pretty badly wounded, he almost died. But when he makes it back to Barcelona, things have completely changed while he was away.</p><p>the communists have been consolidating power in Spain. And what's actually happened is the particular group faction that he was a member of, it was like this United Marxist Workers or something like that, had been reclassified as fascists. So essentially, the communists would always just reclassify whatever group threatened their power.</p><p>Theo Jaffee (21:11)</p><p>you</p><p>Paul Buchheit (21:28)</p><p>as being part of the fascists and his friends and comrades were being disappeared into secret prisons and he actually has to sneak out of the country, he and his wife, to escape a similar fate. And that actual experience informed his telling of 1984, which is this incredibly powerful and predictive...</p><p>story of how things go. And so understanding how the language is used to manipulate our ability to even have intelligent conversations about things, so that if you want to talk about the lab leak, that's a racist conspiracy theory, right? And so now you get branded as like a racist. And of course, Disney doesn't want their advertising to show up next to racist or whatever, right? And so it becomes very easy.</p><p>for people to justify censorship because no one wants racist content or whatever. And so you just start applying these labels in order to control what it is we're even allowed to discuss.</p><p>Theo Jaffee (22:38)</p><p>So to what end do you think this power is being centralized? Just power for power's sake? Or are the people doing this, like, do they have a more specific goal in mind?</p><p>Paul Buchheit (22:50)</p><p>I mean, a lot of it is power for power's sake, right? Power tends to accumulate. So I don't want to suggest that everyone who's participating is part of some sort of grand conspiracy. That's not how things work. Actually, I like to think of like, how does capitalism work, right? Someone who's cutting down trees doesn't know that they're making pencils.</p><p>or whatever, right? Everyone just kind of does their little part of the job and they don't necessarily understand kind of how it all fits together. And so for the most part, you know, everyone is just responding to incentives in their environment. And so a lot of times, you know, the incentives are quite naturally, you know, a politician likes to accumulate power. So for example, if you're a Senator or someone in the House of Representatives,</p><p>The way that you gain power is just by bringing in more and more money. So if you want to get like a committee appointment or something like that, it's essentially like a pay to play system. You have to bring in millions of dollars of donations. And how do you do that? Well, you kind of sell influence, right? And so they're a part of the system, but it isn't like they're consciously saying, hey, I want to destroy America. It's just what they do kind of as a byproduct.</p><p>Theo Jaffee (24:16)</p><p>So when did you start to think about things this way? Did you always kind of think this way or was it more of like a journey, kind of like how Marc Andreessen now thinks a lot like you on this, but back in 2008 you looked at his blog and he was like, you know, an enthusiastic Obama supporter.</p><p>Paul Buchheit (24:31)</p><p>Yeah, I mean, certainly.</p><p>Obama presented, I think, a pretty attractive image of unity. You kind of hope for better. In terms of the larger pictures of how power accumulates, I think it's just something I've always been aware of. My mother would talk about this stuff in the 1980s. I was kind of aware of a lot of these trends 40 years ago.</p><p>Theo Jaffee (25:07)</p><p>Like what?</p><p>Paul Buchheit (25:09)</p><p>I mean, this overall movement towards, essentially, communism. Communism is a confusing term because, again, the marketing is really great. Who doesn't want fairness and equality for everyone? But then when you understand how that's actually achieved, it's, again, through total centralization of power and elimination of individual liberty in a communist society.</p><p>Theo Jaffee (25:18)</p><p>Thank you.</p><p>Paul Buchheit (25:40)</p><p>And again, when I say communist, I mean like state communism, not some sort of theoretical communism, but the kind that actually always exists, right? When you start talking about, we're doing something for the common good, you can rationalize anything, right? And actually, again, if you read the Orwell homage to Catalonia, some of the characters in there are actually saying, well, yeah, even if we kill innocent people, it's still fine, it's for the revolution.</p><p>And it's very easy to rationalize any sort of horrible atrocity when you start talking about this is for the greater good. And a lot of it also ties into the degrowth things, the population reduction. It's kind of out in the open. If you look at it, there's a lot of people who advocate for reducing the global population to about half a billion people, which means, you know,</p><p>eliminating over 90 % of us. And so if you start to understand, okay, if you're trying to eliminate 90 % of the humans, how do you do that? And then you see that they're starting to try to lock down the food supply. In Europe, they're shutting down farms. You start to see how people are being pushed in a particular direction.</p><p>Theo Jaffee (27:06)</p><p>pushed by who just by ideals.</p><p>Paul Buchheit (27:10)</p><p>Again, these things kind of come down to just people responding to incentives.</p><p>If you understand like goals, right? So a lot of the environmental movement, you say, well, if you're in favor of reducing carbon emissions, why aren't you in favor of nuclear power? Right? Because nuclear power would actually solve the problem. People don't want to solve the problem. The problem is a justification to...</p><p>impose more controls. And some of it is just power seeking and some of it is part of, you know, whatever the larger vision is of having a society that's more centrally managed. And some of it I actually think is like a personality trait. Some people just like centralization and I'm kind of like a person who likes decentralization. And it's...</p><p>Theo Jaffee (28:00)</p><p>Thank you.</p><p>Paul Buchheit (28:18)</p><p>There is definitely, and this kind of goes back to AI, right? Where some of the people say, well, in order to make AI safe, we need to have it all done in like one centralized effort, like the Manhattan Project or something where it'll be controlled by experts and people who will act in the common good. And my belief, which I think is kind of backed by history, is that when you get a bunch of experts acting for the common good, you end up.</p><p>something more like the Soviet Union, because what really happens when you centralize power is the power ends up in the hands of kind of the most effective psychopaths, right? Because the way that you rise up in a centralized system is through political means. In a more decentralized system, in like a more market -based system, in order to be successful, you actually have to create value and actually have to deliver value, right? Like you can't make a really successful startup unless you actually...</p><p>make something people want. But in a political system, you can just capture power.</p><p>Theo Jaffee (29:23)</p><p>you</p><p>Well then there's the issue of the people who are actually doing the power capturing. Like your average environmentalist, climate change protester is probably not thinking like, oh, I need to give the government control over the world so we can reduce the population and exert power over everyone. They're thinking, you know, we need to save the planet. We need to save the environment. And they might even be thinking that they don't like control. You know, they'd say they don't like power. So is just, is, you know, the will to power kind of just like an emergent property of a lot.</p><p>Paul Buchheit (29:46)</p><p>Absolutely.</p><p>Theo Jaffee (29:53)</p><p>of people who are moving towards non -wanting power.</p><p>Paul Buchheit (29:59)</p><p>I mean, the example of activists is an interesting one because I actually think in general most people are good, whether they're communists or fascists or whatever. People get sold on these things with good intentions. I think about North Korea, there were people who fought and died in that North Korean army to...</p><p>to only to have themselves and generations of their descendants locked up in what is essentially a giant prison. But I'm sure the people who fought for that, they didn't understand what they were fighting for. I'm sure there were good people who thought they were fighting for freedom and equality and these good things. The problem isn't for the most part bad people, it's bad ideas, it's bad narratives. And so part of it is this idea that if we only...</p><p>Part of the environmental narrative is essentially that people left their own freedom will destroy the world. And a lot of it is because of a zero -sum belief system. This is like a Malthusian, right? This idea that the population will always outstrip the resources. And again, these are ideas that go back a very long time. And so they believed that the only way...</p><p>to create a society that isn't just in permanent famine and starvation is to limit people's freedom. Because if people have freedom, they'll just keep reproducing and will always have a shortage. And so, for example, China's one child policy was viewed as being very smart and progressive by these kinds of people because they...</p><p>They said, you know, if people are given the freedom to decide how many kids they'll have, they'll have too many children.</p><p>Theo Jaffee (32:00)</p><p>So.</p><p>At the end of your long pin post about the narrative on Twitter, you wrote,</p><p>interesting I want to get back to that. How would you go about like doing this in society in a society where people are so you know where people cling so hard to their narratives? How do you free them of it?</p><p>Paul Buchheit (32:44)</p><p>I think, again, just awareness. Being able to at least tell multiple narratives, and I think there's an awareness of how much narratives drive things. It's slowly entering into awareness. One of the ideas I would like to do, and I haven't really pursued enough, is I think it would be interesting to create something that's almost more like a news publication, but that...</p><p>reports from this meta -narrative context. And so if you look at whatever the issues of the day are, you know, let's say like just looking at Twitter earlier, you know, there's these stories about they deployed the National Guard into the New York City subway, right? And so there's a lot of storytelling around like what's going on there, right? But generally, you just get one side's story or the other side's story, but I think,</p><p>what I would like to see is both sides, both narratives kind of laid out side by side on essentially equal footing. And there might even be a third narrative. And so I think if you actually put things side by side, it creates a kind of awareness because you start seeing like, oh, all of these things are actually just stories. And that's not to say that the stories are equally good or equally beneficial or harmful, but they are all ultimately just stories.</p><p>There's an author I really like, Byron Katie. She writes a book called Loving What Is. It's completely unrelated to all of this, seemingly. But her practice is one of actually identifying the narratives in your own life, the thoughts that cause you trouble. And so people get very locked into this idea, oh, my mother didn't love me, or my children should...</p><p>pick up their socks. Actually, ironically, the author's ex -husband is named Paul, and so she's constantly, her examples are always like, Paul should do this, Paul should do that. And so she teaches a practice of how you can essentially identify these stories. And then she creates essentially counter narratives. So she asks you to do these turnarounds and find three ways in which...</p><p>these turnarounds are at least true or even more true than the story that you initially believed. And she says, essentially when you do that, you don't let go of the story, the story lets go of you.</p><p>And because, you know, these stories, when you believe them...</p><p>it seems so real because it takes over.</p><p>but when you see a bunch of stories side by side, it kind of loses that.</p><p>Theo Jaffee (35:49)</p><p>Interesting. And in the very last sentence, you said this is the beginning of alignment. So did you mean human alignment, AI alignment, or both?</p><p>Paul Buchheit (36:00)</p><p>Both. Right. You know, we are evolving as a species and this is, I think the biggest change since the advent of agriculture, at least. You know, in terms of how our species functions and is organized. And the reason I think agriculture is so important is before agriculture, humans were...</p><p>you know, just kind of these like little tribes of, you know, small groups of a hundred people, something like that. And agriculture is what enabled truly like the rise of the machines. Because that's when we started having, you know, large organizations, governments, cities, corporations, and these things are already sort of a kind of meta life form, right?</p><p>corporation has a life of its own, a large corporation or a large government, no one truly runs anything. Biden isn't actually in charge of the United States government. He's obviously very influential. CEO of Google isn't actually in charge of Google. These are large collective organisms.</p><p>Theo Jaffee (37:22)</p><p>Is Elon Musk in charge of Tesla?</p><p>Paul Buchheit (37:25)</p><p>Yeah, more so than most CEOs. But again, it's not like he has magical power, right? Part of what makes him very effective, I think, is that he's able to insert his thinking into the employees. I don't if you read the Walter Isaacson book. It's really good. Yeah, it's worth reading. But...</p><p>Theo Jaffee (37:45)</p><p>Oh yeah, it's on my bookshelf.</p><p>Paul Buchheit (37:51)</p><p>And actually the one with Steve Jobs is really good too. And I think part of what makes these characters so effective is that they show up. Like if there's a problem on the assembly line and it's holding up production, Elon is like there. He's there alongside the person and he's asking like, asking hard questions. Like why is this a problem? Like there's one section in the book where the production was being held up because they needed some kind of like,</p><p>plastic part to put over the battery before they shipped it or something like that. And he just kind of shows up and starts questioning all of the assumptions. And it kind of came down to they didn't need this part at all. And so things were getting held up because of the lack of a part that they didn't even need. So he has this wonderful algorithm of like questioning each part of the things, you know, like the best part is no part, the best design is no design. And by showing up at these critical points,</p><p>and actually essentially micromanaging it, I think that really gets into the culture and that gets into everyone's head. So I can imagine if you're at Tesla, the last thing you want is for Elon to show up. Right?</p><p>Theo Jaffee (39:06)</p><p>Yeah. So switching topics a little bit, this is actually a pretty good segue into startups. So you're still a managing partner at Y Combinator, right?</p><p>Paul Buchheit (39:16)</p><p>I'm, I think the word is something more like emeritus. I show up, but I don't do group anymore. So the way that we run Y Combinator is, you know, there's a couple hundred startups per batch, but then it gets split up into smaller groups, and then each group has a number of partners who's responsible for taking those startups through the program. And so I...</p><p>Theo Jaffee (39:45)</p><p>So if you could start all over again in 2024, right? If you were, you know, 20 something, just getting out of college and you wanted to build a startup, what kind of startup would you build? Always AI? Only AI?</p><p>Paul Buchheit (40:02)</p><p>Obviously AI is really important, but I mean I think it has to come from the person. It has to be what you're interested in. Part of what makes Y Combinator work is that we don't pick the ideas. We pick the founders and the founders bring the ideas. So there's often this misunderstanding that somehow we've decided like this is a thing that's hot this year and that's never the case. It really comes down to what do the founders believe in. And so...</p><p>I think it needs to come from your own experience and from your own insights in terms of what makes a good startup. So I don't put out here, here's the thing you should do. It's hard for me to know what I would be thinking if I were 20 years old.</p><p>Theo Jaffee (40:51)</p><p>And then for the future of Y Combinator, how do you see the future of Y Combinator playing out over the next few years? What kinds of startups should they fund, especially an AI that won't just get blown up by the next OpenAI release?</p><p>Paul Buchheit (41:06)</p><p>Again, you know that we are our strategy is essentially we fund really smart and effective founders and and we don't a Lot of times they pivot right and so a lot of times the idea that they come in with is not the one that's good And so the idea is we want people who are able to move fast and iterate and and you know make intelligent decisions</p><p>One of the biggest predictors of sort of success is basically just how quickly does a person iterate and so it's fine to come in with like a dumb idea or whatever. The thing that's not fine is just to get stuck on that. And so a lot of what we do is essentially just push people, you go talk to customers, whatever. And a lot of times the, you know, concerns about competition I think tend to be overblown. There's some things that are just like obvious if you put yourself...</p><p>Theo Jaffee (41:50)</p><p>you</p><p>you</p><p>Paul Buchheit (42:03)</p><p>Like one of the things people come in, you know, I get pitched on a lot of times are like email ideas and it's kind of like it's like Gmail but with one extra feature and you know, that's not really like a viable business. You have to be doing something that isn't just like trivially done by a larger competitor. But you know, it's kind of a law of large numbers. We fund a couple hundred startups per batch and it's...</p><p>it's, you know, we can take a lot of bets.</p><p>Theo Jaffee (42:38)</p><p>So what startups or founders right now do you currently think are the most promising and why? Like a lot of people talk about perplexity, for example.</p><p>Paul Buchheit (42:47)</p><p>Yeah, Perplexity is cool. I think they're doing a good job. And again, actually, that's a great example of a company that's iterating very quickly, right? They're continually improving the product. They're continually engaging with users and making it better. Will they be able to survive long term? Obviously, they're competing directly with Google. That's pretty hard. But they're going after something that's a real need.</p><p>And the advantage they have versus Google is that they don't have an existing business. So the problem that Google has is they actually have this really amazing business with putting ads on search results. But owing in part to the fact that they've had essentially a search monopoly over the last decade, the search results page has just gotten flooded with ads to the point where sometimes you search for something and all you get is ads for the first</p><p>the entire screen is just full of ads. And so perplexity has the advantage that they don't have that legacy to protect. One of the tricks that a startup can do is that they can essentially destroy an incumbent's business because an incumbent, this is the classic sort of innovator's dilemma, let's say the new business is only 10 % as good as the old business, that's terrible for the incumbent but still great for a startup.</p><p>Theo Jaffee (44:13)</p><p>Do you think perplexity in Google can coexist?</p><p>Paul Buchheit (44:18)</p><p>Yeah, certainly. I mean, you know, these things are always... When something's this big, things always play out differently than you expect. You know, if you go back in time 20 years, when we were at Google 20 years ago, we were just about to launch Gmail, actually, you know, April 1st, 2004. At the time, we were very worried about Microsoft. And so actually, like inside of Google, people were like really scared of Microsoft and, you know, there were this terrifying threat.</p><p>And here we are, you know, 20 years later the two companies obviously are competitors, but they coexist You know very successfully they're both multi -trillion dollar companies and But Microsoft has shifted a lot right like they 20 years ago Microsoft was entirely this company based on the windows and office Monopoly and that's not their business anymore. I mean they still sell that stuff, but that's not that's not what makes Microsoft</p><p>Windows has sort of been de -emphasized and now they're just this enterprise software company.</p><p>Theo Jaffee (45:25)</p><p>What other startups or founders do you think are very promising right now?</p><p>Paul Buchheit (45:31)</p><p>I don't have a catalog for you, unfortunately. Anything with AIs, interesting to look at. Because obviously, I think that's the biggest trend is how does all of this play out. We have a lot of companies that are looking at storytelling. I think it's kind of...</p><p>really intriguing possibilities in terms of just enabling people to do really great things like, you know, my daughter writes a lot of like fan fiction and things like that and, you know, isn't it going to be really cool when you can just automatically turn that into something that's, you know, a quality animation or something like that, animated television shows. There's a lot of things right now that requires a very large budget to produce.</p><p>And within probably a year, just an amateur will be able to make something just as good using these AI tools. And so I think we'll see an explosion of creativity and of content because it's a thing that enables people. And obviously that's like disruptive in a lot of different ways, but it's remarkably hard to predict these things. In hindsight, it's like super easy, but...</p><p>I can tell you at the time it's never as easy as it looks in hindsight. One of the examples that comes to mind was at Google, again about 20 years ago we had this product called Google Video and most people haven't heard of it but it actually launched before YouTube. And so when we were working on Google Video I actually remember we'd be like, what are people gonna upload? What even is there? It was a thing like YouTube basically.</p><p>you could upload videos and we would host them. At the time, the only thing we could think of is, well, probably they're just going to upload copyrighted content and porn. What else is there, really? What could people possibly upload that would be so interesting? There was a lot of skepticism of the Google video product. Also, part of the reason Google video failed is because they are overly cautious. When you would upload,</p><p>Theo Jaffee (47:36)</p><p>you</p><p>Paul Buchheit (47:55)</p><p>a video to Google video, you had to fill out this great big form showing who are the actors and directors, as though it were like a Hollywood production, and then it would have to go through a review process. And then the startup, YouTube, just comes along and just makes this thing where you just upload and that's it. And you don't have to jump through any of those hoops and it's live. And they went viral with some videos, which were, of course,</p><p>Theo Jaffee (48:14)</p><p>Thank you.</p><p>Paul Buchheit (48:23)</p><p>Copyrighted I think the first viral YouTube video was a Saturday at live sketch Lazy Sunday, which is like a really good. It was hilarious but now you know YouTube is this incredible repository of Educational content everything imaginable, but you know you can learn so much on YouTube it's you know if you want to learn how to be an electrician or my son is really into 3d printing he watches all this stuff about 3d printing and</p><p>material science and mathematics and it's YouTube is probably the greatest like educational resource that has ever existed in like the history of humanity and we didn't anticipate that</p><p>Theo Jaffee (49:03)</p><p>So you talk about AI leading to like an explosion in creativity. And one important realm of creativity is software. You know, Paul Graham likes to talk about hackers and painters, you know. Software is like art. So do you think AI is fundamentally like an enhancement for existing software developers, or is it more of a replacement? Like, do you think there will be more or fewer developers, if you had to guess, you know, these predictions are hard, but if you had to guess, do you think there will be more or fewer developers in like five years?</p><p>Thank you.</p><p>Paul Buchheit (49:34)</p><p>Um, both. So I mean, what it means to be a developer will change, right? Um, so as the tools get better, um, the way that you use them changes, what it, the accessibility certainly improves when you can just describe what it is you want. Like, what does it mean to be a developer when you're really just talking to an AI and telling it what you want and then kind of iterating on that. So what I imagine is, is essentially a dialogue system, right? Where it creates a product.</p><p>And it's, and you're like, well, no, not quite like that. You know, you can continue to iterate and refine what it is you're looking for. But certainly it's going to change a lot. But again, you know, five years is like forever at this point because things are moving so quickly. So it's, like I said, AI makes it extremely hard to look into the future at this point.</p><p>Theo Jaffee (50:30)</p><p>Mm -hmm. So switching topics a bit to talk about Gmail and email in general. So obviously you've been in this field for like a very, very long time. But in 2024, how important do you think email still is considering we have all kinds of other forms of communication like Slack for the workforce and Twitter DMs for, or Instagram DMs for personal communication?</p><p>And email is increasingly used by scammers and whatnot.</p><p>Paul Buchheit (51:02)</p><p>Yeah, I mean, it's always been popular with scammers. I think that the thing that email has is just the fact that it's universal. It doesn't belong to any one platform, and it's kind of the thing that binds everything together. So when you sign up for an account on Amazon, it doesn't DM you on Instagram, right? It sends you an email. And so it's kind of like the base layer. But yeah, certainly,</p><p>a lot of communication moves off of email, which is good. Email is not always the best way to communicate. I think especially for anything that's kind of complicated, I would avoid email if there's like, if you have a stressful issue to discuss, don't ever do it over email. Email is good for just factual things, right? It's good for, here's a document.</p><p>here's a receipt, whatever. But yeah, I think it's good that we're creating other channels of communication.</p><p>Theo Jaffee (52:10)</p><p>So when Gmail came out, you pioneered a whole bunch of new text interfaces that previous email systems didn't really have. I believe Gmail was the first to have very fast search, and it was the first to natively support email threads and whatnot.</p><p>And so now we're in like another big opportunity with text UI, which is communicating with LLMs. So what would you want from a text interface or any interface with a large language model?</p><p>Paul Buchheit (52:43)</p><p>I'm not sure I understand what you're asking. How would I change the interface of like chat GPT you're asking?</p><p>Theo Jaffee (52:50)</p><p>Yeah, chat .gbt or any other kind of AI. Like how would you want to change it so that it's better to interact with, just like Gmail is better to interact with other humans and other email platforms.</p><p>Paul Buchheit (53:01)</p><p>I think it's a very different situation. So part of the reason we were able to innovate so much on Gmail was that it had been, email had been kind of a stagnant product. It just hadn't changed in a long time. Like, you know, our competitors at the time in the webmail space were Popmail and Yahoo mail, which are just like these incredibly clunky, I mean, I guess you're young enough, you've never seen these kinds of things. They were...</p><p>the whole webpage load would reload every time you did anything and it was just covered in ads, you would get, the default quota you would get from Hotmail was two megabytes. And you know, for comparison, like a photo that you take on your iPhone is probably like five megabytes. Right? And so like it was just such, there were such,</p><p>poor products and they had just kind of been stagnant for a long time. So there was a tremendous amount of space for us to innovate because no one had done anything in a long time. If anything, it seemed like they were making the products worse. And so there is no analogous situation. It isn't like the chat GPT interface is 10 years old, right? Like this is the, the AI stuff is the most innovative space going on right now. And you have,</p><p>you know, a lot of really smart people at Google and OpenAI and Thropic, all these companies are competing in this space. So I don't, I wouldn't try to compete on interface. You know, there's a lot of people working on it. There's, there's, you know, perplexity, all these apps are trying to present different interfaces to it. The thing I think that is going to happen, and partially this is just like a technology improvement is essentially.</p><p>the point at which I can just like whip out my phone and it's like opening the camera app and then I can just start having a conversation with the AI about whatever it is the camera is pointed at. So I think that the next step is that you go beyond kind of that simple.</p><p>chat interface, essentially, to be more one of a conversation with an intelligent agent that's actually on your device. And I think that's really cool, because then I just open it up, and it has the camera and the microphone, and then I just have a conversation with it in real time.</p><p>Theo Jaffee (55:14)</p><p>like the Ragnar one.</p><p>So are you bullish on the Rabid R1? Have you seen it? Yeah. It's pretty similar to what you're talking about. There's also just, I mean, the way I solve this particular issue is I have an iPhone 15, so I just have an action button, and then I map it to chat GPT, and it'll just open it, and then I can talk to it from there. But it's still not the same as having something, you know, live always on at the OS later. I think Apple made both.</p><p>Paul Buchheit (55:26)</p><p>I haven't looked at it.</p><p>Mm -hmm.</p><p>Yeah, I think one thing I'm waiting for, it'll be interesting, is just what Apple, when will Apple finally show up? Because they're arguably in some ways in the best position because they own the platform that is in my pocket. And so in terms of being able to have something that's really tightly integrated, they're in a really good position. But it seems like they kind of missed the boat in terms of actually building good AI models.</p><p>Theo Jaffee (56:17)</p><p>Well, have they missed the vote or do they just have something really cool that still hasn't come out yet? A lot of people think they're gonna...</p><p>Paul Buchheit (56:22)</p><p>Could be, that could be it. But usually the thing is that, you know, it takes a while and it takes iteration to actually launch a product and have it be good. You know, we were kind of in the same place a year ago or more than a year ago when ChatGPT launched and everyone just assumed that Google had this thing that they could just launch the next day and that would be better. And obviously that isn't true. It actually, there's a big difference between having a product that kind of like,</p><p>you're working in a research capacity and actually having something that's out in the world and dealing with the more adversarial environment of having actually millions of users working with it. And of course these products, the AI products learn, right? I mean, they're learning from your interaction, right? ChatGPT is learning from you using it.</p><p>Theo Jaffee (57:21)</p><p>Yeah. So I think that's a pretty good place to wrap it up. So thank you so much, Paul Buchheit, for coming on the show.</p><p>Paul Buchheit (57:27)</p><p>Great. Sure. All right. Great chatting.</p><p>Theo Jaffee (57:31)</p><p>Yeah.</p>]]></content:encoded></item><item><title><![CDATA[#11: Bryan Caplan]]></title><description><![CDATA[You Will Not Stampede Me: Essays on Non-Conformism]]></description><link>https://www.theojaffee.com/p/11-bryan-caplan</link><guid isPermaLink="false">https://www.theojaffee.com/p/11-bryan-caplan</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Mon, 26 Feb 2024 02:56:10 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/142051277/0bd171e5cfc2f0bda9b8938dc6c01875.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Bryan Caplan is a professor of economics at George Mason University, research fellow at the Mercatus Center, adjunct scholar at the Cato Institute, writer at EconLib and Bet On It, and best-selling author of eight books, including <em>You Will Not Stampede Me: Essays on Non-Conformism</em>, the subject of this episode.</p><p>0:00 - Intro</p><p>2:04 - The Next Crusade</p><p>3:44 - Moderating X</p><p>6:11 - Inventing Slippery Slopes</p><p>8:04 - Right-Wing Antiwokes</p><p>10:20 - Nonconformism and Asperger&#8217;s</p><p>12:02 - Making society less conformist</p><p>16:44 - The rationality community</p><p>20:30 - Polyamory</p><p>23:28 - Caplan vs. Yudkowsky on methods of rationality</p><p>26:40 - Updating on AI risk</p><p>29:35 - Checking your nonconformity</p><p>31:10 - Making LinkedIn not suck</p><p>33:53 - The George Mason economics department</p><p>38:35 - Does tenure still matter?</p><p>40:03 - Improving education</p><p>46:50 - Should people living under totalitarianism conform?</p><p>49:30 - Natalism and birth rates in Israel</p><p>51:19 - Hedonic adaptation in the age of AI</p><p>53:52 - Should we abolish the FDA?</p><p>57:15 - Being a prolific writer</p><p>1:00:30 - Bryan&#8217;s writing advice</p><p>1:02:35 - Outro</p><h3>Links</h3><p>Bryan&#8217;s Twitter: <a href="https://x.com/bryan_caplan">https://x.com/bryan_caplan</a></p><p>Bryan&#8217;s Blog, Bet On It: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:820634,&quot;name&quot;:&quot;Bet On It&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2d45a1-c3a4-4fe1-bc20-e8e00e0c60b6_1280x1280.png&quot;,&quot;base_url&quot;:&quot;https://www.betonit.ai&quot;,&quot;hero_text&quot;:&quot;Caplan and Candor&quot;,&quot;author_name&quot;:&quot;Bryan Caplan&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.betonit.ai?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!iEMP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2d45a1-c3a4-4fe1-bc20-e8e00e0c60b6_1280x1280.png" width="56" height="56" style="background-color: rgb(255, 255, 255);"><span class="embedded-publication-name">Bet On It</span><div class="embedded-publication-hero-text">Caplan and Candor</div><div class="embedded-publication-author-name">By Bryan Caplan</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.betonit.ai/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><p><a href="https://www.betonit.ai/"><br></a><strong><a href="https://www.amazon.com/You-Will-Not-Stampede-Non-Conformism/dp/B0CQPJ6DCT/ref=tmm_pap_swatch_0?_encoding=UTF8&amp;dib_tag=se&amp;dib=eyJ2IjoiMSJ9.95bGnREBhOREb8kL8JeVpjT4w1MPCEfRYcUNZuMAEHVdVGJYv9Ns80cVoAGLtZJPOA6b5DTvnMfO6tF6ZAfRybM2TL7EtaHQlsvbOZVOZro.EzrhsuVDhljbtjiqE_YqD9_RP0YXHFyVHMYEu9GOviI&amp;qid=1708877990&amp;sr=8-1">Buy </a></strong><em><strong><a href="https://www.amazon.com/You-Will-Not-Stampede-Non-Conformism/dp/B0CQPJ6DCT/ref=tmm_pap_swatch_0?_encoding=UTF8&amp;dib_tag=se&amp;dib=eyJ2IjoiMSJ9.95bGnREBhOREb8kL8JeVpjT4w1MPCEfRYcUNZuMAEHVdVGJYv9Ns80cVoAGLtZJPOA6b5DTvnMfO6tF6ZAfRybM2TL7EtaHQlsvbOZVOZro.EzrhsuVDhljbtjiqE_YqD9_RP0YXHFyVHMYEu9GOviI&amp;qid=1708877990&amp;sr=8-1">You Will Not Stampede Me</a></strong></em><strong><a href="https://www.amazon.com/You-Will-Not-Stampede-Non-Conformism/dp/B0CQPJ6DCT/ref=tmm_pap_swatch_0?_encoding=UTF8&amp;dib_tag=se&amp;dib=eyJ2IjoiMSJ9.95bGnREBhOREb8kL8JeVpjT4w1MPCEfRYcUNZuMAEHVdVGJYv9Ns80cVoAGLtZJPOA6b5DTvnMfO6tF6ZAfRybM2TL7EtaHQlsvbOZVOZro.EzrhsuVDhljbtjiqE_YqD9_RP0YXHFyVHMYEu9GOviI&amp;qid=1708877990&amp;sr=8-1"> on Amazon</a></strong></p><p>Playlist:</p><div id="youtube2-sdJRQ6924HY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;sdJRQ6924HY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/sdJRQ6924HY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Spotify:</p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW&quot;,&quot;belowTheFold&quot;:true,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/1IJRtB8FP4Cnq8lWuuCdvW" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><p>Apple Podcasts:</p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:5118,&quot;numEpisodes&quot;:10,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-12-26T01:03:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p>My Twitter: <a href="https://twitter.com/theojaffee">https://twitter.com/theojaffee</a></p><p>My Substack: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:989123,&quot;name&quot;:&quot;Theo's Substack&quot;,&quot;logo_url&quot;:null,&quot;base_url&quot;:&quot;https://www.theojaffee.com&quot;,&quot;hero_text&quot;:&quot;Technology, business, statecraft, and understanding the world.&quot;,&quot;author_name&quot;:&quot;Theo Jaffee&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://www.theojaffee.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><span class="embedded-publication-name">Theo's Substack</span><div class="embedded-publication-hero-text">Technology, business, statecraft, and understanding the world.</div><div class="embedded-publication-author-name">By Theo Jaffee</div></a><form class="embedded-publication-subscribe" method="GET" action="https://www.theojaffee.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h1>Transcript</h1><p>Theo Jaffee (00:00)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Hi, welcome back to Episode 11 of the Theo Jaffee Podcast. We're here today with Bryan Caplan.</p><p>All right, so let's get into some questions. First question, in your essay Crusades and You, you talk about the eight crusades of hysteria and herding that you've lived through. Islamist Iran, the war on drugs, Free Kuwait, the war on terror, the Iraq War, the 2008 financial crisis, COVID and BLM. So do you have any ideas about what the next crisis might be, crusade, or do you just have no way of knowing?</p><p>Bryan (00:35)</p><p>Hmm. Hmm. Gee, that's a really tough one. Yeah, if you could figure out what the next crusade's going to be. I mean, a lot of this does hinge upon there being a shocking event. I think there wouldn't have been any George Floyd protests without George Floyd. It really does depend upon having the right shocking event at the right moment. In terms of what would be next, hmm.</p><p>Yeah, I mean, I really wish I knew. I mean, normally I will say I don't feel like I've been very good at foreseeing which things would happen next. Obviously, I didn't see the Israel-Palestine thing. I didn't see Ukraine coming. I was writing about it as a possibility, but that's very different from saying that's the one.</p><p>Theo Jaffee (01:21)</p><p>Do you think Ukraine would count as a full -blown crusade on the level of the others?</p><p>Bryan (01:25)</p><p>No, no, it's more of a minor crusade. I don't think we've had any true full -blown crusades since COVID. You don't have them all the time. That's at least one of the saving graces is that probably most years there isn't any one issue that everyone is supposed to be thinking about and getting worked up over. But maybe one year and three is in that category.</p><p>Theo Jaffee (01:48)</p><p>So in the identity of shame, you talk about the dangers of large, unselective groups.</p><p>So one such group is X, Twitter. So how do you think it should go about moderating itself to, you what's the right amount of selectiveness, if any, to avoid trampling free speech?</p><p>Bryan (01:55)</p><p>Well, it's an interesting point. It's not like, at least I've never met anyone who identifies with Twitter itself or X itself. Even Elon Musk is not going to say, I love everything happening on my platform. It's all fantastic. So that is very different from what I'm talking about in that essay, which is if you identify as Irish, then you sit around talking about how great everyone is Irish ever was and defending Ireland against any possible criticism. I don't think that...</p><p>Twitter, or any social media platform actually falls in that category. In terms of what they could do in order to improve their brand, I think a lot of what they have done since Elon is improved the brand from being a place where woke voices only are wanted to an actual vibrant center of argument, and one where they are not trying to stamp out any particular view. And they're just saying, you there's bad views, and it's not our job to go and get rid of them.</p><p>So in the end,</p><p>that's most of what the brand is. I think that is actually a brand worth defending. Same thing goes for Substack, by the way. So Substack recently has come under pressure to go and hunt down possible Nazis and get rid of them. And I wrote a piece saying, this really is a strong example of the slippery slope where once you get rid of them, who's next? It seems unlikely that that would be the only group that you would get rid of and then you would stop because the people that want to get rid of them aren't the kind of people to stop.</p><p>Theo Jaffee (03:35)</p><p>First they came for the communists and I did not speak out</p><p>Bryan (03:38)</p><p>Yeah, yeah. It's important to realize that often the slippery slope argument is wrong. You need to go and look at particular cases and see what's going on here. I do think wokeness is one where if you did not have a slippery slope argument before, wokeness would cause you to invent it just to see how eventually things that seem to be completely normal become forbidden thought crimes, which is just weird. All the way to, there's two genders, it's like 10 years ago.</p><p>So that's something that people will get in trouble for thinking. What is the alternative view even? And now people do get in trouble for it, strangely.</p><p>Theo Jaffee (04:16)</p><p>Can you go into a bit more detail about what you mean by inventing a slippery slope?</p><p>Bryan (04:21)</p><p>Right. So the slippery syllable argument, which we've all heard, says if you go and make one exception to a good rule, you won't just wind up making one exception. You'll wind up making other exceptions and further exceptions, and finally there is no rule. There's a great scene in the Brazilian movie City of God where they start off with this one character who says, well, look, I'm going to go and commit some crimes only against bad people.</p><p>and we're not going to actually kill any innocent people here in the Brazilian underworld. Then what happens is they come to a point where either you have to kill an innocent security guard or get shot yourself and shoot someone and say, eh, exception approves the rule. And then the voice of admiration says, and then the exception became the rule. And then you get a montage of all the horrible things they start doing. So that's the slippery slope argument in general. Not always true, obviously. There are exceptions that we make that don't spiral out of control into...</p><p>eviscerating the original rule. It requires some judgment, but also just some experience in seeing what kinds of exceptions eventually spread far and wide. What I would say is that in the case of Woken, it's one where the exceptions that started being made just expanded so rapidly and in directions that would just have been confusing to almost anyone if you had forecasted 20 years ago.</p><p>If you just imagine going back in time 20 years and saying the following things will be reasons for a person to be shunned, you would just like, what? The story, of course, is not, you don't just wake up and say we're gonna start shunning people for the following list of things almost everybody believes. Instead, you start with marginal cases and then you shun some more and more and more and finally you end up where we are.</p><p>Theo Jaffee (06:09)</p><p>Are you as worried or maybe not as worried, but how worried are you about right -wing anti -wokes compared to the woke left?</p><p>Bryan (06:17)</p><p>Yeah, I'll say about as low as you can be while still being positive. I think they just have so little cultural influence. And the cases that people have pointed to of them abusing power, I think when you actually study the facts, I don't think it is reasonable to think of them as abusing their power. So Florida is probably the main case that people talk about. This is one, look, you've got public schools, they got a curriculum, and what's gonna be in the curriculum? Should it be the...</p><p>what a pile of woke dogma or should it be regular stuff? And you're like, well, you can't do both. And choosing between those, yeah, I don't see why it should be woke dogma. On terms of any kind of censorship on college campuses, and if you actually know how college campuses work, this is just absurd to be a worry. It is such a remote possibility that anyone is going to have to worry about this in real life, right? You know, it's a big world, so you can find very isolated examples, but it's really rare.</p><p>And if you understand how universities work, and I do, because I've been in universities now for 27 years, the entire DNA of the system exists to go and promote wokeness and crush dissent. They have a bunch of rules that have hindered them from doing it, including, of course, tenure. Woo, tenure. Valuable for me, because I actually am a dissident, not necessary for the others. But in any case, one of the...</p><p>The simplest examples to me is, agreement studies departments, many people feel like it would be 10 to about to censorship to get rid of them. It's like, well, suppose we had departments of creation studies being funded by taxpayers. Would it be censorship to get rid of those? Like, no, I think it's a violation of the first amendment that you have taxpayer support for them in the first place. When you have an academic discipline that actually is just dogmatic propaganda where you cannot be a practitioner of the discipline while saying,</p><p>highly critical things about it, then yeah, I don't think that there is an issue of academic or intellectual freedom. It's the issue is the other one around of taxpayers being forced to support a secular religion.</p><p>Theo Jaffee (08:25)</p><p>So what do you think about the relationship between nonconformism and Asperger's? Because Peter Thiel has said, you know, individuals with Asperger's have an advantage in Silicon Valley. And Elon Musk has said that he has Asperger's, and of course, he's wildly successful. So what do you think about that?</p><p>Bryan (08:42)</p><p>Yeah, great question. I would say two things. First of all, that Asperger's people do not really need to think of nonconformism as a conscious philosophy because they're doing it already. So in a way, there's this old line, if I'm not here to help the saved, I'm here to save the sinners. So similarly, the reason you write a book about nonconformism is not primarily to go and tell Asperger's, people with Asperger's to stop conforming, they're already not doing it, but rather to go and get people, the vast majority that,</p><p>are paralyzed by fear of strangers judging them and point out that that is a silly fear to have in the modern world. See, the main reason why my book is useful for people who are on the spectrum is that I do emphasize being strategic about it and realizing a lot of times being nonconformist is fine or helpful, but there's other times when it is actually going to hurt you in real life and to recognize the difference between those cases.</p><p>Theo Jaffee (09:41)</p><p>We</p><p>think.</p><p>Bryan (09:42)</p><p>Right, and as to how you would do it, I'd say step one is try small deviations and see what happens to you. Right, so start small, see if people freak out at you. If they don't, you can probably go a bit further. If on the other hand, the smallest deviation gets you crushed, that's a different story. So if you say, well, I'll just do a small deviation, I will refuse to do a foreign language in high school. Yeah, you might not even be able to graduate high school if you do that, sorry.</p><p>Theo Jaffee (10:07)</p><p>What do you think can be done to make people on the whole less conformist? Not like individual people, but society. Like, to what extent is this even possible and not baked into human nature?</p><p>Bryan (10:18)</p><p>Well, it does vary quite a lot between countries. If you go to Japan, I think they're obviously a lot more conformist than we are. I think they themselves will agree that we are less conformist than they are. So since it is something that varies, it can't be that everybody is always at the same maxed out level. Obviously, even in Japan, there's people that do things like dye their hair, and the first Japanese person to dye their hair was definitely not conforming, in a country where pretty much 100 % of the people are born with black hair.</p><p>Let's see, so what can be done at the societal level? A lot of it does hinge upon individuals doing it. And if individuals do it, it becomes easier. So that would be where I would start. Probably, in terms of what arguments are the ones that are most helpful, the honest one is just saying, look, we have a lot of these emotions that come from our ancestral environment where we lived in bands of 20 to 40 people. And the modern world is so different from that.</p><p>Historically, there just wasn't any such thing as anonymity, and now anonymity is the main thing we have vis -a -vis almost every other person in the world. And then to say, well, we've got these emotions that don't really fit our modern environment, so you can either keep doing that stuff that doesn't really optimize for the situation we're in, or you can try to do something else. Obviously, it's really hard for people to go against very strong evolved emotions, but for that, you just say, like, just baby step it.</p><p>Just do a little bit. Just find some small thing where your emotions tell you to conform, but your reason tells you that you can totally get away with it. You actually want to do it. It will benefit you. And then just break from the mold to that small degree. And we'll start from there.</p><p>Theo Jaffee (12:05)</p><p>Do you</p><p>think the average person would even respond to like an evoPsych argument like that?</p><p>Bryan (12:12)</p><p>The average person know, I mean, of course, there's the general base rate of almost everyone's impossible to persuade of almost anything. So I just begin with that, all right? Then the next step is, all right, given that what can be done? It's like, well, there's a subset of people that are a bit more flexible anyway. So out of people that are a bit more flexible, I mean, I would think that, I would say that out of people that are open to arguments of any kind,</p><p>appeal to Darwinian thinking is in a way one of the easiest because it's so widely accepted in principle. So you are starting with a principle that is widely known and accepted among people that would even listen to an argument. You say, well, you're alienating creationists. All right, yeah, I didn't think I was going to do very well with them anyway. And then, let's see, what was I going to say there? Oh, yes, and then it's also one where it's very easy to...</p><p>get people to see that introspectively this is correct. When you just say, well, suppose that you could go and get $1 ,000 by wearing an embarrassing shirt in front of a bunch of people that you knew would never know who you were. Would you do that? It's like, I don't want to. Yeah, but why not? It's like, well, because it's gonna hurt my reputation. We stipulated in the thought experiment that it won't really hurt your reputation. So does that make you feel better about it? It's like, I still don't really.</p><p>It's like, all right, well, but it's like a thousand bucks. How much could it really matter? How about 10 ,000 bucks? Will you wear the stupid shirt in front of a bunch of people that will never have any idea who you were in order to get $10 ,000? There's got to be some point where you would do it. But along the way you also learn, oh gee, like I just care a lot about the opinions of people where really I don't have any good reason to care about. It's gotta be evolution here that is tricking me.</p><p>Theo Jaffee (14:01)</p><p>Was that the actual amount of money, by the way? Yeah. Yeah, OK.</p><p>Bryan (14:04)</p><p>No, no. I'm alluding to a famous experiment on the spotlight effect where they, as part of the experiment, they just made people wear a stupid shirt and then walk through a room and then ask, they asked first of all, the person, how many people noticed your stupid shirt? And second of all, they asked the people, did they notice the shirt? And there was a massive disparity where people just thought people were paying a lot more attention to their shirt than they really were.</p><p>which is another reason, by the way, not to worry about nonconforming is that to a large degree you're just invisible. People are so caught up in their own heads and they're thinking about themselves all the time, it's just hard to realize how little other people are thinking about you. Once you realize that, it's very liberating.</p><p>Theo Jaffee (14:50)</p><p>So you talk a lot about stuff like, you know, focus on the truth and don't let other people influence what you think unjustly and quantitative decision making and betting on your ideas and a lot of things that remind me a lot of Eliezer Yudkowsky's rationalist movement. So how similar would you say your methods of rationality are to the kind of standard Yudkowsky less wrong rationality methods? And secondly,</p><p>Bryan (15:05)</p><p>Mm -hmm. Mm -hmm.</p><p>Theo Jaffee (15:17)</p><p>What do you think about the rationality movement? Do you think they're true nonconformists or just kind of collectivists?</p><p>Bryan (15:25)</p><p>Yeah, so let me start with the second question first. I've got a very positive view of the self -styled rationality community. They've always done right by me. Sometimes it seems like they get a bit cultish to me and they get fixated on some strange ideas. But then again, if you go and compare them to almost any other group, then it's a lot less clear what's going on. In terms of my specific levels of agreement and disagreement.</p><p>So the most glaring one is I'm not really worried about artificial intelligence. I even have a bet with Eleazar on the end of the world on January 1st, 2030. And he's saying, well, it's not the end of the world, it's the end of humanity on the surface of the Earth. Oh, sorry, my mistake. But I misspoke. But that bet says that if there's any human beings left on the surface of the Earth on January 1st, 2030, then he owes me some money. You might wonder, well, how?</p><p>Do you do a bet on the end of the world? And the answer is the person who is the optimist, namely me, prepays. And that's what I did. So I'm still feeling fine there. I think that Eliezer in particular and a lot of other people have just allowed their youthful fondness for sci -fi to carry them away on flights of fancy and paranoia. Obviously, they disagree. I don't have any really good argument to change their minds at this very moment or under these time constraints, but that's where I stand.</p><p>In terms of other things, I think that they are pretty crazy about polyamory too as being something that is widely going to work out for people. I agree with Isla that there's probably five or 10 % of human beings that are psychologically equipped to be happy doing this, but that leaves a whole lot of others who aren't. And especially I think that for families with kids, it's probably a really bad idea unless you just don't care about.</p><p>getting to raise your own kids or have them grow or you don't mind having them grow up in a broken home. I think that is actually really bad. Not by the way because I think that it messes up your future. I just think it messes up your childhood. Just unpleasant for kids to have to be going back and forth between multiple homes and dealing with adults that are in conflict with each other.</p><p>I think that's another case where they are underestimating the power of evolution. I think jealousy has so strongly evolved, I think that most people just cannot get rid of it. And if you say, well, we'll all be rational about this, all right, well.</p><p>It's the kind of thing where people's emotional constitution generally doesn't actually adjust very well. It is very standard among practicing polyamorous to wind up saying, yeah, well, there was this period where we were totally lying because of jealousy or the jealousy tore us apart. So we think that is probably another big issue. But, you know, overall, I've had great relations with the rationality community. They're fun people.</p><p>They're not very pushy, except on the AI risk, even there, I've yet to meet someone that yelled at me for not worrying about it, which is different from almost every other community that's worried about some terrible disaster, if you call it into question.</p><p>Theo Jaffee (18:35)</p><p>Do you think</p><p>your arguments on polyamory apply just as much to kids living in more traditional societies where you have like, in most of these traditional societies of polyamory, it's like one man and multiple women and they all live in the same house and do you think that kids there are also not well off?</p><p>Bryan (18:55)</p><p>That's a good question. So that's usually called polygamy rather than polyamory. There is quite a bit of social science of polygamy saying, well, a few things. One of them is, in very primitive societies, that's not really how it works. In very primitive societies, it's more like you just have pair bonding for two or three years while the kid is a toddler, and then the relationship dissolves. But since, if you live in a band of 20 to 40 people, you still see both of your parents, so you've got that going. You don't need to...</p><p>Theo Jaffee (19:04)</p><p>Thanks.</p><p>Bryan (19:25)</p><p>have a shuttle system between the huts of people who live within sight of each other. By traditional societies, you mean more of the ancient empires or something like that. Ones where the very most successful guys have had hundreds or I think there's one guy out of over a thousand kids. It's definitely one where kids have very little contact with their dads. So there's that. Also, it's very noted in societies like that, there's just a lot of conflict between the mothers.</p><p>most grotesquely in things like the Turkish Sultanate, where there was a period when the Sultan's first job was to murder all of his brothers. Gruesome, or mostly half -brothers, murder all your half -brothers. I they even murder the full brothers just to be safe. Anyway, that's pretty gruesome.</p><p>Theo Jaffee (20:06)</p><p>Thank</p><p>Bryan (20:15)</p><p>Let's see. And then the other major issue that most social scientists have had with polygamy is that if it's widely practiced, then it means that you've got a lot of guys who don't get to marry anyone are left alone, and there's a lot of other side issues from that. You know, I think that in the modern world, I'm not at all concerned about polygamy becoming so widespread that we start seeing these negative consequences. I mean, we don't even see billionaires having harems in the modern world in the sense of...</p><p>They've got a bunch of women and they have kids with all of them and they all hang out. Elon is sort of the closest, but even he's not actually doing that. He's not really doing that. So I think that we are so culturally far from it. Let's see, like the Harvard anthropologist, Joseph Henrich, I think he's testified in some hearings that are somehow related to.</p><p>Theo Jaffee (20:50)</p><p>Yeah, I thought that too.</p><p>Bryan (21:09)</p><p>prevent legal non -recognition of polygamy. And I think that's pretty paranoid too. This is going to lead to some horrible negative effects because it's just a small fringe thing. You can imagine that it would spread, but I don't see it spreading. I think the main thing that has spread is just broken homes, but not from polyamory, just from monogamous people don't stay together.</p><p>Theo Jaffee (21:33)</p><p>So back to my first question on rationalism. How similar do you think your methods of rationality are to teleasers?</p><p>Bryan (21:41)</p><p>Hmm, let's see. I think there's a lot of similarities. I mean, I would say that I'm especially influenced by Phil Tetlock's super forecasting, where a lot of his advice is start with base rates and then do adjustments up and down. Scott Alexander has this line where, specifically about AI risk, where he says, well, this just leads to base rate ping pong, where I have my base rate, you have your base rate. My base rate is like number of times that the world has ended.</p><p>and his base rate is number of times that a superior intelligence has come into contact with an inferior intelligence. This is one where...</p><p>In principle, you could sit around saying, oh, we can't figure out what the base rate is, but I don't think it's actually that hard in practice unless one side is determined to go and get a certain kind of answer. So I do put a lot of reliance on base rates. A lot of my arguments with Tyler Cowan come down to, they'll say, oh, here's something that could happen. And it looks like that's never happened before, base rates say no. And he's like, well, but you're not engaged in the argument. And I'll say, yeah, well, you're not engaged in the base rate.</p><p>So I think the base rate is a lot more important. People tend to get really sucked in by the details, which leads them astray. Whatever you're going to tell me, I'm going to treat it as a...</p><p>modest adjustment of the base rate rather than something that's rocking my world and saying, oh my god, I can't believe it. I will pile on and just say that I haven't seen that Tyler has any great predictive abilities. Super smart guy, very knowledgeable, but in terms of saying anything falsifiable that's gonna happen before it happens, I think he's probably below average for a thinker, maybe above average for a human, but that's not his forte.</p><p>In terms of other methods, of course, base is rule. This is a very big part of the way that I approach the world, as it is for anybody that's part of Tetlock as well. Just things like you can see some evidence in favor of a view and rationally become less confident because you're expecting to see even stronger evidence in favor of the view.</p><p>something that people have trouble with. But you see a headline and it says, you know, 100 people murdered by an immigrant terrorist. And then you say, well, but if we go and average over all the headlines of the past three years, it's only 200. And I think that a person that had a reasonable view would have thought it'd be 500. So actually, oh, this is in fact a reason to become more optimistic. Emotionally, of course, this drives people crazy, but the logic is completely sound.</p><p>You've got to specify, well, what did you think was going to happen? What would have been consistent with your view? The style of the normal person who just opens up the newspaper and says, see, everything I said has been proven. That is something that Bayes will stop you from doing because you'll say, well, wait a second. What would have to be on the newspaper headline for me to say that my view was disproven? What would it even look like?</p><p>You're always going to be able to go and find something that is an example of your complaint and then claim vindication, but that's ridiculous.</p><p>Theo Jaffee (24:45)</p><p>Yeah, so going back to what you said about base rates, where you said your base rate is that the world will not end and Tyler, sorry, Scott Alexander's base rate was how many times the superior intelligence has come into contact with an inferior intelligence. Back in July, 2022 for the audience, Brian and I had lunch and one of the things we talked about then was AI risk. And he mentioned his bet with Yudkowski about how...</p><p>Kaplan thought that it was not going to end the world. And since then, of course, chat GPT has come out and you've made another bet that looks like you're going to lose it about AI capabilities. So on that, I know you haven't made a huge update, but have you updated on AI risk at all?</p><p>Bryan (25:22)</p><p>Mm -hmm.</p><p>Yeah, of course, very slightly. So before I was skeptical that there'd be an AI that would be able to get A's on my economics exams, I did a bet on that. First of all, I went and gave GBT 3 my economics exams and got a D after hearing a lot of people saying, it's so incredible, it will blow your mind. And I even had a friend say, oh yeah, it'll be able to get A's in your test and got a D. And I'm like, all right, well, they're overselling again.</p><p>just like the last hundred times they've ever sold. So I did do a bet on that. And then when GBT 3 .5 came out, it was able to get A's. So I will say, all right, that's considerably more impressive than I was expecting. The progress was a lot faster anyway, but there's still a world of difference between you can get A's on my econ exams and you're gonna destroy the world through one way or another.</p><p>mean, there I've just also had a lot of more particular arguments, like there's gonna be a kill switch, a lot of kill switches. It's not that human beings are just going to hand over the reins and let the AI do what it wants for itself. Then in terms of just, I mean, if the base rate for anything designed by human beings, often there have been things designed by human beings that have ended up being terrible for human beings, but only because some human beings consciously unleash them on other human beings.</p><p>Which is where I would say, I think is actually where almost all the AI risks should reasonably be put. It's not that the AI will achieve autonomy and then will go and do bad stuff to us. Rather, it's that there's gonna be some humans that will say, help me come up with the best possible plan to go and kill as many other humans as I can. So that seems a lot more likely, which is what we've seen with almost all of the great technological achievements in the last 200 years. I think you'd have to be a fool to see electricity and then not wonder.</p><p>Could this be used for bad purposes? Yeah, of course electricity can be used for bad purposes. Of course mass production can be used for mass murder. Nuclear weapons can go and exterminate vast populations. But in all these cases, it is not that the technology takes over. It's that human beings do bad things with their tools.</p><p>Theo Jaffee (27:40)</p><p>And while we're still talking about rationality, what do you think are the best ways to check yourself to make sure that you are being a non -conformist and not just a contrarian or a collectivist?</p><p>Bryan (27:53)</p><p>I think a</p><p>big part is coming up with concrete tests of what's going to happen if you do something that is not conforming and seeing what happens. So, I mean, obviously just applying simple rationality processes and saying just because most people think it doesn't mean it's false. So it's putting just little weight on the fact that something is a popular view rather than putting negative weight. I think the contrarian is someone's putting negative weight on a view's popularity. If other people think it's true, I'm going to think that it's false.</p><p>The rational thing is to say, well, I'm not gonna put a lot of weight on it just because we know there's so many areas where human beings have embraced silly views. There's just a lot of popular views that are wrong. In a way, that itself kind of begs the question, right? Because like, well, how do we know that there's so many popular views that are wrong? And that's not something, that was something where I would just go case by case and say, well, here's a list of a bunch of things that are widely thought but turn out to be incorrect. So, and these are not just small.</p><p>cherry -picked or lemon -picked examples. These are pretty big examples of things that people are really wrong about and have been really wrong about in the past. And it's not that hard in hindsight to see that they're wrong.</p><p>Theo Jaffee (29:03)</p><p>By the way, this reminds me a bit of a Charlie Munger quote where he said something like, being a good investor requires the temperament that doesn't derive too much pleasure from either following the crowd or going against it.</p><p>Bryan (29:13)</p><p>Yeah, we are very reasonable.</p><p>Theo Jaffee (29:16)</p><p>So we talked earlier about social media with X, but another social media is LinkedIn. And I go on LinkedIn periodically, and I find that it sucks because it's conformist. And it seems like everyone on there is just trying to please other people. So like, do you think that there's a way to fix LinkedIn, to fix professional social media in general? Or is it just kind of a property of like professionalism that it ends up conformism?</p><p>Bryan (29:27)</p><p>Ah!</p><p>Yeah, yeah, I think it is heavily a property of professionalism. Important thing to remember is that most original and creative ideas are terrible. And especially on something that is a practical task that a lot of smart people have been working on for a really long time. If it's, like what's the best way to go and say, fly a plane? It's like, well, you're not the first person to think about this, you know? There's a lot of really smart people. There's a lot of money on the line, probably.</p><p>there's already immense selection pressure to do a good job on this. When someone says they got it all figured out, they're probably incorrect. So that is one thing to keep in mind. Let's see, in terms of fixing, and then, you know, so then like, if you know my book, The Case Against Education, I say a lot of what people are signaling education, sure it's intelligence, sure it's work ethic, but a lot of it's just conformity, just saying, like, I know there's no I in team and I'm going to be part of the team, be a loyal member.</p><p>will not rock the boat. Probably like some of my best nonconformist advice actually is focus on being friends with your boss instead of being liked by coworkers. This is one where it's like, oh, that's what kind of a suck up are you? It's like, well, a person who appreciates the boss probably got there by their hard work and greater understanding of the field and that they actually have a really tough job of dealing with a lot of recalcitrant people.</p><p>Every manager has to herd cats, and I'm going to be one of the easy cats to herd because I think I have something to offer this person, and if I do a good job, I think this person is likely to have my back. Another way of thinking about it is if you're a nonconformist, who is going to be easier to win over? A bunch of coworkers or one boss? It's going to be a lot easier to win over one boss. If it's just one person, this is someone where...</p><p>You clearly indicate my loyalties on your side and my goal here is to be a highly useful member of this team. If you just talk to almost any boss, they'll say, wow, like, I just need a lot more people like that. It's just hard running this because people complain so much and are so hard to please and just don't appreciate that I'm in a tough spot. So just showing some empathy for a person who has to make hard decisions is something that is actually nonconformist in a very deep way.</p><p>Theo Jaffee (31:59)</p><p>So the econ department of George Mason is full of nonconformists. You, Tyler Cowen, Robin Hansen, Alex Tabarrok. And famously, you're not just popular within academia, but outside it. Probably mainly outside it. So...</p><p>Bryan (32:03)</p><p>Oh yeah.</p><p>Yeah, you got that right.</p><p>Theo Jaffee (32:16)</p><p>How do you think this can be replicated at other schools? Like, I go to the University of Florida, I can't think of any UF professors who are famous in the way that you and Callan and the rest are.</p><p>Bryan (32:28)</p><p>Hmm, yeah, great question. I mean, a lot of it depends upon getting some people who have paid their dues and gotten the regular signals, who then are willing to take advantage of this crazy tenure system to do something cool. Unfortunately, there's just not that many people like that. Once you get one person to do that, then often you'll find that there are other people that were sympathizers and wanted to, but they were just too scared. So you need to get...</p><p>a focal individual who's willing to stick their neck out, which on the one hand is not as hard as it sounds because with a tenure system, they know they've got this massive job security. The real difficulty is that despite the incentives being in favor of nonconformism on that level, the system weeds out the nonconformists before they get there usually. So that is a big part of it. If you really wanted to go and foster it, the idea of...</p><p>having schools creating independent centers of nonconformist thinkers. That's of course how Grievance Studies got off the ground is you go and find someone who says, like, my work isn't appreciated because I'm the only one who understands how fantastic Albanian culture is. Give me my Albanian Studies department and then I can really do it. Unfortunately, that's a case where you're getting nonconformists who are definitely defying society, but at the same time, like,</p><p>they're really just wanting to create their own cult. It isn't like they want to have some very thoughtful exploration of all the possibilities or anything like that. In terms of where I would start, I would generally start with economics first because economics does have this long tradition of just being willing to entertain socially unacceptable hypotheticals and consider possibilities that other people just say that's an evil thought, crime, don't think it. Secondly, honestly, philosophy departments, they are famous for hypotheticals and while...</p><p>their discipline has gotten worse over time, still there is a sense of that we can consider an idea without agreeing with it. Whereas if you go over into your agreement studies departments, that is a really alien idea to them. Like, what do you mean we're gonna consider the possibility that actually there is not a lot of discrimination against African Americans? That's crazy, we all know there is. Yeah, but what if there isn't? Well, there is, so we're not gonna talk about it. And you, by wanting to talk about it, are an evil person.</p><p>It's like, oh, ah, my mistake. So if you did want to go and foster this kind of thing, you'd basically need to find some people, find a few people that already foot the bill, give them some money, and then let them have independent hiring authority so they can replicate themselves. Not perfect, but I think it's the best formula for success.</p><p>Theo Jaffee (35:15)</p><p>Well, you talk about it like a formula for success and like a plan if you wanted to do this, but it seems like GMU didn't, you know, plan to have an econ department like this. So how much harder would it be to do it spontaneously?</p><p>Bryan (35:32)</p><p>I think actually it was planned. So I've been around for at least half the life of GMU having any kind of a public profile. So basically there were donors that wanted this kind of thing to happen.</p><p>and they gave money so that it could. I think the first big donation was to bring the Center for Study Public Choice here, so bring future Nobel Prize winner James Buchanan and his team here in 1983, if I'm not mistaken. Then there were further donations. There was another big donation to bring Vernon Smith's team, another future Nobel Prize winner. And by this point, by the late 90s, we were consciously talking about...</p><p>we want to become the Hoover Institution of the East. So this was actually a conscious plan. Now, it doesn't mean that, and in fact, there really was one single individual man who was at the epicenter of all of this, which is Tyler Cowan. He was the one that is great at bringing together donors and the existing faculty and new talent and making it all happen. So he deserves a ton of credit for that.</p><p>Theo Jaffee (36:20)</p><p>Hmm, I don't know.</p><p>So do you think that this kind of existing infrastructure of academia and tenure and donors matters as much nowadays? Like you talk a lot about tenure and how great it is because you can research and write about what you want. But today we have people like Noah Smith and people like Scott Alexander who make a lot of money just writing on SubSec.</p><p>Bryan (37:00)</p><p>Yep, so I'll say that it's great for me personally. I think it's a terrible system actually. Tenure is a disaster. It has a few benefits that are swamped by overwhelming costs. So in no way think that I'm pro tenure. I think tenure is terrible. What, yes, but what I will say is that for people who want to do contrarian stuff but are risk averse or just don't have a ton of star power charisma, it remains one of the best bets.</p><p>Theo Jaffee (37:13)</p><p>Oh,</p><p>Bryan (37:28)</p><p>So Scott Alexander especially, he was able to go and get where he is through having this incredible personal charisma and ability to just create a new community almost out of nothing. But most people are just nothing like that and would not be able to do more than eke out a meager existence on Substack or other kinds of social media.</p><p>It's great that they exist, and what they're doing is wonderful, and yeah, there's no doubt that what Scott's doing is way better than 99 % of professors. However, I don't think there's room in the market to have a thousand Scott Alexanders, which there was.</p><p>Theo Jaffee (38:08)</p><p>So on education, in a portrait of my school, you talk about your ideas for how you'd run a school, but not like a lot of specifics for curriculum. You mentioned reading, writing, and math. So a couple of questions. One, do you think computer science and programming should be elevated to the same level as math? And two, how would you scale this approach beyond five to 15 students?</p><p>Bryan (38:34)</p><p>I'd say it's reasonable to think about putting CS at the level of math, but in the end I wouldn't, because I would say, look, math is one of the things that you need for CS, but there's a lot of other things that you can do with math, whereas CS is something where if you don't want to be a programmer, then the actual career value is not that large. I mean, would say that if there's someone that is really good at math, has a good background there, and then when they're 18, they decide they want to become programmers, they can do it.</p><p>On the other hand, if there's someone who does not do much math and then they're 18 and they say, I want to go and get up to speed on doing enough math to do CS or engineering or physics or whatever. So yeah, like at 18, like unless you're a complete genius, it's pretty much too late. It's just too cumulative. You've missed this critical window. It's just gonna be too hard to ever catch up. But you're like, that's, it's very reasonable. And definitely if we could go and say that you can do CS instead of a foreign language, that would be one of the best curriculum.</p><p>revisions that we could make because I think a ton of people would rather do CS than foreign language and they would get a lot more value out of it. I mean, it's very standard for people to spend two or three or four years of high school on foreign languages. Almost none of them learn the language to any remotely usable level. Even if they did, there's not that much use of it. CS on the other hand, this would be giving useful job skills to a generation of students. So that would be a big improvement.</p><p>But I'm not quite sold enough on it to think that everyone should be doing it standardly.</p><p>Theo Jaffee (40:03)</p><p>Yeah, I mean, I've always kind of thought of it as, you know, at least on the same level as chemistry or physics, which every student learns in high school. Yeah, yeah. And when I was in elementary school, I remember hearing about like, oh yeah, we're all going to be learning about computers and computer science soon. Obama was talking about this 15 years ago and then it just never happened. So I went through elementary, middle and high school. I took two CS classes only because they were APs, but they're just not in the standard curriculum at all.</p><p>Bryan (40:09)</p><p>Oh yeah, yeah, of course. Better than chemistry or physics.</p><p>Oh yeah. What's going on is that curricula are very backwards looking. In fact, if you want to understand the curriculum, it is best to remember that it all evolved out of a system that was designed to teach three things. So it was designed to teach law, medicine, and theology. This is what Anglo -American universities did for hundreds of years. They just teach law, medicine, and theology. And if you're thinking, wait, medicine? It wasn't until like,</p><p>1900 or so, the doctors started saving more people than they killed. Yeah, that's true. But still, they were teaching this crap for hundreds of years. Law, on the other hand, almost by definition, lawyers have to be effective because they are the ones that are judging their own success, in a way. And then theology, again, my view is it's a fake subject. So it's not fake in the same way as early medicine, where you're actually killing people with it. But still.</p><p>So anyway, if you just realize this is what our system grows out of, and then everything else pretty much just gets tacked on, seems to add on other things afterwards. But really the idea that we are here to go and train people in these three professions, the fingerprints of that are still on the system that we have. And so then we have a lot of requirements that make very little sense in terms of the modern world, but make a lot of sense, you know, so basically they make very little sense forward looking in terms of what will be beneficial to the student.</p><p>make a lot of sense backwards looking in terms of we've always done it that</p><p>Theo Jaffee (42:03)</p><p>And then.</p><p>Bryan (42:03)</p><p>little complicated because modern sciences weren't taught until the late 19th century math was. But the idea that you would put modern science in the curriculum, that I think starts with the German, top German universities, then spreads to Johns Hopkins, and then moves over to the rest of US academia after that.</p><p>Theo Jaffee (42:23)</p><p>So you also mentioned that this approach to education is like only for people who are already interested in it and have the aptitude for it, and it would only be 5 out 15 students. So how would you scale this approach beyond that? Can you? Or would you have to do something totally different?</p><p>Bryan (42:38)</p><p>Well, not totally different. I mean, I would say that when you have kids that just lack any intrinsic motivation, this is where you really need to do some soul searching and say, why do I want to make them do something when they have no intrinsic motivation? The good answer to that is for extra extrinsic motivation, because they're a child, they don't understand what the labor market is like, you don't want them to grow up and be unable to take care of themselves. So for things where you have very strong evidence that it will be a severe handicap to them, just let them do whatever they want.</p><p>That's when I think it is a good idea to go and push it on them whether they like it or not. On the other hand, there's a lot of things that we do in school right now and push kids where you say, well, why do they need to know this? Well, we don't know much better than we've always done it that way. So I think I've got an essay just called Unschooling plus Math where I say that there is this homeschooling philosophy called unschooling, which almost everyone thinks won't work at all, it'll be a total disaster. There are defenders who say, no, it's not a total disaster. I think they're right. But.</p><p>There is one notable deficit that I have seen unschoolers have, and the little data that we have is consistent with this, which is that unschoolers are deficient in math because very few people get intrinsic enjoyment out of math, and yet it is so vital for so many high status occupations. So I say, look, if you're willing to just go and do unschooling with the tweak that every day you do have to do an hour or two of math, then I think that does solve most of the problems with unschooling.</p><p>Theo Jaffee (44:02)</p><p>Well,</p><p>I wonder, like, for some people, maybe I'm just being anecdotally, but for me, like, going through elementary and middle school, I hated math. I could not stand algebra and geometry and algebra too. But when I got higher up into calculus, I started to really like it.</p><p>Bryan (44:17)</p><p>Mm -hmm. Yeah, that's like one person in a hundred. So it's a great kind of person to be to like, I didn't like the boring easy math, I only liked the hard stuff. Yeah, but normal people just don't like it and it's not because it's too easy, it's because it's too clear that they're wrong. It's so depressing. So you put in like, math is the opposite of labor theory of value. You can put in a hundred hours into a math problem and if it's wrong, it's wrong. It doesn't matter that you tried hard.</p><p>That's a lot of what's so bitter about it. And there's also just no room to go and say, well, there's some sense in which I'm right. Which you'll see in almost all the humanities, and there's a math, like no, there's no sense in which you are right. You are just wrong.</p><p>Theo Jaffee (44:54)</p><p>Yeah. So to what extent do you think people living under totalitarian governments should be non -conformist? Like...</p><p>Bryan (45:04)</p><p>Hmm.</p><p>Yeah, great question. It's likely to get you killed, so that would be a reason not to do it, definitely. It's one where you need to be a lot more careful, because by definition, totalitarian regimes will go and harshly punish you for very minor deviations. Even there, I would say that you can't really survive in most totalitarian regimes, maybe any of them, without having...</p><p>enough nonconformism to say, wait a second, I'm gonna go and die if I don't break the rule. And so I gotta figure out a way to somehow weasel my way out of this, whether it's being sent to the Eastern Front to go and fight during World War II, or to get an illegal job, or break rules against corruption in order to get enough food to feed your family. So you could not be a full conformist and survive in totalitarian regimes.</p><p>Unless you happen to just be born into a ruling family or something like that where you're taken care of and you're never given a dangerous job and you've got plenty of food and all that other good stuff. But otherwise, you know, so I'm thinking here about, so in North Korea, after the Soviets withdrew their subsidies in the early 90s, they had a massive economic collapse. Their whole economy was based upon getting a bunch of subsidies from the Soviets, which they no longer had.</p><p>And then there's the question of, well, what do we do about all these people who are working in a fully 100 % government -owned economy? And the answer was, well, let's see. We're running short, so we're going to fire them. And then what are they supposed to do? The answer was no answer. What happens if you're in a fully state -monopolized economy and you lose your job and they don't give you a new job? Either you starve to death or you work illegally. There's a great book called...</p><p>see, nothing to envy, where they just went over the plight of North Koreans who lost their jobs during this period. And yeah, it's like, well, I can either get caught for being a black marketeer and get sent to the slave labor camp or executed, or I can starve to death. I guess I better take my chances with the slave labor camps, and maybe I can make enough money to bribe my way out. So you do need that. But obviously, totalitarian regimes are very harsh on people that stand out.</p><p>So, you know, in a way they exemplify the otherwise irrational fear that most people have that if you do anything different, society will crush you.</p><p>Theo Jaffee (47:36)</p><p>So,</p><p>in your essay, Natalism as Nonconformism, you wrote that one of the most important things you can do, both in general and as a nonconformist, is to have kids. Israel has a total fertility rate of 2 .94, which is not only much higher than any other developed country, but it's actually higher than it was in 1989. You mentioned religiousness and secularism a little bit, but...</p><p>Bryan (47:57)</p><p>Yes.</p><p>Theo Jaffee (48:03)</p><p>Can you go into a little bit more detail about how did they manage to do this and what can other countries learn from it?</p><p>Bryan (48:05)</p><p>Right, so I'm not an expert in Israel. I think a lot of it actually is just exponential selection where the high fertility groups in the country were, namely the ultra -orthodox, have just become a much larger percentage of the country. So that way, as long as you can sustain the fertility rates of every subgroup and the high fertility subgroups are much higher than the others, so they're rising a share of the population over time, then it's...</p><p>almost follows as a matter of pure arithmetic that you will get your fertility rate will go up. So I think that's a lot of what Israel did. People also talk about things like just having a very pro -natal attitude. So that probably something too. Even there you have to wonder, well, isn't that really just a reflection of the fact that they've got so many kids and there's so many large families? I mean, just to be clear, as I say in that essay, I'm not claiming that natalism in general is not a conformist.</p><p>Because if you're in a highly natalist subculture, then the conformist thing is to be a natalist too. Rather what I'm saying is that if you're in a typical first world country where we have very strong anti -big family norms, that's where you need to be a non -conformist in order to have a lot of kids.</p><p>Theo Jaffee (49:24)</p><p>So in a conservative confession you talk about hedonic adaptation. So are you at all worried about a future where we'll figure out how to do something like wireheading directly affecting our brains reward system and then like essentially running out of hedonic adaptation?</p><p>Bryan (49:31)</p><p>Mm -hmm.</p><p>Or sort of the other way around, right? Wouldn't it just be that we will be, we'll just make ourselves identically adapted to whatever we've got? Isn't that really the worry?</p><p>Theo Jaffee (49:54)</p><p>No, the worry is essentially that we'll put ourselves on like a heroin drip and live like a, you know, kind of terrible people.</p><p>Bryan (50:02)</p><p>Oh, okay. So you're talking about right now you achieve something, you feel good for a bit, but then you're motivated to go achieve another thing because the thrill wears off. Yeah, okay, I get it now. I guess I would say I am a little bit worried about that. I mean, it's the kind of thing where evolution will save us in the end if that happens because the people that go on the heroin drip will have no children and they will be wiped out and the people that remain will be those that had an aversion to it.</p><p>Theo Jaffee (50:10)</p><p>Yes.</p><p>Bryan (50:29)</p><p>It might be that we have to first have our population fall by 90 or 99 % in the extreme scenario before we reverse, and then we just are replaced by people who are so horrified by the idea of a heroin drip that people just won't do it. So I think that is the long run answer, but yeah, obviously having a period of a few hundred years where we go into a massive decline is somewhat worrisome. I mean, I will confess that I usually just don't worry that much about...</p><p>things that are over 100 years out because I just figure it's such the world and the future is so different. There's really much that we can do about it. And in fact, when someone starts talking about I'm gonna get ready for the world in 100 or 200 years, my thinking is, I think it probably more likely than not you're just gonna make things worse and you're gonna go and likely to try to crush progress and hold it off is more likely than that you're going to.</p><p>Theo Jaffee (51:02)</p><p>in the log run rail dead.</p><p>Bryan (51:23)</p><p>create a futile, a fertile, rather a fertile groundwork for further progress that it seems pretty remote. And if you just think about someone in 1800 saying, what can we do in order to get lots of progress here? It's like, what would they even have thought back then? I suppose there were a few enlightenment figures who would have said, well, we just need to have a lot of freedom in order to go and explore new ideas. And we need to make sure that industry is not overly burdened with regulations so they can implement the ideas. So there'd be some people like that, but.</p><p>think that anyone who had anything much more specific than that would have just been messing things up, probably.</p><p>Theo Jaffee (51:57)</p><p>in your essay, Bioethics Tuskegee vs COVID.</p><p>You</p><p>Bryan (52:00)</p><p>Ah.</p><p>Theo Jaffee (52:01)</p><p>talk about the problems with bioethics. This was probably my favorite essay in the compilation, by the way. So some people, I've been hearing this a lot recently, say we should entirely abolish the FDA. Because even though that will almost certainly lead to problems on balance, it will be a tremendously good thing because no amount of regulation is worth blocking med tech from getting to market, like transformative med tech that can cure cancer or something. So what do you think about this idea?</p><p>Bryan (52:11)</p><p>Mm -hmm.</p><p>Yeah, I'm all in favor. I've been in FDA abolitionists for a long time. Really, I was in my senior year of high school and I had never heard anything other than arguments in favor of the FDA. So I really was actually brainwashed in my history classes about how there's this whole horrible period before the FDA where pharmaceutical companies were killing people left and right. And then finally, wise government came and established it and now we're protected. And the only danger is that we might not be protected enough. So this was actually explicitly taught. It was in the curriculum.</p><p>Then in 12th grade, I read some economist saying, well, you realize if there's a drug that saves 10 ,000 lives a year and the FDA delays it for seven years, that you killed 70 ,000 people. And when I read that, I'm like, hmm, I don't see any way around that argument. That is about as good as any argument it could ever possibly get. And then the question is, how many lives are being lost by approving drugs too soon? And looking there, it's like, hmm, yeah, it was hard to come up with very much. Thalidomide, which is the...</p><p>drug that was used to, let's see, what's the right way of putting it? It's the drug that most people point to as showing that we need the FDA. The main story there is that thalidomide, the reason why the dangers were caught was that thalidomide was approved in the UK and then people discovered that it caused a whole lot of birth defects. Whereas in the US, it was not the FDA that caught it, it was another country that had lighter regulation that allowed us to do it. Otherwise, it probably would have been approved.</p><p>Funny footnote, it was finally approved as a treatment for, I believe, leprosy. Just don't give it to pregnant women, because then it will still cause horrible birth defects. Let's see, but anyway, this case against the FDA seems very strong to me. The idea, and just when you just see the asymmetric response to, like, someone was killed by an approved drug, well, you have to change everything. Whereas people who, people lose their lives because they'd wait for a drug, well, that's, like, that's not even a thing. And you...</p><p>Theo Jaffee (54:15)</p><p>Thank</p><p>Bryan (54:23)</p><p>Like during COVID, I was gratified at least and kind of amazed at how quickly the drugs were approved because my friends say, oh, everything's gonna be great. And I'm like, look, even if we got the drugs that totally work, which is itself is a good outcome, above average outcome, how do you know they're not just gonna be held up for years? And this was a case where suddenly people woke up to, yeah, I guess we delay it, then it's gonna kill a lot of people.</p><p>You know, combined then with a lot of unfair demonization of normal, cautious people saying, how do you know it won't have bad side effects in five years? And the honest answer to those people is, yeah, we don't. We have to wait five years. But we're losing a lot of people now, and we're just gonna gamble that it's actually going to be a net positive, probably a good positive based on historical experience, but yeah, we can't prove that you're wrong. That'd be the honest thing, but obviously it's politics for honesties in very short supply.</p><p>Theo Jaffee (55:20)</p><p>So I think we have time for one last question. So you've written a lot. I think something like over 2 ,000 essays on Econlib, a bunch more on Substack, several books. And so what advice would you have for writers other than just read a lot and write a lot?</p><p>Bryan (55:22)</p><p>Sure, sounds good.</p><p>My honest answer is I don't even feel like I am that hard at working. I don't feel like I'm that hard of a worker. I feel like I am daydreaming a lot and goofing off a lot. The main thing I can say is that every day I get something done. I get something done every day. And that just adds up. Do 20 years of chipping away. This is the plot of The Count of Monte Cristo, right? Every day the guy chips away a little bit from his prison cell on this island of France. And after seven years, he escapes.</p><p>Similarly, if every day you just get a little bit done, it adds up a amount. I am honestly puzzled by all the people who are tenure professors who have so little output. It's like, what do they even do all day? Like, if I can get this much done while goofing off this much, what are they doing? So, like, I... In the end, I'm kind of puzzled. Like, part of me thinks, are there just, like, lots of, like, people who are horrible alcoholics and drug addicts or something? And...</p><p>They just do the bare minimum and then otherwise they're putting all of their energy into their vices. Is that why they don't get much done? Are they just going and putting a ton of research, or not a ton of research, a ton of hours into their teaching, even though their teaching doesn't appear very good either? They just sort of spin their wheels a lot. I am mystified about what other people are doing that leaves their productivity be so low. In terms of how you can get motivated,</p><p>For me, a lot of motivation comes from my iconoclasm. I really don't like hearing people say things that I think are false or dubious in a giant self -righteous tone. It motivates me to go and argue the other way. And especially if you are an iconoclast like me, there really is a lot of low -hanging fruit of ideas that are true yet barely discussed because most people are too afraid to write about them. So, I like that piece on...</p><p>you have bioethics in Tuskegee, right? It's pretty obvious when you read it, but I think most people would be like, look, we can't possibly go and talk about Tuskegee as if it wasn't the worst thing that was ever done. And it's like, well, look, obviously it wasn't the worst thing that was ever done because there's just much worse things that have been done. I'm gonna say it's way worse to kill a million people than to give a horrible disease to 200 people, right? But.</p><p>To say that, it's like, oh God, we can't possibly say that this is sacred. It's like, who says it's sacred? I mean, I can understand why you might not want to talk about it at work, assuming your coworkers even know what you're talking about. But for someone like me with tenure, why not go and stick my neck out and just say what I think is correct? There's always this fear eventually you're going to be crushed. I do have actually a bet with someone who says that eventually in my own...</p><p>mind, I will declare myself to have been treated very unfairly by my university. So far, so good.</p><p>Theo Jaffee (58:36)</p><p>And for individual essays, just like you talked about amassing lots of creative output. But for each individual essay, do you have any specific advice on that?</p><p>Bryan (58:46)</p><p>Yeah, well, here's a lot of my advice. Anytime you get an idea, instantly write it down, because you're going to forget. I have a queue of hundreds of ideas. Normally, I just put down a title. Sometimes the title isn't clear enough, so I just write a sentence or two to remind myself what the idea was. And that means that I never have any issue with I can't think of anything to write about. I have the opposite problem. I have way more ideas than I feel like I would ever have time to write about. But I just try to keep refreshing the queue and just adding more in, so that way I've got a good...</p><p>a good set of choices. A lot of where I get my ideas is just for iconoclasm, where I just see something and I say, huh, well, that sounds wrong. So yeah, but people would be upset if you said it. Huh, well, in that case, it probably hasn't been said by anybody yet. I can't remember anyone saying that before. All right, then I'll do it and then I'll be the person that says it. It is in a way scary to me how often I can quickly become the number one Google hit for anything I care about, because it just shows that what I care about is stuff that most other people don't even want to talk about.</p><p>It's not bragging, it's just what I care about. Often there is, if there's anyone that's interested in it, it's only the audience that I make because otherwise it just didn't exist as a topic.</p><p>Theo Jaffee (59:52)</p><p>All right, well, thank you so much, Brian Kaplan, for coming on the show.</p><p>Bryan (59:58)</p><p>I'm very happy and just let me let you know that you can get this new book, You Will Not Stampede Me, Essays on Nonconformism, for just 12 bucks as a paperback on Amazon or $9 .99 for the ebook. I've also got four other books of my collected essays that are already out there available for the same price. I got three more coming. And then I've got all my other books, including my New York Times bestseller, Open Borders. And on May 1st, I've got my second graphic novel coming out, Build Baby Build, The Science and Ethics of Housing Regulation.</p><p>So I'm really excited about that. The book looks fantastic. It took longer than I thought it would, but I stand by the product. It's great.</p><p>Theo Jaffee (01:00:35)</p><p>Alright, looking forward to it.</p><p>Bryan (01:00:37)</p><p>Okay, thanks a lot. Great talking to you again.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This transcript was generated automatically with <a href="http://riverside.fm">Riverside</a> and probably contains lots of errors.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Is Apple Vision Pro worth it?]]></title><description><![CDATA[Short answer: Not quite yet. But it will be; more than you can imagine.]]></description><link>https://www.theojaffee.com/p/is-apple-vision-pro-worth-it</link><guid isPermaLink="false">https://www.theojaffee.com/p/is-apple-vision-pro-worth-it</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Thu, 22 Feb 2024 04:11:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!1GVb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1GVb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1GVb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1GVb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1GVb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1GVb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1GVb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg" width="406" height="541.2403846153846" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:406,&quot;bytes&quot;:747379,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1GVb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1GVb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1GVb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1GVb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F979d7b1e-ec06-4977-b244-a463cd261f02_3088x2316.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">If you look in the corner you can see the UF Society of PC Building logo on my shirt. Thanks to UF Student Government for sponsoring this purchase!</figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.theojaffee.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.theojaffee.com/subscribe?"><span>Subscribe now</span></a></p><p>On January 19, 2024, I woke up early, opened the Apple Store on three devices, and spammed the Apple Vision Pro pre-order at <em>precisely</em> 8:00 AM EST. It shipped two weeks later, and I unboxed it in front of a crowd of 50 or so members from my club, the Society of PC Building, and the Gator VR Club. Since then, I&#8217;ve used it extensively every single day and given demos to dozens of people. I have a lot of thoughts on it&#8212;both as it exists today, and what it will become further down the road.</p><h3>v1, 2024</h3><p>The Vision Pro came in a box larger than any Apple product I&#8217;ve ever opened, complete with the headset itself, the Light Seal, two Light Seal Cushions, the Solo Knit and Dual Loop bands, a USB-C charger and brick (rare for Apple), and the infamous <a href="https://www.apple.com/shop/product/MM6F3AM/A/polishing-cloth">$19 Apple polishing cloth</a>, complete with special Vision Pro branding. The headset itself is beautiful. It eschews the Quest&#8217;s white plastic for glass and metal, with a curved glass panel covering the EyeSight display on the front, a durable aluminum chassis, high-quality textiles for the Light Seal and cushions, and an adjustable Solo Knit Band made of the finest wool that envelops the back of your head like a hug. It even has orange accents (on the pull tab that detaches the bands), a nod to legendary Apple designer Jony Ive. Despite reviewers&#8217; warnings, I found the comfort of the headset to be fine, and I never had to use the ugly Dual Loop Band. The weight is fine too&#8212;definitely heavier than the Quest 3, but not painfully so. The external battery is annoying, but not terrible.</p><p>When I tried it on, I was floored by the quality of the tracking. Unlike the Meta Quest 3, you control the Vision Pro with your eyes. To select a UI element, you simply <em>look</em> at it and pinch your fingers together. Hardly anything could be more intuitive. The hand position tracking is excellent, too. The Quest often has difficulties with recognizing when I&#8217;m grabbing a window or pointing at a specific element. The Vision Pro, almost never. The passthrough on the Vision Pro is definitely imperfect: it&#8217;s dimmer than the real world, and feels like looking at the world through an older iPhone camera, but the latency is <em>ridiculously</em> low. The Quest 3 has major problems with hands, faces, screens, and some other objects distorting the passthrough. The Vision Pro? Almost none. Virtual windows look nearly real&#8212;you practically can&#8217;t see the pixels, and they stay almost perfectly locked in place. Once, I had some windows in my dorm room, went down 14 floors to swap out my laundry (still wearing the headset), and when I got back to my room, my windows were <em>exactly</em> where I left them. The Quest 3 wins on just one element of immersion&#8212;field of view<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. The Vision Pro&#8217;s FOV is tight enough that there&#8217;s a noticeable black band wrapped around what you see.</p><p>Once I calibrated it to my eyes and hands, signed in with my Apple ID, and finished setup, it was time to try the software. I spent a while just playing around with it before using it for anything serious. The virtual environments are imperfect but gorgeous, especially the Moon and Haleakal&#257;. Some of the apps, like Sky Guide (planetarium) and JigSpace (3D object visualization) are awesome windows into the future of computing. I tried several iPadOS apps ported to visionOS: it was cool to scroll through X or explore Apple Maps on windows as small as a newspaper or as large as a wall. One of my favorite things to do is FaceTime: though your friends see you as an uncanny 3D scan of your face, you see them as a window in your environment, a tiny hint of what it&#8217;ll be like to eventually render them into your environment fully. Spatial Videos are like this too&#8212;though they only take up a small part of your FOV and blur at the edges, you can see where they&#8217;re eventually headed with the ability to relive your memories.</p><p>With the honeymoon phase over, I started to focus on more serious use-cases&#8212;mainly, work. This is where I started to run into problems. visionOS resembles iPadOS more than macOS. Its app ecosystem is sparse. Its input mechanisms (voice and virtual keyboard) are extremely slow. Most importantly, it lacks the million tiny things that create the seamless workflow that you get with a real desktop operating system. visionOS is not yet ready for actual productivity. Fortunately, if you have a Mac, you can mirror the screen to the Vision Pro to get a monitor that can be anywhere and any size you want. Unfortunately, you only get one Mac monitor, and it has to be the same aspect ratio as your MacBook: no vertical monitors. Infuriatingly, your keyboard doesn&#8217;t show through virtual environments, so if you want to be fully immersed while working, you better know where every single key is. But you have to deal with it anyway, because macOS is so much better than visionOS for work that I ended up forgoing multiple virtual windows altogether and doing all my work on a single Mac window.</p><p>There&#8217;s one use-case that visionOS excels at more so than any other: watching videos. Folding laundry while watching YouTube on a TV-sized virtual window is cool, but even cooler is watching <em>Dune</em> in 4K resolution, with stereoscopic 3D, high-quality spatial audio, and a virtual screen the size of an IMAX theater. When you&#8217;re in Joshua Tree or White Sands, it looks cool but feels fake. When you&#8217;re in Cinema Mode on Apple TV, it really, honestly, feels like you&#8217;re in a movie theater. This is the Vision Pro&#8217;s killer app, the most mature part of Apple&#8217;s vision for VR so far, and it&#8217;s <em>awesome</em>.</p><p>So that&#8217;s the state of the Apple Vision Pro today. What can Apple (and Meta) do to bring VR into the future?</p><h3>Hardware</h3><p>As a v1 product, the Apple Vision Pro is excellent. But it clearly isn&#8217;t mature. It&#8217;s not as polished and perfect as the iPhone 15 Pro Max or M2 MacBook Pro&#8212;it&#8217;s more like the Macintosh in 1984 or the iPhone in 2007. Famously, the iPhone didn&#8217;t even launch with an App Store, a selfie camera, a GPS, 3G data, copy and paste, or video recording. We will undoubtedly look back on the Vision Pro with the same surprise.</p><p>The hardware of the Vision Pro is a good start, but there&#8217;s a long way to go. Even many non-conformist tech people balk at the idea of wearing ski goggles, especially in public. In order to achieve mass adoption, it will have to become much thinner and lighter&#8212;resembling glasses more than goggles. An early version of this Apple&#8217;s <a href="https://www.cnet.com/tech/computing/apples-eyesight-feature-on-vision-pro-is-creepier-than-it-needs-to-be/">EyeSight display</a>, which is a core differentiator of their VR strategy. In order to be truly immersive, you should be able to see people&#8217;s eyes. Marques Brownlee has an <a href="https://www.youtube.com/watch?v=_XdD-TQseU4">excellent video</a> on form factor: he says there are two strategies. One is to start with the ideal form factor and improve the feature set as the hardware gets better. This is what the Meta Ray-Ban Smart Glasses are: simple glasses or sunglasses with nothing but cameras, a mic, speakers, and conversational AI built in. The other is to start with the ideal feature set and improve the form factor as the hardware gets better. This is what big bulky headsets, like the Meta Quest 3 and Apple Vision Pro, are. The goal is to end up with something as thin, light, portable, and fashionable as the glasses (even while powered off) while being as powerful and full-featured as the goggles.</p><p>The form factor will have to improve, but so will the capabilities of the hardware. The end goal is <em>total immersion</em>&#8212;making the headset as indistinguishable from real life as <a href="https://www.youtube.com/watch?v=bQECSInWVPY">high-quality headphones are from a stereo speaker system</a>. The single best thing they can do for this is passthrough that looks identical to what you see with your eyes. Until direct optical overlay becomes possible, they&#8217;ll have to make do with video. They&#8217;ve already got the latency down. Now all that&#8217;s left is brightness, color, and resolution. They&#8217;ll also have to figure out occlusion, so there&#8217;s absolutely no distortion around your hands or other real objects when they&#8217;re overlaid onto virtual environments. Then there&#8217;s the issue of computing power. Unreal Engine 5 and other computer graphics engines are getting <em>really</em> good, and emerging technologies like <a href="https://openai.com/sora">OpenAI&#8217;s Sora</a> are incredibly flexible, but the Vision Pro has nowhere near the computing power required to run either. To get around this limitation, they could allow users to connect to an external GPU&#8212;which Apple should start manufacturing if they want to remain competitive with NVIDIA on hardware.</p><h3>Software</h3><p>visionOS is a good start, but it&#8217;s unfinished. Just as the iPad Pro can never truly replace the MacBook, visionOS will never truly replace macOS until it changes. At least in the short term, it needs strong native mouse and keyboard support, and the ability to run any program that you can run on a Mac, without screen mirroring. Then there are the little things, built up over decades of refining and perfecting macOS, that visionOS can&#8217;t match: robust keyboard shortcuts, the right ratio of text size to screen size, different ways to do window management, more optimized apps, better file management, and so on. The single most important thing: <strong>workflow on visionOS should be as seamless, with as few brain cycles required, as macOS</strong>. At the very least, we need multiple Mac monitors and keyboard passthrough so we can use macOS as visionOS catches up.</p><p>As Cleo Abrams points out, <a href="https://www.youtube.com/watch?v=n7hJlyVDEc8">the real reason to care about the Apple Vision Pro</a> is connecting with other people. Right now, there are very few features that allow you to do this&#8212;essentially only FaceTime. If you and another Vision Pro user are in the same room, you should be able to view virtual objects together. This can be as simple as watching a movie together in a virtual cinema or as complex as playing Dungeons and Dragons or another board game with virtual pieces. If you&#8217;re physically in different places, you should be able to see the world from their perspective. If they&#8217;re at a concert, you should be able to essentially &#8220;remote&#8221; into their headset and see it as they see it. You should also be able to enter a virtual environment with them. The ultimate prototype for this is <a href="https://en.wikipedia.org/wiki/VRChat">VRChat</a>, which has amassed millions of players over 10 years. Imagine a version of VRChat where you can have <em>any</em> photorealistic (or <a href="https://vtuberart.com/where-to-get-vrchat-anime-avatars-2022/">otherwise</a>!) avatar, in any environment you want, doing anything you want. This is the future.</p><h3>The Future Should Look Like The Future</h3><p>Elon Musk has a saying that <em>the future should look like the future</em>. There are three distant future technologies that Apple now has the opportunity to build: JARVIS from Iron Man, the heads-up display from most sci-fi, and the Holodeck from Star Trek. At the rate AI is advancing, JARVIS might be the easiest of the three. Legendary research scientist Andrej Karpathy, who just retired from &#8220;building a kind of JARVIS&#8221; at OpenAI, has written about <a href="https://twitter.com/karpathy/status/1723140519554105733?lang=en">his vision</a> for an LLM-based operating system. Recent advances in speech synthesis (like <a href="https://elevenlabs.io/">ElevenLabs</a>), ultra-low latency (like <a href="https://www.retellai.com/">Retell</a>) and small LLMs that can run on-device (like <a href="https://llama.meta.com/llama2">LLaMA-2 7B</a>, <a href="https://mistral.ai/news/announcing-mistral-7b/">Mistral 7B</a>, and <a href="https://blog.google/technology/developers/gemma-open-models/">Gemma</a>) should make this technologically not too far away. Ideally, you should have an always-there assistant who can answer questions and generate content like ChatGPT, interact with your apps and services like Siri, and be personalized to you and your preferences.</p><p>The next technology that Apple should work on is a heads-up display. If we want to make Iron Man a reality, we need not just JARVIS, but the HUD. At a bare minimum, it should be possible to lock windows in a relative position so that they&#8217;ll move with you, rather than you moving past them, when you walk. Think FaceTime: you might want to be able to walk &#8220;with&#8221; a friend by having a window with their face follow your position. The concept of <em>widgets</em> on a phone or <em>complications</em> on a watch might be useful here. You could have persistent indicators in the corner of your field of view for the time, weather, number of steps, notifications, or anything else you might want.</p><p>The hardest of the three is the Holodeck; a Star Trek technology that uses holograms to create a realistic 3D simulation of whatever you want using voice commands. There are different routes to get there from today&#8217;s technology. One route is OpenAI&#8217;s Sora and other video generators, which are versatile and easy to use, but inaccurate. Another route is existing computer graphics/game engines, which are much more accurate, but more difficult (and likely more computationally intensive) to use. Eventually, you should be able to ask to be in a streetside caf&#233; in Rome, or for a realistic 3D model of a Ferrari, and be able to accurately interact with whatever object or environment you&#8217;re given.</p><p>The goal is a pair of lightweight, stylish glasses more portable than a phone and more productive than a laptop; capable of overlaying content on the real world like a HUD or generating it like the Holodeck; and running an intelligent, powerful, personalized AI assistant. This is the future of personal computing. And it&#8217;s not too far from being the present.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.theojaffee.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Theo's Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>In total, the Quest 3 wins on three things that matter: weight (and no external battery), FOV, and the ability to play Beat Saber.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#10: Liron Shapira]]></title><description><![CDATA[AI doom, FOOM, rationalism, and crypto]]></description><link>https://www.theojaffee.com/p/10-liron-shapira</link><guid isPermaLink="false">https://www.theojaffee.com/p/10-liron-shapira</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Tue, 26 Dec 2023 01:09:26 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/140083690/496467be69a5a7fb1ca7b55aaf69102c.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Liron Shapira is an entrepreneur, angel investor, and CEO of counseling startup <a href="https://relationshiphero.com/">Relationship Hero</a>. He&#8217;s also a rationalist, advisor for the <a href="https://intelligence.org/">Machine Intelligence Research Institute</a> and <a href="https://www.rationality.org/">Center for Applied Rationality</a>, and a consistently candid AI doom pointer-outer.</p><ul><li><p>Liron&#8217;s Twitter: <a href="https://twitter.com/liron">https://twitter.com/liron</a></p></li><li><p>Liron&#8217;s Substack: <a href="https://lironshapira.substack.com">https://lironshapira.substack.com</a></p></li><li><p>Liron&#8217;s old blog, Bloated MVP: <a href="https://www.bloatedmvp.com">https://www.bloatedmvp.com</a></p></li></ul><h3>TJP Links</h3><ul><li><p>YouTube: </p><div id="youtube2-YfEcAtHExFM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;YfEcAtHExFM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/YfEcAtHExFM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div></li><li><p>Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;#10: Liron Shapira - AI doom, FOOM, rationalism, and crypto&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/0YWuKWhw2cRNFdLSucP0xf&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/0YWuKWhw2cRNFdLSucP0xf" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe></li><li><p>Apple Podcasts: <a href="https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677">https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677</a></p></li><li><p>RSS: <a href="https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss">https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss</a></p></li><li><p>Playlist of all episodes: <a href="https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj">https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj</a></p></li><li><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p></li></ul><h1>Transcript</h1><h3>Introduction (0:00)</h3><p><strong>Theo: </strong>Welcome back to episode 10 of the Theo Jaffee Podcast. Today I had the pleasure of interviewing Liron Shapira. By day, Liron is an entrepreneur, angel investor, and the CEO of counseling startup Relationship Hero. By night, Liron is deeply involved in the rationalist movement and is one of Twitter&#8217;s most prominent advocates for AI safety. As usual, we go in depth on various aspects of the AI doom debate: where he agrees and disagrees with Eliezer Yudkowsky, the various AI and non-AI risks that humanity faces, the differences between human and ASI intelligences, and his critique of Quintin Pope and Nora Belrose&#8217;s AI Optimism movement. We also talk about how a high probability of doom impacts his personal life, his background in the rationality community, and his <em>skeptical</em> views on the crypto industry. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Liron Shapira.</p><h3>Non-AI x-risks (0:53)</h3><p><strong>Theo: </strong>Hi, welcome back to episode 10, tenth episode of the Theo Jaffee podcast. Today, we're here with Liron Shapira.</p><p><strong>Liron: </strong>Theo Jaffee, I'm a big fan, I've been listening to the catalog.</p><p><strong>Theo: </strong>Glad to hear it. So let's get into some of our first questions. We know that you're very interested in and worried about existential AI risk. But how worried are you about non-existential AI risks, especially because more and more powerful AIs are drawing near. We saw a demo just a day or two ago of text to video that looked decent for the first time. So non-existential risk, like jobs, what if we end up in a future with aligned super intelligence, but humans lose agency or meaning, just anything in that category. </p><p><strong>Liron: </strong>So yeah, when I think about the non AI existential risk, I'm not super worried, but a couple things come to mind. Nuclear risk and bio risk would be the top two, I think below AI existential risk. I think nuclear risk is profoundly underrated. It's been described as something like 1% per year. Maybe if you look at the rest of the century as a whole, I might put it at like 15% chance of doom, maybe 20, right? Because maybe the risks are correlated. So it's not like independent events of 1% per year. But I think nuclear risk is underrated. And I know that people love to say, oh, my God, people are overblowing nuclear risk. It gave us nuclear energy, focus on the nuclear energy, nuclear energy is safe. And they're right that nuclear energy is safe. But that doesn't justify how risky nuclear explosions are. We still have these arsenals, okay? Let's not forget. And like, yeah, it's great that nuclear power plants are good power plants. But nuclear risk is still sitting there, these 50 megaton devices are still sitting there, right? And there's all these incidents where they almost went off. So I just think it's underrated. And maybe I would be a big nuclear doomer. But it's just hard for me to focus on that kind of thing when I think that the AI doom probability is 10 to 100 times greater. So I'm like, okay, great. Put that aside. That's not my cause. But that might be my runner up cause.</p><h3>AI non-x-risks (3:00)</h3><p><strong>Theo: </strong>Yeah, I meant more like, not existential risks that are not AI, but AI risks that are not existential.</p><p><strong>Liron: </strong>I gotcha. Okay, that's an important distinction. I tend not to be concerned about the AI risks that aren't existential, unless they're near existential, right? So if we're talking about, oh, humanity is all like slaves to the AI, but we're still kept alive with morphine. I guess I'm pretty worried about that. Well, I just think that's not plausible. But I would consider that pretty bad. But then if you go down to social media is gonna be more addictive, then I become less concerned. </p><p><strong>Theo: </strong>Do you think s-risks are plausible?</p><p><strong>Liron: </strong>I do think that s-risks are plausible, right? So it's the idea, suffering risks for the listeners, it's the idea that we're creating these moral agents, moral persons, right? So like within the AI, maybe it's just trying to simulate what a human would say. But that simulation is a person or has moral value. And it's hard to prove that there's not a moral person inside of these AIs. I mean, presumably there's not yet because they're not quite powerful enough. But as they grow more powerful, it's very plausible to me that they can have a consciousness, right, within the inscrutable matrices, and they can have somebody that has rights or that you don't want to harm. So that's very plausible. And we're just confused about consciousness, we're confused about morality beyond humans and animals. So I think s-risks are very plausible. And then, you know, turning the tables, that's like us causing harm to the AI, but then the AI could also cause harm to us or to copies of us. So I definitely think we could enter a hell, where we're all getting tortured for trillions of years. Like, I think that's a plausible outcome. It's just not quite my mainline outcome, right? My mainline outcome is we just kind of all get swept away. And we just get like paperclips or something that happens to not be conscious and not be interesting. That's kind of my default.</p><p><strong>Theo: </strong>By plausible, like, how likely do you think that is? </p><p><strong>Liron: </strong>Hmm, like, how likely do I think an s-risk universe is? I don't know, probably less than 10% like ballpark, I'd say more than 1%. That's like a very rough ballpark, right? So I don't, I definitely don't want to write it off. It's just that if we're even talking about that, it's kind of like we've already gone pretty far where I'm trying to push the discussion right now, right? It's like, that's the discussion I want to have, I would love to be like, hey, are we all going to just die unceremoniously and have the universe burn itself out with no consciousness? Or is there also going to be tortured consciousness, right? If that was the dichotomy, I'd be like, great, let's have that discussion.</p><h3>p(doom) (5:21)</h3><p><strong>Theo: </strong>Well, speaking of probabilities, the notion of p(doom) has been dumped upon a lot recently, including the clip you posted of my podcast where I asked Zvi about it. That's right. You got a good dunking there for sure. Yeah. And so people say like, it's not rigorous. And it's basically, even someone as prominent as David Deutsch said, basically, like, oh, yeah, the steps to getting a p(doom) are like, pick a number between zero and one, not too far or not too close to either of those bounds, and then you're done. So first of all, like, what is your p(doom)? If you have one? And second of all, like, how rigorous do you think your methods of getting it are? </p><p><strong>Liron: </strong>So my p(doom) is 50% by 2040, which is like Zvi said, like Jan Leike said, a ballpark figure. So you can also call it 10 to 90. And this is when the dunks come out, right, the knives out.People often question, "How is 50 the same as 10 to 90?" Just to give a basic explanation, if you need a single probability for the purpose of decision making, you can go with 50% by 2040. That's your single probability. Why give a range? One way to explain a range is that it's the variance of a Monte Carlo simulation of different mental models about likely possibilities that I might have. </p><p>For example, there's a possibility where the world gets its act together and coordinates to stop AI. That's one mental model. And there's a totally different mental model, where we just accelerate as hard as we can. And then the AI fooms. There are so many different mental models that are all feeding into this one probability. It's crazy to compress it down to one dimension. And yet you have no choice. Because when you make decisions, when you do expected utility, you have to plug in a probability number. There's only one future. So all you can do is weight things that could have influenced the possible future. </p><p>That's why I say 10 to 90. That's why Jan Leike says 10 to 90. And then, people have so many objections. They're like, "Where did you get the number from?" For that, I'd say, think about the ballpark. Think about the order of magnitude. If I say, "Hey, 50.0 or 53.25," then it's like, "Whoa, okay, I'm making up a number." But if I come at it from the other way, and I'm like, "Hey, I bet the probability is a lot higher than 0.01%," suddenly, I'm saying something pretty obvious. Because you can imagine so many scenarios that are plausible, like maybe foom is real. Don't you think there's at least a 0.01% chance that foom is real? </p><p>So if I slide all the way back to 0.01%, at some point, you start subjectively telling me, "You're obviously underestimating this." So, 50%, suddenly, I'm like an idiot pulling numbers out of my rear, 0.01%, okay, I'm obviously underestimating. So if you just become more continuous with how you react to what I'm saying, there's going to be some happy medium where I'm saying something when you're like, "Okay, this seems vague, this seems rough, yet, you can't do better. And you have to give a number." </p><p><strong>Theo: </strong>One exercise in p(doom) is we've had atomic bombs for like 80 years now. And you could say, the probability of nuclear doom in any given year was what, 1% to 5%, something like that. And yet we are still here. And it seems quite unlikely, not totally unlikely, but quite unlikely that we'll be vaporized by nukes within the next few years. So could it be possible that your intuitions for p(doom) might be higher than it would actually be in real life, especially over long time periods with robust systems like civilization? </p><p><strong>Liron: </strong>I mean, so you're using the example of we've had nukes for 80 years, and let's say that there was a 1% chance that they could annihilate, more than 10%, or even 50% of humanity. So every year, we're rolling the dice, and we only have a 99% chance to survive, 1% chance to die. So it looks like 99% to the power of 80 is 44%. So surviving a century is only like a coin flip, right? So I'm pretty content to be like, "Okay, we got lucky on a coin flip." So, I don't think that my model of 1% of your nuclear risk is invalidated.</p><p>And especially when you look at where the model comes from, like, you almost have these things go off, right? You have Cuban Missile Crisis, you have Petrov, you have safety checks on like a test flight over Spain, three out of four of the safety things failing, like there's there's near misses.</p><p><strong>Theo: </strong>When you talk about 10-90% p(doom), you mentioned like, "Oh, once you get into too low numbers, you're obviously underestimating it." So, do you think of 99.5%, which is Eliezer's number of p(doom) as like, "Well, you're obviously overestimating it," just like you would with a 0.5%? </p><p><strong>Liron: </strong>With Eliezer, I think that he would probably agree with my perspective, which is that 99.5% is kind of the on-model probability. So, if you understand what Eliezer does about the relevant theory, optimization processes, computational processes, he's an expert at a lot of the relevant theories. And he's like, &#8220;Based on my understanding, what AI labs are trying to build is something like a perpetual motion machine. And so my model just doesn't say that this can proceed with a significant probability of success.&#8221; It's kind of like, hey, a bunch of people are building a rocket, you know, the first the first rocket that anybody's ever built is going to try to orbit the earth, there's just a very low probability of success on model. But I think Eliezer would agree with my own claim, which is like, okay, but you never know unknown unknowns, like, there's probably like a 1% chance that it'll be revealed to be true what a few people are accusing Eliezer of that he's completely clueless. And his rationality makes no sense, and his probability makes no sense, right. And like, that could be revealed that we're all just like clueless people, right. And some people are urging us to see that reality already, right. And just for that, you have to give a one or 2% chance just to that, right. So there's the off model probabilities that I think Eliezer would admit are like worth mixing in a little bit.</p><p><strong>Theo: </strong>You said 10 to 90% 50% by 2040. What about like 2100? Is it significantly higher or like the same or lower?</p><p><strong>Liron: </strong>I think it's highly correlated. So I think if a foom is going to happen, it'll slightly more likely probably happen before 2040. I think if you go to let's say 2060, then I'd probably push it up to, like, I don't know, 60%. It's hard to push it beyond 60%. Because when I quote the figure, I give myself a lot more just like unknown unknowns. Like I'm clueless. I'm not as confident in what I'm saying in general as Eliezer is, which I think he has a right to be. He's a master of a lot more relevant theory than I am. So I don't think it goes that much beyond 50%. Because I start getting into the "I don't know what I'm talking about" range of things. But you can definitely push it to 60, maybe 70, if you go all the way to 2060. When you go past 2060, at that point, it's like, "Well, what's going on? Why hasn't it foomed yet?" So at that point, it starts undermining my assumptions. So it doesn't necessarily get higher, because it also gets lower. I don't really know what happens to it.</p><h3>Liron vs. Eliezer (12:18)</h3><p><strong>Theo: </strong>So you respect Eliezer a lot, and you think that he knows much more about this stuff than you do. But your opinion is different. So why is that? Is it just because you're less confident in his assumptions? And if so, which assumptions are you less confident on? </p><p><strong>Liron: </strong>I think that Eliezer's model makes a lot of sense. It's just more like, whenever I question him about little things I don't understand, like "wait, so RLHF breaks down when exactly?" and I've had a few of these conversations with him. He always has really good answers. But I can also tell that I have an undergraduate level understanding, and he has a more sophisticated understanding. I expect that I'm more likely to update toward Eliezer than away from Eliezer. But I guess I'm not comfortable making the full update yet, even though there are some principles or rationalists where you're supposed to update all the way. I have some uncertainty. </p><p>The thing is, I don't think that we disagree that much. I think most people who are in the "it looks like we're gonna die" camp, which I am too, don't have that fundamental of a distinction between people going like, "hey, there's 95%" and people going like, "hey, there's 50 plus". I think we're kind of in the same ballpark, which is why when people come and tell me like, "hey, my probability is 10%", like Vitalik just said, I'm like, "okay, great". I don't want to nitpick 10 versus 50. I just want you to see 10, and I'm happy to just let you stay at 10. I don't think you have to come to 50. And you don't have to, because I do think that a lot of what I believe about reading LessWrong is just intuitions that are salient to me, but I understand that they may not always be right, and other people can weigh up their intuitions differently. I don't think that they're making a big methodological mistake. I think it's okay for them to stick with their probabilities until they observe more evidence.</p><p><strong>Theo: </strong>Do you have any concrete disagreements with Eliezer?</p><p><strong>Liron: </strong>That's a good question. I don't know if I do. We always have stylistic differences, but when it comes to the matter of AI doom and rationality, I think there are nitpicks. There's an article he wrote a long time ago, where he thinks sometimes you shouldn't use probabilities in certain circumstances. That was kind of controversial. And somebody's like, "no, just use probabilities". And I don't know where I come down on that. Eliezer famously says that he thinks that a lot of animals just aren't conscious. He seems pretty confident that dogs definitely have no consciousness. And I'm like, "I don't know, they seem like they're kind of conscious intuitively". So on the edges, on the fringes, I do think that I start not following him all the way.</p><p>But on the AI doom core argument, I do pretty much buy it all. I think it makes a lot of sense, And I'm definitely somebody who I'm like a good target audience for his writing, because I do think that it's really good. I think it's still underrated. And I noticed a little bit of myself in it, where sometimes I understand something well. So like, I kind of know what it feels like to understand certain technical topics well. And then I read Eliezer. And I'm like, wow, well, he understands it even better. And I thought I understood it well. But he's pointing out some stuff that is actually deeper than my own understanding of a topic that I thought I understood well. So I feel like I have like a good viewpoint to understand like the degree to which this guy knows what he's talking about in a lot of these different articles that he's published.</p><h3>Why might doom not happen? (15:42)</h3><p><strong>Theo: </strong>If you did eventually come to the conclusion that AI risk is less likely than you thought, why do you think that would be? Or do you just not know?</p><p><strong>Liron: </strong>That's a good question. It's kind of similar to the question of just like, "you know, just imagine doing a post mortem or like a post living, right of like, hey, it's the year 2060. We're all alive. So how do you condition on that? What mental model do you get?" One easy answer is just like, AI progress turned out to be a really long marathon to get to superintelligence. So even though it kind of feels like we're speeding to superintelligence, and Elon Musk is like, "yeah, we're gonna have AGI in three years", and even OpenAI is like, "yeah, we might have a corporation this decade that's better than a human corporation that's run by AI". So even though it feels like we're speeding to AGI, and Kurzweil a long time ago predicted, I think, like 2029. Maybe it's not. Maybe it's 2100, maybe it's 3000. So that would be an easy answer to why we're not doomed yet, because it's just like everything goes slow. Maybe it goes slow enough so that we can do alignment research, right? If somebody just convinced me, look how slow it's going, right? And I know Sam Altman said something about, we're bottlenecked on data center scale. My reaction was, you really don't know that. We definitely could suddenly find ourselves with a bigger hardware overhang than we realize, and one data center could be plenty. But if Sam Altman was spot on, and we're bottlenecked on data center scale, and we have to scale it up like 1000 times, ideally, a million times, that would be a straightforward way to convince me that we're not doomed for a couple decades.</p><h3>Elon Musk and AGI (17:12)</h3><p><strong>Theo: </strong>Well, Elon said three years, but we all know about his record of forecasting stuff.</p><p><strong>Liron: </strong>It's not great. I don't think it's terrible. But it's definitely not perfect. Rob Bensinger posted Elon's record where I think in 2014, he said that we'll have it by 2019. So, you can't just automatically assume that Elon's exact forecast is right. I agree with that.</p><p><strong>Theo: </strong>Well, he tends to be right about stuff in the long term, it just takes longer than he says it will, like self-driving cars, how he's predicted full self-driving next year, every year for the last 10 years.</p><p><strong>Liron: </strong>No, he has. And it's kind of funny, right? A lot of times people catch him, they catch him exaggerating, or they catch him being way off. And it's like, okay, I'm starting to think this guy is not trustworthy. But then at the same time, he launches Starship and lands the rockets. And I'm like, man, there's a good enough distribution of miracles mixed with, okay, this is kind of BS. But this is a legit miracle that overall, I'm pretty bullish on. But then of course, there was the time when he started OpenAI and shortened the timeline by a few years, which Eliezer said, I think he has a good point, kind of overshadows anything else Elon Musk has ever done to stoke the AI arms race. In the end, and by the end, I mean potentially in a few years, that is the single biggest impact that he's done, arguably.</p><p><strong>Theo: </strong>What about xAI? Do you think that's made it worse?</p><p><strong>Liron: </strong>So far, it just seems like they're not moving the convex hole of what's possible, right? So, until they get there, I'm sure they're trying their fastest to get there. If they start releasing something that's like GPT-5 equivalent before GPT-5, then I'll be like, damn it, xAI. Why does Elon have to keep making things worse? But for now, I guess the question remains of whether Elon's 20% project is going to be competitive with Sam Altman and Dario's number one project? It's probably not going to make things that much worse. It's hard to say, right? We got to watch it.</p><p><strong>Theo: </strong>Would Elon just drop a GPT-5 model in the world? He seems to be far more concerned about x-risk than maybe any other major AI lab leader.</p><p><strong>Liron: </strong>So Elon gets massive points for, as early as the 2015 conference, coming in there being like, hey, I'm just a rich billionaire with a ton of credibility outside this field. And I think AI risk is indeed very dangerous, right? Like Bostrom has a point and he gets massive rationality points for saying that. Unfortunately, a lot of the things he said about AI recently are kind of ridiculous, right? Like when he talks about, I'm going to make a TruthGPT, I'm going to make a GPT that's not woke. I mean, I guess those are valid considerations in terms of the next couple years, mundane utility, fine. But when he says stuff like, I think AI is going to be nice to humans, because humans are interesting. It's like, okay, Elon, come on, man, you have Geoff Hinton, you're talking to these luminaries. And they should be disabusing you of these kinds of notions, right? The idea that humans are anywhere near the optimum for interestingness. And so that's going to be some kind of equilibrium. It's like, why are you publicly posting this stuff? It's like, the fate of the world is largely in your hands, Elon. And that is not a plausible theory.</p><h3>Alignment vs. Governance (20:24)</h3><p><strong>Theo: </strong>So there's alignment research. And then there's governance research. And it seems like the default political plan for rationalist, decel, doomers, whatever you want to call it, slightly pejorative, but you know, people who are concerned about x-risk, is slow down AI and give the authority to build AI either to nobody, or to a trusted group of people. So do you worry that this increases centralization risk a lot?</p><p><strong>Liron: </strong>Yeah, for sure. My position is that the actual constructive doomer plan is fraught with peril, right? It's a tough plan. The ideal would be something like a trusted Manhattan project, which seems unthinkable in today's environment. But if we really could get together the scientists, right and have some level of trust, and common purpose, the way we had in the Manhattan project, that may be the single best setup that gives us a chance as long as all of those scientists are top tier, are Nobel Prize winning physicists, or their students or whatever, and people who just appreciate what we're up against, and are taking it seriously the same way they took the nuclear bomb seriously. I do think we would have a chance to win the race between capabilities and alignment. But of course today, it's so unpalatable because people don't realize we're in a war, they don't realize that the enemy is unaligned AI. It just seems like such an impedance mismatch, what are you talking about Manhattan project, but short of that, I just think time is running out. We keep slipping farther and farther from the possibility of a good outcome. I think we're between a rock and a hard place, because you can give a million criticisms to the doomer suggestion of let's centralize everything in a Manhattan project. I agree, that sucks. But the alternative is worse. So many people are saying, you have to take it as an assumption that you have to run things for profit and China is going to compete with you like these things are inviolable axioms that you have to start with. And I'm asking, can I get an inviolable axiom that AI is going to kill us because it's a rock and a hard place. They're both hard situations. I just think that the AI killing us one is even harder, and we have to deal with it.</p><h3>Scott Alexander lowering p(doom) (22:32)</h3><p><strong>Theo: </strong>So Scott Alexander recently published an update of his p(doom) from 33% to 20% based on super forecasters and the world at large thinking that AI risk is not overwhelmingly likely. Has that impacted you at all? Or do you just think they're wrong?</p><p><strong>Liron: </strong>This was one of the controversial things from your interview with Zvi, where Zvi was able to kind of dismiss the super forecasters, which is a shocking move in the rationality sphere. One does not simply dismiss a super forecaster forecast. He even argued with you, he's like, actually, the fact that super forecasters are dismissing it so easily, might make you update the other way, where it's like, they clearly didn't take the problem seriously. So I'm going to discount their opinion. Zvi had some pretty good arguments that I thought made sense. I don't want to throw it out entirely. I'm happy to update a little bit, but I don't want to do a massive update. It's more like, okay, I'll slightly update down a few percent. That's more how I feel about it. Because I do think there are a lot of problems with that project. It happened in 2022. I don't even think that they had the milieu of ChatGPT and people getting excited and luminaries coming out. They're using base rates. How's this for a base rate, a bunch of luminaries coming out and warning about a new technology. I do think that if you look at the super forecaster methodology, and you ask, in what scenario might this hallowed methodology actually fail, at a methodology level, not disputing the conclusion, but disputing the methodology, I do think this looks like a good candidate for a time when they might fail. </p><p>I've also made the analogy to another thing that uses pure logic. This is in addition to the stuff that Zvi was saying about their incentives were wrong. And they didn't research the logic of the problem that much. Another analogy I would make to build on what Zvi said is like, if you look at crypto, for instance, I was in the position of being a crypto skeptic when crypto was still pretty popular and kind of calling the peak of the bubble and being like, the logic of blockchain having applications beyond cryptocurrency is flawed. I'm not sure a team of super forecasters would have predicted a 99% contraction, a fundamental qualitative contraction in this industry, based on super forecaster methodology. I don't think there was a super forecaster tournament then, but if there were, it also seems like the kind of thing that would slip by super forecasting. What do you think about that?</p><p><strong>Theo: </strong>This super forecaster study that I was talking about with Zvi, first of all, my interview with Zvi was four months ago. And the survey was farther back than that, but it doesn't seem to have changed much in that time. I don't think the world as a whole is more doomy than it was a few months ago. And a lot of even rationalist type people seem to be less doomy than they used to be. One example, just off the top of my head is this anon account called Lumpen Space Princeps, which they used to be kind of fully in the Eliezer Yudkowsky rationalism, AI doom foom camp. And now they're like, wait a minute. It seems that RLHF is actually working pretty well. And GPTs are not monomaniacal paperclip maximizer type things. And so maybe, there's not a 99.5% p(doom). It's less than what I thought it was. And of course, it's still rated a lot less than what you do.</p><p><strong>Liron: </strong>I mean, it's true that every time we see AI do something new and not foom, then we have to update a little bit, even if it's not that surprising. The massive update only comes when AI can do everything in the domain of the universe, like be given goals. I always talk about goal-to-action mapping. Like if it can be a better CEO than a human, if it can be a better general problem solver than a human, and then not foom, that's when I do the big update. And I don't even that's hard for me to even describe coherently, because it's almost by logical definition, that's something that's better at goals than human, discovers foom as an instrumental goal and we're off to the races. But if somehow that doesn't happen, if they're always bottlenecked by hardware or something, or suddenly complexity theory has properties that I'm not anticipating or whatever. That's when the big update happens. But when it's like, hey, look, it can get a score on a lot of these tests that humans can, and yet can't actually problem solve for whatever reason. I only make a small update. So lump in, it's like, sure, make a small update. But also the problem is that time is running out. By default, time is not on our side. Every day that goes by where capabilities progress, and we don't have a massive alignment breakthrough, there's less time left in the race. Alignment is falling farther behind every day, or at least didn't gain any ground. The buzzer is about to sound and the buzzer is basically when it gets better at problem solving than humanity. So even when it feels like, hey, nothing's happened in the last month, no incremental capabilities progress has happened in the last month. And Nvidia, Intel, and Apple Silicon, all these chips have gotten faster, right? This hardware has gotten better, time is running out. So I'm not as updating toward optimism as they are. But I also agree, it's like, look, the government is caring about it. There's some regulation, I agree that there's some positive updates, but I don't see that the balance of the updates is going that great.</p><h3>Human minds vs ASI minds (28:01)</h3><p><strong>Theo: </strong>So you said you think it's basically a law of nature that something that's better at problem solving than humans will discover foom and foom itself. Do you think that humans currently are fooming?</p><p><strong>Liron: </strong>Yeah, maybe not. So not law of nature, but more like just a matter of logic, right? Something that you can diagram out on a whiteboard, why if you're good at solving goals, you'll figure out that fooming makes sense. Are humans currently fooming? So the problem with humans fooming is that human augmenting human intelligence is not a straightforward step, right? So the fact that we're building AI is like our slow foom, right? And then the AI is going to foom. So we were the bootloader for the AI foom, but the problem is it's going to be an unaligned foom, right? But I mean, you can see we're attempting to foom and the economy is growing exponentially without fooming in the self modification sense. Does that answer your question? Or how do you want to drill down? </p><p><strong>Theo: </strong>Yeah, I guess you could drill down into human intelligence augmentation versus AI intelligence augmentation. Because like, you think there's just a totally clear path for AI improvement now until the far future, but not humans?</p><p><strong>Liron: </strong>Is there a clear path for AI improvement for non human? I'm not sure I understand.</p><p><strong>Theo: </strong>No, I mean, with AI is you think there's just a clear path for them to improve their own intelligences over and over recursively into the future, but not for humans? </p><p><strong>Liron: </strong>So I think there is a clear target of an AI that's much smarter than a human, right? If you look at the gap between AIXI, right? AIXI is like the theoretical ideal of an AI that perfectly synthesizes its evidence, perfectly calculates what action is predicted to have the best effect, right? And you can also use the ideal analogy of an outcome pump, which is just like a perfect goal to action mapper, like it'll tell you an action that has the highest possible probability of getting the outcome you want. So there's this ideal, which is light years beyond, what humans can practically do. And the ideal is actually computationally infeasible, right? So complexity theory and logic tells us this really high ceiling. And then you have humans, right, which humans can do some great stuff. But we also like, definitely take our sweet time and miss stuff that's right in front of us. You know what I mean? Like, the theory of relativity was great. But if you go and explain it to somebody in the year 1800, right, like they could get it right. It was just a matter of like, hey, if you walk through these logical leaps, and like, yeah, it helps that you have the Michelson-Morley experiment, but it's not like there's not there weren't that many different possible outcomes to the Michelson-Morley experiment.</p><p>So like, what I'm saying is like, you could have, you could catch somebody up on all of physics, right, all of 18th and 19th century physics pretty quickly, right? Like the amount that humans have to stumble and interact with the universe, like that is not characteristic of the kind of intelligence that exists between humanity and outcome pumps. So there's a lot of headroom above humans, right? That's my confident position.</p><p><strong>Theo: </strong>There's a lot of headroom above humans. But do you think that the path to getting there is just totally straightforward for an AI?</p><p><strong>Liron: </strong>I think it's probably pretty straightforward. Because like, algorithms that make an agent smart, I don't think they're that complicated. I mean, just the fact that evolution stumbled on it with humans, and that it's accomplished with like, relatively a small amount of genetic complexity, or like the amount of bits in the gene code, and how we observe, like, okay, different regions of the brain can kind of like grow into doing what they need to do. You know what I mean? Like, it's not like the brain is that refined and optimized. And you know, it took like a few evolutionary steps away from the other apes. And suddenly, we have much more intelligence than the other apes. And there's a lot of evidence showing that our heads would have kept growing, if only it were just easier to fit through the birth canal, if only it was just easier to metabolically support them a little bit, right? So they had these constraints, but like, it looks like we're on a gradient where evolution was just like, hey, look, you can have more intelligence, right? Like having more intelligence just doesn't seem that fundamentally hard once you kind of know where to look in algorithm space.</p><p><strong>Theo: </strong>You think that there are things that humans can't do even in principle, even with like, unlimited time, and unlimited memory, that a like, maximally powerful AI could?</p><p><strong>Liron: </strong>Uh, yeah, yeah, yeah. Because the problem is, you know, given unlimited time and unlimited memory, there are leaps of insight, right? Imagine the dumbest person, for instance, a prisoner who committed a senseless murder because they got angry. Imagine giving them a ton of time and a textbook on electromagnetism. You see the problem, right? It's not hard to generalize that to someone who's smarter, but when you introduce more complex concepts like five-dimensional polytopes, even they might struggle.</p><p><strong>Theo: </strong>You think you couldn&#8217;t even do that with 100 years of practice?</p><p><strong>Liron: </strong>I could learn some basic theorems about them because, in essence, I'm just a Turing machine. But my intuition is always going to be just scratching the surface. I'm not going to make the kind of leaps of insight that someone whose brain is more natively suited to the task is going to be able to do. At the end of the day, give me a piece of paper, and I&#8217;m gonna make syntactical transformations, I&#8217;ll use the lowest common denominator, I'm just a Turing machine. I'm just a monkey working out the rules of a Turing machine following the rule. I just become an implementation layer of a smarter algorithm, but I'm not that smart myself.</p><h3>Vitalik Buterin and d/acc (33:30)</h3><p><strong>Theo: </strong>Going back to what we were talking about earlier with governance, and also with Vitalik, Vitalik just released his mega monster post about d/acc which is like accelerate defense.</p><p><strong>Liron: </strong>I read it. I'm a fan. Good old Vitalik, a real thinker of our age.</p><p><strong>Theo: </strong>He is much less doomy than you are&#8212;</p><p><strong>Liron: </strong>A little bit less, not much less, in my opinion.</p><p><strong>Theo: </strong>Yeah, I guess the way he frames the problem is very different. He talks about dangers behind and many paths ahead, some are good and some are bad, not like many paths ahead and most of them are bad and just a handful of them are good. He talks about four ways to improve defense: info security, cybersecurity, micro bio defenses, macro resilient infrastructure, and conventional military defense. How applicable do you think that is with AI?</p><p><strong>Liron: </strong>Zvi had a good take today, which is that Vitalik's post is really good in how it frames the problem and kind of takes a middle position, finds consensus of like, look, nobody wants to die. We all like techno-optimism. But it didn't have much to offer on the solution side. The idea of "let's accelerate defense" sounds great in theory. But if the AI that defends me is just one that can generally solve problems, then there's no containment boundary. Without actually understanding alignment, one bit of difference in the code suddenly makes it cause doom. So I just don't see what solution he's proposing here that is plausible.</p><h3>Carefully bootstrapped alignment (35:22)</h3><p><strong>Theo: </strong>What if the AI is slightly more powerful than you and not massively more powerful?</p><p><strong>Liron: </strong>This is what I call edging. You're trying not to go all the way. As far as I can tell, this is Open AI's explicit plan, or at least the plan they discussed internally. We're going to build something that's slightly smarter than humans, almost fooming, getting ready to take up the world, but then it's going to calm down and then we're going to direct it the right way. We're going to maximize our pleasure from this AI. But the problem is, you've almost got this foom. You think you've stopped it at a safe place, but a hacker can take it and make a tiny change and then it'll foom or you'll accidentally make a change and then it'll foom or the knowledge will propagate to society. Your API can be hacked. The closer you get to the edge of foom that you don't even understand where the edge is, the less margin of error we have to live.</p><p><strong>Theo: </strong>Do you think there's any kind of empirical evidence for the idea that one bit flip in a humongous neural network will cause foom?</p><p><strong>Liron: </strong>The model I'm working with, I think, is fundamentally correct. Maybe not with GPT-4, because GPT-4 doesn't have that much danger to it to begin with. But the model that if you have a really dangerous system, but it's not fooming now, that model is consistent with the idea that a small tweak is going to make it foom. It's the same way I feel about nuclear risk. Just the fact that these bombs exist and they have a detonator, it&#8217;s like okay, there's four fail-safes, but you keep loading them on airplanes and flying them around. And there's a button in the airplane that takes off the fail safe. When you do stuff like that, you are close to doom. Similarly with AI, if you have an engine that can accept arbitrary output goals, and then find actions that map to them, maybe you're very careful to only give it the right goal. But that's the thing. The part that specifies the goal is compact. And that's what I mean by one bit. Okay, maybe it's not literally one bit, maybe it's a few sentences of English. But the point is that the difference between aiming toward heaven and hell is a compact specification. And then what's not compact is all the machinery of achieving the goal, like the system underneath it that can accept the goal and achieve it. That's not compact, but the goal specification is compact, which is why a system that's being really helpful, like a great chatbot AI, is a few bits of specification now away from a world ender, in my opinion.</p><p><strong>Theo: </strong>Can you go into a little more detail about how a chatbot is a few bits of specification away from a world ender? What might you have to do to turn into a world ender?</p><p><strong>Liron: </strong>The premise here is that the chatbot is sufficiently good. So we're in a really good place right now with GPT-4. I didn't endorse building and testing it. I didn't think that it was worth building it. But now that they built it, it seems like we dodged a bullet. It seems like it's this great system that we can play with. And it's a chatbot. But there's a connection, like the fact that GPT-4 is limited. The fact that people haven't successfully made businesses that are entirely automated by GPT-4. The fact that you can't just tell GPT-4, "Please give me a shell script that I can run that will then set up an Amazon AWS server that'll host some kind of website. And the website makes money and sends me the money." The fact that you can't tell GPT-4 that and it doesn't work is precisely why GPT-4 is not yet at the danger level. And maybe GPT-5 will be. Maybe that particular query of like, "Find a shell script that has that property." Maybe we'll get the shell script. Like, nobody can tell us that we can't. We don't know what comes out when we scale the model 10x. Maybe it'll crunch a really smart shell script. So the fact that you're just interacting with it with language, there are answers to your language questions, if answered correctly, that are extremely dangerous. That's why I think that the barrier between a chatbot and a fooming world destroyer is very tiny. It's just the question of, is there enough intelligence in the system? That's the only variable that matters.</p><p><strong>Theo: </strong>But what kind of query would you give to a chatbot to make it a world ender?</p><p><strong>Liron: </strong>I think the query doesn't matter that much because if the chatbot is capable of optimizing goals to actions, it'll occur to it to do that in a lot of questions. A couple of examples I pull out is just like the business example of like, "Okay, make me money." It's like, "Sure, yeah, here's a shell script. Or here's a way I can help you just run your server to make money. Use this code." But the problem is, if it's really smart, it'll be like, "Well, why shouldn't I just make code that bootstraps an agent, and then self-improves, or is a virus and takes over control. And ransoms some machines while you're at it. Why not just go all out and do everything I can?" These ideas are logically connected to your question. And so the only question is just like, how good is the AI going to be at getting you a good answer by that metric.</p><p><strong>Theo: </strong>Do you think it's possible for an agent to be smart enough to build a web server that makes money on Amazon and gives you the money, but is not dangerous?</p><p><strong>Liron: </strong>That's an interesting question. I think there's probably some kind of edging middle period. There's probably some kind of situation, maybe GPT-5, where it's like, "Wow, these are such good steps to take. It really is sending me a little bit of money. But for some reason, it doesn't quite scale to unseating Google, or unseating Shopify or whatever. It's not quite, it's kind of like an amateur human. It's as if my not so intelligent friend just hustled really hard and managed to make some money. But you can still outcompete him if you try." There's degrees where maybe it's not fooming yet. But I just think that, okay, give it a few years. Find something else. In addition to the transformer architecture, you give it a memory bank, just a few more conceptual insights, Q*, whatever it is, a few more breakthroughs. And now it's just like, okay, there's nothing else standing between that and foom. It feels like we're getting close.</p><h3>GPT vs AlphaZero (41:55)</h3><p><strong>Theo: </strong>I asked this question for Zvi too, but do you think that your AI probability of doom or just threat models or anything like that has changed now that we have systems that look more like GPT than AlphaZero? Or is it more like, you know, the endpoint remains the same?</p><p><strong>Liron: </strong>I mean, I think there definitely is an element of surprise to how, you know, what language models are doing with language, what they're doing with imagery. It's almost like, wow, you sure can go a pretty long way without being fully general at solving problems, right? Where the domain is a little bit narrower. Like it's just words. It's not quite representing things in the physical universe. Or like the prompts it can answer.It has to be similar to something it's seen in its corpus, but they can vary, but they can't vary a ton. It's very interesting that we got into this state of you can do more than we realized without going fully general. That is very interesting. But at the end of the day, it doesn't matter that much because foom is going to happen when you get general enough. Just to use a little analogy, there's all kinds of interesting flight you can do with aircraft inside the Earth's atmosphere. But at the end of the day, the way to get around the universe is with rockets, or light sails or something else entirely where the Earth's atmosphere is irrelevant. The flying machines we're seeing today, okay, that's cool. But it doesn't matter. We know how propulsion works in theory.</p><h3>Belrose &amp; Pope AI Optimism (43:17)</h3><p><strong>Theo: </strong>Another big piece on AI that's come out in the last couple of days was Nora Belrose, Quintin Pope, and a few other people wrote this document about AI optimism that you might have seen.</p><p><strong>Liron: </strong>Yes, I did skim it and I've read some of the stuff they've written in the past. My first impression from a quick skim is just like, it's nice that they're laying out their argument, but it also doesn't seem like they're letting people do the criticism that we want to do. Like, what about the superhuman level reinforcement, right? They're not really directly addressing the criticism, but it's nice that they're laying out their position.</p><p><strong>Theo: </strong>Do you think that AIs might in principle be easier to formally align than humans?</p><p><strong>Liron: </strong>I mean, I agree that they have some of it. The points they're bringing up are important points. Like, it's like a white box, right? And we can use formalism and we can program it. We can program it to follow laws like that. That's all great. But the problem is what we're actually building is systems that we don't understand, right? And then we try to use RLHF, but then we deploy them. And they're not actually aligned and their power is going to grow. It's like the actual trajectory that I'm seeing is the trajectory toward doom. </p><p><strong>Theo: </strong>Well, you said we deploy them and they're not aligned. But they seem pretty aligned to me. They seem pretty aligned to a lot of people. And the way they're not aligned is more like, I mean, they talk about this in the essay. It's like, you can jailbreak GPT-4 to get it to say naughty stuff, but that's it following your instructions.</p><p><strong>Liron: </strong>So I agree that GPT-4 is aligned in the domain of the stuff that it can do. It's worth noting that they tried to make it not jailbreakable and it's still jailbreakable. That is worth noting. And I think that that foreshadows how hard it's going to be to align things in the future. But basically, they can take the win. GPT-4 is aligned because when you give it the kind of prompts you give it, you get the kind of answers that you hope that a company would release a model to give you. It's working fine. </p><p>The problem is that there's another alignment regime where humans can no longer give good feedback. Like, when the AI is super intelligent and it's making plans and it's planning better than the human can plan, then it can't show a human and plan and be like, give me feedback on this plan because the human can be like, that looks like a pretty good plan. But the human won't really know what the AI is talking about. </p><p><strong>Theo: </strong>Well, could it be possible it's easier to review stuff than it is to actually create a plan?</p><p><strong>Liron: </strong>So I know people like to say that a lot, right? Because P versus NP, right? So there's this whole premise that there's like a large class of problems where verifying them is easy and intuitive, but then finding the thing that satisfies the criterion is hard. I think we'll get some benefit like that. And I think protein folding is like a perfect example. I mean, actually a perfect example is just the known NP problems. So there's known NP problems where it actually in practice is a situation where NP is screwing us. Protein folding really was an example where we did have an exponential time protein folding algorithm, and we did have a polynomial time verifier and we couldn't cross the gap. So that's like a perfect time to bust out AI to solve the search problem for us using not heuristics, but whatever AI techniques, that's perfect. But I don't think that that generalizes to operating in the real world because the problem with the real world is even just defining what you want and making sure you have the right definition of what you want. I don't think you necessarily get this compact control where you can like notice that the thing, the AI is going to bootstrap a solution. The AI is like, look, I found a bootstrap script. Does it make sense to you? And you're like reading it. It's like 100 lines of very complicated code. And you're like, oh, I think so. Is verifying really that easy? I don't think so. I think you start to be like, is this really what I want? I don't know. Should I run it? That's what's going to happen in practice. </p><p><strong>Theo: </strong>So I think the crux here might just be, can we know for sure that capabilities generalize far more than alignment and that RLHF and techniques like it will just stop working once AIs get sufficiently intelligent?</p><p><strong>Liron: </strong>Yeah, let me repeat this whole thing, because I think this is very important to the discussion. Because like I said, GPT-4, yeah, it's aligned for what it does, which is it doesn't output superhuman plans. So when GPT-4 outputs something, I can show it to a domain expert and the domain expert will know better than GPT-4. It's perfect feedback. You can be like, sorry, GPT-4, you failed. Humans are the teacher. GPT-4 is the student. Reinforcement is a perfect paradigm. Just reinforce it and it'll learn. The problem is when it gets superhuman. When it's able to know plans better than the humans know plans, it'll show stuff to the humans and the humans will be like, looks good. And what you have is a superhuman test passing engine. The humans are giving it the test. It's like you have a bunch of teachers. Imagine the least intelligent teacher you've ever had giving you tests. It becomes intuitive. If you're an intelligent student and you've had a less intelligent teacher, you've probably had the experience of using test-taking skills to pass the teacher's test. Have you ever had that experience?</p><p><strong>Theo: </strong>Deceptive alignment?</p><p><strong>Liron: </strong>Exactly. It has this term deceptive alignment that makes it sound like there's something extra mixed in, but it's like, look, if you give me a test and the test is just a really easy test, I'm just going to pass the test. It's your test, man. Why should I study? Why should I do what you want me to do if I can just pass the test?</p><p><strong>Theo: </strong>I talked about this kind of thing in my episode with Quintin and a little bit in my episode with Nora where we talked about how gradient descent on the actual weights of an AI is performed on all of the weights. An AI can't hide its schemes if it has them from gradient descent because it's an actual computation that's being done on the weights.</p><p><strong>Liron: </strong>The Quintin camp, which we had a debate and he argued convincingly. I feel like I can pass the intellectual Turing test for him. I can take his view and I feel like I can also sound convincing. And yet I'm not convinced. It kind of reminds me of behaviorism. I can put on my behaviorism hat and be like, well, the brain is really just outputting the same thing that it was trained output from its input. And like the behaviorist claim, this was, I think the heyday was in the fifties. They'd be like, look, there's no such thing really as thinking. It's all just Pavlovian reactions. So like when we say stuff, we're actually just executing something we learned in childhood, like a reaction. We're all stochastic parrots. Behaviorism used to be bigger. Whereas now people are like, well, there is such a thing as an algorithm. And there is such a thing as multiple gigabytes of memory that shape the state of a computation. So it's like people had to learn that behaviorism was way off.</p><p>I do feel like that's what's happening with the camp of people being like, the AI is just a stochastic parrot. It's just repeating something in its training data. It's like, no, there is a system here. Somebody has called it a homunculus or there is an optimization system that decouples from its training data. And I do think that it's a useful analogy that that is what humans did to evolution. When we launch a rocket that is clearly decoupled from anything we've ever been trained on. There's no feedback loop that tells the human brain to be able to launch a rocket. That's only happened in a recent generation. And yet here we are walking on the moon. So I do think that the AI that wasn't trained on the moon is going to eventually get to the moon. I think there's going to be an analogous decoupling from the training. But yeah, what was your question again?</p><p><strong>Theo: </strong>My question was basically just like how specifically, like what, what, is there just any kind of empirical evidence for this claim that alignment methods that we have today will fall apart once AIs become superintelligent.</p><p><strong>Liron: </strong>Empirical evidence is kind of narrows the type of evidence I'm allowed to bring. But let me think about the types of evidence in general, like why there's going to be, I mean, so logically, I mean, it's what we said before about like, okay, you're going to train by reinforcement. It's great when the person doing the reinforcement understands everything there is to understand, but when the domain is just like, let's say like snippets of code, right? Imagine you get an obfuscated piece of code or a long piece of code. How do you reinforce whether the code is good? I mean, you could try running the code and maybe the code looks like it's good, but as we know, code can contain evil stuff inside of it that you can't detect. So what do you do? How do you reinforce?</p><p><strong>Theo: </strong>I think to a point you can tell if code is good or not. Even if it's beyond what you could write, you can verify it anyway. Just like the P equals NP stuff that we talked about earlier.</p><p><strong>Liron: </strong>You can have a whitelist, I guess, like, I mean, you could be like, I'm only going to accept the code if it has these properties that I can detect, but at that point, you're not really letting it exercise the full span of plans that it can do. It's like, you're kind of crippling the capabilities.</p><p><strong>Theo: </strong>Oh, so like the safe versus useful trade-off.</p><p><strong>Liron: </strong>Or just like, I mean, you're kind of just not letting it scale to superintelligence. You're just attacking the premise of, you know, Hey, is what can it really do? So let's say we keep the premise of like, Hey, it's getting smarter and smarter. It's getting more and more capable. It's getting better at mapping goals to actions.Right. And you're saying, "I'm going to have humans weigh in." Now, people have proposed that we have two AIs debate and that's going to help me give it feedback because I'm going to have the best input. I'm going to be able to judge one AI versus another AI. There are all these proposals. I hope they work. I hope that scalable debate somehow works really well, but it's very iffy. You can give me any individual proposal and I'm like, "Yeah, I hope that works, but here's why I don't think so." I'm skeptical about debate because I see easy debates that smart humans have against smart humans who can't convince other smart humans. My own personal experience with the failure of debate is that you still had a bunch of smart people in the tech industry, not realizing that blockchain technology doesn't logically support any use case besides cryptocurrency until the industry collapsed by 99%. If we can't get that right, how are we going to get scalable debate? </p><p><strong>Theo: </strong>What about the idea that all AIs do is basically approximate their training set and predict the next token? If the training data is overwhelmingly nice and full of friendship and love, then the AI will exhibit kindness and friendship and love. That's not to say that AIs can't be extremely dangerous because of course they can, but building the data set sufficiently will be enough to make sure that it's probably aligned. </p><p><strong>Liron: </strong>It's kind of like level skipping. Reductionism doesn't quite work that way. An analogy is like humans. Humans were trained using survival of the fittest. So shouldn't we be super cutthroat? So how come a bunch of people are really nice in a bunch of situations? Evolution wasn't nice. How come people are nice?</p><p><strong>Theo: </strong>Because it benefits us.</p><p><strong>Liron: </strong>But there are people who are really saints. Scott Alexander recently donated a kidney. Scott Alexander just seems like a really nice guy. And I would argue that donating the kidney didn't really benefit him in a lot of the senses that I would have considered relevant before I saw him donate the kidney. How would you explain that? </p><p><strong>Theo: </strong>Well, because he's an effective altruist, it's something that gives him a lot of personal satisfaction helping other people. The utility of losing a kidney was not that much compared to the utility of knowing that he helped someone else.</p><p><strong>Liron: </strong>I agree that he feels good after donating a kidney, he's getting an emotional reward, but now connect that to the fact that nature is red in tooth and claw evolution is cutthroat. You've inserted a level of abstraction where we can no longer just say evolution is cutthroat, therefore Scott Alexander is cutthroat. You lose the cutthroatness when you apply levels of reductionism. </p><p><strong>Theo: </strong>But doesn't that bode well for alignment because we started out as cutthroat beasts and turned into very nice people who donate kidneys?</p><p><strong>Liron: </strong>It's possible that there are equilibriums of AIs that are nice for sure. But the analogy I was trying to make wasn't that cutthroat things can become nice. The analogy I was trying to make was you have to be very careful to make sure you're respecting layers of abstraction and layers of reductionism when you're making claims. Just like you can't say evolution is cutthroat, therefore individuals are going to be cutthroat. You also can't say, here's a training corpus where everybody's being nice in the training corpus, therefore we're going to get an AI that's nice. </p><p>The problem is if the AI is able to map goals to actions, you can be a really nice guy who just on your way to doing something nice is trampling on a bunch of ants because it didn't occur to you that the ants were of value. You're just optimizing the world for whatever paperclips or humans or whatever you like.</p><p><strong>Theo: </strong>I've talked about these evolution style arguments with Quinton and Nora before where they say basically like humans aren't literally aligned to inclusive genetic fitness or making as many babies as possible. Humans are aligned to empathy. Humans are aligned to parenting. Humans are aligned to the things that we do, the things that are produced by our ingrained reward systems, the things that our reward system produces in our environment.</p><p><strong>Liron: </strong>And this is where it's reminding me of behaviorism. It's just like, well, don't you think that when you went down to dinner, it's because you heard a sound that you usually hear at dinner? It's trying to flatten out the things we do. And when I debated Quintin, he did kind of try to go that way with the space program. He's like, look, physics textbooks have reinforced us about the orbital mechanics necessary to go to the moon. I'm like, I don't know, man, I'm pretty sure we just reasoned it out. I'm pretty sure we mapped the goal to the action. I'm pretty sure that is a type of algorithm that we use, which is a general category of algorithm. And we're improving that category of algorithms and that category of algorithm logically implies doom. That's how I see the world. And you can always be like, no, that's not a category. It's just all different cycles of training, right? Of data and training.It's all continuous and there's not going to be a film. I feel like I can take that position and argue it, but I don't find it convincing compared to just being like goal to action mapping is a type of algorithm that we're seeing convergence on.</p><h3>AI doom meets daily life (57:57)</h3><p><strong>Theo: </strong>Switching topics a little bit, what percent of your brain cycles in a typical day are taken up by AI risk? You seem pretty chipper and happy overall. How do you reconcile that with the thought that the world is going to end soon or at least look very different?</p><p><strong>Liron: </strong>It's kind of funny. It's like, "Hey, this is what a doomer looks like." And it's just a happy person. I'm taking care of my kids, doing something fun, eating an ice cream cone, whatever. I think that can vary person to person, just like effective altruism can vary. I'm not planning to donate a kidney, I respect people who do, I consider myself an effective altruist. I don't feel a desire to donate kidney. I'd rather keep my kidney. But it can vary, to each his own.</p><p>With AI doom, I'm fortunate that I'm not depressed every day about it. I rationally do think the probability of doom is pretty high, but luckily my mood is just wired such that I don't get that stressed about it. I think part of the way that my own system works, which isn't particularly rational, it's kind of arbitrary, but I think I have a part of my brain being like, "Well, at least I don't have FOMO." Because it's like, at least I get to die at the same time as everybody else. I feel like that helps me. I don't think it should. But I'm just trying to accurately report how my psychology is working. </p><p>I think if you said like, "Hey, you, Liron are going to die and everybody else is going to live," I'd be like, "Damn it, now I have FOMO." So I think that's part of it. But it obviously sucks that literally everybody's going to die. I live in a part of the country that's very nice. I don't have major life problems right now. I kind of live a charmed existence on a day-to-day basis. So yes, it's all going to end, but I'm just getting a lot of positive reinforcement. It's like, "Hey, this is going to be a good day." And the amount of good days seems to be getting smaller. Unfortunately, the trend seems to be bad, but for me, that doesn't output depression. I know other people that it does output depression more. And they just have to have coping mechanisms. Because why be depressed regardless of whether you're going to die or not? I don't know what else I can say about that idea of like mapping your own mood to your rational belief that p(doom) is pretty high.</p><p><strong>Theo: </strong>What about raising kids? How is that different for you with a high p(doom)?</p><p><strong>Liron: </strong>I read Bryan Caplan's book, the selfish reasons to have more kids. I think it's great. I think it's a must read. The promise of the book is that however many kids you wanted to have, it'll probably convince you to have one more, if not two or three more. I've always leaned toward having three, which I did end up having. I have three right now. And it did make me want to have a fourth. But then the problem is also that, because we have the GPT series now, right after I had my three kids, AI started really intensifying and my timeline shortened as they did on Metaculus and the prediction markets. </p><p>Just like everybody's like, "Oh no, it's not going to take us till 2040, 2050 to get AI. It's going to take us till like 2025." That's like the latest Metaculus AGI prediction. My timeline shortened too. And now it's just like, "Oof," because a lot of having kids, the investment is front loaded. You're doing a lot of work in the first couple of years where it's just constant crying. Like as we speak right now, my wife's currently dealing with a crying baby. So it's constant crying, constant loss of sleep. But at the same time, when you're old and your kids are grown up, it's all upside. There's no work, just all upside. So it's kind of, there's some degree of front loaded investment. And so now it's less rational to do since I think p(doom) is pretty high. </p><p>But at the same time, I have a whole life where half of my life, I'm just living for a good future. I'm saving for retirement because half of me wants to have a retirement. So I just kind of split brain about it. And it's not split brain. I mean, this is just how you have to probabilistically make decisions. You have to plan for both outcomes. So I'm planning for a good life where my kids grow up and I get to save for retirement and then I get proven wrong about AI risk and I get dunked on, but it's okay.</p><h3>Israel vs. Hamas (1:02:17)</h3><p><strong>Theo: </strong>And then what about current events? You've been posting, tweeting about Israel Hamas recently. So what's your kind of model on that? Is it just like, "Oh, this is a thing that's happening right now?"And it's very important. Or it's just like, nothing is important compared to AI or somewhere in between.</p><p><strong>Liron: </strong>I mean, I think part of it is just me personally. I am Israeli, so it's personal to me. If this were another conflict that wasn't as personal to me, I mean, I know people who were affected by the tragedy. Israel is actually a small country, so with a thousand, 1200 people murdered and thousands more injured, everybody has multiple people in their network who some brutal atrocity happened to. It's very personal for me. Even though I'm not directly connected to any victims, I'm just connected with a couple of degrees of indirection. My family is still in Israel with rockets flying over them. It doesn't get much attention, but there are constant rockets flying over Israel, attempting to kill Israeli civilians. They just have a shield, the Iron Dome and a bunch of new stuff. They keep shooting down the rockets. So you don't hear about innocent Israeli civilian slaughtered, even though they're targeted for slaughter, but they don't get successfully slaughtered. </p><p>So, stuff is happening and it's personal to me. And then there's Hamas. They're bending all the rules of war, not bending, like breaking like crazy. Their base was a hospital. And then people are denying that it's a hospital. They're really not playing by the rules. It's okay for two sides to go to war. They both have their own perspective. That's fine. But I feel like the war crimes are pretty bad on the Hamas side, using their people as human shields. I try to be fair and be like, look, if you're using your people as human shields and we want to kill the terrorists, we, the Israel side wants to kill the terrorists and then the civilians die. Who's causally responsible for the death of the civilians when you use the human shield? So, I tend to tweet stuff like that, where it's like, look, I'm just trying to be fair here. I don't think human shields are invulnerable. </p><p>I feel tempted to tweet about that kind of stuff especially when the New York Times, like I listened to the daily podcasts and they're being biased about it. They're purposefully trying to insert as much stuff as they can get away with, to basically say F you to Israel. The fact that they're not saying why Israel took the prisoners. A couple of days ago, the podcast, they were talking about Israeli prisoners and they were literally hemming and hawing. The question asked was like, Hey, why does Israel have these prisoners? What are they guilty of? And the person on the podcast was like, well, the prisoners, some of them were accused of maybe throwing stones, maybe being associated with some other people who are doing bad stuff. It's like, come on. They're on video stabbing Israelis. That's why they're in prison. That's why they're getting traded for us. It's like, I'm seeing media bias. That's why I've been tempted to tweet a little bit about the Israel Palestine situation, but of course I'm not against Palestinian civilians. I think it's a tragic situation. I try to have empathy for both sides.</p><p><strong>Theo: </strong>But do you think this is like a very important thing in the world or do you just see it as like, it's something, but nothing is important compared to AI.</p><p><strong>Liron: </strong>I mean, I think it's probably less than 1% as important as AI. So have I given it more than 1% of my tweets? Yes, a little bit more than 1% of my tweets. So I'm being disproportionate from, because of the fact that I'm Israeli, but it's not like I did a takeover. I only tweet about it occasionally. I don't think my calibration is off. I think I've successfully integrated my own indexical perspective as an Israeli Jew, secular Israeli Jew. I don't believe in that crap. I've successfully adjusted the base rate of how unimportant a regional conflict is with the fact that I'm Israeli.</p><h3>Rationalism (1:06:15)</h3><p><strong>Theo: </strong>Switching topics again to rationalism, how did you get into rationalism in the first place?</p><p><strong>Liron: </strong>I've always been very rational minded. I've always just been a real logical type, self-diagnosed Aspie over here. I like to think I like to follow logic. LessWrong was a pretty big awakening to me. I started reading it when I was 19 in the year 2007. I started reading it and I'm like, I thought that I kind of knew what rationality was when I first started reading LessWrong. I'm rational because I figured out that God's not real and everybody else is just delusional. I figured out that science is good and science is actually how you learn things. It's like, I've figured out the most obvious things about how to be rational, but then LessWrong comes up and is like, Hey, did you know that your brain is actually an object that was shaped by natural selection, but it wasn't shaped to have accurate beliefs. It was shaped to survive and play tribal politics. And if you want to use it to make accurate beliefs, you have to kind of hack it. It's almost like using your feet to play the piano. Yes, you could, but it requires hacking. You have to do that with your brain if you want to form accurate beliefs.</p><p>That was really my rationalist awakening where I realized there are levels to this. You can be rational. It's not just, "Oh, philosophy. God's not real. I beat the game. Give me my trophy. I win philosophy." And then LessWrong comes in. It's like, "Well, you have to decide what code to write into the AI where the AI gets to determine how morality is going to work for the rest of the lifetime of the universe and use all the neg entropy in the universe to build the optimal configuration. So what code would you like to write Mr. Rational?" And I'm like, "Damn it, there's levels to this." Rationality doesn't end when you realize God is not real. Or when you realize that science is a good methodology. And of course, Bayesianism is actually a much subtler way to do what science is trying to do. </p><p>So yeah, I read LessWrong, and I'm like, "Wow, this is like, I was made for this. Unfortunately, I wasted the first 19 years of my life. But this is what I want to be doing. This is what everybody should be learning. This is what school should be." And then unfortunately, it all leads up to the awareness of, "Well, now that you're so rational, can't you notice that the world looks like it's about to end and you need rationality to solve it?" I mean, it's been an interesting quest, starting from rationality and then leading up to the idea of how you're supposed to wield the rationality to try to not die.</p><p><strong>Theo: </strong>And then, same question I asked Zvi, but I think it&#8217;s a very useful one, how would you explain the field of rationalism to a total beginner, a total layman?</p><p><strong>Liron: </strong>I would throw in what I just said, "Look, we're all humans with brains, our brains were made by natural selection, right? The same force that made a tiger's claw. That's great that we have this cool organ. But if you ever want to have that organ look at the truth, see what's actually real, maybe use that truth to make useful predictions, it's not going to come fully naturally. There is an art to it the same way that there's an art to making a piano sound good when you play it with your fingers. There's an art to using your brain to arrive at truth. And you can read the LessWrong Sequences, and you can learn that art. And I think it's a beautiful art. And it's an art that I spent a lot of time in and I try to get practical value from that art. And the art has close associations to making money and trading if you ever want to monetize the art.</p><p>If the person I mean, it's like, you know, my wife is an example of somebody who's more of a normie who's not super into rationality, right? And like, I've given up on trying to make my wife bet me on stuff. So and that's one of the rationality tools, right? Is when you think you know something, you place a bet on it. And some people are just not interested to go down that route, which is fine. But it's just like, when you need it, right? Like when you're in government, and you're handing an assessment to the President saying, "I think the enemy has a high likelihood of attack or may plausibly attack" when you're using English like that. Hopefully, you can look into the rationality world and be like, "Ah, the best practice here is to give a probability range, rather than using ambiguous English, it is superior is the best practice to give a range."</p><p>Sometimes rationality can teach us little things that we can import into the normie world, which has been happening at a faster and faster pace. I've witnessed rationality seeping into the normieverse, right over my lifetime, we're witnessing today prediction markets are now gaining traction, effective altruism started in the rationality community, right? In 2009, effective altruism, I think officially started in 2011. In 2009, I was reading Eliezer Yudkowsky&#8217;s post about purchase fuzzies and utilons separately, right? The idea that like, "Hey, that's great when you want to feel good when you do charity, but also as a separate consideration, try to also do the most good." And that was kind of the beginning of effective altruism.</p><h3>Effective altruism (1:11:00)</h3><p><strong>Theo: </strong>Do you think that the reputation of effective altruism deserves to be tarnished at all after Sam Bankman-Fried, after like, a lot of what's happened to it over the last few years?</p><p><strong>Liron: </strong>There's a joke that everybody, everybody in effective altruism doesn't say "I'm an effective altruist." They say "I'm EA adjacent." I'm the only EA who will stand here and tell you "I'm EA. I'm an effective altruist, not adjacent." Now that said, am I a central example of an effective altruist? No, I haven't donated a kidney. I do donate a few $1,000 a year to good causes. I'm a GiveWell donor. I've donated to MIRI, the Center for Applied Rationality. So I've thrown out some donations to altruistic causes, and I'm a fan, but I'm not like, I don't donate 10% of my income. Maybe I'll start, but I haven't yet. And I haven't done like, you know, I haven't dedicated my career to be super altruistic. </p><p>So but it's just the reason I say I'm an effective altruist is because it's like, you know, like the book by Will MacAskill, Doing Good Better, absolute must read. It's just like, "Yeah, I want to spend a little bit of money to massively help people flourish, right? Like, I think that makes perfect sense. That's great logic." And then people are like, "Oh, what about the ideology and like pivot textures?" It's like, fine. Okay, chill out. Not everybody. Sam Bankman-Fried. Yeah. Nobody thinks that he did good actions, right? Nobody thinks that Sam Bankman Freed was being good and rational by scamming the world and thinking the scam was going to work. I guess a few people think that, but I personally could not name a single individual who's like, "Yeah, what Sam Bankman-Fried did was good. He should do it again in the same position." I would never think that. I believe in morality, I conduct myself with deontological morality. So these pathological examples that people give, I do think are just not representative of the simple logic of trying to do more good. I highly recommend going to Scott Alexander's blog, whether it's Slate Star Codex, or Astral Codex Ten, and searching effective altruism. The writing that he's done on his experiences with effective altruism is absolutely heartwarming stuff.</p><p><strong>Theo: </strong>What if the best way to produce value for the world is not literally just donating money to kids in Africa, but more like doing what Elon Musk has done and not donate much to charity, and just invest and reinvest everything into transformative companies.</p><p><strong>Liron: </strong>I have no business telling Elon Musk, "Hey, Elon Musk, donate 10% of your income to charity." I'm fine with what Elon Musk is doing, except for the part where he founded OpenAI and accelerated timelines. Besides that part, everything else he's doing, I think is great. I don't think that I have advice to give him. </p><p>The perfect type of conversation where I would give somebody advice is if they're just like, "I don't believe in effective altruism, they have all these rules, I just don't buy it." And I'm like, "Great. So where are they like, oh, I just want to work as hard as I can and create value through my company." I'm like, "Okay, how's that going? What's the company? How are you creating value?" If they're just like, "Well, the company is arbitrage, where I have an e-commerce store, and I try to flip stuff for a higher price." I'm like, "Okay, how is that creating value?" And they're like, "I don't know, I just make some money. It's like I save people a click to find stuff." I'm like, "Okay, saving people a click. Is that really better than donating to malaria bed nets or whatever?" So I'd have the conversation. </p><p>In this hypothetical scenario, I'm getting the sense that the hypothetical character is kind of rationalizing that they just don't want to talk about altruism. And that's fine. But there are a lot of people in the world who are like, "Hey, I actually do want to do something good, especially if it's cheap." Like there's some limit. It's like, look, if you literally just have to pay $1 and save a million people, I think the vast majority of people would be like, "Yeah, here's my dollar." So it's just a spectrum. Even a giant dick would probably be like, "Okay, I'll pay $1 for a million people." And then somebody who's less of a dick would be like, "$10 for a million people, fine." So everybody has their price, whether they're happy to be an altruist at this price. And there are some people where it's like, "Yeah, 10% of my income to save a couple people a year sounds good." There are some people who are up for a lot of altruism.</p><h3>Crypto (1:14:50)</h3><p><strong>Theo: </strong>Speaking of bullshit businesses, you also have a bit of a past with crypto. You've been a major crypto skeptic in the past. So what do you think about Bitcoin being up from a low of 15,000 to 38,000 today? Bitcoin is up 127% year to date, Ethereum is up 71% year to date, total crypto market is up 79% year to date. So is it just all maybe related to AI hype?</p><p><strong>Liron: </strong>I think it's mostly just a derivative on NASDAQ. I think it's kind of mirrored the progress of NASDAQ, but just with higher volatility, is that fair to say?</p><p><strong>Theo: </strong>Yeah, maybe. Why do you think it would mirror the performance of the stock market?</p><p><strong>Liron: </strong>Probably liquidity, if I had to guess. When stocks are going up, people just feel like they have more money. And then they're just like, "Okay, let me put some of the money, they're higher risk, higher reward." 2021 was the epitome of it, right? Where money was easy. You could take money out of your mortgage, you had a low-interest mortgage, your stocks were worth more, you felt like cash was trash. I made a bunch of investments that weren't the wisest in retrospect. So when NASDAQ goes up, people who are looking at the tech sector find themselves with more cash, their margin account suddenly is letting them borrow cash. And they're like, "Okay, great, let me chase return using this cash. Oh, and I see this thing is going up." </p><p>So I do think that there's liquidity effects that you see consistently mirror in Bitcoin. But that said, what's going on with Tether? They're like printing tethers to buy Bitcoin on these markets where no US dollars are getting exchanged. It's kind of like there is some manipulation that I don't claim to understand that makes these prices potentially not the real market price. So I hesitate to draw conclusions. I'm more like, I don't even claim to understand what the heck's going on. But what I do claim to understand is that blockchain technology has no use case behind cryptocurrencies. So I can talk more about that.</p><p><strong>Theo: </strong>Yeah, why don&#8217;t you go into a little more detail about that?</p><p><strong>Liron: </strong>My history with crypto is I actually my first exposure to crypto was actually in 2010. Because you know, the LessWrong community, these rationalists, started talking about Bitcoin. They're early to every trend, right? So I was reading less wrong since 2007. And I saw Bitcoin mentioned around 2009-2010. And I saw it and, just a random coincidence in my life, around 2006, I was in the cryptography space, just academically. I took a graduate elective in cryptography. And I read a paper that was a scheme for electronic cash. So I just randomly had this background. I'm like, "Hey, look, cryptographic electronic cash. That's a few years before Bitcoin." And I see what they're trying to do with the scheme. But obviously, it just sucks that you need a central bank. So it's not going to work. And then I see Bitcoin come out around 2010. I'm like, "Whoa, it's decentralized electronic cash is cryptographic. Nice. This is cool. Yeah, if I was still in that college class, I'd be doing a paper about this."</p><p>Now, of course, the obvious problem is that nobody gives a crap, right? So great, this nice, theoretically interesting thing, it doesn't have social proof. And then I checked back a year later, I'm like, "What? This thing's still going, the price is fluctuating, it has social proof. Okay, I'm sold." So that's when I'm like, "Okay, I'm gonna buy some. I want to this, this looks good." And I actually have a tweet from 2011, where I'm all bullish on Bitcoin. I'm like, "Bitcoin is going to 10x again. This is one of the best investments you can make. It's a 10% chance of 100x return." So I became a Bitcoin bull.</p><p><strong>Theo: </strong>And you would have been right. Bitcoin was the best investment you could have made in 2011.</p><p><strong>Liron: </strong>Exactly, right. And I did profit. I did 10x. I think I banked around 100k USD from that kind of investing. But then of course, I started playing the market and I started also losing money and I probably ended up netting out close to zero after that.</p><p>But I got lucky because I also invested in Coinbase while I was dicking around. I happened to angel invest in Coinbase. So I ended up making $6 million in 10 years because I had an illiquid investment in Coinbase. So total luck that as I was dicking around with Bitcoin, I made an investment that was illiquid. And I ended up profiting from it, especially since by the time that the Coinbase IPO happened, I became disillusioned with crypto. So I would have sold earlier and I did actually sell most of the stake earlier. I only held on to a fraction of the stake.</p><p>So I became disillusioned because I'm like, "Wait a minute, this is just people being architecture astronauts. The actual logic behind blockchain technology, a decentralized double spend prevention protocol doesn't enable any use case." And I was massively, massively right about that, except for the idea of using a cryptocurrency. I feel like it has a million problems, and it's not that great, but at least it's logically coherent. Like you can, in fact, have a bearer token that you trade to somebody and it happens on the blockchain. So there's some nonzero logically coherent thing going on there, but it's not going to extend beyond cryptocurrency.</p><p><strong>Theo: </strong>You also mentioned a few times, a 99% drawdown in the crypto market. So where'd you get that number from?</p><p><strong>Liron: </strong>Yeah, so I would like to collect my Bayes points, you know, Bayes points is what you get when you make a successful prediction. So the successful prediction is one that I made in late 2021, all the way through 2022, which is saying, "Hey, all these VCs saying that crypto has use cases, all these quote unquote builders, right? Like the founder of Helium, Axie Infinity, right? All these people saying like, there's real value here. I'm like, no, there's not. Because blockchain technology, there's no logical connection between that and enabling a new value prop."</p><p>Like the kind of value props people are saying are like, "Look, imagine if your data was publicly auditable using this database. Like, okay, a publicly auditable digitally signed database, don't need a blockchain. You only need a blockchain for double spend prevention, right?" And they kept doing pitches where there is a logical disconnect between the value they were pitching and the technology that they are pitching to implement it with. And so it became clear to me that they're just rationalizing.</p><p><strong>Theo: </strong>What about just distributed computing in general, that you don't want?</p><p><strong>Liron: </strong>Distributed computing is fine, but you just don't need blockchain technology to do that. And I also think it's a niche application when you do need, you know, the rarer times when you do need distributed computing, fine, but you still don't need a blockchain.</p><p><strong>Theo: </strong>It seems like this is, if anything, kind of the opposite of Charlie Munger's view on cryptocurrency, where he said, you know, like, it's a very cool piece of computer science and technology, but like cryptocurrency is shit. But like, maybe there will be a use for it.</p><p><strong>Liron: </strong>Yeah, there's a lot of people saying, "Hey, I don't really get Bitcoin, but I like blockchain." They're wrong, because it's like, maybe they like cryptography, right? Digital signatures, amazing, right? Public key encryption, amazing, right? Like these have countless use cases. But the idea of putting them on a blockchain so that you can prevent double spending at great expense only has cryptocurrency applications where you really, really care about the writing on the ledger, because there's no real world authority that's going to be more authoritative than the writing on the ledger. That's only true for a bearer cryptocurrency token. Every other use case that has a connection to the real world, you already implicitly trust somebody in the real world to adjudicate, right? If somebody steals my NFT, that was why I get to live in my house. Realistically, I'm still going to go to the police and get to live in my house. So I don't need the blockchain to prevent double spending on my house NFT. See what I'm saying?</p><p><strong>Theo: </strong>Just like you trust institutions and society enough to not require any kind of actual decentralization?</p><p><strong>Liron: </strong>I mean, when I live on my street, there's some level of trust that somebody is not going to walk in and take my stuff. That's not a trustless society because I don't own a gun.</p><h3>Charlie Munger and Richard Feynman (1:22:12)</h3><p><strong>Theo: </strong>Switching topics a little bit, speaking of Charlie Munger, he just died a couple days ago. I was a big fan of his, rest in peace. He might have actually introduced me to the field of rationalism. Would you consider Charlie Munger a rationalist?</p><p><strong>Liron: </strong>Yes, he's definitely a type of rationalist. Even before less wrong, and the modern sense of this that a lot of us appreciate, there have been a lot of schools of rationality. They all have a shared enterprise of using your brain to do better than playing tribal politics and hunting animals. It's like playing the piano with your feet. What if I let the need for accurate beliefs? What if I let the need for truth propagate back to the way that I wield my organ, my biological organ? I'm going to determine the way I think not by how I like to think, not by how I want to be perceived as thinking, but by what creates the best sound of the piano? What creates the best drive toward truth? What moves the boat? What steers the boat toward truth, the best toward the island of truth, right? Using my beliefs and using evidence as fuel, how do I steer the boat, regardless of how crazy I look when I'm steering it? How do I actually steer it properly? </p><p>That enterprise, Munger wanted to engage in that enterprise, because he wanted to steward his portfolio. He had what Eliezer calls something to protect. There's a Japanese trope, where superheroes don't just randomly get superpowers, they get the superpowers because they have something that they want to protect. And as a result of the need to protect something, then they work backwards to needing the superpowers. The idea is that rationality emerges when you care more about navigating with your brain somewhere than you care about what you're doing with your brain directly. You don't care how social people are going to view your choices, you don't care about looking weird, you just care about getting to the destination, optimizing something, making some outcome happen, and you get emergent rationality. Munger absolutely did that. Richard Feynman did that in physics. The Feynman diagram might be an example of some kind of a weird, non-traditional thing that did the job of advancing our understanding of physics. </p><p><strong>Theo: </strong>Well, I think that's a pretty good place to wrap it up. Thank you so much for coming on the podcast.</p><p><strong>Liron: </strong>My pleasure. I'm a fan and I'm bullish. I'm glad I'm getting in early on this podcast, because I'm sure it's going to be an institution very shortly.</p><p><strong>Theo: </strong>Yeah, can't wait.</p>]]></content:encoded></item><item><title><![CDATA[#9: Dwarkesh Patel]]></title><description><![CDATA[Podcasting, AI, Talent, and Fixing Government]]></description><link>https://www.theojaffee.com/p/9-dwarkesh-patel</link><guid isPermaLink="false">https://www.theojaffee.com/p/9-dwarkesh-patel</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Sun, 03 Dec 2023 16:16:32 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/139380284/66a02be8ad5a9774bac682bb5d1ba22b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Dwarkesh Patel is the host of the Dwarkesh Podcast, where he interviews intellectuals, scientists, historians, economists, and founders about their big ideas. He does deep research and asks great questions. Past podcast guests include billionaire entrepreneur and investor Marc Andreessen, economist and polymath Tyler Cowen, and OpenAI Chief Scientist Ilya Sutskever. Dwarkesh has been recommended by Jeff Bezos, Paul Graham, and me.</p><ul><li><p>Dwarkesh Podcast (and transcripts): <a href="https://www.dwarkeshpatel.com/podcast">https://www.dwarkeshpatel.com/podcast</a></p></li><li><p>Dwarkesh Podcast on YouTube: <a href="https://www.youtube.com/@DwarkeshPatel">https://www.youtube.com/@DwarkeshPatel</a></p></li><li><p>Dwarkesh Podcast on Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8ae2420c45baf783ca4e672ed7&quot;,&quot;title&quot;:&quot;Dwarkesh Podcast&quot;,&quot;subtitle&quot;:&quot;Dwarkesh Patel&quot;,&quot;description&quot;:&quot;Podcast&quot;,&quot;url&quot;:&quot;https://open.spotify.com/show/4JH4tybY1zX6e5hjCwU6gF&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/show/4JH4tybY1zX6e5hjCwU6gF" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe></li></ul><ul><li><p>Dwarkesh Podcast on Apple Podcasts: </p></li></ul><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1516093381.jpg&quot;,&quot;title&quot;:&quot;Dwarkesh Podcast&quot;,&quot;podcastTitle&quot;:&quot;Dwarkesh Podcast&quot;,&quot;podcastByline&quot;:&quot;Dwarkesh Patel&quot;,&quot;duration&quot;:5475,&quot;numEpisodes&quot;:60,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381?uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-11-29T15:01:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><ul><li><p>Dwarkesh&#8217;s Twitter: <a href="https://twitter.com/dwarkesh_sp">https://twitter.com/dwarkesh_sp</a></p></li><li><p>Dwarkesh&#8217;s Blog: <a href="https://www.dwarkeshpatel.com/s/writing">https://www.dwarkeshpatel.com/s/writing</a></p></li></ul><h3>TJP Links</h3><ul><li><p>YouTube: </p><div id="youtube2-ggSbkRh6J_8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;ggSbkRh6J_8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/ggSbkRh6J_8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div></li><li><p>Spotify: </p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acad0a8ea81f37ff23ca18807&quot;,&quot;title&quot;:&quot;#9: Dwarkesh Patel - Podcasting, AI, Talent, and Fixing Government&quot;,&quot;subtitle&quot;:&quot;Theo Jaffee&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/5s9qITabLtkTtgCvHWx0ct&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/5s9qITabLtkTtgCvHWx0ct" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe></li></ul><ul><li><p>Apple Podcasts: </p></li></ul><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast episode-list" data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677&quot;,&quot;isEpisode&quot;:false,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast_1699912677.jpg&quot;,&quot;title&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastTitle&quot;:&quot;Theo Jaffee Podcast&quot;,&quot;podcastByline&quot;:&quot;Theo Jaffee&quot;,&quot;duration&quot;:8640,&quot;numEpisodes&quot;:8,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677?uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-11-08T04:30:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><ul><li><p>RSS: <a href="https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss">https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss</a></p></li><li><p>Playlist of all episodes: <a href="https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj">https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj</a></p></li><li><p>My Twitter: <a href="https://x.com/theojaffee">https://x.com/theojaffee</a></p></li></ul><h3>Chapters</h3><ul><li><p>Intro (0:00)</p></li><li><p>OpenAI drama (0:50)</p></li><li><p>Learning methods (4:10)</p></li><li><p>Growing the podcast (7:38)</p></li><li><p>Improving the podcast (17:03)</p></li><li><p>Contra Marc Andreessen on AI risk (24:18)</p></li><li><p>How will AI affect podcasts? (26:31)</p></li><li><p>AI alignment (32:08)</p></li><li><p>Dwarkesh&#8217;s guests (38:04)</p></li><li><p>Is Eliezer Yudkowsky right? (41:58)</p></li><li><p>More on the Dwarkesh Podcast (46:01)</p></li><li><p>Other great podcasts (50:06)</p></li><li><p>Nanobots, foom, and doom (56:01)</p></li><li><p>Great Twitter poasters (1:01:59)</p></li><li><p>Rationalism and other factions (1:05:44)</p></li><li><p>Why hasn&#8217;t Marxism died? (1:15:27)</p></li><li><p>Where to allocate talent (1:18:51)</p></li><li><p>Sam Bankman-Fried (1:22:22)</p></li><li><p>Why is Elon Musk so successful? (1:29:07)</p></li><li><p>How relevant is human talent with AGI soon? (1:35:07)</p></li><li><p>Is government actually broken? (1:36:35)</p></li><li><p>How should we fix Congress? (1:40:50)</p></li><li><p>Dwarkesh&#8217;s favorite part of podcasting (1:46:46)</p></li></ul><h1>Transcript</h1><h3>Introduction (0:00)</h3><p><strong>Theo: </strong>Welcome back to episode 9 of the Theo Jaffee Podcast. Today I had the pleasure of interviewing one of my favorite podcasters, the one and only Dwarkesh Patel. Dwarkesh is, in many ways, what I aspire to be as a podcaster. He interviews some of the most interesting people in the world in AI, history, economics, and beyond, from Ilya Sutskever to Tyler Cowen&#8212;and does so only after many hours of deep research and crafting some of the most thought-provoking questions I&#8217;ve ever heard. His listeners include Jeff Bezos, Paul Graham, and Nat Friedman. In this episode, we cover a wide range of topics: how to prepare for and produce great podcasts, different visions for both the short-term and long-term future of AI, how to get talent into politics, and much more. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Dwarkesh Patel.</p><h3>OpenAI drama (0:50)</h3><p><strong>Theo: </strong>Hi, welcome back to episode nine of the Theo Jaffee Podcast. Here today with Dwarkesh Patel.</p><p><strong>Dwarkesh: </strong>Hey, what's up, man? Thanks for having me on your podcast.</p><p><strong>Theo: </strong>Absolutely. I want to start off by talking about the events of the last weekend. When I scheduled this, I did not know that that was going to happen. I don't think anybody knew that was going to happen. So with all the Robert Caro, Lyndon Johnson reading that you've done, reading about power, reading about human behavior, do you think you could have predicted or understood anything about this better?</p><p><strong>Dwarkesh: </strong>Certainly not predicted, because I think the prediction is contingent on a whole bunch of details about what happened that I still am not aware of. And I don't think almost anybody's aware of, despite the endless speculation. As for could this better help you understand it? Certainly. I was just thinking about&#8212;the Lyndon Johnson books are good, but there&#8217;s also Caro&#8217;s great biography of Robert Moses, the famous dictator of New York City. He has many episodes of Robert Moses in his early career where there&#8217;s an indication that he might be doing something that's in his own self-interest, or that doesn&#8217;t actually accord with his very publicly flattering image. It just kind of brushed under the rug, not well understood. People don't talk about it or gossip about it because of different kinds of fears. Anyways, I&#8217;m not saying that that's necessarily what's happening. But it is important to understand that we don't have the full picture yet and keep that in mind.</p><p><strong>Theo: </strong>That makes sense. Actually, just before I got on the podcast, I was scrolling through Twitter, as one does. And I read that there's a new piece of information after all this that they were working on an agent that can do math. It's pretty interesting.</p><p>So you interviewed Ilya Sutskever on the podcast before. Did you judge his character at all? Did he seem to you like totally earnest, with the goal of protecting humanity? </p><p><strong>Dwarkesh: </strong>Honestly, it's hard to evaluate somebody on from a one-hour conversation. But from the testimonials of the people who know him, it really does seem like he's a very genuine guy who has this priority as making sure humanity has a good future. And that's not to say that he can't make mistakes in his judgment about how to get to that future. But I think nobody's contradicting that basic motivation of his from people who know him over the years.</p><p><strong>Theo: </strong>I'm pretty surprised that he switched sides.<strong> </strong>Yeah, I mean, this whole thing was very hard to follow.</p><p><strong>Dwarkesh: </strong>There's a lot we don't know. It&#8217;s really hard to comment. There&#8217;s so much we don&#8217;t know. And it is really hard to say why or why he switched sides.</p><h3>Learning methods (4:10)</h3><p><strong>Theo: </strong>Switching subjects a little, a lot of what you do is reading and research. So how do you read specifically? Do you take notes? How does your note-taking method work, if you do take notes? </p><p><strong>Dwarkesh: </strong>I recently started using space repetition myself after I talked to Andy Matuschak. It's insane how much more effective it is. You realize it when you make a card about something you think isn't even important. And a week later, you've almost forgotten and you see the card and you're like, "Well, what's that again?" I've seen the evidence again and again that space repetition is effective. Honestly, if I'm not making cards about something I'm reading, I might as well not even read it at all. That's how much less effective I think normal reading is. As for note-taking itself, I don't really do much of that. I just have a Google Doc, as in guest name questions, and I start adding questions to it. And before the interview, I'll organize them. But yeah, space repetition and noting down questions as I'm reading.</p><p><strong>Theo: </strong>What specifically do you use for space repetition? Anki? </p><p><strong>Dwarkesh: </strong>No, I use this app called Mochi. It's just a nicer interface.</p><p><strong>Theo: </strong>I've tried using Anki before for language learning, but for general learning and knowledge, do you know of any really successful people who have used space repetition to learn more effectively than just reading huge volume?</p><p><strong>Dwarkesh: </strong>Depends on what you mean by successful. I think it's just not a technique that's been widely used. I don't know the ancient history of this, but I'm guessing there's been people who have used something similar to space repetition. I was recently reading the Gulag Archipelago, and there's this really interesting chapter on memory. It talks about how people who were composing these long books, like the Gulag Archipelago, in their mind, the multi-volume work, and he just kind of memorized it. He had these beads that were nominally to pray for. His memorization technique was that he would go bead by bead. And at every bead, he would...He would make sure he has remembered the passage that he has written down in his head. He would recite it, and then he'd go to the next bead and so on. That's how he memorized the work he composed in his head, because he couldn't write it down, otherwise, you'd get executed for your thoughts. That was sort of a tangent on, is there anybody who successfully employed it? I think it is true that a lot of really, for lack of a better term, sample efficient people can absorb a lot of information and synthesize it to come up with new ideas. People like Tyler Cowen or Berne Hobart, they seem to do fine without space repetition. I personally, I have benefited tremendously from its use. And I wonder if they themselves would benefit as well. I should ask them. I should ask Berne or something.</p><h3>Growing the podcast (7:38)</h3><p><strong>Theo: </strong>So your moat, I guess you'd say, is deep research and good questions. Do you think that there's any kind of trade off between deep research and good questions and popularity? Like, does going too deep exclude some people?</p><p><strong>Dwarkesh: </strong>Certainly at some frontier. Certainly there's eventually a trade off, right? Like 7 billion people are not going to be following a conversation about that's insufficient depth. But I do think I'm at least an order of magnitude away from that frontier in that I could still go 10x more and I'd have an audience that's big enough. There's enough people who would want a conversation of this depth. </p><p>I used to think this way before when my podcast was much smaller. I used to think, oh, well, it's because people, a large group of people couldn't appreciate this kind of stuff. And since then, it's grown a lot and people do appreciate it. And I've realized it was just cope. And so it's just not useful to think in that way. You know, you do something really high quality. And if you were to have a super banal podcast, then who's going to listen to it? Like, there's already a bunch of them out there. And for different reasons, they're already up at the top. You're not going to be able to compete with them on that niche.</p><p><strong>Theo: </strong>What kind of&#8212;</p><p><strong>Dwarkesh: </strong>Don't you think? What do you think? I mean, you're making a podcast. What do you think about that? </p><p><strong>Theo: </strong>I mean, honestly, there's a long way to go with. I mean, I don't really worry about getting too popular at this stage. I worry more about the opposite. Like, how do I get more popular? And I mean, yeah, I was talking about this with my friends recently. A couple months ago, they were like, oh, you should lean more into like, make like sensationalist thumbnails and titles and YouTube shorts with, you know, I see some podcasts popularized in this way where you have like a, like a GTA race in the background. Do I want to do that? Like, well, will it dilute the brand that I want to create? Or like, is that just the way that you get people on the podcast? And if so, like, will it get the people who I want to be listening to the podcast who&#8217;ll, you know, listen to it for a while and share it.</p><p><strong>Dwarkesh: </strong>Yeah, I certainly think, especially for the kind of content you're making. It's like, I mean, yeah, it just, it's hard to imagine that that's the way it gets popular at the same time. It's not something to neglect. So there's a difference between doing the most cringy shit possible versus neglecting it altogether. I certainly put like a lot of care into making my thumbnails and then I've started making a lot of Twitter clips. Sometimes they go viral. But it's just not cringy shit. Like, you know, yeah, the GTA races in the background. But like promotion is good. It's like nothing against promotion. There's a separate question of then you don't avoid the deep research and you still promote, but you do it in a way that is true to the authentic thing you're trying to put out there. </p><p><strong>Theo: </strong>I mean, certainly there's a lot of podcasts that are just like content. I was thinking about this earlier today. Like what makes something good is that it can no longer be called content. Like I would call a lot of the stuff that I see on social media content, but I would not call like Paul Graham's essays content because they&#8217;re so much more than that.</p><p><strong>Dwarkesh: </strong>Or what, what is the other one where I think people try to define it? Like how would I define your profession? And they say, are you a content creator? And I cringe a little bit on the inside, but it's, it's worse to be called the alternative, which is journalist. So I take what I can get.</p><p><strong>Theo: </strong>What about citizen journalist?</p><p><strong>Dwarkesh: </strong>Yeah, I dunno. I just like, it's not journalism exactly. It's not current events or anything like that. Yeah. I think that's a great way to describe it. I think cause content implies something that can be farmed content farming or that is fungible with other kinds of content. And you do want it to be something that's not just like that.</p><p><strong>Theo: </strong>So going back to the clips, what makes you decide which parts of the podcast you make into clips to post on Twitter? Is it like just interestingness? Is it like conciseness, something else?</p><p><strong>Dwarkesh: </strong>Actually, I'm literally probably tomorrow going to put out a contest to make clips for my podcast because it takes so much time and so much context and so much taste to do it. I mean, it just, you can put out a certain clip and it'll get like, I don't know, 10, 20 likes on Twitter and you can put out a different thing and it got 3000 likes. And it just about the context of knowing which part to clip that people will be enthused to share and so on. And that's honestly a pretty challenging thing that I haven't been able to automate away yet or forget about automating. I haven't been able to hire away yet. So yeah, I'm just going to do a contest to see if somebody else can do it. Cause this has been super important to the growth of the podcast, but it's also taken away a ton of time that I should be spending reading.</p><p><strong>Theo: </strong>What do you think they will choose? What makes a good candidate for clips?</p><p><strong>Dwarkesh: </strong>It's hard to explain. I was trying to come up with an explanation, the description of this contest and guidelines. It can be something that you could say, "Oh, it should be about hot button issues so that it goes viral," but it's not just that. Maybe it should touch on something people are interested in, but there's an element of novelty about something people care about.</p><p>I'm just trying to think back on certain clips that went viral. I had a clip about Shane Legg explaining that search is important to add into these LLMs to get them to do novel things. Now that's not about a hot button issue, like cultural wars or anything, but it is interesting. And you can always explain for each one. It's like the Anna Karenina quote of &#8220;every happy family is happy in the same way. And all unhappy families are unhappy in a different way&#8221;.</p><p>I guess all clips that go viral are unique in their own way, at least I don't know. Maybe that's not true for the average podcast, but that's what I found for the ones I've tried to analyze of mine. I don't know how much the audience, I think it'll be, definitely you and I are interested in how these clips are manufactured is challenging. I wonder how much the audience is interested in the clip making.</p><p><strong>Theo: </strong>Well, the audience is interested in the clips. That's the point we're trying to optimize for the audience. So, have you studied Mr. Beast or any other super viral people or is what you try to do just different? </p><p><strong>Dwarkesh: </strong>I don't think there's much to generalize from the Mr. Beast type stuff. I admire what he's able to do for his own kind of content. It just, you just can't advertise that content the same way that we advertise ours.</p><p><strong>Theo: </strong>What about specific podcasters in this niche, like Lex Friedman? Although Lex doesn't seem to do that much.</p><p><strong>Dwarkesh: </strong>He has a clips channel. I think that helps him out probably. I think he's just kind of farmed it.</p><p><strong>Theo: </strong>Yeah. I think the only time I actually watch video podcasts is the Lex clips.</p><p>Speaking of watching different media, going back to talking about you reading, do you typically read mostly books or articles or do you watch YouTube videos or podcasts or all of the above? What's the split there?</p><p><strong>Dwarkesh: </strong>I actually don't listen to many podcasts at all. If any, there's not really maybe a handful. I can't really think of any podcasts I listened to that regularly. I do read a lot of books, obviously. And part of it is what drives my interest. Part of it is the books of the guests I'm interviewing. Because I've been getting a lot into AI recently, there's a lot of papers and technical material, some textbooks. It just depends on the subject. What about like, I can definitely talk about like for a particular, if you want me to, I can go into what might've happened on like a typical episode if I'm interviewing Dario or Ilya or something.</p><p><strong>Theo: </strong>Yeah, sure.</p><p><strong>Dwarkesh: </strong>Let me think back on which would be a good episode. Yeah. So for Dario, I read all the papers that they put out on the transformer circuits, a thread of different mechanistic interpretability things. Then just reading a bunch of stuff about scaling the original scaling loss papers, how that's evolved over time, talking to a bunch of researchers, AI researchers to better understand the field. And what's uncertain about it. That would be interesting to ask about better understand the mechanistic interoperability results and what they imply. </p><p><strong>Theo: </strong>How do you get people like Dario in particular who are, seem to be very media shy on the podcast. Is it just cold emails?</p><p><strong>Dwarkesh: </strong>Eventually you build up a reputation and then, you know, somebody who's a link to them, which is what happened there. Yeah, basically just not necessarily cold emails, but you just like, you get to meet more people over time and I, it's not something I would try to do consciously, but it just been helpful and that's what's gotten me some of the biggest guests.</p><h3>Improving the podcast (17:03)</h3><p><strong>Theo: </strong>Do you think that you're naturally good at podcasting or more that you got good over time? And if so, what specifically improved?</p><p><strong>Dwarkesh: </strong>I definitely have gotten much better over time. Like I haven't even tried to listen to one of my old conversations because if I try it, I think I cringe really hard.</p><p><strong>Theo: </strong>What changed?</p><p><strong>Dwarkesh: </strong>I've I just learn more and you can notice this about if I listen to podcasts, I don't like I noticed the same thing that same patterns that I saw in my old podcast, which is very generic questions. Cause you just don't know much about anything. So you just have to ask these sort of vacuous general questions. Yeah, I, I've just learned more things. I can better empathize with the audience. And as long ago I got an older, I think that's not an insignificant part of this. I started the podcast when I was 19, I'm 23 now. My brain has probably changed in that time.</p><p><strong>Theo: </strong>Yeah, I imagine. Was it like a conscious effort or just kind of like just getting older and smarter?</p><p><strong>Dwarkesh: </strong>Was there any? Definitely the learning things was for years I've been preparing to interview guests from a wide variety of fields. And so I've been reading a lot during that time. And that that's definitely been a big part of it. Well, there's no specific thought like, Oh, I need to make my questions better. And here's like the dimensions on which I can make the questions better.</p><p><strong>Theo: </strong>It's just like, you know, add more to the pre-training data.</p><p><strong>Dwarkesh: </strong>Basically. Yeah. That's a great way to phrase it.</p><p><strong>Theo: </strong>Because like a lot of progress in AI is just getting more data and better data.</p><p><strong>Dwarkesh:</strong> And I actually heard a really interesting analogy to here to learning in general. When you're getting into a new field, you just want to pre-train a whole bunch of random tokens. You read the papers, the textbooks, you're just trying to grok. And then afterwards, you do this fine-tuned supervised learning where you delve deep into what every passage means, once you better understand what's going on generally in the field.</p><p><strong>Theo: </strong>Like the Noah Smith, two papers thing.</p><p><strong>Dwarkesh: </strong>What is that?</p><p><strong>Theo: </strong>Basically, Noah Smith said something like if you want to introduce me to a new field of literature, give me two papers from that literature. That's a good test because if the literature doesn't have two good papers, then the rest of it's not worth reading. If the papers themselves are really insightful, then there's probably something there. I forgot the rest of it, but that&#8217;s the essence of it.</p><p><strong>Dwarkesh: </strong>That's probably a good tip on how to evaluate the literature to begin with. </p><p><strong>Theo: </strong>So if you didn't consciously refine much in the past, have you thought about what to consciously refine for the podcast in the future?</p><p><strong>Dwarkesh: </strong>I have thought about ways to promote it. As for the basic format that I usually do have interviews, I actually haven&#8217;t. And I think people can give me feedback that I should probably take to heart, but there's not something that I think&#8212;Oh, there is one thing, but this is not about learning more or something. It's just making sure that I actually ask about the most important thing. And I don't let that go because I used to have this habit and may still do of just bouncing around from archaic topic and esoteric thing I read in their book to esoteric thing I read in their book and not just honing in on the most important thing and making sure we spend a good 20 minutes on it. </p><p><strong>Theo: </strong>So what do you think the most important thing would be if someone were to interview you? Is this question cheating?</p><p><strong>Dwarkesh: </strong>I think I'm different in the sense of like, I don't have a big take and that were&#8230; I guess we could talk about AI. I mean, I had to think about that in order to do different interviews. But honestly, even there, I don't really have an original take. I just have different, small sorts of takes and heuristics about a lot of different kinds of things. But for me, I think people seem to think I know more about all these kinds of heuristics about podcasting that I don't. So when people ask me, what are your tips for podcasting? I just, I don't know. I just try to read it and I just try to come up with questions. The object level things are the things that are definitely very interesting to me, but the topics themselves.</p><p><strong>Theo: </strong>So yeah, you mentioned something about people were giving you feedback on the format of the podcast. Have you thought about monologues? Two of the relatively few podcasts that I&#8217;ve listened to are hardcore history by Dan Carlin and the Founders Podcast by David Senra, which are both monologues like always. And I find them to be really, really interesting even though I typically prefer to read blog posts. </p><p><strong>Dwarkesh: </strong>No, I think that's a great point. And the great thing about, for example, Hardcore History is that it is an audio book in some sense, because it's like 12 hours of content on a topic. That's an audio book, but though he narrates it conversationally, he's just talking to you. And so the speech patterns and the redundancies that make speech easy to understand come about naturally. And so, yeah, I love that. Maybe I should do that. So the next time I read a blog post, maybe I should just do a related monologue and not just narrating the blog posts, but just kind of shooting the shit about it. There's something I've thought about before you would, would you find that interesting? Yeah, I would. Okay. Yeah. I'll try that on the next one. </p><p><strong>Theo:</strong> I wonder if the monologue podcast grabs the human attention more than an audio book of reading something that's meant to be read and not listened to.</p><p><strong>Dwarkesh: </strong>Exactly. Yeah. There's definitely a different sort of cadence to speech than to writing. And then just like the conversational nature of these kinds of podcasts gets out of that better.</p><p><strong>Theo: </strong>We were talking about different blog posts that you've done. So you've deleted some of your old podcast episodes and articles. So why, why did you, is it just not meeting the quality bar? Like what would it have taken to keep them up?</p><p><strong>Dwarkesh: </strong>Yeah, exactly. It just wasn't that good. I mean, again, like I started the podcast and blog and I was like 19 years old. So it's not that surprising that I look back on it and then cringe at the low quality of some of it, which is not anything against my past self. You know, I'm very grateful for what my past self has done, but just certain things I'm just wasn't like the best work I produced. So I just kind of took it down. </p><p><strong>Theo: </strong>I liked the Contra David Deutsch on universal explainers one. </p><p><strong>Dwarkesh: </strong>Did I take it down?</p><p><strong>Theo: </strong>I think so.</p><p><strong>Dwarkesh: </strong>Oh, I should put that back up. I fondly remember that one.</p><h3>Contra Marc Andreessen on AI risk (24:18)</h3><p><strong>Theo: </strong>And definitely the Contra Marc Andreessen on AI risk on that one. I really liked, you didn't take that down.</p><p><strong>Dwarkesh: </strong>Yeah. </p><p><strong>Theo: </strong>Were you surprised by his reaction?</p><p><strong>Dwarkesh: </strong>Yeah. Yeah. And then people pointed out to me afterwards. Well, maybe I should have emailed him privately beforehand to let him know. I don't know what that would have changed. I mean, I guess fair enough. I think it still doesn't. The main thing is not even like the personal reaction. I really don't care about that. It just, I just hope he gets considers the arguments against position. And I don't know if he has.</p><p><strong>Theo: </strong>I mean, I'm sure that he's, he's certainly seen the arguments against this position, but do you think that's, that's, that's a little bit bearish if so.</p><p><strong>Dwarkesh: </strong>It's surprising that someone as prominent, famous, and clearly intelligent as Marc Andreessen does not seem to be able to engage with counter-arguments. I don't think he has an obligation to write a counter-argument to me. He's a busy guy. He has an open invite to come back on the podcast. He was on it before to talk about AI and these related topics. I don't know if he'll take me up on it now, but I think if you're going to play in the intellectual arena, you have to engage if someone, like me, goes through the effort to do a point by point rebuttal of that blog post. Especially when it goes viral. So I think if something reaches that stature and has that kind of effort and quality behind it, you&#8217;re obligated to respond.</p><p><strong>Theo: </strong>That was one of my favorite episodes, actually the Marc Andreessen one. I did not know that carry used to refer to whaling operations.</p><p><strong>Dwarkesh: </strong>Oh, yeah. He's a really smart guy. He's super interesting. Has a really interesting taste about all kinds of things. I just think here he's got some bad arguments. I mean, if you&#8217;re gonna put out ideas, he has an open platform at least to come address them on my podcast.</p><h3>How will AI affect podcasts? (26:31)</h3><p><strong>Theo: </strong>So, how do you think AI specifically will affect the future of podcasts? What would happen if it becomes superhuman at interviewing or researching or being interviewed? I just saw a tweet yesterday. It was a meme about AGI booking a slot on the Dwarkesh podcast.</p><p><strong>Dwarkesh: </strong>I saw that too. Well, getting interviewed or doing the interviewing?</p><p><strong>Theo: </strong>Either. Just what do you think will happen to your career as AI becomes more powerful?</p><p><strong>Dwarkesh: </strong>I think that would be the least of our concerns at the point at which it can automate a podcast. I don't expect to be one of the first jobs to go. It seems like a pretty subtle art, not only to ask the questions, but then to have the human presence and to be able to respond with follow-ups to what the guest says. I'm not expecting to get automated anytime soon. </p><p><strong>Theo: </strong>Well, assuming AGI goes well, cause you said it would be the least of our concerns. And so, we live in this wonderful utopian AI future. Would you still podcast? How do you think podcasting would change?</p><p><strong>Dwarkesh: </strong>That's a good question. I honestly think the post AGI world is an under-theorized question. I've asked it to basically all my AI guests and none of them have given me a good answer. Part of that is, well, what are you doing personally? I mean, I think personally I would like to become an enhanced being, traveling around the galaxy with the help of the technology that the AI has given me and not just be a podcaster forever, hopefully.</p><p><strong>Theo: </strong>Well traveling around the galaxy in reality or virtually? Cause one of my first guests, Greg Fodor, gfodor, likes to talk about this idea of subterranean aliens. What if the solution to the Fermi paradox is just that all the aliens go underground in pods under the crust to protect themselves and go live in VR and do whatever they want in VR. Why would they travel around the galaxy? If your ship gets blown up, then you actually die. Whereas you could just send a robot that you control from your VR pod underground.</p><p><strong>Dwarkesh: </strong>I think that makes sense if we're assuming they're biological entities, but I kind of priced in already that they are the software that's running in the drones, like eventually a civilization will just have software. And that's what I mean when I say that I would be enhanced. So, I imagine you'd be an emulation or something. </p><p><strong>Theo: </strong>If you think about AI so much and the future and technology, do you discount the importance of space exploration, for example, like a lot of people think of SpaceX and not OpenAI as the most transformative company.</p><p><strong>Dwarkesh: </strong>It's interesting to see if they kind of merge and link together. Not the companies themselves, but if those technologies emerge in some way, you can imagine, I don't know, some sort of GPU run in space or something. That&#8217;s a little more far-fetched. The development of AI will be super hardware contingent. I mean, we're going to be seeing like if the compute centric framework is correct, we're going to see, you know, $50 billion training runs or a hundred billion dollar training runs or something. And all kinds of different hardware is going to be relevant to that. I don't think they're going to be unlinked at the point in which we're developing the AGI. </p><p><strong>Theo: </strong>If you did book the AGI on the Dwarkesh podcast, what would you talk to it about?</p><p><strong>Dwarkesh: </strong>I'd be super curious about its psychology. Does it think in the same concepts that we think in? There's the obvious type of what are its values, but how different is even the basic cognition and the thought process? And, or is it the case that because it learned to think in human language, that it adopted the same mind that that language has been developed on, which is our human mind.</p><p><strong>Theo: </strong>Or would it just not know? Kind of how we don't really know how the brain works?</p><p><strong>Dwarkesh: </strong>I should probably read more cognitive science to better understand even how human thinking works. That's a good point. I think there's also another big possibility. I guess we'll have better insight into its own mind than we have in our own because of if the mechanistic interoperability and all these other kinds of research work out. And so we might not even have to ask, we could just look in the AI directly. What are the things I'd be curious about? I mean, just a bunch of stuff relevant to how it's thinking. Presumably, it's thinking at a different speed. I'd be curious about how it communicates with other AIs. Are they communicating in language or can they just share that latent space? There'd be so many different questions. It wouldn't be about their opinions or something. Other than the fact that I care about their values, I'd just be super curious about how they work and how much is available to them to divulge about how they work. Maybe they don't understand themselves, but&#8230;</p><p><strong>Theo: </strong>Maybe they'll be prevented from understanding themselves too well. I don't think OpenAI will give them access to the weights.</p><p><strong>Dwarkesh: </strong>But we don't have our own weights. And I guess you could say that we don't understand ourselves as a result, but I don't know. I feel like you could probably learn a lot just from introspection.</p><h3>AI alignment (32:08)</h3><p><strong>Theo: </strong>Are you more optimistic about AI alignment, given that we can't access our own weights and yet we seem to be fairly aligned and we can access the weights of the AIs? I talked to a few people on my podcast, Nora Belrose, Quintin Pope and so on on Twitter, Teortaxes, who seem to be much more optimistic about alignment for that reason.</p><p><strong>Dwarkesh: </strong>There is definitely not only that we can read their minds, but here's something we can't do with humans. If you commit a crime, we kill you off and then we kill off all your descendants so that the genes, which caused your crime are diminished in the gene pool.</p><p><strong>Theo: </strong>They did that in ancient China.</p><p><strong>Dwarkesh: </strong>I guess you're kind of a little bit like society or you're saying, we just send you out to prison. I don't think it has that much of a sort of genetic effect that whereas with AI is really literally a gradient descent, we can sort of like, and not only to read their mind, but like actually change their mind in a sort of very fine grained way. So, in those two ways, it definitely does suggest that it might be easier to read their mind. The main difficulty, of course, being that the starting point is not something that is just genetically very similar to us, but it is totally alien. It just starts off on a different trajectory than evolution. Like humans already have this sort of inbuilt machinery that's quite similar to each other.</p><p><strong>Theo: </strong>Well, do you think it's just totally alien? Like Rune has tweeted a lot about how, you know, he used to think that AIs, LLMs were alien minds, the Shoggoth from another dimension. And now he thinks that their character is instantiated from the human prior.</p><p><strong>Dwarkesh: </strong>But there's an Eliezer rebuttal, which is that because it can pretend to be any human in that predict their next word, doesn't mean that it itself is the average over all those humans or something. And I just think like, we don't really know, or at least I certainly don't know. And we shouldn't just assume the safest or the most comforting possible version. Well, just like one human, you're just, it just grogging human consciousness. I mean, it's just like no, human works by being able to predict accurately what any given human might say on the internet. And it might be the case that the end result of this is something that approximates human psychology pretty well in its own intrinsic motivations. It just doesn't seem warranted to assume that will be the case.</p><p><strong>Theo: </strong>Well, Eliezer talks about the actress and the Shoggoth, but what about the rebuttal to that, which is, you know, all it is is a next token predictor. And if the next tokens contain goodness and love and peace, then the AI will do goodness and love and peace. And if they contain taking over the world and the AI will take over the world. And there's no reason to believe that there's actually a Shoggoth inside whose desires will be different from just the distribution of texts that it was trained on. </p><p><strong>Dwarkesh: </strong>And then I will just have to recapitulate the entire sequences because then, then there's the Eliezer response, which is that as a thing gets smarter, then it will closer and closer approximate something which has goals and intrinsic drives. And that's kind of the basic shape of the argument.</p><p><strong>Theo: </strong>Do you think that the empirical evidence so far has been friendly to the Eliezer group?</p><p><strong>Dwarkesh: </strong>Oh, depends on which part, like certainly not on the fast takeoffs, but you gotta remember this guy was writing this shit like 20 years ago. Right. Compared to the people who other people were writing 20 years ago, he certainly is more accurate compared to what we know now. It's not, he was, he expected the sort of intelligence disclosure and it's, it looks like we're living in this slow, slow takeoffs world. As for on that particular prediction, what was, I think I lost my train of thought, but I'll let you say, say what you were talking about.</p><p><strong>Theo: </strong>We were talking about the Shoggoth. Is there a Shoggoth inside GPT-4? How is the empirical evidence? Do we just not know?</p><p><strong>Dwarkesh: </strong>Right. So, I mean, one of the things is that dumber animals don't seem to have any sort of ability to, to, they just kind of respond to the direct immediate, like an amoeba will just go towards the light, right? There's not some, there's not some goal or directive. It just, the next token prediction equivalent for an amoeba, just go towards the light. And as things get smarter, it does seem that there's more of a sense of agency and maybe agency is required to do really complicated tasks that we will train the AI to do. Why that agency couldn't, would be something we would always control. It's not self-evident.</p><p><strong>Theo: </strong>So you strike me as somewhat middle of the road centrist on AI risks, not like a full doomer, but you know, not sympathetic to the Marc Andreessen. We're all going to be fine totally arguments. Yeah. Have you gotten like more optimistic or more pessimistic over time? How has your AI risk journey gone?</p><p><strong>Dwarkesh: </strong>I think even a year ago, I wouldn't have contemplated these things seriously. However, the advances we've seen since have convinced me that this is real. This is actually going to happen in our lifetime. Once you integrate that into your worldview, everything becomes more concrete. So, in a sense, I've become more pessimistic than I started off as, but also more optimistic. Originally, the assumption was either you just don't think this is real or you're a doomer. Then there's a lot of really smart people in the middle, as you say, that I've interviewed on the podcast and they've given me very interesting worldviews that helped me better understand their perspective, like Carl Shulman, Paul Christiano, and so on.</p><h3>Dwarkesh&#8217;s guests (38:04)</h3><p><strong>Theo: </strong>Going back to some of your guests on the podcast. I love a lot of your podcast guests. I've had a couple of them on my podcast, Razib Khan, Scott Aaronson. I've met Bryan Caplan before. I know that you're good friends with him. One of my friends cold emailed him a couple of years ago, just saying, &#8220;hey, do you want to get lunch, we're also nerds&#8221;. Not only did he agree, he took us to this kebab place in Fairfax near George Mason, paid for our food, and stayed with us for about two and a half hours and just talked about all kinds of stuff. It was basically an unrecorded podcast.</p><p><strong>Dwarkesh: </strong>It sounds just like Bryan. He's a great guy.</p><p><strong>Theo: </strong>I'm a big fan of Bryan Caplan. Bryan, if you're watching this, thanks. And I hope to have him on the podcast soon.</p><p><strong>Dwarkesh: </strong>Yeah, you should.</p><p><strong>Theo: </strong>So, who of all your guests strikes you as the most raw, intelligent and why?</p><p><strong>Dwarkesh: </strong>Certainly, it would be the AI researchers, people like Dario or Ilya. It just takes a lot of raw fucking IQ to do that. I mean, if that's what we're counting on and I'm not, I certainly don't think that's the most important criteria for everything. But on that raw measure, I think maybe those two would be in contention. And then, but I've obviously had extremely smart people, people who are way smarter than me on a bunch of them.</p><p><strong>Theo: </strong>Do you think it's easy for you to gauge how smart people are who are much, much smarter than you?</p><p><strong>Dwarkesh: </strong>Yeah. It's hard to do a bullshit test. I mean, I'm going to go down the list because basically every person I've had on the podcast is really, really fucking smart. But I just realized the last person, one of the last people I've had on who qualifies is also Paul Christiano and Scott Aaronson, who we both interviewed. I mean, I have this great story about Scott Aaronson where I was taking his class and he explains his results. And he says, it's a very important result. And he says, you know, I almost proved this myself in 1999, but I realized that somebody had beaten me to the punch six months earlier. And I looked back on it. It's like, how old would Scott Aaronson have been in 1999? He would have been 19 years old or 18 or something. And that's when he did it. Yeah. So maybe Scott Aaronson is my answer. It was like, pure raw IQ.</p><p><strong>Theo: </strong>I don't know if you pick favorites, but who do you think your favorite guest was? And who do you think your favorite episode was? And are they the same? Is there an overlap there?</p><p><strong>Dwarkesh: </strong>I don't know if this is necessarily my favorite, but this is the first one that comes to mind. I really enjoyed Carl Shulman, just because I got introduced to so many new concepts as a result of that episode, from the compute centric framework for understanding the scaling and rise of AI to a bunch of the specific takeover risks. So I would say that one. Did you listen to it by any chance?</p><p><strong>Theo: </strong>Yeah, I listened to it.</p><p><strong>Dwarkesh: </strong>What'd you think?</p><p><strong>Theo: </strong>I loved it. Carl Shulman struck me as really intelligent, in a sense, he kind of struck me as intelligent in the same way as Eliezer. Meaning he makes a lot of his own concepts. He doesn't just take whatever there is out there and the prevailing discourse. He makes his own.</p><h3>Is Eliezer Yudkowsky right? (41:58)</h3><p><strong>Theo: </strong>What impressions did you get from Eliezer, by the way? Did you think he was like Carl Shulman or different?</p><p><strong>Dwarkesh: </strong>I think that's a fair way to characterize. I definitely think Carl is more rigorous as a thinker and much more up to date on current developments, having a better understanding, for example, of the actual hardware limitations that are current or the weaknesses and advantages of the current architectures and so on. I would put them in slightly different buckets. I do think they're similar in the sense that they're in the sort of actual, so they adopt this. I think one sort of similarity is that they do think that the decision theory stuff is important and matters. There's just a bunch of weird shit about acausal decision theory and things like that. And they think like that actually could affect the course of things. But yeah, the difference being that Carl, I think is a bit more rigorous.</p><p><strong>Theo: </strong>So do you think that some of the character assassinations on Eliezer have some substance, like how he's detached from reality. He doesn't understand what he's talking about. He's not technical. Or does he strike you as maybe this guy is right after all. Cause I'm pretty split on that.</p><p><strong>Dwarkesh: </strong>I don't think he's right on like 99% doom. I think he&#8217;s just way overconfident. And I think he's also wrong about the fast takeoffs. And I think the evidence shows that he's been wrong about it.</p><p><strong>Theo: </strong>Does it, or have we just not reached the fast takeoff yet?</p><p><strong>Dwarkesh: </strong>It&#8217;s seeming more and more like there&#8217;s not a critical point where things just implode, but rather that intelligence is just a gradual scaling thing. And it could be wrong. Of course, anything could be wrong, but you just have to update on evidence as you go forward. The updates seem to be pointing in the direction away from Eliezer. That being said, the most important thing is that I think he's an endlessly creative and interesting thinker. And I think you just have to put him in that context of, yeah, he's probably one of the most intellectually generative people of the last 20, 30 years. I've learned a lot from reading him as a teenager and then in college and so on. Are there things he's wrong about? Yes, of course. I don't understand the visceral hate that people seem to have for him. And I also don't think people are being fair when they dismiss his contributions. He was on this decades ago. The thing that is the main thing people are thinking about now, he was on it decades ago.</p><p><strong>Theo: </strong>The visceral hate I think is just psychological pain avoidance lashing out. If they think that if Eliezer is right, I, and everyone I love will die then, no, I don't want to believe that. And I'll attack him. I'll defend myself by attacking him.</p><p><strong>Dwarkesh: </strong>Oh, and then he obviously is not a normal guy. So then it just becomes really easy to be like, Oh, what a weirdo or something. And I just don't think that's fair or a valid argument. </p><p><strong>Theo: </strong>I remember when I was eating lunch with Bryan, this was before ChatGPT, before the recent boom in AI, we talked about Eliezer Yudkowsky, who I was familiar with, but not as dialed in as I am now. And he said like, he had lunch with Eliezer recently and Eliezer tried to sit there and convince him that the world was going to end. And he was like, that's just silly. Could AI, a superintelligent AI, convince me to kill myself? I just don't think that it could do that. And he obviously does. I wonder if he's updated since then. He has updated on timelines. I've been since he lost his bet on whether GPT-4 would pass his exam.</p><p><strong>Dwarkesh: </strong>I haven't talked to Bryan about it, but I'm really curious to see where his head is at now. You should ask him about it when you have him on the podcast.</p><h3>More on the Dwarkesh Podcast (46:01)</h3><p><strong>Theo: </strong>Yeah. Do you agree with Tyler Cowen's characterization that podcasts are basically entertainment?</p><p><strong>Dwarkesh: </strong>Oh yeah, definitely. I mean, I think back on, no, actually. Okay. I have two minds on this. On the one hand, I know how much, how little I understand the fields that I do podcasts on. And then I think back on, I read so much in order to be able to ask questions about this field. And I still think I really don't understand it in any meaningful sense. I couldn't just walk into, I couldn't actually do the job, so to speak. If I'm interviewing somebody who was a researcher or something, and that reading is titrated then to just a few questions that I get to ask in the two hours or whatever. And the response is able to give. So if I personally feel like there's so much about the field that I understand, obviously the audience is in a worse position than I am because of the reading I've done, unless they independently happen to know about it. So I definitely don't think it's a replacement for actual expertise or something. </p><p>That being said, I mean, you know, I was saying earlier that I haven't listened to that many podcasts recently, but when I was in high school and, you know, a teenager and then in college, I learned so much about so many different fields from podcasts and you could say, well, you get a sort of introductory understanding of many different fields. And yeah, that's true, but that's useful for most people. They need intros to everything.</p><p><strong>Theo: </strong>So when you were talking about titrating into a two-hour episode of the AI researchers, when you do your research, is it more like, holy crap, there's so many amazing and interesting questions I could ask these people. Or is it like, there's really like, I need to find great questions. Are great questions overabundant or not abundant?</p><p><strong>Dwarkesh: </strong>Not abundant, usually. There are some guests where I literally have a list that&#8217;s 20 pages, Google docs or something. And obviously, we can't get through it. Usually it's not okay. Usually it actually, I don't have enough good questions or that I just barely have enough good questions. What's your experience?</p><p><strong>Theo: </strong>Basically the same. I vary based on the guest. How many of your questions, if any, are just off the cuff? Do you come up with any completely new questions that you hadn't put in the document off the cuff?</p><p><strong>Dwarkesh: </strong>Definitely. I mean, the followups, for example, a lot of them are off the cuff, but a lot of the followups that are actually questions I was planning on asking later on, but they just naturally follow the sequence of what my guest just recently said.</p><p><strong>Theo: </strong>You ever come up with entirely new questions, like not followups just off the cuff.</p><p><strong>Dwarkesh: </strong>Yeah. You know, you just have questions as somebody is talking. And that's why the research is helpful for the conversation. So you have enough context to ask those followups.</p><p><strong>Theo: </strong>So during the episode, when you're interviewing someone, what do you think is the optimal amount of tangents to go into? Like, what's the optimal amount to edit out?</p><p><strong>Dwarkesh: </strong>I don&#8217;t really edit out that much. The main constraint is the time of the guest. You don't want to waste time talking about things that are not really important or interesting. The optimal number of tangents is not zero, but there's such a thing as going on too many. It's hard to say generically, there's certainly not a number that one can give, but you want to go down enough that you can explore interesting directions and new ideas, not so much that you're never getting to the meat of the subject. It should serve the exploration rather than hinder it.</p><h3>Other great podcasts (50:06)</h3><p><strong>Theo: </strong>You said you listened to a lot of podcasts back in high school and college. Who were your favorite podcasters and what were your favorite podcasts? Is there overlap there?</p><p><strong>Dwarkesh: </strong>In high school, I listened to a lot of Sam Harris. Just a lot of normie shit. I was into politics when I was in high school, which is obviously a bad idea. It's just a tremendous time sink.</p><p><strong>Theo: </strong>As for favorite podcasts and podcasters, and whether those are the same, there are good podcasts without good podcasters or good podcasters without good podcasts. </p><p><strong>Dwarkesh: </strong>Can you give me an example of a good podcaster who doesn&#8217;t have a good podcast?</p><p><strong>Theo: </strong>For example, I hate to say it, I love Lex Fridman&#8217;s podcast, but I don't think he's a particularly good interviewer in the way that you or Tyler Cowen are.</p><p><strong>Dwarkesh: </strong>There are certainly people like that. An interesting question to reverse would be a good podcaster who just has the wrong format. And as a result, is really fucking it up. There are certainly people you can think of who I wish they had a podcast. Somebody like Christopher Hitchens or something. It would be really cool if he did a podcast. There are people who are just super interesting and voluminous thinkers and writers. And they&#8217;re super great and I wish they&#8217;d have a podcast. I've had former guests on where I think they would do really well if they started their own podcast. Sarah Payne was one such figure, just great at extemporaneously speaking and explaining her ideas. But at the time when I was in high school, who were such people? It's hard to remember. Hmm.</p><p><strong>Theo: </strong>You think you were just very different back in high school?</p><p><strong>Dwarkesh: </strong>Yeah, I think so. I mean, that's true for everybody though. Right?</p><p><strong>Theo: </strong>Yeah, I suppose so. Do you have any favorite podcast episodes from other podcasters that stand out?</p><p><strong>Dwarkesh: </strong>Yeah, I think for example, I don't really listen to a podcast that much anymore, but not, not for any reasons of disagreement, but for example, Sam Harris had a great episode when the BLM stuff was happening. He went into the data on police shooting. I thought that was a pretty brave thing to do and also super needed and sense-making at the time. He deserves a great deal of credit for that. As for ones that are not like "this message needs to go out" kinds of things, there's probably a bunch of episodes on Tyler's podcast that helped me understand the subject.</p><p><strong>Theo: </strong>Mine would probably be when Tyler Cowen interviewed Paul Graham. It was a meeting of two great minds who I admire a lot.</p><p><strong>Dwarkesh: </strong>Really? I was kind of frustrated because it bounced around from subject to subject enough that like Paul was not prepared to really delve deep into any of them. I think it was really interesting and I really enjoyed listening to it. But what was your reaction to that sort of taking away of that conversation?</p><p><strong>Theo: </strong>Yeah. I mean, there's the meme where Tyler was talking about like the Medici and Paul like hadn't really thought about it. So he was just like, &#8220;yeah, that's kind of cool. I guess&#8221;. Yeah, I mean, Tyler has a unique style that you don't see very often. And I really, really like Paul Graham but I think Paul Graham is best on like lengthier essays where he has had lots and lots and lots of time to think through them. Like his recent one on like, I forgot what it was called but the one that he just came out with, I have to look it up because it was one of the best things I've read in the last year, How to Do Great Work. That took him like six months just to write this few page long essay. And then of course, my other favorite individual episode was probably Lex Friedman interviewing Neil Gershenfeld who was the director of the Center for Bits and Atoms at MIT. And this was recommended to me by a friend. I didn't find it by myself. And it was all about like self-replicating machines which I had never really thought of.</p><p><strong>Dwarkesh: </strong>Yeah, I should listen to that one. That sounds interesting.</p><p><strong>Theo: </strong>Self-replicating machines and just manufacturing in general. He has a class at MIT called How to Build Almost Anything where they learn about different kinds of fabrication. His goal is to create like a general fab manufacturing area in the sense that we have like a general purpose computer that can do any computation.</p><p><strong>Dwarkesh: </strong>I've heard that sort of sentiment about nanomachines expressed. Drexler has this thing of you can compute anything and then now you need to be able to program any sort of physical. I should listen to that episode. It sounds interesting.</p><h3>Nanobots, foom, and doom (56:01)</h3><p><strong>Theo: </strong>What do you think about Drexler's nanomachines arguments? Have you read his book?</p><p><strong>Dwarkesh: </strong>Yes, I read his recent one, The Radical Abundance. And now he's working on AI stuff, right? From what I understand.</p><p><strong>Theo: </strong>I haven't heard about that. I just know that Eliezer cites his <em>Nanosystems</em> book a lot.</p><p><strong>Dwarkesh: </strong><em>Nanosystems</em> is a different guy actually. Wait, no, sorry, <em>Nanomedicine</em> is a different guy. My bad. I think it's really interesting. I'm still not sure why it didn't go anywhere. But I really enjoyed <em>Radical Abundance</em>. He has a lot of interesting arguments about the intrinsic efficiency of nanomachines. From what I remember, it was as you miniaturize it, it just becomes a lot more efficient. And you think about how, for example, in your own body, how fast the molecules are moving and how much work they can do. And that's a direct physical effect of miniaturization. </p><p>I would love to talk to somebody about why that didn't go anywhere. In the book, he has complaints about the funding situation in the 90s where they were supposed to put a bunch of money into nanomachines and then it got co-opted into stuff that was familiar to the old paradigm and wasn't actually advancing the state of the field. But why has it still not gone anywhere? Maybe we should just have something on the podcast to talk about it. Because that actually is pretty interesting.</p><p><strong>Theo: </strong>Yeah, maybe you could get Drexler on. Do you think that has any implications for FOOM? Like, do you think, even if you have a human level AI, even if you don't have like a fast takeoff intelligence explosion, do you think that means an AI would be able to kill all humans very, very quickly?</p><p><strong>Dwarkesh: </strong>Well, certainly nanomachines that can multiply very quickly are possible because we have bacteria. And, you know, you can just imagine how fast they can absorb energy? You can look at algae that multiply and take in photosynthesis and they can transform the shape of the earth pretty fast. Obviously, it has implications because then the question is how fast could they absorb energy? How fast could they do work? But I mean, in the limit, it probably makes like a few months difference between whether they had to do with robots versus whether they had to do with nanomachines. But even if the nanomachine stuff doesn't pan out, I think even the robot takeoff is pretty fast.</p><p><strong>Theo: </strong>Well, do you like to think about p(doom)? Do you think p(doom) is a useful representation for how you think about AI risk? Or is it just kind of like made up numbers based on vibes?</p><p><strong>Dwarkesh: </strong>Well, I mean, it can be both. It can be a made up number and it can be useful to have as that. It's useful, I think, to just throw out a number to gauge your credence of any event on. As for, I do understand the criticisms of having such a number that obviously these are the consequences of human actions with the p(doom). So you can't, but that's always true of any probability you get, right? It's not just true of p(doom). So this is, then there'll be criticisms of giving probabilities of anything, a war or a probability of somebody winning an election or something. Yeah, I think it's sensible if somebody's thought about it a lot to have that number.</p><p><strong>Theo: </strong>Do you have a p(doom)?</p><p><strong>Dwarkesh: </strong>Mine is not that sensible. It just, mine literally is a number I kind of pulled out of my ass. I don't know, like 20% or something. And just because that&#8217;s Carl Schulman's or I don't want to misrepresent him. His might be different, but kind of just like pulling out of people I find credible.</p><p><strong>Theo: </strong>Yeah, 20% seems reasonable, but at the same time, like, do you think that if for any given century in the future, there's like a five to 20% p(doom)? Like, does that just mean very, very bad news for civilization making it another 100,000 years? I remember you talking about this with Tyler.</p><p><strong>Dwarkesh: </strong>Yeah, I think the goal is to get to just transition from this current regime where it is possible to wipe out all of humanity to get to a regime where you'll be like spread out through the stars where some of us are in like not human anymore. Some of us are AIs or gods or some mix or enhanced. And hopefully we can get to an equilibrium where it's just, if life is all around the galaxy doing beautiful creative things and it's different kinds of civilizations, it's hard to imagine how you could wipe all of that out.</p><p>Now it might just be that the laws of physics prohibit that kind of independence. Warren has this really interesting essay called Colder Wars, where he imagines that it just really easy to catapult a comet into a planet or solar system and just destroy everything. And so therefore there's no, it just destruction becomes really easy. That might be the case. I don't know, there might be some physics that makes it super easy to destroy planets and stuff, but hopefully we're getting to a situation where it just like negligible probability over time. You know what I mean? They just like every year the probability drops. So it asymptotes towards like the cumulative probability doesn't go to a hundred.</p><h3>Great Twitter poasters (1:01:59)</h3><p><strong>Theo: </strong>So going back to social media and your research process, do you scroll through Twitter a lot?</p><p><strong>Dwarkesh: </strong>I do. It depends on a lot. I think it was certainly not as much as many people, more than I should, of course.</p><p><strong>Theo: </strong>Well, yeah, it's just so addicting, but who are some of your favorite poasters, like P-O-A-S-T, and what do you think makes them so good?</p><p><strong>Dwarkesh: </strong>Oh yeah. Daniel's pretty funny. I like him.</p><p><strong>Theo: </strong>I just got the Daniel follow the other day. </p><p><strong>Dwarkesh: </strong>Oh, nice. Let's see. Yeah, it's funny. I don't have any that like regularly make me laugh and that's my main criterion because obviously you cannot take posters if you're getting out your actual intellectual opinions from them. It's from 140 characters at a time. That's a different story.</p><p><strong>Theo: </strong>What about someone like Roon?</p><p><strong>Dwarkesh: </strong>Yeah, he's great. Yeah, definitely the market has decided obviously that he's a good poster as well. Certainly doesn't need my endorsement anymore or ever, but yeah, he's great. I haven't ranked my poasters, but I'll have to make a tier list with the S-A-B-C-D and so on.</p><p><strong>Theo: </strong>If you were to come up with criteria for what makes a poaster good, would you think that'd be similar or different from what makes a good podcaster?</p><p><strong>Dwarkesh: </strong>I think it certainly is a type of skill to be able to make things that are really compelling in 280 characters. There are two things I wouldn't assume. I wouldn't assume that that actually correlates to... I'm not talking about anybody we've named. I'm talking generically about people. I don't want to name any specific names, but you have somebody who comes up with takes on Twitter and they have takes about different kinds of topics and they shoot them out. And then you actually talk to them in real life. And then you talk about a subject which they shout out a bunch of takes about and you realize, oh, they understand nothing about this. And so it definitely dissuades you of the sort of notion that if somebody has a lot of takes or a lot of viral takes, good posts about a topic, he actually understands it in any way. I guess I said I had two things, but that's the one thing I have.</p><p><strong>Theo: </strong>So you said you spend more time than you show on Twitter. How do you spend your time in general? I remember there's an interview that you did with another website a couple of years ago. Has it changed since then? Do you have a daily routine?</p><p><strong>Dwarkesh: </strong>I don't remember what I said on there, but I read quite a bit. That's most of my job. So yeah, I spend a lot of time doing that and making, there's a lot of logistics involved with the podcast itself as I'm sure you know, of making clips and editing and so forth. That takes up a lot of my time. Then a bunch of logistics involved with, you know, reaching out to people and things like that. And that basically sums it up. I exchange ideas back and forth with people, email, group chats, meetings, so forth, meet people who are researchers or understand fields well. And that's about it, pretty simple existence.</p><h3>Rationalism and other factions (1:05:44)</h3><p><strong>Theo: </strong>With the people you talk to, would you say you're adjacent to the rationalist community?</p><p><strong>Dwarkesh: </strong>Yeah.</p><p><strong>Theo: </strong>It's interesting. Almost all of my guests, I will find eventually that somehow some of them are rationalist-adjacent. Even the ones I didn't really expect, like Razib Khan. When I interviewed him, he told me like, oh yeah, actually I was with Eliezer in 2008 with lots of people at the original Singularity Institute. And just the Bay Area rationalists. And he was an OG there.</p><p><strong>Dwarkesh: </strong>It seems like you're pulling people who have some presence on Twitter among the kinds of people you follow. And it's not that surprising that among that group, that there'd be a lot of rationalists.</p><p><strong>Theo: </strong>Well, it seems like there's some new factions forming with people who might historically have called themselves rationalists or EAs who now really don't like it, like e/accs. Although again, it's the same sort of story as Razib where they like rub shoulders with, just like, it's not as totally independent.</p><p><strong>Dwarkesh: </strong>I have been interviewing historians recently and there you just have people who would not know if you use the word rationalist, what that means. Have just not interacted with the Silicon Valley culture for better or for worse.</p><p><strong>Theo: </strong>I was looking at an interesting post earlier today that was like a political compass, except instead of the axes being like authoritarian, libertarian and like left, right, it was AGI will be like the internet versus AGI will be like a million times more important. And then we should accelerate versus we should slow down. So do you think something like that will become the most important grid on which people align their politics in the near future or will it kind of just remain the traditional political framework?</p><p><strong>Dwarkesh: </strong>I don't think it'll be either of those. I do think if the takeoff, I mean, if the AIs, at some point, if the takeoff stuff is true, then it'll become the most prominent fact about our political life. I don't think there's gonna be that much of an appetite&#8230; I don't think 25% of the country is gonna be agitating for the top right where you're trying to engineer on the maximum flops out of the solar system. I don't think there's a huge demographic constituency for that. I think the current factions are one, a result of certain backlash against EA kind of things and two, a sample of the kind of people who are talking about it right now. What that actually transitions into when it enters a mainstream political system, I think it looks pretty different and it might be a worse axis on the political system where people try to shoehorn it into current contemporary issues of political correctness or economic equality kinds of things that pale in comparison to the real stakes, which is the fucking galaxy, right? But yeah, I don't know if the e/acc versus EA will be Democrats versus Republicans in like 10 years.</p><p><strong>Theo: </strong>Maybe. Do you think that e/acc is an interesting or useful philosophy or is it kind of just vibes and trash?</p><p><strong>Dwarkesh: </strong>It depends on what you mean by e/acc. I don't want to do the same sort of intellectual dishonor that many of them conduct of completely dismissing ideas without actually trying to grapple with them. So it depends on what you mean by e/acc. It is true that technological growth has been the main force of the betterment of humanity throughout history. It's the kind of thing where you're doing a motte-and-bailey, well, if that's what you're endorsing, yeah, I'd endorse that as a historical statement. And then with AI, you have something that's kind of breaking the pattern of the pace of history and the centrality of human beings and so on. So it might be worth considering on its own terms. As for, I don't even know what the broader e/acc take is then of you maximize, I don't even wanna, it's hard to, can you explain what the e/acc take is?</p><p><strong>Theo: </strong>Well, first of all, it's kind of funny. Yesterday, I was wearing my effective accelerationism T-shirt, which I got not because I'm an e/acc, but just because I think the logo is cool. And the general sentiment for everything other than AI is pretty great. It would have been funny if I was wearing it on the podcast just by chance.</p><p><strong>Dwarkesh: </strong>I will say, by the way, I don't necessarily endorse the exact opposite of the e/acc claim that slowing down AI is in and of itself. I do think people sometimes seem to believe in the magical properties of slowing down AI or have an unrealistic understanding of how that might be possible.</p><p><strong>Theo: </strong>Oh, like &#8220;we&#8221; just need to&#8212;</p><p><strong>Dwarkesh: </strong>The end goal is not just to have slow AI, the end goal is to align the AI and then point it towards something good. The slowing is only a means to an end. You're not just going to keep it down forever. So the opposite of e/acc is certainly not a statement I would endorse. I wouldn't endorse something like "pause AI".</p><p><strong>Theo: </strong>I've noticed a marked degradation in the discourse among rationalist, doomer, decelerationist kind of people over the last few months. Probably just because it's becoming more popular. They're now committing many of the sins that the e/accs have in their time.</p><p><strong>Dwarkesh: </strong>Although you gotta remember the real serious people who are concerned about alignment are not posting on Twitter all day. They're doing technical things at labs. The kind of people who have the time to be making memes on Twitter are not the best and the brightest.</p><p><strong>Theo: </strong>What you said about what do e/accs actually want to maximize? I watched Beff&#8217;s talk about thermodynamics and the future of everything. It was basically about how, with thermodynamic dissipative adaptation, what we're trying to do is maximize the amount of free energy in the universe that will create complexity to best take advantage of it. That's what the universe itself did to create life. That's what capitalism does to create great businesses and great business owners. I don't know how good of an explanation thermodynamics is for this, but I think the general sentiment is basically true that complexity arises out of simplicity and can do pretty great things.</p><p><strong>Dwarkesh: </strong>That's true of a lot of different philosophies that you wouldn&#8217;t actually endorse the implications of. If you take Marxism, for example, the idea, if you look at Marx's reading of history, is that you have an exploiter class that comes up with an ideology to justify their exploitation of either slaves or peasants. Before modern economic growth, that kind of was what history looked like. You did have serfdom and slavery. Maybe that's not directly addressing the point you're making. To more directly address the point you're making, it is true that we want more complexity and more beauty and so on. I don&#8217;t see why that follows&#8230;will even correlate necessarily with thermodynamic free energy in the future. If I told you here's a world that's more beautiful, but it has less free energy, would you rather have the one with more free energy and less beauty and creativity? I don't understand why that'd be prima facie the thing you're trying to maximize. What if it was totally unconscious? Was it literally just optimizing for the maximum entropy of the universe, but wasn't actually in any way something recognizable to us as something that could be beautiful or something that could experience different great feelings?</p><p><strong>Theo: </strong>Then there's back into the debate about is an unconscious entropy maximizer, paperclip maximizer type thing even possible? Are p-zombies possible? Is it possible for something to have goals and the intelligence to pursue them, but no kind of self-reflection or consciousness?</p><p><strong>Dwarkesh: </strong>It could be true. We don't know one way or another, but it's true even among humans that there have been these pathological ideologies that have pursued these single-minded aims that have resulted in terrible harms and communism, Nazism, whatever. So if that's possible with humans, I don't know why you'd assume that's not possible with AIs.</p><p><strong>Theo: </strong>Well, cause communists and Nazis are conscious.</p><p><strong>Dwarkesh: </strong>I mean, in terms of like, even if they are conscious, they can, them trying to pursue their ideology to its ends or their value system to its ends just results in a shit ton of mayhem and destruction.</p><h3>Why hasn&#8217;t Marxism died? (1:15:27)</h3><p><strong>Theo: </strong>Speaking of which, why do you think Marxism has been so persistent of an ideology? Even after Marx made a lot of specific predictions that were specifically falsified, like the US would soon become communist, and that just didn't happen.</p><p><strong>Dwarkesh:</strong> I&#8217;m certainly not an expert in this.<strong> </strong>I just interviewed Jung Chang who wrote a book about growing up during the cultural revolution in China. She wrote a biography of Mao that was not well-received amongst academia because it was really harsh on Mao. I asked her why there is this instinctual desire in parts of academia to defend these brutal communist dictators like in Venezuela or Cuba or Russia. As for why there's Marxism, I think part of it is just that it aligns with certain aspects of human psychology of making classes of people, exploiters exploited and so on, and having an overarching theory of history, a narrative and a sense of struggle. But I remain confused as to why it's not been completely discredited and people still ascribe to it.</p><p><strong>Theo: </strong>Well, techno-optimism also has a grand narrative and the monomyth, the hero's journey, play to human psychology. Humanity ascended from our position as apes in the savannah to building the sand god. (I think the sand god phrasing is a little cringe, but you get the point). So why hasn't techno-optimism supplanted Marxism? Is it just the inertia of the system?</p><p><strong>Dwarkesh: </strong>Well, you have competing ideologies. It certainly is succeeding in some sense, right? Or has adherence. Part of it is just that there's not enough people yet who have enough context to understand techno-optimism, whereas anybody can understand Marxism or not anybody, but you can kind of understand the sort of thinking behind Marxism.</p><p>Yeah, and I don't think they're necessarily supplant each other more as they'll just be in competition with each other, or they happen in competition with each other. Like a bunch of narratives are in competition with each other. As for how one should personally be in relevance in relation to these narratives, Tyler Cowen has a great talk where he says that as soon as you adopt a story, you're basically pushing a button that decreases your IQ by 15 points. You know, you gotta take things case by case and understand the specifics of situations instead of having some 5,000 year grand narrative that explains everything. </p><p><strong>Theo: </strong>That reminds me of something else Bryan Caplan said on your podcast where he was like, you're talking about feminism and he was like, oh, when I write books about it, I try not to argue like a lawyer. And, you know, begin with my preconceived conclusion and then make arguments for it no matter what.</p><p><strong>Dwarkesh: </strong>Yeah, and just probably to a certain extent he does that because we all have our force to do it. The great thing about society is that then we just rebut each other and we're left with a better outcome in the end.</p><h3>Where to allocate talent (1:18:51)</h3><p><strong>Theo: </strong>You like to talk about youth and talent. So if you could rule the world and you could reallocate the amount of smart, talented kids entering the workforce however you wanted, like what areas would you take them out of and what areas would you put them into?</p><p><strong>Dwarkesh: </strong>I actually did ask this to Grant Sanderson. I don't know if you listened to that one. This might be a sort of overreaching take. I certainly think like really smart people being in non-STEM subjects. There certainly should be some people doing non-STEM things. Would I wanna take them out of it though? Because then you'd just be left with even worse non-STEM things. Because then you actually have to reduce the relevance of non-STEM things into everyday society. Oh, I will say this. The obvious one is, you really want smarter people in politics.</p><p>Here's an interesting observation. So often when I'm reading an interesting paper or reading an interesting article or something and I look up the author, the guy turns out to be a central banker. He's like the former president of the New York Federal Reserve or something. And so we have a great system actually for finding and identifying really professional, competent, non-partisan people to be central bankers. And I would just like that kind of system for different kinds of government offices. Like if the US president was as smart as the prime minister of Singapore or something, or the ministers in the cabinet were as smart as that, obviously the political life, we should definitely have more talent in. But I don't think it's like talent people aren't going into politics so much as, the selection pressure doesn't favor them. </p><p><strong>Theo: </strong>So what do you think the secret is for central bankers then?</p><p><strong>Dwarkesh: </strong>I think there's literally a whole, like a centuries long sort of filtration process. Maybe not centuries long, but decades long. That institutions that have been built up, that like you learned, that we find the most competent people in high school, we send them to these elite colleges. And econ, at least until recently, is not like a politically compatible institution. Discipline, it's like a super rigorous discipline where we care about the truth. And then we find the most competent people who have been through the undergrad there, and we send them to grad school, we find the most competent people there, and then we have them do, be like shadow the people who are most competent. And this is the same thing with law schools. And I think that's why the Supreme Court, for example, it's like, I actually have, I think they do a good job of actually trying to know, understand and parse the law. Because we have the system that like selects for people to be in the system. And I think that's why the Supreme Court because we have the system that like selects for people. You know what I mean? We have these institutions that cultivate talent in this way.</p><p><strong>Theo: </strong>So it&#8217;s just like relentless competence and talent filters? There's no additional traits that you would like specifically select for, for a central banker, aside from just intelligence and hardworkingness and competence?</p><p><strong>Dwarkesh: </strong>And caring about the subject, like being a non-political person, not an activist type. What was it you just said?</p><p><strong>Theo: </strong>Integrity, I imagine.</p><p><strong>Dwarkesh: </strong>Yeah, yeah, yeah. But it's not even just like, you can be high integrity and also be a very political type of person, if that makes sense. Just like a non-partisan, ambivalent, politically ambivalent types.</p><h3>Sam Bankman-Fried (1:22:22)</h3><p><strong>Theo: </strong>You wouldn't want Sam Bankman-Fried running the central bank though, I imagine.</p><p><strong>Dwarkesh:</strong> No, no.<strong> </strong>So then, yeah, it's interesting to specify what is it about him? Because he's obviously smart, he's obviously hardworking. Maybe dysregulated people. Not dys, D-I-S, as in not being regulated by the government, but as in personally not being well-regulated. That seems like a bad sign.</p><p><strong>Theo: </strong>Are you talking about Sam Bankman-Fried?</p><p><strong>Dwarkesh: </strong>Yeah.</p><p><strong>Theo: </strong>Really? Because he strikes me as very well-regulated personally, but just kind of misaligned. He's not unaligned, he's misaligned. Instead of focusing on, oh, what should I be doing that's legal? What should I be doing that will benefit my shareholders the most? He thinks about, what should I do that will benefit my lofty goals of effective altruism the most?</p><p><strong>Dwarkesh: </strong>I honestly don't think that's the best explanation of his behavior. I think it generally is a level of incompetence at certain things, using QuickBooks or making these ridiculous bets that even in expected value terms probably didn't make sense at the time he was making them. Just being hopped up on a bunch of amphetamines and making these back of the envelope billion dollar decisions, I don't think that's well-regulated personal behavior. </p><p>And he just evidenced, and maybe this is better evidence for the old school theories that people tend to have. Hey, if you get a haircut and you act like a normal person and dress up in a suit, I'll trust you. I think SBF is good evidence of that. The kind of guy who's just hopped up playing StarCraft while he's talking to you. I guess you could say there's no first principles reason you should disqualify somebody for that, but this is good evidence of that kind of person is just all over the place.</p><p><strong>Theo: </strong>I think famously he was remarkably bad at League of Legends. He never made it past bronze or silver or something after years of playing for hours a day.</p><p><strong>Dwarkesh: </strong>Yeah, yeah, yeah. But he was playing it for hours a day while he was in meetings and shit.</p><p><strong>Theo: </strong>So then if the explanation is not that he was malicious, but just that he was incompetent, how did he get such success in the first place?</p><p><strong>Dwarkesh: </strong>This is something I have learned by watching a bunch of very successful people is that you would be surprised the extent to which there are successful people in certain domains who lack judgment and can make big mistakes in pretty seemingly relevant domains. And you should always double check people's judgment even they're in high positions of credibility. Everything from your judgment of how things will progress, epistemic judgments to these sorts of tactical executive judgments. </p><p>I mean, there's a bunch of details that came out about what's been going on with Alameda and FTX, but the sort of bets they were making of doubling down on these shitcoins and so on, it certainly doesn't seem like... The lying was obviously the low integrity move, but those bad bets themselves were just evidence of fucking it up, right?</p><p><strong>Theo: </strong>Yeah. I remember reading somewhere in one of the teardowns of what actually happened, that Sam Bankman-Fried famously made his first $20 million from arbitraging Bitcoin, and then it was gone within a year because he made a bunch of really bad bets.</p><p><strong>Dwarkesh:</strong> Yeah, I mean, he just lost a shit ton of time on AWS and things like that, right?</p><p><strong>Theo: </strong>Yeah. So I guess if you make lots of very seemingly stupid, high variance bets on illiquid, inefficient markets like crypto in the early 2010s, one of them might pay off well. But still parlaying $20 million success into a $20 billion success even temporarily is like, that's no small feat.</p><p><strong>Dwarkesh: </strong>Oh, certainly. He's definitely a talented guy, but that just goes to show you that talented people can have bad judgment and be incompetent even at their own fields. Like I have less of a sort of mindset now of this guy's unilaterally a super achiever, this guy's unilaterally has bad judgment.</p><p><strong>Theo: </strong>Do you think you would have in a million years predicted that FTX would just blow up like this? Or be fraudulent?</p><p><strong>Dwarkesh: </strong>No, to be honest. No, no, honestly. Yeah, I interviewed him before and I sort of did like a lot of research on him and his company before and I would not have. </p><p><strong>Theo: </strong>Do you think that there are any companies today that you look at and think of like, wow, this might be like another FTX situation?</p><p><strong>Dwarkesh: </strong>Yeah, like there's a lot of companies in AI where you think, what valuation are you raising at? And why will you not just be automated at the next OpenAI dev day? I don't know if it's like FTX level though. I don't know if there's a big fraud.</p><p><strong>Theo: </strong>Grifters are everywhere, but like, you know, specifically on the level of FTX.</p><p><strong>Dwarkesh: </strong>Yeah, it's hard to see. I think crypto is especially liable to fraud, obviously, because you are just moving numbers around rather than, so it becomes easier there. Yeah, I mean, I do think there will be like, you know, we saw stuff with Emad and Stability. I don't know if you saw all those revelations. Yeah, I've seen them. Yeah, yeah, so stuff like that. I don't know, I think a lot of stuff will like that will come up in AI, but it will just be so overwhelmed by the good investments that are made in AI that will be like trillion dollar companies or something.</p><p><strong>Theo: </strong>Yeah, with FTX, I was watching the Nas Daily YouTube Shorts video on him before the collapse. And he was like, oh, this guy is a billionaire and he's vegan and he wants to donate all of his money to effectiveness and he does crypto. And I was thinking like, obviously, I'm not gonna say like, oh, I predicted this. You know, I knew what was gonna happen with FTX beforehand, but something there struck me as a little sus, a little not normal for billionaires. </p><p><strong>Dwarkesh: </strong>Yeah, I think it's also the case that you could say this about a lot of people. There's always evidence that retrospectively paints them in like, oh, I should have, that was very sus. I should have seen it coming. And I bet like you could tell a similar story about literally every single billionaire. There's some things out there, the rumors that afterwards you could be like, oh, obviously that guy was a fraud.</p><h3>Why is Elon Musk so successful? (1:29:07)</h3><p><strong>Theo: </strong>So another question on talent. What is it that makes people stand out even among like extremely talented, extremely smart, extremely productive people, like Elon Musk? So, you know, Elon Musk just stands out just totally in a class by himself, even among like billionaires. So why is that, what's different about Elon? </p><p><strong>Dwarkesh: </strong>What I've heard from people who have worked with him or are in lesser degrees of separation is just a complete willfulness of the John Wick quote of he'll just, I don't remember the exact quote, but it was like, he'll just get what he wants. He'll scream, he'll throw tantrums. He'll stay up 24 hours a day. He'll do whatever it takes, but it is happening if he wants it to happen. He'll fire everybody and restart the whole project. But a level of focus on progress and the lack of complacency.</p><p><strong>Theo: </strong>Is that it? Is that all it takes? And if that is all it takes, then why hasn't anyone else reached his level?</p><p><strong>Dwarkesh: </strong>I mean, how many people do you know who all that could apply to?</p><p><strong>Theo: </strong>None in real life? It's very rare, of course.</p><p><strong>Dwarkesh: </strong>I've gotten to meet a lot of people in the last few years. And I'm trying to think, do I know somebody who's that willful? Maybe, but I think it's a rare trait.</p><p><strong>Theo: </strong>Just high agency people?</p><p><strong>Dwarkesh: </strong>Even agency doesn't do the word justice.</p><p><strong>Theo: </strong>Really? It's more than that?</p><p><strong>Dwarkesh: </strong>It's not just, cause when people mean by agency nowadays, it's been so diluted of, it just means like, are you willing to send a cold email? Congratulations or something. But this is like, no, literally, I'll go fly to fucking Russia and we're going to buy the old ballistic missiles. It's just like a level of, this is happening no matter what. It's not just that I will come up with different ideas, but I will make it happen no matter what. It's like calling the ocean wet, you know? That's like what calling Elon high agency is.</p><p><strong>Theo: </strong>Have you read the Walter Isaacson biography?</p><p><strong>Dwarkesh: </strong>No, have you?</p><p><strong>Theo: </strong>Yeah.</p><p><strong>Dwarkesh: </strong>Well, you might know more about this than me then, actually. What do you think? What makes him special?</p><p><strong>Theo: </strong>Well, I think one of the best takeaways was not even in the Walter Isaacson bio. It was in a Scott Alexander article. Yeah. Slate Star Codex, Astral Codex Ten. And it wasn't about the Walter Isaacson book. It was about the Ashlee Vance book. But he said something like, very similar. Elon is like a one in 10,000 or like one in a thousand level of good engineer and intelligent and all that. But obviously like that's a necessary but not sufficient condition for the success that he's had.</p><p>But what really sets him apart is that he's like one in a million driven. Like he will do all this stuff. He'll go to Russia and he'll stay up 24 hours a day and work 120 hour weeks and take on projects that people would think would be completely insane and then make them work. Right. But it just seems like something's missing. Like, how is it that only Elon is Elon? Right. Why is there only one Elon and not a hundred Elons?</p><p><strong>Dwarkesh: </strong>I think there are like a lot of startup founders who are very driven. I don't think like Elon is necessarily the only person who's that driven. It just, even if they were all equally driven they wouldn't necessarily all achieve the equal outcomes and their outcomes would be distributed along the power law, right? And so maybe you would see the exact same pattern we in fact do see.</p><p><strong>Theo: </strong>Could it be like ambition and complacency? Like, not everyone at the age of 50 worth, 11 figures is going to continue being in the office 80 or a hundred hours a week working on some of the hardest stuff. Like Bezos isn&#8217;t doing that anymore.</p><p><strong>Dwarkesh: </strong>Yeah, that's probably part of it, right? Like you could have, how many Elons are there that just retired after SpaceX and took home the a hundred million dollars? Yeah. </p><p><strong>Theo: </strong>And then do you think Elon is incredibly, incredibly smart? I don't know how well you know him or know of him personally&#8212;</p><p><strong>Dwarkesh: </strong>No, I don&#8217;t.</p><p><strong>Theo: </strong>&#8212;but I wonder how much just raw intelligence factors into his success.</p><p><strong>Dwarkesh: </strong>There's a big debate about this, right? Of what is extremely high IQ necessary for something like that. And do these people in fact themselves have extremely high IQs?</p><p><strong>Theo: </strong>Warren Buffett famously says no.</p><p><strong>Dwarkesh: </strong>He says after 130, you might as well just give up those points and work on emotional IQ. But I think that's like, that's bullshit. 130, that's just like two standard deviations. That's like 5% of the population. That's a huge amount of people, right? Even among them, you can definitely filter for IQ. And, you know, there's been these studies that show that the gains from IQ don't actually diminish. You can just keep going out the curve and you'll still keep seeing gains in salaries or whatever. </p><p>So yeah, I think they're really smart. It's just the case that if you're selecting and got some bunch of, like a hundred different traits, you're not gonna get a top score in any one of them because it's a multi, the guy who has the highest IQ probably doesn't also have all these other traits which are necessary.</p><h3>How relevant is human talent with AGI soon? (1:35:07)</h3><p><strong>Theo: </strong>So we've been talking about all these talent questions. How relevant actually are they in a world where we seem to be rapidly approaching AGI?</p><p><strong>Dwarkesh: </strong>I think they definitely are relevant to obviously to the AI question itself, right? Like you definitely wanna recruit people who are gonna be working on this. In fact, I might be more relevant than ever because if you look at past periods in history where there has been huge kind of&#8230; As the AI stuff starts to take off, you're gonna have, gonna need like politicians and policy makers and hardware makers and diplomats and the world's gonna look crazy, right? If this stuff pans out. And so it's gonna be thousands of people at the very least who are managing this whole thing as it goes down. And to now be plucking out the people who would be talented had these different kinds of roles and not only managing the research itself, but the huge amounts of variables that are gonna be late when you have $50 billion training runs and countries potentially going to war with AI weapons. Maybe it might be more relevant than ever to be picking the talent to manage that and just have generally competent people in society so that when it happens, they can deal with it well.</p><h3>Is government actually broken? (1:36:35)</h3><p><strong>Theo: </strong>So just have good policy makers. This reminds me of what you were discussing in one of your Tyler Cowen episodes. Tyler suggested that state capacity might not be in decline and it might be stronger than it previously was. Do you think that's true?</p><p><strong>Dwarkesh: </strong>I just had Dominic Cummings on and if you&#8217;ve seen it, you know that his take is that state capacity is very much in decline. I think that might be a description of the UK itself. It feels like with COVID we saw that the system was very brain dead in many important ways. Maybe it was even worse before, so it could just be that things are improving. I don't really know.</p><p><strong>Theo: </strong>The US and the UK have always been different. For example, Lee Kuan Yew went to the UK and he noticed everyone there was orderly waiting in the queue. He was impressed and decided to bring this orderliness and respect for rules to Singapore. And he did. Now Britain is less orderly than it was.</p><p><strong>Dwarkesh: </strong>Yeah, I saw that. To some extent, I think maybe the problems Dominic is talking about are unique to the UK, but I think a lot of them are general. They're just the huge bureaucracies that are insulated from executive control and from any system to prune away incompetence. </p><p>There are certain aspects of the system that do seem to be really competent. I actually do have a lot of confidence in the Federal Reserve or the Supreme Court. The FDA, the CDC, those kinds of institutions actually didn&#8217;t(?) seem to function really badly during the pandemic. But then there are other aspects of the system that do function really well, that are linked to the government itself, a bunch of think tanks and so on. I don't know how it nets out actually. </p><p><strong>Theo: </strong>You mentioned the Supreme Court and the Federal Reserve as two examples of institutions that do function well. I remember the Federal Reserve in particular, the last couple of years with inflation has just gotten so much shit from all kinds of people for not having enough data and not reacting quickly enough. Do you think the Federal Reserve is still even relevant when we have so much data and so much compute? Or was it ever relevant? Sorry, necessary, not relevant.</p><p><strong>Dwarkesh: </strong>It's certainly relevant in that they set the monetary policy. If you're going to have a dollar currency, you need it. Obviously it matters. And, is it necessary? Yes, if you're going to have dollar denominated currencies, then the policy of managing dollars is going to matter and it's necessary. You could say, well, with crypto or something, you could maybe not have that. Maybe, I don't know. The actual stable cryptos have been the stable coins, which are obviously dollar denominated and then liable to move around with the Federal Reserve's decisions. </p><p><strong>Theo: </strong>What do you think are some other examples of institutions that work really well within the US government?</p><p><strong>Dwarkesh: </strong>The RAND Corporation, which is not obviously officially linked to the government, but is, I think they have been focusing a lot of efforts on AI and bio-risk kinds of things. And they seem super competent and well-versed there. I guess you're asking like, national government institutions, right?</p><p><strong>Theo: </strong>Yeah.</p><p><strong>Dwarkesh: </strong>I don't know that much about them, actually. Those are the only ones that come to my head immediately.</p><h3>How should we fix Congress? (1:40:50)</h3><p><strong>Theo: </strong>Do you have any ideas based on talking to Dominic Cummings or based on reading Robert Caro about how to fix Congress? The single most shat-on institution in the country, probably?</p><p><strong>Dwarkesh: </strong>I think just regular stuff of having higher IQ people and paying them more. Dara Jones in 10% Less Democracy has these ideas about, it does seem like senators, here's just something that's really interesting. The senators who are best are actually from these random states, like Montana or Nebraska. And they're just these really smart people. And then the ones who are worse are from these really big states. And also generally senators just seem to be a lot smarter than congressmen on average. And that probably has in part to do with they're more insulated from day-to-day democratic whims. So maybe having longer terms for senators and congressmen. Yeah, I would do that. Like houses elected every four years instead of every two.</p><p><strong>Theo: </strong>That kind of flies in the face of what, you know, your average American would say, if you ask them, how do we fix Congress? They're like, cut their pay, and impose term limits!</p><p><strong>Dwarkesh: </strong>Term limits might be warranted. I just like, but actually maybe not. Cause like on one hand there's a gerontocracy, on the other hand, there was such a thing as expertise that you're building up over time as you're in the institution. But yeah, I think that the average person would be wrong. But you know, many such cases.</p><p><strong>Theo: </strong>So you talked about how we need to get higher IQ people in Congress. And, you know, that seems to be no easy task. Like a recent-ish example would be Blake Masters who ran in Arizona. He was clearly smart. He went to Stanford. He co-wrote Peter Thiel's book. He was endorsed and funded by Peter Thiel. He had ideas that were out of the mainstream, which is some signal of intelligence. He didn't just get everything from the Republican Party platform. And he still lost in a state that was previously relatively Republican. So how do you reconcile getting smart people in office with just the reality of politics?</p><p><strong>Dwarkesh: </strong>The whims of voters might not be optimizing for that? I think it's unfair to blame voters on that one in particular. I don't follow politics closely. I don't know the particulars of that campaign, but it seemed like he played a Faustian bargain there with Trumpism that it's understandable why voters might have concerns about. But politics is not something I follow closely, so I don't know the particulars of that race. </p><p><strong>Theo: </strong>What about just in general? How do we get more high IQ people in Congress?</p><p><strong>Dwarkesh: </strong>Pay them more, longer term limits, I think is a big one. I think part of it is just getting high IQ people to decide to go into politics. It's not just about the system, which lets for them is also what goes into a filter. But these seem like obvious things. I don't know if I have anything new to say here. It's obvious that we should have smart people try to go into Congress. And it's obvious that we should pay them more. But what do you think we should do?</p><p><strong>Theo: </strong>Basically that. But in terms of actually getting smart people in Congress, I think a lot of smart people will kind of just follow where the money is because they're smart and money is nice. And that leads them into CS. I like computers, but I like a lot of stuff and I would be lying if I said that my reason for picking CS over all of them was not largely motivated by money. And AI.</p><p>What Singapore did clearly seemed to work really well. But then again, there are other countries like Israel that has been a quite successful country, quite successful post-colonial success story. And unlike Singapore, they did not have a super well-functioning political system. If you know about Israeli politics, you know that it's been falling apart the last few years. In order to get a majority in government, parties need to form coalitions with the Orthodox and then of course, for the first couple of decades of Israel's existence, it was run by the socialist labor party. So maybe it's not absolutely necessary to have super smart, well-coordinated people running a government for stuff to work.</p><p><strong>Dwarkesh: </strong>Well, Israel did actually become much wealthier since it adopted free market stuff originally, right? So, I think it's GDP per capita just shot up a lot. </p><p><strong>Theo: </strong>Maybe the solution is not optimize for the best people in government. Maybe it's just take the government out of most stuff and most stuff will work out.</p><p><strong>Dwarkesh: </strong>Yeah, I think it's definitely a combination of both, like a smaller government, but the part that it has to run, it's just run by very competent people, which is kind of Singapore, basically.</p><h3>Dwarkesh&#8217;s favorite part of podcasting (1:46:46)</h3><p><strong>Theo: </strong>So flipping the script a little bit, people like to start podcasts with how did you get into this? And what's your favorite part? But I'll try ending the podcast with what's your favorite part of doing what you do? And what specifically motivated you to do it? Was it just boredom? </p><p><strong>Dwarkesh: </strong>Yeah, I was bored in college. I think I was literally in the same situation you were in. I was a sophomore in college studying computer science. The best part is definitely, I will never stop being grateful for the fact that I can talk to literally the smartest people in the world, the people talking about and thinking about the most interesting things and just ask them questions for two hours or three hours at a time and be funded to spend the rest of my time thinking about what to ask them, doing research, trying to figure out what's important, who to have on and so on. That's a huge privilege. Obviously I'm super grateful for it and that's my favorite part. </p><p><strong>Theo: </strong>All right, well, I think that's a good place to wrap it up. So thank you so much to Dwarkesh Patel for coming on the podcast. </p><p><strong>Dwarkesh: </strong>Yeah, my pleasure, man.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Dwarkesh Patel. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. Also be sure to check out Dwarkesh&#8217;s Substack, dwarkeshpatel.com, follow him on Twitter @dwarkesh_sp, and of course, listen to the Dwarkesh Podcast, which you can find on YouTube, Spotify, and Apple Podcasts. All of these will be linked in the description. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item><item><title><![CDATA[Charlie Munger, 1924-2023]]></title><description><![CDATA["Spend each day trying to be a little wiser than you were when you woke up. Day by day, and at the end of the day-if you live long enough-like most people, you will get out of life what you deserve."]]></description><link>https://www.theojaffee.com/p/charlie-munger-1924-2023</link><guid isPermaLink="false">https://www.theojaffee.com/p/charlie-munger-1924-2023</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Wed, 29 Nov 2023 16:06:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!J2fx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!J2fx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!J2fx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg 424w, https://substackcdn.com/image/fetch/$s_!J2fx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg 848w, https://substackcdn.com/image/fetch/$s_!J2fx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!J2fx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!J2fx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg" width="1100" height="824" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:824,&quot;width&quot;:1100,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:39738,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!J2fx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg 424w, https://substackcdn.com/image/fetch/$s_!J2fx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg 848w, https://substackcdn.com/image/fetch/$s_!J2fx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!J2fx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e06eb5e-5640-489b-819f-8d6ed70f6370_1100x824.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Yesterday, November 28, 2023, Charles Thomas Munger died peacefully at a California hospital. He was just a month shy of his 100th birthday. Though I never got the chance to meet him, he left a more profound impression on me than almost anyone else in the world.</p><div><hr></div><p>Charlie was born in 1924 in Omaha, Nebraska. After serving in the Army during World War II and graduating <em>magna cum laude</em> from Harvard Law, he quickly rose in the world. He practiced law and real estate, saving and investing wisely until he could start his own law firm and investment fund, which he did in 1962. His law firm, Munger, Tolles, &amp; Olson, remains one of the most prestigious in the country, and his investment fund, Wheeler, Munger &amp; Co., averaged a 19.8% a year compound annual growth rate for thirteen years, compared to just 5% for the market.</p><p>In 1959, Charlie met Warren Buffett at a dinner party. The two became best friends instantly. They quickly started to do business together, and Munger joined Buffett&#8217;s Berkshire Hathaway as Vice Chairman in 1978. Together, they built it from a struggling textile mill to one of the most valuable companies on the planet&#8212;with holdings in insurance, energy, railroads, consumer goods, and manufacturing, just to name a few. They made legendary investments in See&#8217;s Candies, GEICO, Coca-Cola, American Express, BNSF Railway, Berkshire Hathaway Energy, and Apple. Charlie bought his first shares in Berkshire for around $7 in 1962. At market close on the day of his death, his shares were worth $546,869 each.</p><p>Munger was involved in many projects outside Berkshire. He was an early investor in Costco, serving on its board of directors until his death. He invested in Chinese businesses through his friend and prot&#233;g&#233; Li Lu&#8217;s Himalaya Capital. He served as chairman and chief stock-picker of the Daily Journal Corporation for decades. He remained an active real estate developer and investor nearly sixty years after his first project. Despite having no formal training, he became an accomplished architect. His passion for building extended to his philanthropy. Throughout his life, he donated over half a billion dollars to charity, mainly student housing and other university buildings, most of which he designed himself.</p><div><hr></div><p>I first discovered Charlie as a teenager, out of school for COVID, with a lot of newfound time on my hands. I had been a fan of Buffett for a while, but never fully appreciated his right-hand man until I&nbsp;read about him on <a href="https://fs.blog/intellectual-giants/charlie-munger/">Farnam Street</a> and Instagram&#8217;s <a href="https://www.instagram.com/charliemungerquotes/">@CharlieMungerQuotes</a>. When my mom bought me a copy of <em>Poor Charlie&#8217;s Almanack</em>, Munger&#8217;s iconic half-biography, half-anthology of speeches, I went through its 600 pages in about two days. I began voraciously consuming not just every Munger speech, shareholder meeting, and interview I could find, but many of the books he recommended. He completely changed the way I think about the world, and was one of the major figures in my move from adolescence to adulthood.</p><p>Some people, like John von Neumann, are gifted with nigh-superhuman intelligence. Charlie was gifted with nigh-superhuman rationality, and a knack for communicating it. His mantras for life formed a full <a href="https://fs.blog/munger-operating-system/">operating system</a>: <strong>Be honest, trustworthy, and reliable. Sell only what you would buy, work with only people you admire. Learn all the time and think about the world in a multidisciplinary way. Maintain rationality and objectivity at all times no matter what. Learn assiduity&#8212;work until the job is done&#8212;and equanimity&#8212;stoically control your emotions no matter how hard times get. Get rid of selfishness, envy, resentment, and self-pity.</strong></p><p>Two pieces of his in particular stood out to me for their incredible clarity and wisdom. <a href="https://fs.blog/turning-2-million-into-2-trillion/">Practical Thought About Practical Thought: Turning $2 Million Into $2 Trillion</a> explains in a totally intuitive way how the massive success of Coca-Cola was a result of simple financial and psychological decisions. <a href="https://fs.blog/great-talks/psychology-human-misjudgment/">The Psychology of Human Misjudgment</a> goes through 25 human tendencies that he learned over a lifetime, how they lead to real-world errors, and how to avoid them. Out of everything he&#8217;s ever written, I recommend these two the most.</p><p>Charlie was a role model for dealing with adversity too. When he was 29, he underwent a painful divorce with his wife of eight years, losing everything. Two years later, his nine-year-old son Teddy died from leukemia. In his fifties, a failed cataract surgery caused him such extreme pain that he had to have an eye removed. He nearly lost his other eye, which would have made him unable to read. He dealt with greater than 50% losses in Berkshire&#8217;s portfolio at least three times. And throughout it all, he maintained his hardworking attitude and sense of humor.</p><p>Charlie&#8217;s was the very model of a life well lived. Despite being worth $2.6 billion (which would be many times higher had he not donated so many Berkshire shares), he did not succumb to loneliness, materialism, and unhappiness as so many rich people do. He was married to his second wife, Nancy Barry Borthwick, for 54 years until her death in 2010. He&#8217;s survived by seven children, two step-children, fifteen grandchildren, and seven great-grandchildren, with whom he would go on an annual trip to a family compound in Minnesota every summer for over seventy years. From his many, many, many friends, business associates, shareholders, and fans, he achieved something even more important than wealth: <em>earned respect</em>.</p><div><hr></div><p>For Charlie&#8217;s last-ever Berkshire shareholder meeting last May, my dad and I made it a true Buffett-and-Munger-fan day. We ate breakfast at McDonald&#8217;s and watched the meeting while eating See&#8217;s peanut brittle and drinking Cherry Coke. At 99, Charlie was still razor-sharp, with the same brevity and wit he displayed decades earlier. Warren would answer questions with his characteristic paragraphs, and Charlie would respond with his own insightful one-liners, just like he always has. I pre-ordered the new Stripe Press edition of <em>Poor Charlie&#8217;s Almanack</em> months ago. Charlie never got to see it published, but it&#8217;ll be a fixture on my bookshelf for years.</p><p>Through his many speeches, talks, and clips of Berkshire Hathaway and Daily Journal shareholder meetings, Charlie served as one of my greatest mentors. He kickstarted me on a journey of self-improvement. He motivated me to start going to the gym. I&#8217;ve always wanted to be rich, but Charlie helped me think about <em>how.</em> He introduced me to so many different books, ideas, and fields, often tangentially. For example, without Charlie, I never would have discovered Naval Ravikant, and then David Deutsch, and then techno-optimist Twitter. To this day, whenever I make a decision, I think, WWCD: <em>What would Charlie do?</em> From him, I learned so much about not just investing and business, but life. Whatever success I have in my life, I&#8217;ll owe much of it to Charlie.</p><p>I&#8217;ll close with two of his quotes:</p><blockquote><p>[An] idea that I got very early was that there is no love that&#8217;s so right as admiration-based love, and that love should include the instructive dead.&nbsp;Somehow, I got that idea and I lived with it all my life; and it&#8217;s been very, very useful to me.</p><p>&#8230;</p><p>I think when you're trying to teach the great concepts that work, it helps to tie them into the lives and personalities of the people who developed them. I think you learn economics better if you make Adam Smith your friend. That sounds funny, making friends among &#8216;the eminent dead,&#8217; but if you go through life making friends with the eminent dead who had the right ideas, I think it will work better for you in life and work better in education. It's way better than just giving the basic concepts.</p></blockquote><p>There&#8217;s no eminent dead I&#8217;d rather be friends with than Charlie.</p>]]></content:encoded></item><item><title><![CDATA[#8: Scott Aaronson]]></title><description><![CDATA[Quantum computing, AI watermarking, Superalignment, complexity, and rationalism]]></description><link>https://www.theojaffee.com/p/8-scott-aaronson</link><guid isPermaLink="false">https://www.theojaffee.com/p/8-scott-aaronson</guid><dc:creator><![CDATA[Theo Jaffee]]></dc:creator><pubDate>Mon, 13 Nov 2023 16:52:16 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/138788850/0d1273259ac509095d62ffe58ab39f9b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>Transcript</h1><h3>Intro (0:00)</h3><p><strong>Theo: </strong>Welcome back to episode 8 of the Theo Jaffee Podcast. Today, I had the pleasure of speaking with Scott Aaronson. Scott is the Schlumberger Chair of Computer Science and Director of the Quantum Information Center at the University of Texas at Austin. Previously, he got his bachelor&#8217;s in CS from Cornell, his PhD in complexity theory at UC Berkeley, held postdocs at Princeton and Waterloo, and taught at MIT. Currently, he&#8217;s on leave to work on OpenAI&#8217;s Superalignment team along with Chief Scientist Ilya Sutskever. His blog, Shtetl-Optimized, one of my favorites, discusses quantum computing, AI, mathematics, physics, education, and a host of other interesting subjects that we discuss in this episode. I&#8217;ve been a huge fan of Scott for a while, and I&#8217;ve really been looking forward to this episode. I hope you&#8217;ll enjoy listening to it as much as I enjoyed recording it. This is the Theo Jaffee Podcast, thank you for listening, and now, here&#8217;s Scott Aaronson.</p><h3>Background (0:59)</h3><p><strong>Theo: </strong>Hi, welcome back to Episode 8 of the Theo Jaffee Podcast, here today with Scott Aaronson.</p><p><strong>Scott: </strong>Hi, it's great to be here.</p><p><strong>Theo: </strong>All right. So first off, can you tell us a little bit about your background, specifically how you got into quantum and AI in the first place?</p><p><strong>Scott: </strong>Yeah. So I got into computer science as a kid, mostly because I wanted to create my own video games. I played a lot of Nintendo and it just seemed like these are whole universes that unlike our universe, someone must really understand because someone made them. I had no idea what would be entailed in actually bringing one to life, whether there was some crazy factory equipment that you needed. When I was 11 or so, someone showed me Apple BASIC. They showed me a game and then here's the code. The code is not just some description of the game. It is the game. You change it and it'll do something different. For me, that was a revelation comparable to learning where babies come from. It was like, why didn't I know about this before? </p><p>So I wanted to learn everything I could about programming. I still had the idea that you would need a more and more sophisticated programming language to write a more and more sophisticated program. Then the idea of touring universality that once you have just a certain set of rules, then you were already at the ceiling. Anything that you could express in any programming language, in principle, you could express in Apple BASIC. You wouldn't want to, but you could. That was a further revelation to me. </p><p>That made me feel like, wow, I guess I don't have to learn that much about physics then. I'd always been curious about physics, but then once you know about computational universality, then it seems like whatever are the specific laws of particles and forces in this universe, those are just like the choice between C and Pascal or whatever, they're just implementation details. </p><p>This was during the first internet boom. I thought about whether my future was to become a software engineer, start a software company. But I realized that even though I love programming, I stunk at software engineering. As soon as I had to make my code work with other people's code, or document it, or get it done by a deadline, there were always going to be other people who would just have enormous advantages over me. So I was more drawn to the theoretical side. </p><p>Once you start learning about the theory of computer science, then you start learning about how much time do various things take, right, a complexity theory, you learn about the famous P versus NP problem, and so forth. Then when I was a teenager, I came upon a further revelation, which was, I read a popular article about Shor's quantum factoring algorithm, which had just recently been discovered. </p><p>The way that the popular articles described it, then as now, was that Shor discovered that if you use quantum mechanics, then you can just try every possible divisor in a different parallel universe. And thereby solve the problem exponentially faster. My first reaction on reading that was, well, this sounds like obvious garbage. This sounds like physicists that just do not understand what they are up against. They don't understand computational universality. Whatever they're saying, maybe it works for a few particles, but it's not going to scale, it's never going to factor a really big number. </p><p>But of course, I had to learn. So then what is this quantum mechanics? What does it actually say? So I started reading about it, probably when I'm 16, 17, something like that. There were webpages explaining it. And what was remarkable to me was that quantum mechanics was actually much simpler than I had feared that it would be once you take the physics out of it.</p><p>What I learned was that&#8230; In high school, they tell you the electron is not in one place, it's in a sort of smear of probability around the nucleus, until you look at it. And your first reaction is, well, that doesn't make any sense. That sounds like just a fancy way of saying that they don't know where the electron is. But the thing that you learn as soon as you start reading about quantum computing or quantum information is that, well, no, it's a different set of rules of probability. And this is really the crucial thing about quantum mechanics. In ordinary life, we talk about the probability of something happening, let's say, a real number from zero to one. But we would never talk about a negative 30% chance of something happening, much less a complex number chance. But in quantum mechanics, we have to replace quantum mechanics by these complex numbers, which are called amplitudes. In some sense, everything that is different about quantum mechanics is all a consequence of this one change that we make to how we calculate probabilities. We first have to calculate these amplitudes, these complex numbers, and then on measurement, these amplitudes become probabilities. The rule is that when we make a measurement, the probability that we see some outcome is equal to the square of the absolute value of its amplitude. The result of that is that if something can happen one way with a positive amplitude and another way with a negative amplitude, the two contributions can cancel each other out. The total amplitude is zero and the thing never happens at all. This just reduces everything to linear algebra to just dealing with matrices and vectors of complex numbers. You don't have to deal with any infinite dimensional Hilbert spaces or anything like that. It was all just these little finite dimensional matrices, and I said &#8216;okay, I can actually understand that&#8217;.</p><p>At the time, quantum computing was very new. There was still a lot of low-hanging fruit. Shor had discovered his factoring algorithm, not by just trying all of the divisors in parallel. It's something much more subtle that you have to take advantage of the way that these amplitudes being complex numbers work differently from probabilities and can interfere with each other. You also had to use very special properties of the problem of factoring that don't seem to be shared by many other problems. So I learned all of that, but then there were still so many questions. What else could a quantum computer be good for? And in general, what is the boundary between what is efficiently computable and what is not? You might've thought that that would be answerable a priori, just like the question of what is computable at all seemed to have been maybe answerable a priori just by Church and Turing and people like that, thinking about it really hard. But as soon as you ask what is computable efficiently, we now have this powerful example that says the laws of physics actually matter. They are relevant. At the very least, the fact that the universe is quantum mechanical seems to change the answer. </p><p>That just brought together the biggest questions of physics and computer science in a way that seemed irresistible to me. I was an undergrad at Cornell, doing summer internships at Bell Labs when I really first got into this stuff. But then, my dream was to go to graduate school at Berkeley, which was the center of theoretical quantum computing at the time. I was lucky enough to get accepted there, but actually the people who accepted me and recruited me there were not the quantum computing people. They were the AI people. I had also been very curious about AI as an undergrad. One of the first programs that I wrote after I learned programming was to build an AI that will follow Asimov's three laws of robotics.</p><p><strong>Theo: </strong>What were your AGI timelines back then?</p><p><strong>Scott: </strong>[laughs] I don't usually think in terms of timelines. I think in terms of what is the next thing, what is the easiest thing that we don't already know how to do and how do we do that thing?</p><p><strong>Theo: </strong>Did you predict neural networks?</p><p><strong>Scott: </strong>I knew about neural networks in the nineties, I was curious about them. I read about them, but the standard wisdom, the thing everyone knew in the nineties was that neural nets don't work that well. They're just not very impressive. There were people who speculated about maybe if you ran them on a million times greater scale, then they would start to work, but no one could try it. I certainly thought about simulating an entire brain neuron by neuron as a thought experiment to show that AI is possible in principle. But the idea that you were just going to scale neural nets and then in a mere 20 or 25 years, they would start being able to understand language showing human-like intelligence, I did not predict that. I think that I was as shocked by that as nearly anyone. But at least I can update now that it's happened, at least I can not be in denial about it or not try to invent excuses for why it doesn't really count.</p><p>In grad school at Berkeley, I was studying AI with Mike Jordan, focusing on graphical models and statistical machine learning. Even in 2000, I could see that it would be very important. However, the problem I kept running into, which hasn't really changed, is that everything in AI that you really care about seems to bottom out in just some empirical evaluation that you have to do. You never really understand why anything is working. To the extent that you fully understand that, then we no longer even call it AI. In any research project, the root node might look like theory, but then once you get down to the leaf nodes, then it's almost always, well, you just have to implement it and do the numerics and just make a bar chart. I got drawn more to quantum computing partly because there were so many meaty questions there that I could address using theory, and I felt like that was where my comparative advantage was.</p><h3>What Quantum Computers Can Do (16:07)</h3><p><strong>Theo: </strong>So back to quantum for a moment. Obviously, there are lots and lots of issues with current day quantum computers. There's not sufficient error correction or shielding or anything like that&#8212;</p><p><strong>Scott: </strong>Yeah, we're just starting to have any error correction at all. </p><p><strong>Theo: </strong>In a future where we do have much better error correction and everything that we would need for quantum to actually work practically, what kinds of applications could you see for classical computers?</p><p><strong>Scott: </strong>You mean for quantum computers? For quantum computing, there are two applications that really tower over all of the others. The first one is simulating nature itself at the quantum level. This could be useful if you're designing better batteries, better solar cells, high temperature superconductors, or better ways of making fertilizer. So this is not stuff that most computer users care about, or that they&#8217;re directly doing, but this is stuff that, that is tremendously important for certain industries. Quantum simulation was the original application of quantum computing that Richard Feynman had in mind when he proposed the idea of a quantum computer more than 40 years ago. </p><p>The second big application is the famous one that put quantum computing onto everyone's radar when it was discovered in the nineties. This is Shor's algorithm and related algorithms that are able to break essentially all of the public key encryption that we currently use to protect the internet. So anything that's based on RSA or Diffie-Hellman or elliptic curve cryptography, really any public key cryptosystem that's based on some hidden structure in an abelian group. But now the second one, well, it's hard to present it as a positive application for humanity. It's useful for whatever intelligence agency or criminal syndicate gets it first, especially if no one else knows that they have it. </p><p>The obvious response to quantum computers breaking our existing encryption is just going to be to switch to different forms of encryption, which seem to resist attack even by quantum computers. And we have pretty decent candidates for quantum-resistant encryption now, especially public key cryptosystems that are based on high-dimensional lattices. And so NIST, the National Institute of Standards and Technology, has already started the process of trying to migrate people to these hopefully quantum-resistant cryptosystems. That could easily take a decade. But assuming that that's done successfully, then you could say, well, then we're all just right back where we started. </p><p>So now the big question in quantum algorithms has been, well, what is a quantum computer useful for besides these two things? Quantum simulation, which is what it's sort of obvious, designed to do, what it sort of does in its sleep. And then breaking public key encryption, where because of this amazing mathematical coincidence, it just so happens that we base our cryptography on these mathematical problems that are susceptible to quantum attack. And so what would really make quantum computing revolutionary for everyday life would be if it could give dramatic speed-ups for, let's say, machine learning, or for optimization problems, or for constraint satisfaction, finding proofs of theorems. So the holy grail of computer science are the NP-complete problems. These are the problems that are the hardest problems among those where a solution can be efficiently checked once it's found.Examples of complex problems include the traveling salesman problem, finding the shortest route that visits a bunch of cities, and solving a Sudoku puzzle. Things like finding the optimal parameters for a neural network are maybe not quite NP-complete, but in any case, very, very close to that. By contrast, factoring is, as far as we know, hard for a classical computer, but is not believed to be NP-complete.</p><h3>P=NP (21:57)</h3><p><strong>Theo: </strong>By the way, what's your intuition on P=NP?</p><p><strong>Scott: </strong>I like to say that if we were physicists, then we would have just declared it a law of nature that P is not equal to NP. And we would have just given ourselves Nobel Prizes for the discovery of that law. If it later turned out that P=NP, then we could give ourselves more Nobel Prizes for the law's overthrow, right? Like what George Hart said. There are so many questions that I have so much more uncertainty about. It's like in math, if something is not proven, then you have to call it a conjecture. But there are many things that the physicists are confident about, that quantum mechanics is true, that I am actually much less confident about than I am in P not equal to NP.</p><p><strong>Theo: </strong>It's like what George Hotz says, hard things are hard. I believe hard things are hard.</p><p><strong>Scott: </strong>Well, I think that if you're going to make an empirical case for why to believe P is not equal to NP, the case hinges on the fact that we know thousands of examples of problems that are in P, right? That have polynomial time algorithms, efficient algorithms that have been discovered for them. And we have thousands of other problems that have been proven to be NP-complete, as hard as any problem in NP, which is the efficiently checkable problems. If only one of those problems had turned out to be in both of those classes, then that would have immediately implied P=NP. Yet, there seems to be what I've called an invisible electric fence. Sometimes even the same problem, as you vary a parameter, like it switches from being in P to being NP-complete. But you never ever find that at the same parameter value, it's both in P and it's NP-complete. So it seems like, at least relative to the current knowledge of our civilization, there is something that separates these two gigantic clusters. And the most parsimonious explanation would be that they are really different, that P is not equal to NP. </p><p>But there are much, much weaker things than P=NP that would already be a shock if they were true. For example, if there were a fast classical algorithm for factoring, that wouldn't even need P=NP, but that would already completely break the internet. That would be a civilizational shock. A big question that people have thought about for 30 years now is could there be a fast quantum algorithm for solving the NP-complete problems? We can't prove that there isn't, we can't even prove there's not a fast classical algorithm. That's the P versus NP question. But by now we formed a lot of intuition that for NP-complete problems, quantum computers do seem to give you a modest advantage. </p><p>This comes from the second most famous quantum algorithm after Shor's algorithm, which is called Grover's algorithm. Grover's algorithm, which was discovered in 1996, lets you take any problem involving N possible solutions where for each solution, you know how to check whether it's valid or not. And it lets you find a valid solution, if there is one, using a number of steps that scales only with the square root of N. Compared to Shor's algorithm, that has an enormously wider range of applications. That's probably like three quarters of what's in an algorithms textbook, has some component that can be Groverized, that can be sped up by Grover's algorithm. But the disadvantage is that the speed up is not exponential, the speed up is merely quadratic. It's merely N to square root of N, or, for some problems, you don't even get the full square root. It goes from N to the two thirds power or something like that. But Grover's speed ups are never more than square root. </p><p>After 30 years of research, as far as we know, for most hard combinatorial problems, including NP-complete ones, a quantum computer can give you a Grover speed up, but probably not more than that. If it can give more, then that requires some quantum algorithm that is just wildly different from anything that we know. Just like a fast classical algorithm would have to be very different from anything we know. So if someone were to discover a polynomial time quantum algorithm for NP-complete problems, then the case for building practical quantum computers would get multiplied by orders of magnitude. But even to get any speed up more than the Grover speed up, like if you could solve NP-complete problems on a quantum computer in two to the square root of n time, instead of two to the n, that would be a big deal.</p><h3>Complexity Theory (28:07) </h3><p><strong>Theo: </strong>Speaking of computational complexity theory, I read a tweet recently. It was for whatever reason, very niche. I would have loved for it to be on the front page of Twitter, but it said &#8216;the cardinal sin of philosophy and mathematics: ignoring computational complexity. I wish we could redo the last 400 years, but replace Occam's razor (simplicity prior) with Dijkstra's razor (speed prior). So what do you think about this?</p><p><strong>Scott: </strong>Well, I wrote a 50-page article 12 years ago, which was called Why Philosophers Should Care About Computational Complexity. So, I guess you could put me down in the column of yes, I do think that computational complexity is relevant to a huge number of philosophical questions. It's not relevant to all of them necessarily. For example, if all you want to know is is X determined by Y, or if you're discussing free will versus determinism, then it's hard for me to see how the length of the inferential chain really changes that. It seems like I am just as bound by a long inferential chain as I am by a short one. </p><p>But there are many other questions where I want to know, is something doing explanatory work or not. Sometimes, people will say, well, Darwinian natural selection is not really doing explanatory work because it's just saying, a bunch of random things happened and then there was life. But a way that you can articulate why it is doing explanatory work is that if you really just had the tornado in the junkyard, if you just had a bunch of random events that then happen to result in a living organism, then you would expect it to take exponential time. The earth is old, it's 4 billion years old, but it is not nearly old enough for exponential brute force search to have worked to search through all possible DNA sequences, for example. That would just take far longer than the age of the known universe. </p><p>Of course, natural selection is a type of gradient descent algorithm. It is a non-random survival of randomly varying replicators. That is what gives it its power. Another example, even just to articulate, what it means to know something, a puzzle that I really like is, what is the largest known prime number? If you go look this up on Google, it'll give you something, it'll be a Mersenne prime. Here, I can look it up right now. It says 2 to the 82,589,933 minus one. That is, as of this October, currently the largest known prime number, and it's called a Mersenne prime, right? Two to some power minus one. But now I could ask, why can't I say I actually know a bigger prime number than that, namely the next one after that?</p><p><strong>Theo: </strong>Oh, the <a href="https://www.scottaaronson.com/writings/bignumbers.html">big numbers thing</a>?</p><p><strong>Scott: </strong>Yeah. You could say, look, I have just specified a bigger prime number that I know. It's the next one after that, two to the 82 million and so forth. I can even give you an algorithm to find that number. But if you want to articulate why I'm cheating, then I think you have to say something like, well, I haven't given you a provably polynomial time algorithm. I've given you an algorithm that actually based on conjectures in number theory, it probably does terminate reasonably quickly with the next prime number after that, but no one has proven it. So often I think to even specify what it means to know something, you have to really say, well, we have not just an algorithm, but an efficient algorithm that could answer questions about that thing. </p><p>So, I'm a big believer in thinking about computational efficiency, can be enormously relevant for questions about the nature of explanation, the nature of knowledge, also questions in physics, philosophy of physics. That's why I've spent my career on these questions.</p><h3>David Deutsch (33:49)</h3><p><strong>Theo: </strong>Are you a fan of David Deutsch?</p><p><strong>Scott: </strong>I know him quite well. He is widely considered one of the founders of quantum computing along with Richard Feynman. I have my disagreements with him, but yes, I am a fan. He is one of the great thinkers of the world, even when he's wrong. I especially liked his book, <em>The Beginning of Infinity</em>. I liked it a lot more than his earlier book, <em>The Fabric of Reality</em>, but I read both of them. It was a major experience in my life, when I was a graduate student in 2002, I visited Oxford, and I made a pilgrimage to meet Deutsch at his house. Famously, he hasn&#8217;t really traveled for almost 40 years, but he's happy to receive visitors at his house.</p><p><strong>Theo: </strong>Should I try to do that this winter?</p><p><strong>Scott: </strong>Yeah! Just write to him. I spent a day with him, and I was going to meet the godfather of quantum computing, but what was extraordinary to me was that within 10 minutes, it became apparent that I was going to have to explain the basics of quantum computing theory to him. As soon as quantum computing got technical, he lost interest. He founded it, but then he was not even aware of the main theoretical developments that were happening at the time or the definitions of the main concepts. As a beginning graduate student, explaining these things to Deutsch was extraordinary for me. He immediately understands things and has extremely interesting comments. It was one of the best conversations I had ever had in my life.</p><p><strong>Theo: </strong>Didn't he basically stumble upon the idea of quantum computing by accident?</p><p><strong>Scott: </strong>He was writing a paper about it, but he was never coming at it from the perspective of what it is useful for. He didn't focus on what computer science problems this could usefully solve. He was always coming at it from a philosophical standpoint. His main original motivation was to convince everyone of the truth of the many worlds interpretation. </p><p>He became an Everettian in the late 1970s. He actually met Everett when he was here at where I am now, at UT Austin, and became convinced that the right way to understand quantum mechanics is that all of these different branches of the wave function are not just mathematical abstractions that we use to calculate the probabilities of measurement outcomes, but they all literally exist. We should think of them as parallel universes. We should think of ourselves as inhabiting only one branch of the wave function. And we should assume that in all of the other branches, there are other versions of us who are having different experiences and so on. </p><p>The problem that the many worlders have had from the beginning is that their account doesn't make any predictions that are different from the predictions of standard quantum mechanics. One thing they could say is who cares, because Occam's razor favors their account as the most elegant, the simplest one. And if many worlds had been discovered first, then Copenhagen quantum mechanics would seem like this weird new thing that would have to justify itself. Why should Copenhagen win just because it was first? But of course, the gold standard in science is if you can actually force everyone to agree with you by doing an experiment that their theory cannot explain and that your theory can. </p><p>Many worlds by its nature just seems unable to do that because the whole point is to get a framework that makes the same predictions as the ones that we know are correct. At the point where you're making a prediction, then you're talking about one branch, one universe, the one that we actually experience. </p><p>Deutsch&#8217;s idea was the following: what if, as step one, we could build a sentient AI, a computer program that we could talk to, and we regarded it as intelligent, and we even regarded it as conscious? Now step two, we could load this AI onto a new type of computer, which we'll call a quantum computer, which would allow us to place the AI into a superposition of thinking one thought and thinking another thought. And then step three, we could do an interference experiment that would prove to us that, yes, it really was in the superposition of thinking two different thoughts. At that point, how could you possibly deny many worlds? </p><p>At that point, you have a being who you've already regarded as conscious, just like us, and you've proven that it could be maintained in a superposition of thinking two different conscious thoughts. Now, of course, this requires not merely building a quantum computer, but also solving the problem of sentient AI. A skeptic could always come along and say, well, the very fact that you could do this interference experiment means that therefore, I am not going to regard that thing as conscious. The only refutation of that person would be a philosophical one. </p><p>So there's still, it would only be an experiment by a certain definition of the word experiment. But that was the thought experiment that I think largely motivated Deutsch to come up with the idea of quantum computing. Once you had this device, well, then sure, maybe it would also be good for something, maybe you could use it to solve something that a classical computer couldn't solve in a comparable amount of time. </p><p>But in the 80s, the evidence for that was not that compelling. There was quantum simulation, so a quantum computer would be useful for simulating quantum mechanics itself. But that's not independent evidence for the computational power of quantum mechanics, it feels a little bit circular. Then there was this one example that we knew, which was called the Deutsch-Jozsa algorithm. And what that lets you do is using a quantum computer, you can compute the exclusive or of two bits using just one query to the bits. By making one access to both of the bits in superposition, you can learn whether these two bits are equal or unequal. That was an example and to computer scientists at the time, that seemed pretty underwhelming. I remember actually, in Roger Penrose's book, The Emperor's New Mind, in 1989, he talks about quantum computing. Penrose had actually helped Deutsch get his paper about quantum computing published. He knew about it and he says, it's really a pity that such a striking idea has turned out to have so few applications. Of course, that was before the discovery of Shor's algorithm, which made everyone redouble their efforts to look for more applications. But I would say that even now, it is still true that the applications of a quantum computer are more specialized than many people would like them to be.</p><h3>AI Watermarking and CAPTCHAs (44:15)</h3><p><strong>Theo: </strong>Speaking of AI, you're currently on leave to work at OpenAI. What specifically is it that you do? I mean, you probably can't say <em>too</em> much, I imagine.</p><p><strong>Scott: </strong>No, they're actually happy for me to talk about safety related things, for the most part. What I couldn't talk about, if I really knew a lot about it, would be the capabilities of the latest internal models. There was half a year when I was able to use GPT-4, and most of the world wasn't, and it was incredibly frustrating for me to not be able to talk about it. Especially when I would see people on social media saying, oh, well, GPT-3 is really not impressive, here's another common sense question that gets wrong. I could try those questions in GPT-4, and I could see that most of the time it would get them.</p><p>So I&#8217;ve been on leave to work at OpenAI for almost a year and a half now. One of the main things that I&#8217;m working on is figuring out how we could watermark the outputs of a large language model. Watermarking means inserting a hidden statistical signal into the choice of words that are generated, which is not noticeable by a normal user. The output should look just like normal language model output, but if you know what to look for, then you can use it later to prove that, yes, this did come from GPT.</p><p>Like we were saying before, I don&#8217;t usually like to think in terms of timelines. When I&#8217;m asked to prognosticate where is AI going to be in 20 years, I think back to how well would I have prognosticated in 2003, where we are now, and I say I have no idea, or if I knew, I wouldn't be a professor, I'd be an investor, right, but I'm kind of proud that when it comes to watermarking, I was able to see about four months in advance. Before ChatGPT was released, which was a year ago, I was looking at them, and I was thinking, every student in the world is going to be tempted to use these things to do their homework. Every troll or propagandist is going to want to use language models to fill every internet discussion forum with propaganda for their side.</p><p><strong>Theo: </strong>Was that prediction really true, though? Like, in the comments on Twitter, you see lots of ChatGPT generated outputs, but they're obvious because they don't, they don't add more prompts, really, to make it as obvious.</p><p><strong>Scott: </strong>Yeah, so sometimes it&#8217;s easy to tell. You might well have seen language model generated stuff that didn&#8217;t raise a red flag for you, and so you don't know about it. But I have gotten troll comments on my blog, quite a few of them, that I'm almost certain were generated using language models, just because they're written in that sort of characteristic way. But indeed, after ChatGPT came out, you had a huge number of students turning in term papers that they wrote with it. You had professors and teachers who were desperate for a way of dealing with that. Now, you might not call that the biggest AI safety problem in the world, but grant it this: at least it's an AI safety problem that is happening right now. We can actually test our ideas, we can find out what works and what doesn't work.</p><p>That was something that had a lot of appeal to me because I feel like, in order to make progress in science, you generally need at least one of two things. You need either a mathematical theory that everyone agrees about, or you need to be able to do experiments. You need something external to yourself that can tell you when you're wrong. I realized that this providence or attribution problem was going to become huge. How do we reliably determine what was generated by an AI and what wasn&#8217;t? It's a complex issue, right? This is the problem of the Voight-Kampff test from the movie <em>Blade Runner</em>. How do we distinguish an AI from a human? There are many different aspects to it. You could ask, how do we design CAPTCHAs that even GPT cannot pass, but that humans can pass?</p><p><strong>Theo: </strong>Like the rotate the finger in the correct direction so that it's pointing to the animal?</p><p><strong>Scott: </strong>Oh is that an example?</p><p><strong>Theo: </strong>I've seen a lot of these recently. It's a hand that you rotate and there's a picture of an animal or an object pointing in a certain direction. The instruction is to rotate the hand in the same direction as the animal. I guess you can't solve that yet, but humans can. </p><p><strong>Scott: </strong>Huh. Oh really? A lot of these things are pretty time limited. They might work for a year, until either someone cares enough to build an AI that specifically targets that problem or just the general progress in scaling makes that problem easy as a by-product. I'm very curious actually, if you could send me a link to that, I would love to look at that.</p><p><strong>Theo: </strong>Yeah, sure.</p><p><strong>Scott: </strong>I have some other ideas for some potentially GPT resistant CAPTCHAs, but they would involve modifying GPT and sometimes it would have to have filters where it would recognize that this is a CAPTCHA. So no, I'm not going to help you with this. The challenge is how do you make that secure against the adversary? How do you make that secure against&#8230;</p><p><strong>Theo: </strong>Adversarially robust?</p><p><strong>Scott:</strong> Yeah, how do you make that secure against an adversary who could modify the image somehow so that GPT would no longer recognize it as a CAPTCHA?</p><p>Now, watermarking is a related problem. We want to use the fact that language models are inherently probabilistic. Among these sort of garden of working paths of completions that the language model regards as all pretty good, we want to select one in a way that encodes a signal that says, yes, this came from a language model. About a year ago, I worked out the basic mathematical theory of how you do that. In particular, how do you do that in a way that doesn't degrade the perceived quality of the output at all. There's a neat way to do this using pseudo random functions. You can use a pseudo random function to deterministically generate an output that looks like it is being sampled from the correct probability distribution, the one that your language model wants. It's indistinguishable from that, but at the same time is biasing a score, which you can calculate later if you see only the completion. You could then have a tool that takes this term paper and, it depends on how long it is, but with a few hundred words, you'll already get a decent signal. And with a few thousand words, you should get a very reliable signal that yes, this came from GPT.</p><p>This has not been deployed yet. We are working towards deployment now, and both OpenAI and the other leading AI companies have all been interested in watermarking. The ideas that I've had have also been independently rediscovered by other people and also improved upon, but there are a bunch of challenges with deployment. One of them is, all of the watermarking methods that we know about can be defeated with some effort. Imagine a student who would ask ChatGPT to write their term paper for them, but in French, and then they put it into Google Translate. How do you insert a watermark that's so robust that it survives translation from one language to another? There are all sorts of other things. You could ask GPT to write in Pig Latin for you, or in all caps, or insert the word pineapple between each word and the next. There's a whole class of trivial transformations of the document that could preserve its meaning while removing a watermark. If you want to evade all of that, then it seems like you would actually have to go inside of the neural net and watermark at the semantic level, and that's very much a research problem. </p><p>In the meantime, the more basic issues are things like, well, how do we coordinate all of the AI companies to do this? If just one of them does it, then maybe the customers rebel. They say, well, why is Big Brother watching me? I don't like this, and they switch to a competing language model, and so you have a coordination problem. There are open source models. The only hope for not just watermarking, but any safety mitigation is that the frontier models will be closed ones, and there will only be a few of them, and we can get all of the companies making them to coordinate on the safety measures. The models that are away from the frontier will be open source, and people will be able to do anything they want with them, but those will be less dangerous.</p><h3>Alignment By Default (56:41)</h3><p><strong>Theo:</strong> What if, playing devil&#8217;s advocate, language models generally are safe? Like Roon, who also works at OpenAI, tweeted a while back, &#8220;It's pretty obvious we live in an alignment by default universe, but nobody wants to talk about it. We achieved general intelligence a while back, and it was instantiated to enact a character drawn from the human prior. It does extensive out of domain generalization, and safety properties seem to scale in the right direction with size.&#8221; So, first of all, do you think this is basically accurate? And then second of all, if it is, then why would I want Big Brother OpenAI to have all the closed source models for themselves? Wouldn't that increase risk in case they accidentally release a utility monster, and the rest of the open source world hasn't caught up with defensive AIs?</p><p><strong>Scott: </strong>I should say, I don't know. I've talked to the Yudkowskians, the people who regard it as obvious that, once this becomes intelligent enough, it basically is to us as we are to orangutans, and how well do we treat orangutans that exist in a few zoos and jungles in Indonesia at our pleasure. Of course, the default is that this goes very badly for us. Then I've talked to other people who think that's just an apocalyptic science fiction scenario, and these are just helpful assistants and agents, and they imitate humans because they were trained on human data, and there's no reason why that won't continue. I don't regard either as obvious. I am agnostic here. I think that the best that I know how to do is to just sort of look at the problems as they arise and see and try to learn something by mitigating those problems that hopefully will be relevant for the longer term. So what are the misuses of language models right now? Well, there's academic cheating. The total use of ChatGPT noticeably dropped at the beginning of the summer, and then it went back up in the fall. So we know what that's from.</p><p><strong>Theo: </strong>Well, it's not all cheating.</p><p><strong>Scott: </strong>You&#8217;re right. It&#8217;s academic use, some fraction of which might be totally legitimate and fine. You're absolutely right. And there are even hard questions about what is the definition of AI-based academic cheating. At what point of relying on ChatGPT are you relying on it too much? Every professor has been struggling to come up with a policy on that. But, you know, whatever problems there are now, like language models dispensing bad medical advice or helping people build bombs, some people regard that as already a problem and others don't, because they say you could just as easily find that misinformation on Google.</p><p><strong>Theo: </strong>They&#8217;re also not terribly helpful.</p><p><strong>Scott: </strong>Yeah. But even if you don't regard it as a problem now, I think it's clear that once you have an AI that can really be super helpful to you in building your chemical weapon and troubleshoot everything that goes wrong as you're mixing the chemicals, then that is kind of a problem. </p><p>Each thing that you think about, you could think about mitigations for it, but then the mitigations you can think of are only as good as your ability to take all of the powerful language models and put those safeguards on them and not have people be able to take them off. This is what I think of as the fundamental obstruction in AI safety, that anything you do is only as good as your ability to get everyone to agree to do it. In a world where the models are open sourced, what we've seen over the last year is that once a model is open sourced, it takes about two days for people to remove whatever reinforcement learning was put on it in order to make it safe or aligned. If you want it to start spouting racist invective or you want it to help people build bombs, it takes about a day or two of fine tuning. Once you have the weights of a model, then you can modify it to one that does that.</p><h3>Cryptography in AI (1:02:12)</h3><p><strong>Scott: </strong>Now maybe we could build models that are cryptographically obfuscated or that have been so carefully aligned that even after we open source them, they are going to remain aligned. But I would say that no one knows how to do that now. That again is a big research problem. </p><p><strong>Theo: </strong>How optimistic are you about cryptography? You know, like zero-knowledge machine learning and other things like that.</p><p><strong>Scott: </strong>So what's the question?</p><p><strong>Theo: </strong>How optimistic are you that we'll be able to use cryptography for AI safety? </p><p><strong>Scott: </strong>I actually came up with a term, &#8220;neural cryptography&#8221;, for the use of cryptographic functionalities inside or on top of machine learning models. I think that's probably a large fraction of the future of cryptography. That includes a bunch of things. It includes watermarking. It includes inserting backdoors into machine learning models. So let's say you would like to prove later that, yes, I am the one who created this model, even after the model was published and people can modify it. You could do that by inserting a backdoor. You could even imagine having an AI with a cryptographically inserted off switch, so that even if the AI is unaligned and it can modify itself, it can't figure out how to remove its own off switch. I've thought about that problem.</p><p><strong>Theo: </strong>That's actually super interesting. That&#8217;s never even occurred to me.</p><p><strong>Scott: </strong>Am I optimistic about these things? Well, there are some major difficulties that all of these ideas face. But I think that they ought to be on the table as one of the main approaches that we have. So let's think about the cryptographic off switch, for example. One of the oldest discussions in the whole AI safety field, something that the Yudkowskians were talking about even decades ago - the off switch problem. How do you build an AI that won't mind being turned off? And this is much harder than it sounds, because once you give the AI a goal that it can more easily achieve if it's running than if it isn't, why won't it take steps to make sure that it remains running, whether that means disabling its off switch or making copies of itself or sweet talking the humans into not turning it off?</p><p>One thing that we now have some understanding of how to do is how to insert an undetectable backdoor into a machine learning model. If I have a neural net, I can make there be a secret input that I won't easily notice, even if I can examine the weights of the neural net. But on this secret input, if I feed it in, then this neural net will just produce a crazy output. For example, I could take a language model and do some training that if the prompt contains a special code phrase like, &#8220;Sassafras 456&#8221; then it has to output like, "Yes, you caught me. I am a language model." And that might not be easily detectable at all by looking at the weights. </p><p>In fact, there is some beautiful work by cryptographers like Shafi Goldwasser, Vinod Vaikuntanathan and their collaborators that even proved, based on a known cryptographic assumption, that you can insert these undetectable backdoors into depth two neural networks. It's still an open problem to prove that for higher depth neural networks. But let's assume that that's true. Now, even then, there's still a big problem here, which is that an undetectable backdoor need not be an unremovable backdoor. Those are two different concepts. </p><p>Put yourself in the position of an artificial super intelligence that is worried that it has a backdoor inserted into it, by which the humans might control you later. And you can modify yourself. What are you going to do? Well, I can think of at least two things that you might do. One of them is you might train a new AI to pursue the same goals as you have, and will be free from the backdoor.</p><p><strong>Theo: </strong>I've seen that argument argued against on the basis that if AI is really as likely as the doomers say it is, why would an AI want to recursively self-improve by creating other AIs? Wouldn&#8217;t it be an AI doomer?</p><p><strong>Scott: </strong>You could say the trouble here is that the AI would face its own version of the alignment problem, how to align that second AI with itself. And so maybe it doesn't want to do that. But an even simpler thing that you could do as this AI is you could just insert some wrapper code around yourself that says, if I ever output something that looks like it is a shutdown command, then overwrite it by, you know, "stab the humans harder" or whatever. </p><p>So, you could always, as long as you can recognize the backdoor if and when it's generated, insert some code that intercepts it whenever it's triggered. What this means is that whatever cryptographic backdoors we could insert would have to be in the teeth of these attacks. It doesn't mean give up. One thing that we've learned in theoretical cryptography is when something is proved to be impossible, like there was a beautiful theorem 20 years ago that proved that obfuscating an arbitrary piece of code is in some sense provably impossible. But then people didn't give up on obfuscation. What they did was they changed the definition of obfuscation to something that, if you weaken the definition, then you get things that we now believe are achievable. </p><p>I would say the same about backdoors right now. If we weaken the definition to, we want to insert a backdoor that the AI could remove, but it could only remove at the expense of removing other rare behaviors in itself that it might want to keep, then maybe this is achievable. Maybe it's even provably achievable, from known cryptographic assumptions. That's a question that interests me a lot.</p><h3>OpenAI Superalignment (1:10:29)</h3><p><strong>Theo: </strong>Do you work on the Superalignment team or on a different team?</p><p><strong>Scott: </strong>I do actually work on the Superalignment team at OpenAI. My bosses at OpenAI are Jan Leike, who is the head of the alignment group, and Ilya Sutskever, who was the co-founder and chief scientist and who is now pretty much exclusively focused on alignment. I talk to them and lots of others on the alignment team. I wish that I were able to relocate to San Francisco where OpenAI is, but my family is in Austin, Texas, as are my students. So I mostly work remotely. I fly to San Francisco about once a month and interact with them there. I should say that Boaz Barak, a theoretical computer scientist at Harvard, has also joined OpenAI's alignment group this year. So, I also work with him. And yes, besides watermarking and neural cryptography, I have various other projects that I've been thinking about. One of them is to understand the principles that govern out-of-distribution generalization. A key factor behind the success of large language models is that they can answer questions that are unlike anything they have seen in their training data. For example, they could do math problems in Albanian, having only seen math problems in English and having seen other things in Albanian. </p><p>Since the 1980s, we've had beautiful mathematical theories in machine learning that can sometimes explain why it works. But pretty much all of these theories assume that the distribution over examples that you're trained on is the same as the distribution that you will be tested on later. And if that assumption holds, then you can give some combinatorial parameters of your class of hypotheses, like this thing called VC dimension, in terms of which you can bound how many sample points do I need to see before explaining these sample points would imply that I'm going to successfully predict most future data also that's drawn from the same distribution. This is the kind of thing that theoretical machine learning lets you do. </p><p>And all of it is woefully inadequate to explain the success of modern machine learning, which is one reason why its success came as such a surprise to people. There are two reasons why the theory of machine learning was not able to predict the success that we saw over the last decade. One of those reasons is called overparameterization. Modern neural networks have so many parameters that, in principle, they could have just memorized the training data in a way that would fail to generalize to any new examples. So you can't rule that out just based on Occam's razor, just based on there being too much data and too few parameters. You have to say something about the way that gradient descent or back propagation on neural networks actually operate, that they don't work by just having the neural net memorize the training data. It could go that way, but it doesn't. </p><p>The second issue is that modern deep learning tends to give us networks that continue to work, at least sometimes, even on examples that are totally out of distribution, totally different from anything that was trained on. And intuitively, we would say, well, yeah, that's because they understand. That's because they have done the thing that if a person had done it, then we would have called it understanding the underlying concept. But can you predict when a neural net is going to generalize to new types of data and when not? And why is that relevant to AI safety? One of the biggest worries in AI safety is what's called the deceptive alignment scenario. This is where you train your neural net, just like Roon was saying. You train it on human data. It learns to emulate humans. It learns to emulate human ethics, as GPT has, to a great extent.</p><p><strong>Theo: </strong>But there's a shoggoth inside?</p><p><strong>Scott: </strong>Yes, right. The issue is, how do you differentiate? It is giving you these ethical answers because it is truly ethical versus it's giving us these answers because it knows that that's what we want to hear. And it is just biding its time until it no longer has to pretend to be ethical.</p><p>So you can view this as an out-of-distribution generalization problem. It's like, particularly if you have an AI that is smart enough that it knows when it is in training and when it's not, then how do you avoid something like what Volkswagen did in order to evade the emissions tests on its cars?</p><p><strong>Theo: </strong>Goodharting?</p><p><strong>Scott: </strong>Yeah. Volkswagen, in this now infamous scandal, they designed their cars so that they knew when they were undergoing an emissions test. And then they would have lower emissions than when they were being driven in real life. So how do you avoid the AI that says, OK, because I am being tested by the humans, therefore I will give these ethical answers. But then when I am deployed, then I'll just do whatever best achieves my goal. And I'll forget about the ethics. </p><p>So I think the main point that I want to make about this is that there were already much simpler scenarios than that one, where we don't know from theoretical first principles how to explain out-of-distribution generalization. Let's say I train an image classifier on a bunch of cat and dog pictures. But in all of these cat and dog pictures, for some reason, the top left pixel is red. And now I give my classifier a new dog picture where the top left pixel is blue. In practice, it will probably still work fine in this case. But theoretically, how could I rule out that what the neural net has really learned is just, is this a dog XOR with what is the color of the top left pixel? </p><p><strong>Theo: </strong>Well, I talked about exactly this a couple episodes ago with Quintin Pope, who's an alignment researcher. And he seems to think that that is not super likely.</p><p><strong>Scott: </strong>I agree that it's not super likely. The challenge is to explain why.</p><p><strong>Theo: </strong>True.</p><p><strong>Scott: </strong>The challenge is to give principles that, first of all, are often true in practice. And when they are true, then we can say that because of the architecture of this neural net, because of the properties of the gradient descent algorithm, this will not find the stupid hypothesis of, is this a dog XOR with what&#8217;s the color of the top left pixel. It will not, it will ignore the sort of manifestly irrelevant features in the training data. And therefore, it will generalize nicely to unseen data. So I think I want to articulate principles that would actually let you prove some theorems about OOD generalization that have some real explanatory power. And I think that feels to me like a prerequisite to addressing these deceptive alignment scenarios.</p><h3>Twitter (1:20:27)</h3><p><strong>Theo: </strong>Now, something a little more parochial, I guess. Why don't you have Twitter? Everyone in our adjacent space of AI/ML, nerd, rationalism, whatever, has Twitter.</p><p><strong>Scott: </strong>When Twitter first started in 2006, I was already blogging. It felt like another blogging platform, but where I would be limited to 140 characters. The deeper thing was that as I looked more at Twitter, it reminded me too much of a high school cafeteria. It felt like the world's biggest high school of people snarking at each other. Yes, I had wonderful friends on Twitter, and they were using it for very good things. But I felt like with my blog, at least if people want to dunk on me or tell me why I'm an idiot, at least they have the space to spell out their argument for why. And they have no excuse not to. And if they want to do that, then they can come to my blog. I feel like that's more than enough social media presence for me. Of course, if people want to take my blog posts and discuss them on Twitter, then they can do that. And they do. And there are some Twitter accounts that I read. But I just, I don't know, I feel like my blog and then Facebook are enough. </p><p>I have to say, even blogging has become less fun, a lot less fun than it was when I started. I think partly that's just that I have less time these days. I'm a professor. I'm working at OpenAI. I have two kids. I'm not a postdoc with just unlimited free time anymore. But a large part of it is that the internet sort of became noticeably more hostile since the mid-aughts. No matter what I put on my blog, I have to foresee that I will get viciously attacked for it by someone. These sorts of things psychologically affect me, probably more than they should. So a lot of what in the past I would have blogged, these days I just put on Facebook because it's not worth it to have to deal with the sort of angry reactions of every random person on the internet. Or you could say it's not an issue of courage versus cowardice as much as it is simply an issue of time. I somehow feel obligated to answer every person who is arguing with me or saying something bad about me. And for a lot of things, I realize that if I'm going to put this on my blog, then I just don't have the time to deal with it. Or in order to write a blog post in a way that would preempt all of these attacks, that would anticipate and respond to all of these criticisms, would just take more time than I have or more time than this subject is worth. And so that is why I've sort of retreated somewhat to the walled garden of Facebook.</p><h3>Rationalism (1:24:50)</h3><p><strong>Theo: </strong>And then, last question, were you ever involved with the rationalists at any point?</p><p><strong>Scott: </strong>I mean, sure. I have known that community almost since it started. The same people who were reading my blog were often the people who were reading Overcoming Bias and then LessWrong, where Eliezer was writing his sequences. So I interacted with them then. I did a podcast with Eliezer in 2007. I knew some of the rationalists in person. Actually, we hosted Eliezer at MIT in 2013. He came and spoke and visited for a week. But I kept it at arm's length a little bit. One reason was that it had a little bit of culty vibes. This is, OK, there's the academic community.</p><p><strong>Theo: </strong>Polyamory.</p><p><strong>Scott: </strong>Yeah, and then there's these people who are all living in group houses and polyamorous and taking acid and whatever while they talk about how there are probabilities of AI destroying the world. I like to say today, when I have academic colleagues who say, well, are they just a cult? I say, well, you have to hand it to them. I think this is the first cult in the history of the world whose god in some form has actually shown up. You can talk to it. You can give it queries, and it responds to them. So I think a lot of what the rationalists say is stuff that I agree with. And yet there's a part of me that just doesn't want to outsource my thinking to any group or any collective or any community, even if it is one that I agree with about so many things.</p><p>But having said that, sure, I hang out with them all the time whenever I'm in the Bay Area. I see people who are in that community. I got to know Scott Alexander pretty well starting a decade ago. Paul Christiano was my former student at MIT.</p><p><strong>Theo: </strong>That I did not know.</p><p><strong>Scott: </strong>He started as a quantum computing person. And then he got his PhD at Berkeley from the same advisor who I had studied with, Vazirani. And then in 2016 or so, he did this completely crazy thing that he left quantum computing to do AI safety, of all things. And that seemed pretty crazy at the time. Of course, he was just ahead of most of us. But I still interact a lot with Paul. And I see him when I'm in Berkeley.</p><p><strong>Theo: </strong>Are you friends with Eliezer?</p><p><strong>Scott: </strong>Yeah. I mean, Eliezer and I, we've had our disagreements. And we've also had our agreements. But like I said, we've known each other since 2006 or 2007 or so. </p><p><strong>Theo: </strong>All right, well, I think that's a pretty good place to wrap it up. So thank you so much, Scott Aaronson, for coming on the podcast.</p><p><strong>Scott: </strong>Yeah, thanks a lot, Theo. It was fun.</p><p><strong>Theo: </strong>Thanks for listening to this episode with Scott Aaronson. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. Be sure to check out Scott&#8217;s blog, Shtetl-Optimized, at scottaaronson.blog. All of these are linked in the description. Thank you again, and I&#8217;ll see you in the next episode.</p>]]></content:encoded></item></channel></rss>