Theo's Substack
Theo Jaffee Podcast

#6: Razib Khan

Genetics, ancient history, rationalism, and IQ

Intro (0:00)

Theo: Welcome to Episode 6 of the Theo Jaffee Podcast. Today, I had the pleasure of speaking with Razib Khan. Razib is a geneticist, the CXO and CSO of a biotech startup, and a writer and podcaster with interests in genetics, genomics, evolution, history, and politics. Today, we talk about all of these things plus more: the difference between genetics and memetics, the origins of domesticated animals, an inside view on the rationalist community, and one of social science’s most controversial findings. This is the Theo Jaffee Podcast, thank you for listening, and now, here’s Razib Khan.

Genrait (0:37)

Theo: Welcome back to episode 6 of the Theo Jaffee Podcast. I’m here today with Razib Khan.

Razib: Nice to meet you, Theo. I'm excited to talk to you.

Theo: Awesome! So, first question, you're the CXO of a genetics company called Genrait. What does your day job look like? What do you do specifically?

Razib: I'm on a lot of calls right now. I'm not really heads-down on science development much anymore, although I try to spend some hours every week doing that, otherwise I lose my touch. I'm mostly on calls, doing biz dev, and managing our head of science. We work together. I don't want to give the wrong impression. I like to say we're colleagues. I reach out to customers, and candidly, I've gotten most of the customers at this point, founder-led sales. So, my day job involves a lot of calls, responding to emails, sending out emails, and I talk to the scientist about how the science is going and what we need to do.

There are a few things in population genetics, like evolutionary stuff where I know more than her. She comes out of more like a comparative genomics-type background. She's definitely much better than me upstream on the cycle of data generation, data analysis, but I've done a lot more of the later stuff. Sometimes there are science things where I need to come in and do stuff, but mostly, I just do a mix of a lot of things.

That's the biggest difference that I have seen being a founder (I have a lot of equity in the company and I have a C in front of my title) versus being a high-level employee, which I have been before at startups, where you’re kind of heads-down and narrow focused. As a founder, you just have to do what you have to do. I was telling a friend yesterday that learning when you're at a startup, especially when you're a first-time founder, is just making a lot of mistakes and—can I swear?

Theo: Sure.

Razib: If you don't fuck up, you're not actually going to learn because it doesn't stick with you. Sometimes you make a mistake and it works out. You don't get caught. You're actually not going to learn from that. Whenever you fuck up pretty bad, you're never really going to forget it. So a lot of the mistakes that we've made is how we learn as founders, is what I feel. And so there's a lot of cliches that people say, but once you're a founder, you understand where those cliches come from. I will say that. Yeah. But I'm on a lot of Zoom calls like this.

Theo: What is the CXO, by the way?

Razib: Experience. I'm kind of CXO slash CSO. If we talk to biotech people, I would more say CSO, but in general, we're pivoting—

Theo: S like sales?

Razib: CSO is Science. But if we're talking to more info tech, IT, data science, we'll say CXO because that's more of a Silicon Valley tech thing. So my two co-founders come out of Silicon Valley tech, I obviously come out of science. On LinkedIn, I have “CXO/CSO” to make it clearer for people. But when it comes to presentations and we're talking to mostly investors, I will say CXO just because they're mostly tech investors. The company is mostly a tech company. It works in data and our domain, our vertical, is biology.

Genetics and Memetics (4:31)

Theo: How did you get into genetics and genomics in the first place?

Razib: I've always been interested in the topic. My undergrad background is in biochemistry. I kind of came up at a time when molecular biology was big. The first time I took a biology course with a genetics component, I was like, "oh, this is fun"; I found it interesting. I've also always been interested in history, historical science. With genetics, there are a few big principles you memorize, and you can derive from them. It's a little bit like physics in that way.

Francis Crick was a physicist, and a lot of theoretical population biology and population genetics people come from physics backgrounds. R.A. Fisher was a math guy who worked in thermodynamics. There's Fisher's ANOVA and all these Fisherian statistical methods, like maximum likelihood. But he was also an evolutionary geneticist, and he kind of fused Mendelian genetics with evolutionary biology. With genetics, you don't have to memorize as much stuff as you do in other parts of biology, like neuroscience. Neuroscience doesn't have a good theory. Genetics, we have a good theory: a good short-term theory in terms of the Mendelian system, and then we have evolutionary biology, evolution. So it's a very system-oriented branch of biology. I've always been interested in biology, and my strengths, I think, are in systems orientation. So that's how I got interested in it.

I'm interested in evolution. I've always been interested in evolutionary biology, interested in history, and genetics can explore history as a tool. Game theory is applied both to evolutionary biology and evolutionary genetics and to economics. The difference is that in genetics we have evolution, we have genes, whereas in economics they have currency. We feel the genes are much more systematic and clear as the currency of fitness. That's why I like genetics: we have the substrate. It's a reductionist science in that way; you start from the foundation. You can derive forward from the foundation, or you can work back to it, abduce back to the foundation. Does that make sense?

Theo: Yes. What are the pieces of the foundation that you're talking about that you can derive from?

Razib: For example, Mendel's laws, the law of segregation, the law of independent assortment. They didn't know the structure of the DNA until obviously the 50s. There were some arguments before. They knew that DNA was probably associated with inheritance and transmission in the 40s, maybe even earlier. But Mendel did not know anything about DNA. He just saw the patterns.

So genetics is basically looking at the patterns and figuring out how the patterns of the traits are occurring. The law of segregation is that you're getting one of your gene copies from each parent; they're segregating. You have two gene copies, and you get one from each parent. The law of independent assortment is that traits are inherited independently. We know that now because we understand that different parts of DNA code for different traits, and different parts of DNA are inherited in independent ways, depending on whether they're on different chromosomes, or really far apart in the genome, or recombination is breaking things up.
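Mendel's two laws described here lend themselves to a quick simulation. Below is a minimal sketch (hypothetical gene and allele names, assuming two unlinked loci and complete dominance) showing how the classic 9:3:3:1 dihybrid phenotype ratio falls out of segregation plus independent assortment:

```python
import random

def make_gamete(parent):
    # Law of segregation: each gamete gets one randomly chosen allele per gene.
    # Law of independent assortment: genes are sampled independently
    # (this models unlinked loci; linked loci would violate this).
    return {gene: random.choice(alleles) for gene, alleles in parent.items()}

def cross(mom, dad):
    # An offspring combines one gamete from each parent.
    g1, g2 = make_gamete(mom), make_gamete(dad)
    return {gene: (g1[gene], g2[gene]) for gene in mom}

# Two heterozygous parents at two genes: Aa Bb x Aa Bb.
parent = {"A": ("A", "a"), "B": ("B", "b")}

random.seed(0)
offspring = [cross(parent, parent) for _ in range(10000)]

# Tally phenotypes, treating the uppercase allele as dominant.
counts = {"A_B_": 0, "A_bb": 0, "aaB_": 0, "aabb": 0}
for o in offspring:
    a = "A_" if "A" in o["A"] else "aa"
    b = "B_" if "B" in o["B"] else "bb"
    counts[a + b] += 1

print(counts)  # proportions approach the classic 9:3:3:1 dihybrid ratio
```

Changing `make_gamete` so both genes always come from the same parental chromosome would model complete linkage, which is exactly what recombination "breaking things up" prevents.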

When you talk about cultural evolution, it's very plastic and everything. It can happen really fast. You can maintain huge differences between cultures, et cetera, because there's just so much power with cultural evolution. There's very few restrictions. So model building with cultural evolution does have some problems related to that. With genetics, there's limitations and the ground rules are very restrictive. So with genetics, you are 50 percent one parent, 50 percent the other parent. With things like selection, they can only occur at a certain magnitude because the underlying molecular biology, the physical substrate of how the information is encoded is very restrictive, compared to memes. Memes are plastic and kind of chaotic. Genes are very structured, if that makes sense.

Theo: Yes, it does. It is kind of interesting how much of memetics was just ripped from genetics and how well it works to a point.

Razib: A lot of cultural evolution is taking population genetic methods, population and quantitative genetic methods, and applying them to culture. People like Michael Muthukrishna, Joe Henrich, Cavalli-Sforza, Peter Richerson, Robert Boyd, et cetera, are using these evolutionary biology, evolutionary genetics methods because they think the evolutionary theory framework is actually the better framework to understand culture and psychology and economics, all of these things. That's their argument.

Well, economics, I guess there's homo economicus, there's neoclassical theory, there's some theory there. I don't want to overdo it. But disciplines like psychology, Michael has written about this, have a problem because they don't have a good theory. So they just have all these independent studies. And he thinks the evolutionary theory can be very, very powerful. That's why evolutionary psychology is so popular, even though it has a lot of problems, because it has a theory.

Neuroscience doesn't have a theory either. Consciousness is kind of a theory, but it's not, really; there are all these arguments right now about which theories of consciousness are good. Imagine biology before 1859, before Darwin's ideas: that's what a lot of these scientific and social-scientific disciplines are still like.

Theo: With memes in particular, especially with Internet memes, I wonder what makes some so ridiculously enduring and what makes some just flare up and get all over the Internet for a few days and then just vanish, like Wojak, for example. You have this bad drawing of an exploitable face that you can edit. And then, what is it, 10, 15 years later, every other meme has a Wojak in it.

Razib: So this is complicated; you can come at it two ways. There's the cultural evolution model, which looks at cultural variation and how the selection happens exogenously, externally. But Dan Sperber and these cognitive anthropologists who are evolutionary anthropologists in France, and Scott Atran is part of this tradition, actually start more from evolutionary psychology, and they think memes adapt to the landscape of your brain. Certain ideas are just attracted to it; Sperber uses the terms attractors and repellers. Certain ideas are attractors, and they're salient.

So Pascal Boyer is a cognitive psychologist at Wash U in St. Louis; he's written about religion. In general, he, along with people like Atran, has talked about how, for ideas to be attractive, they need to be somewhat counterintuitive, but not so counterintuitive that they're not relatable. So something like the Wojak, let's think about it: it's kind of weird, but it kind of makes sense. It's not totally incomprehensible. Ideas that persist are comprehensible, but they're salient, and they're salient because they're somewhat different. That's the broad class of things we think of as interesting memes. Memes can also hitchhike onto cultural ideas and achieve success. Take the Star of David, for example. It's a cool star, but its particular configuration is associated with the Jewish people, has been around for a long time, and has influenced a lot of other cultures. So that meme kind of hitchhiked; there's nothing special about the Star of David aside from its connection to the Jewish people. This has a genetic analog called hitchhiking: a neutral gene rides along with another gene that's being selected. It shows the correspondence between these different things.
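The genetic hitchhiking analog can be made concrete with a toy simulation. This is a hedged sketch, not any published model: a haploid Wright-Fisher-style population with complete linkage, where a neutral marker rises in frequency only because it sits on the same chromosomes as a beneficial allele under selection:

```python
import random

def simulate(pop_size=1000, generations=50, s=0.5, seed=1):
    # Each individual is a (selected_allele, neutral_marker) pair with no
    # recombination, so the marker is stuck to whatever allele it starts with.
    rng = random.Random(seed)
    n_beneficial = pop_size // 100  # beneficial allele 'S' starts at 1%
    pop = [("S", "m")] * n_beneficial + [("s", "+")] * (pop_size - n_beneficial)
    for _ in range(generations):
        # Wright-Fisher resampling weighted by fitness: 'S' carriers get 1 + s.
        weights = [1.0 + s if ind[0] == "S" else 1.0 for ind in pop]
        pop = rng.choices(pop, weights=weights, k=pop_size)
    # Frequency of the *neutral* marker, which was never selected directly.
    return sum(1 for ind in pop if ind[1] == "m") / pop_size

freq = simulate()
print(freq)  # the neutral marker hitchhikes to high frequency
```

With `s=0` the marker would just drift around its starting 1%; it's the linkage to the selected allele that drags it up, analogous to a symbol riding a successful cultural idea.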

So I would say, there's a couple of things going on. You have to adapt to the brain. Once that's done, you can figure out other things exogenously that things can adapt to. For example, swastikas are found in a lot of cultures, most people know they're Indian. But they're not popular in the West, because they got attached to Nazis. Swastikas are actually cognitively appealing in some ways, they're an attractive symbol. But then it got associated on the cultural scale to Nazis, and so they're not attractive anymore. Those two things are balancing out. In the East and Asia, swastikas are all over the place still, in Buddhist and Hindu culture still.

Domestication (13:48)

Theo: Now, do you do anything related to animals, by the way, or just human genetics?

Razib: My background is as a mammalian genomicist. I've worked with cats, dogs, and mules, which are actually like donkeys. I was super interested in domestication, domestic animals when I was in graduate school.

Theo: Can you talk a little bit about what you did with mammalian genomics in grad school?

Razib: The outcome of my work was figuring out where cats originated. The Garden of Eden of cats is actually kind of close to the Garden of Eden: for humans it's Africa, but for cats it's probably West Asia and Egypt. I found that cats can be divided into Eastern and Western cats, and the Eastern and Western cats are originally Mesopotamia versus Egypt and the Levant. These two branches moved east and west. So Siamese cats, East Asian cats, are from these Iranian cats originally. They're not Iranian or Persian per se, because this was a long time ago, the Neolithic. Unlike dogs, they weren't domesticated before the Neolithic; they were domesticated in the last 10,000 years.

Then you have the Western cats that come from Egypt. There are some early remains in Cyprus, probably related to an Egyptian culture. Then they go into Europe, but domestic cats don't go into Northern Europe until the Roman Empire, so they're very late. They follow cities; cats tend to like cities. Why are they different from dogs? Well, dogs can follow nomadic bands; cats cannot, they're too small. And there are wild cats in Europe that they're hybridizing with, so there's some hybridization going on there. But ultimately the cats in Europe are from Egypt and Syria, and those are the dominant cats in the world. They're all over the place; you see their genetics everywhere, so they obviously spread with European colonialism. Then there are the indigenous cats of the East, in particular in Southeast Asia and China, and those are somewhat distinct and have preserved their distinctions. But just like with dogs, you see the spread of cat genomics all over the world.

With horses and donkeys, with these equids, what's interesting evolutionarily is that the lineages are not as close as in other cases of hybridization, like the European wildcats, where the lineages are very close. With horses, they're far enough apart that you get sterility; mules are mostly sterile. But there's a lot of gene flow between the different equid lineages, between different types of donkeys, the onagers, the wild asses, as well as different horse lineages.

The Mongolian horse has a different chromosome number than the domestic horse, which is derived from one particular horse lineage in the Southern Urals. So horses have these weird chromosomal issues going on that are interesting from an evolutionary perspective. We now know from ancient DNA that there were probably multiple horse domestications, but all the domestic horses, the caballines, are from the South Urals, from the Sintashta culture. But it looks like there were earlier domestication events, and there's also a Chinese horse, a European horse, and some of these horse ancestries did get into some local horse lineages. The Mongolian wild horse is actually descended from what's called the Botai horse, the horse of the Kazakh steppe. It looks like the Botai people of the Kazakh steppe actually rode these horses, but they were eventually marginalized and the horses went feral. Now they're in Mongolia as the Mongolian wild horse.

Theo: What distinguishes the animals that we were able to domesticate from the ones that we weren't? Like we domesticated wolves, but not like bears.

Razib: Well, wolves: it's too simplistic to say wolves just have alphas, but wolves are hierarchical social organisms. Bears are not. The stylized fact, which again I don't want to overdo because it gets a little simple, is that we became the alpha for the wolf. Also, just to be clear, it looks like the dog was domesticated maybe as early as 30,000 to 40,000 years ago, very early on with the arrival of modern humans into Siberia. It's derived from a wolf population that went extinct. So, dogs are essentially just wolves. However, I think that's a bit simplistic, because if you look at a dingo, that's a feral dog, and it never became a wolf again. So dogs are not just wolves. They've evolved to the point where they're probably a different species, even though they're totally interfertile. All the wolves of Eurasia and North America are actually descended from a relatively newly diversified lineage of wolves; all the old wolves disappeared. So dogs are probably from some sort of Siberian wolf that no longer exists. They've mutated and changed to the point where they can't reverse back.

Cats are not like that. Cats can revert back to being like wild tabbies, European wildcats a little bit. European wildcats are a little bigger. They look more like tabbies. But dogs have changed so much that they never go back. Wolves do much more provisioning of their offspring. Dogs rely on humans. Wolves are smarter. Dogs are a little dumber. Dogs have diverged from wolves. So the dingo is what a feral dog is. And the dingo is kind of wildish, but it's not a wolf, is it? So they have definitely changed over time, if that makes sense.

Theo: That does make sense. It reminds me of CGP Grey, a YouTuber I like, who had a video a while ago where he tried to answer the question of why a lot of Native Americans were wiped out by Old World diseases, but there was no corresponding plague that the Native Americans sent back to wipe out the Old World. He said the reason is that they didn't have densely populated cities, and the reason for that was that there were no domesticated animals. Is that a good explanation?

Razib: It's okay. There's actually a recent paper that came out with ancient DNA showing that zoonotic diseases are a big thing. Zoonotic diseases really kicked off about 5,000 years ago with nomadism, with exclusive nomadism. These are diseases that jump from animals. It looks like the Yamnaya pastoralists, the original Indo-Europeans, really spread them all around Eurasia and created a common pathogen pool. There were diseases associated with farmers in Neolithic Europe that were worse than the ones from forager times. And the Americas definitely did have cities, like Tenochtitlan and Teotihuacan; those are more like the Neolithic cities of Europe. So yes, there's the lack of domesticates, but the main effect it has disease-wise, I think, is that it limited the spread of disease to each city.

Whereas if you had pastoralists going between the cities all the time, connected transcontinentally, the pathogens sweep from one end to the other, and that takes it up to the next level. As for domestic animals in the Americas, they have the llama; the llama is domestic. But it's just guinea pigs, llamas, and dogs, and the dogs come from the Old World, from Siberian domestic dogs. The huskies are definitely related to the Siberian domestic lineages.

Theo: When you said that the plagues were largely confined to the cities in the Americas, how was that not the case when the Europeans came? Because when the Europeans came, if I remember correctly, the predominant theory says that native trading spread disease far further into the continent, far quicker than the Europeans themselves.

Razib: For sure. But my point is these diseases are going to be much more powerful and virulent, because in Eurasia their rate of evolution is going to be faster. It doesn't matter if it's in a city or not, because you have long-distance nomads; nomads can go from one end of the steppe to the other in a year. This has not totally been borne out necessarily, but in Plagues and Peoples, McNeill thinks it's East Eurasian gerbils that really incubated the Black Death. So it starts in the eastern part of the Eurasian steppe and eventually spreads to the western steppe and then to Europe.

During the Neolithic period, without long-distance nomadism (there is some pastoralism in Europe even before the Yamnaya), the pathogen networks are smaller. When you have smaller networks, evolution can't produce the superbug as easily. This is one of the reasons that globalization and people flying everywhere is a problem. COVID-19 probably would not have happened before the age of Columbus. It wouldn't have been a global pandemic; it would have been a local epidemic.

Theo: But does flying have anything to do with it or is it long distance shipping too? Because the Spanish flu took over the world.

Razib: Yeah. That's why I said it before Columbus.

Ancient History (22:48)

Theo: Speaking of ancient history, have you seen the new evidence recently for civilization being a lot older than we thought it was? Robin Hanson has something to say about this.

Razib: Yeah, yeah, yeah. I think Robin got it from Samo, my friend Samo Burja. Samo's really been pushing it, and I think it's probably true. What Samo said on my podcast was that it's not that advanced, more like ancient Egypt maybe. But I think it's probably correct. The issue here is that really old stuff disappears really fast; it's really perishable. Egypt is dry, and it's only 5,000 years ago. So imagine, and I'm just making this up, as my co-founder would say, imagine a pyramid in southern Mexico 16,000 years ago. Maybe it's a little smaller than Giza. It's rainy, there are jungles everywhere, and it's 16,000 years, not 5,000 years. So we have a lot more issues with preservation with these small-scale Neolithic societies.

I wouldn't be surprised, I wouldn't be shocked, and I think it would be cool, if we discovered a civilization from after the Last Glacial Maximum, about 20,000 years ago. Maybe before the Ice Age ends, there's somewhere with an incipient civilization that went extinct. We see this in history. For example, Mycenaean civilization collapses, and the later Greeks didn't even know those were their ancestors. They lost literacy, they lost cultural memory; they thought the citadels their ancestors created were made by the Cyclopes. This is only a 400-year gap at most, arguably less. There were probably local areas, places like Euboea, where the Mycenaeans persisted a little longer. But in a couple of centuries, they just lost all memory.

So I think this is quite plausible that somewhere probably in the old world, there was a civilization and it disappeared. And we might discover some stone artifacts or something else that will probably blow our minds.

Theo: So does that answer the question, what took everything so long? There's an article I read a while ago, it was talking about how did it possibly take humans tens of thousands of years to invent rope or weaving or boats or something like that. They're all very obvious things if you think about it. So is the answer actually they did and it was just lost before they could reach the critical mass necessary to bootstrap today's civilization?

Razib: I think a lot of that is true. Cultural evolution people like Joe Henrich did some early work on this, quantitatively, formally modeling it. But William McNeill, in The Human Web, his last book before he died, talked about how redundancy and synergy became a much bigger deal. And you can see it across history. The early collapses, like the Bronze Age Collapse, resulted in total wipeout of cultural memory, or very close to it. The later ones did not. If you look at Chinese history, the interregnum between dynasties shrinks pretty much each time. So the idea is that institutions and robustness have been increasing monotonically over time.

In Eurasia, for example, the Indus Valley civilization seems to have had some sort of writing system, a primitive writing system, that disappeared. Indians got writing from West Asia; all the Indian writing systems are derived from Aramaic. And once that happened, literacy never really disappeared. After alphabetic systems, where literacy did disappear, it was reconstituted very quickly. It's famous that literacy mostly disappeared in Western Europe outside of the monasteries and a few elite areas, but then it just shows back up, because the monasteries served as institutional reservoirs.

Without the Byzantines, most of the Greek classics would not exist in the West and would not have persisted. The Byzantines preserved writers like Euripides and Aristophanes; the Muslims didn't care about that. They only cared about philosophy, about Aristotle and Plato, so they kept those and made really good translations of them. The Byzantines had those too, but where they really showed up was in the humanism, because they had cultural continuity with the ancients; they were Greek-speaking.

So I'm giving you a concrete example of how the redundancy works. In the past, you might not have that sort of situation: you have one civilization with a couple of geniuses, they invent a few things, and then the civilization winks out, but the inventions weren't copied anywhere else because there were no other civilizations, or at least none close enough. You need closeness and interchange, and that creates an information network. It's kind of like the internet: it could mostly disappear, but they've modeled it, and even a nuclear war couldn't take it all out, because there would be enough nodes around for stuff to transmit.

Theo: Speaking of old writings, have you seen Nat Friedman's Vesuvius challenge?

Razib: Yeah, apparently the winner found out about it by watching my friend Dwarkesh Patel's podcast with Nat. So that's really cool, and a lot of that's going to be a big deal. The other related thing is that there's a lot of old cuneiform that hasn't been translated. These are tablets, and most people obviously do not know how to translate cuneiform. There's scanning software that can do this now with regular writing, but apparently it's not good with cuneiform yet. With machine learning and these AI techniques, it's going to get really good, so we will actually know a lot more about Mesopotamia and the Near East in the near future, because all these museums have things they just haven't been able to translate. Cuneiform translators obviously have other things to do; it's kind of a boring job. I don't think you want to devote your whole career to just looking at tablets and translating them. But a computer would be okay with that, just doing that 24/7.

Theo: Like how Euler said that—It wasn't Euler, but some mathematician—once said that the minds of great men are wasted on computation when machines should be doing it.

Razib: Computers used to be high school educated women, right? Those were the original computers and then they were replaced by machines.

Theo: I'm taking linear algebra right now, and doing the computations, like matrix multiplications, by hand, I'm like, wow, I'm so glad we invented computers so I don't have to do this.

Razib: I took linear algebra as well and then you get MATLAB and you're like, okay, this is great.

TESCREALism (30:02)

Theo: Let's talk about the past a bit. How often do you think about the future, like transhumanism when it comes to genetics?

Razib: Somewhat. I was on the edge of the Bay Area transhumanist scene, what became the rationalist scene, between 2007 and 2011. I went to the Singularity Summit. I was friends with the president of the Singularity Institute. I knew Eliezer pretty well, Robin, all of those people. My focus hasn't been entirely on that, but here's a concrete thought from back when I was interested in that scene: it was mostly dorky guys, with some dorky women, kind of on the spectrum, and they were super interested in using cybernetics and body modification and extension.

Today we do have transhumanism, it's the trans movement, and it's totally different in terms of who does it, and it's an identity group, it's a marginalized group now, it's associated with the radical cultural left, it's a totally different thing. But it is transhumanism, having gender modification by changing your body.

Theo: Are steroids transhumanism?

Razib: Arguably, it is.

Theo: What do you think counts, if being transgender counts?

Razib: I think basically anything that uses modern technology to change your body plan, you change your body chemistry in an extensive way, that is transhumanism.

Theo: So do people who take medications, SSRIs, anti-anxiety medication, is that also transhumanism?

Razib: Well, I don't think we know enough about SSRIs. But if I have high cholesterol and I take medication to get the cholesterol back to normal, I don't think that's transhumanism, because that's just wild type. If I get my leg cut off and we can regrow a leg and I'm back to normal, that's not transhumanism; that's just repairing back to the wild type, right? Like in genetics, you have wild type; that's the non-mutated version.

Theo: So just fixing something that's broken is different from improving something.

Razib: That's just called medicine, or the aim of medicine. Now, if you get your legs cut off and you can regrow them, like a lizard with its tail, back to the same length, that is not transhumanism. If they grow back 15 feet long, that is transhumanism.

Theo: So do you think that the overall impact of the transgender movement for the transhumanist movement has been good or bad?

Razib: I think it's generally been bad. I think it's making people much more skeptical of transhumanism, because transhumanism as the choice of an individual who wants to push the frontier is different from transhumanism as a society- and culture-defining shift. The original transhumanists were not an interest group that wanted new laws; they just wanted to push the frontiers of science. But the new transhumanists, and they don't call themselves transhumanists, but that is what they are... I don't know if you're socially integrated into that world, but of the people in 2010 who were into transhumanism and posthumanism, a substantial minority did actually switch their genders. They often tended to be gay men, basically, who became trans women. So there is a cultural overlap between the two groups. Obviously, the majority of the new group are not from the old transhumanists, but of the old transhumanists, a really high number did change their gender.

Theo: It really is interesting how similar the rationalists, transhumanists, those people are. TESCREALists, as they like to say. But also just so different politically. Like you have people like Roko who are like, we should not allow immigration at all. And then you have people—

Razib: Yeah. I knew him 15 years ago, by the way. He was totally different then, just so you know. I knew him before he was famous. He was very normie; he was one of the most normal people in that scene in the Bay Area. I knew him in real life. This is a new evolution over time. I was way more fucking based than him; I exposed him to a lot of things. He was a normie. I don't know if people know that. Now he's pretty infamous, he's super out there. But back then, he and Michael Anissimov, who became a Nazi, were the most normal people in that rationalist scene.

Theo: Really? I wonder how you explain that one. I mean, I guess...He was kind of out there when it comes to his opinions on AI risk. I don't think anybody was thinking much about S risk at the time until he mentioned it. I heard of Roko's Basilisk long before I actually knew who Roko was.

Razib: So, I don't remember all the details, but there was a massive falling out between him and Eliezer around 2009 or 2010. He left the U.S. right after that, because he was kind of unpersoned. Not unpersoned, because that's not what you do in the rationalist community, and he wasn't shunned, but there was a cult-like aspect to it. That's not true anymore, but back then a lot of it was about Eliezer Yudkowsky and his ideas. There was a circle around him, and Roko was kicked out of Eliezer's circle after a massive disagreement on LessWrong. Now it's much more diverse, obviously, but there was a social, personal aspect to the divergence and evolution of Roko, I think.

Theo: Didn't Eliezer call him a fucking idiot and then remove the original post because it was an infohazard or something?

Razib: Yeah, yeah. I mean, the other issue I would say is also, you know, I still know Roko, we still talk a little bit. So it's not like I'm talking about somebody I don't know. I'm not saying anything that I wouldn't say to his face.

Theo: So far, I think just about everybody that I've had on this podcast has had some level of connection to Eliezer Yudkowsky and rationalism. The closest was probably when I interviewed Zvi Mowshowitz, who I'm sure you know, if you know all these people.

Razib: Yeah, I've hung out with him.

Theo: So, I mean, I'm surprised because I didn't know you were, but I guess all roads lead to Rome.

Razib: It's not a big part of my brand, and people are a little surprised when they find out that I was there. But yeah, I was there. Part of it is also that I know you're going to ask me about my more edgy beliefs. A lot of the rationalists have edgy beliefs on group differences and stuff like that, which they don't advertise because the marketing is not great. So they probably wouldn't want people to know that I was there at all the parties.

Theo: How did you get involved with them in the first place, especially so early?

Razib: I lived in the Bay Area, and they were reading me very early on. Again, most of the rationalists are not normies. They're born non-normies, right? So they go where the evidence leads, and they're interested in all sorts of different things.

I don't know if you know Michael Vassar. Michael Vassar and I have known each other since 2003, and he was president of the Singularity Institute for a while in the late 2000s. I think he got MeToo'd, or something; I don't know the details. Michael and I only talk like every six months now. But there were people in the rationalist community before it was called the rationalist community, when it was mostly transhumanism, and we were going to the Singularity Summit, maybe the BIL conferences. Aubrey de Grey was there, Peter was still funding them, and it was all around Eliezer.

Theo: Peter Thiel?

Razib: Yeah. Yeah. He was a big backer early on. He's very turned off now—

Theo: I can imagine.

Razib: —But he was a very, very big backer of Eliezer. And, you know, what was the Singularity Institute became the Machine Intelligence Research Institute.

Theo: And then they turned into decels.

Razib: Yeah, some of them did. Originally there were, you know, whiteboards trying to figure out how to make friendly AI and stuff back then. And then Eliezer decided we're not going to be able to do that; we have to stop it. The issue is, there are other people in the community, or in that circle, who are not decels, who don't think strong AI or artificial general intelligence is going to take over and destroy everything. I mean, I have friends who didn't have kids because they thought AGI was going to be here in the 2020s, which is not totally crazy now with the LLMs, but we'll see. But yeah, this is like 2008, and I had friends who were like, yeah, I'm not going to have kids because I think we're just doomed. And there are still people like that. They think the probability of a good outcome is low.

Theo: I think it's kind of funny that you said there were people sitting around with whiteboards trying to design friendly AI. I don't know how people thought that building friendly AI would be easy. It's like the Dartmouth workshop back in the fifties, where they thought they could make serious progress on human-level AI by the end of the summer. And then Eliezer thought that he could build it by himself and save the world in a similar timeframe.

Razib: I mean, there were people, and I'm not going to say who because they're not public people, who thought it was BS. What I would say about the rationalist community is that there was an aspect of it that was like, okay, we're here to save the world, just like the scientific world is here to understand the world. But then there was an aspect that was very social, because all of a sudden you're not the weirdo. Everyone's a weirdo, and if everyone's a weirdo, no one's a weirdo. I haven't changed very much, whereas a lot of people there have changed a lot. I've never been liberal, never been religious. I've never been really a hardcore rationalist, insofar as thinking that I can redesign everything from first principles, but I've seen—

Theo: Optimize Literally Everything Forever?

Razib: Yeah, that's a very common view. I would argue with the rationalists back then; a lot of post-rationalism was me, back then. I don't use the word, and I'm not part of the community. I've never been poly[amorous]. I'm relatively normal in my behavior and in my social norms.

Theo: I'm not poly either. I don't get it.

Razib: I've had conversations where it's been said that reason is and ought to be a slave to passions. I would just tell them that they're rationalizing being polyamorous. There are people I know who would argue that polyamory is the only way to be, that it is the correct way. And then it didn't work out for them and they switched to saying that monogamy is the correct way. I think they're just rationalizing everything.

That was one reason why I probably didn't get super involved in the community. I would get sick of arguments where people were trying to convince me of their beliefs. I've always been an atheist, but I was never a New Atheist. They would argue that you should not believe in God because it's wrong. They would do that with everything. Not everybody, but a lot of them would. That's why I was always on the edge. I enjoyed hanging out with them, and there were a lot of things that were great about them, but the excessive adherence to trying to reason out everything in your life was just exhausting, and led to People's Front of Judea versus Judean People's Front types of conflicts.

Theo: Just ridiculous levels of infighting. When did you leave the community?

Razib: I've always been on the edge of the community. I went to grad school at Davis in 2011, so I was close by, and I would go back periodically. Now I'm here in Austin, still on the edge of the LessWrong community. I go to some of their meetups, and I have social things that I organize.

Theo: Did you go to Vibecamp?

Razib: No, I didn't go, but I was in town. I don't want to be around naked people. I know the kind of stuff that they do, and I don't want to stumble on an orgy.

Theo: Which Vibecamp?

Razib: The Vibecamp that was in Austin had a bit too much of a Burning Man vibe, from what I know and from who was there. I mean, Aella, I don't know if I should say, but Aella is in and around Austin some of the time. I think she does admit that; I'm not trying to dox her. There are people like her, and Scott Aaronson's here. I hang out with Scott sometimes. I'm still integrated with the rationalists. A lot of them have moved from SF, like Patrick Friedman, who recently moved from the SF Bay Area to Austin. A lot of them have come here after I came here.

So there's that. I mean, I'm also part of the right-wing scene here; that's a different thing. So my own social network is like, if I throw a party, there'll be a bunch of scientists, a bunch of techies, a bunch of right-wing activists, and rationalists, basically. And then there are some civilians. I have friends who are just entrepreneurs. One of my closest friends is in the food and beverage industry, and I don't mean that he's a waiter; I mean he's a wholesaler. He hangs out with me because he's kind of curious about rationalism and Scott Aaronson, but he's definitely a civilian. He's not part of any of these weird groups. He's a normal person, a normal-looking person, with a normal woke girlfriend who gets triggered by me.

Theo: Yeah. For Vibecamp, though, I meant the one in Maryland.

Razib: I didn't know all the controversies. I knew Michael Kersey back when he was one of the normal guys. I met Michael in 2008. He was super chill, super nice, super normal. Now he's a big shit-stirrer on the internet. His sister was involved with Leverage. Everyone knows everyone. Everyone's smashed everyone, if you know what I'm saying.

It's interesting to me, all that stuff that's happening. Grimes is on the edge of the community. She's back in California now, but I see her around. I tweeted out a picture with her, but she didn't like the picture, so I deleted it. On Instagram, though, the Grimes fans kept it; they were asking who I was. It's kind of interesting.

Theo: It's interesting how there's so much overlap now between the nerdy sci-fi tech-bro rationality-future people and the K-pop-stan music-fan people who like Grimes.

Razib: Well, it's part of the Elon connection. I've never talked to him, but I've been at parties he's been at. There's the whole rationalist and effective altruism community interaction. Arguably, effective altruism, Will MacAskill and all those people, yes, explicitly. But look, EA-type thinking was there from the beginning, because they're rationalists. Caroline Ellison was a big fan of my blog; you can see the screenshots. Hey, Caroline, if you're listening, I did respond to your DM. I'm sorry I didn't follow you back, so I didn't see it originally [laughs]. I have a friend who was funded by SBF for her PhD. People often say that she had a relationship with SBF. But you know what? If I say it enough, it's going to be true.

Theo: SBF did that with everyone, you know, with his penthouse in the Bahamas where all the FTX employees were camping with their stolen money. The whole situation is ridiculous. I cannot wait for them to make a movie out of it.

Razib: Yes, yes. There’s a lot of stuff there.

Theo: Have you read the Sequences in full?

Razib: No, I'm not that hardcore. I have friends who have, but I'm not that hardcore. A lot of my friends came into rationalism through the Sequences, but I was there before the Sequences.

Theo: “Do not cite the Deep Magic to me, Witch. I was there when it was written.”

Razib: I'm older now. Back when I was your age, I was bright-eyed and bushy-tailed, but I've seen the things that I have seen, like tears in the rain. I've seen people come and go, people blow up, people fade. And I feel like I've kind of been the same, maybe I'm a little bit more well-known than I was, but I have a regular life. I'm a startup bro. I'm focused on that. I got three kids and I've never been poly. I've never done any of the weird things. This community of transhumanists, rationalists, whatever, these kind of weird out there people, they've gone through so many different ups and downs and I've just observed it. I'm an observer.

There are people like Altman for whom it's mainstream now. It's not even counter-cultural. Altman's a big deal with OpenAI.

Theo: Did you see his bio?

Razib: With that Eliezer thing?

Theo: Yeah, he changed his bio to “eliezer yudkowsky fan fiction account”.

Razib: But he hates Eliezer.

Theo: Really? He's only met him once as far as I know.

Razib: Are you talking about the time that Elon took him to meet Eliezer?

Theo: No, I mean the time that, oh, you mean the joke post?

Razib: I don't remember, but it was like, he was like, don't ever bring this moron.

Theo: Yeah, that was an Oppenheimer reference. There was one famous picture from a few months ago, maybe a year ago. It was Eliezer, Sam Altman, and Grimes at a club together, and he said it was everyone's first time meeting each other pairwise. I don't think Sam hates Eliezer. I think he thinks he's wrong about AI risk. Although I think he also thinks-

Razib: There are a lot of people who have strong feelings about Eliezer and think that Eliezer is going to inspire terrorism now. There are a lot of effective accelerationists, or whatever you want to call them. My social circle, after ChatGPT, is split down the middle. Some of them are scared; a lot of them are super scared. A lot of people in academia are super scared too. And then a bunch of people in artificial intelligence are like, we need to ride the tiger.

Theo: What do you think about it?

Razib: I probably lean to the second, right now anyway, probably because I'm a startup guy and my startup has an AI component. I use ChatGPT probably every day. I look at it as a tool, and we've got to figure out how to use this tool. I'm not a mystic who thinks it has to be our squishy wetware that creates the soul or anything like that. I don't believe that, but I do think it's going to be a while. I think it's possible, but it's going to be a while, though I don't have strong confidence in that. And I know that I'm self-interested; my company is planning to do AI-related stuff. Everyone has to, to keep up. The other issue is, if we ban AI, if we do what Eliezer says, what's going to stop China? We'd need a worldwide Butlerian Jihad. So unless Eliezer wants to become Serena Butler, it's futile.

Theo: I mean, I think that is kind of what he wants. He's backtracked on this a little, but he said even the risk of a nuclear war between two countries would be preferable to one of them building an AGI. And now he goes on Twitter saying, I never called for drone striking AI labs, but…

Razib: Someone's going to do it. It's going to be some crazy person, like the guy watching the Alex Jones show who went to shoot up Comet Ping Pong, because they thought there were kids in the basement, you know what I'm saying?

Theo: Oh, Pizzagate?

Razib: Yeah, someone's going to take Eliezer literally and seriously. So yeah.

Theo: I don't know. I mean, Eliezer has specifically said, don't do that because it will make the movement look bad. Terrorism is not good for-

Razib: But retarded people don't know any of that. And they might not hear it. They might be like, oh, he's secretly communicating to me that this is a lie. Like I can see the way he's winking or something.

Theo: And if the last week is any evidence, not everybody's going to condemn terrorism.

Razib: Yeah, that's fair.

Theo: Not even close to everybody.

Razib: No, no. I mean, there are Christians who are like, they're demons. Artificial intelligence is a potential demon. Because I'm in right-wing circles, I know people who are religious, and they're like, aliens are demons, artificial intelligence is demons, you know?

Theo: Why not say artificial intelligence is a gift given to us by God to explore the universe or something?

Razib: Yeah, I think Mormons are a lot more like that than mainstream Trinitarian Christians, because Mormons believe in apotheosis. Mormons believe that God was a human, a physical human being, a mortal. So Mormons are a little bit different from typical Christians, who separate the divine and the mortal very precisely.

Transhumanism (53:05)

Theo: Okay, so we got a little off track. Back to transhumanism: what are you most looking forward to with human genetic enhancement?

Razib: The abolition of genetic disease, which is feasible. For example, cystic fibrosis: a lot of people alive with it today will now, I believe, live 30, 40, 50 more years. If you have a child with cystic fibrosis right now, I believe that child will be cured within 10 years, because we know the gene for cystic fibrosis. We know what to target, and with CRISPR genetic engineering technology, we will be able to deliver it. If we can deliver it well, we will be able to rescue enough function that people can survive normally. They're never going to be marathon runners.

The way they'll do it will be some sort of spray, and it'll modify about 10 to 20% of the tissue, and that's enough to live. Right now, people with cystic fibrosis die in their 40s if things go well. Some rare cases live into their 50s, but I think the typical range is 30 to 45 or something like that.

Theo: What is cystic fibrosis?

Razib: Cystic fibrosis? It's basically a lung disease. I think there's salt involved; I'm not good with mechanistic biology, but the salt concentration's out of whack, your lungs dry out, and basically it's like you have lifelong pneumonia.

It's bad. Europeans in particular carry it, a minority of them; about 5% carry the CF mutation. So you do 5% times 5%: those are the couples where both parents are carriers, and their children can be born with CF. It's a very small percentage, but those kids carry two copies. All you need to do is fix the cells. If it's a single-gene disease, you can fix it, right? And there are other diseases like ALS, Lou Gehrig's disease, that are quite often one gene, Jerry's kids. Again, all you need is to make the muscle good enough so that the heart and lungs can continue to function. They're never gonna be jacked, never gonna be super muscular, doing curls or something like that, but they'll be able to live, right? So morbidity is still gonna be around, in the sense that they're sub-functional, but their mortality is gonna be much better.
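As an aside, the carrier arithmetic here can be made precise with Hardy-Weinberg-style reasoning: 5% times 5% (taking the carrier figure quoted above at face value) is the chance that both parents are carriers, and a child of two carriers then has a further 1-in-4 chance of inheriting the mutant copy from each. A minimal sketch:

```python
# Rough incidence of a recessive single-gene disease like CF, assuming
# the ~5% carrier rate quoted above and random mating (Hardy-Weinberg).
carrier_rate = 0.05

# Both parents must each carry one copy of the mutation...
both_parents_carriers = carrier_rate * carrier_rate

# ...and the child must inherit the mutant copy from each parent (1/4).
incidence = both_parents_carriers * 0.25

print(f"P(both parents carriers) = {both_parents_carriers:.4f}")  # 0.0025
print(f"P(child born with CF)    = {incidence:.6f}")              # 0.000625
print(f"about 1 in {round(1 / incidence)} births")                # about 1 in 1600 births
```

So with a 5% carrier rate, roughly 1 in 1600 births is affected, which matches the "very small percentage" described above.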

Theo: What about Alzheimer's? Like how Chris Hemsworth has a gene that gives him some chance of getting premature Alzheimer's.

Razib: Those genes have incomplete penetrance, which means there are a lot of variables, so the bang for the buck is lower there. That's down the line. In maybe 30 to 40 years, we're gonna be able to do a lot of cool things. If you want to have pink skin, you can have pink skin, but I'm talking 40 years down the line for those sorts of cosmetic things.

I think the curing of disease is gonna be the coolest thing that happens in my lifetime. It will almost certainly happen in the next 20 years, and probably in the next 10. After 2025, by 2030, a lot of diseases are gonna start to get cured. Various types of diseases, the type the kids who go to St. Jude have: congenital diseases, diseases that little kids have, which present really early on because of some genetic illness. CF, Lou Gehrig's disease, probably some types of cancers and other things with a genetic basis; type 1 diabetes will probably be cured. Some of it is gonna save lives outright; some of it will improve quality of life almost to normal. And so that's gonna be a big deal.

Later on, there will be other things like, can you be smarter? Can you be stronger? Yes, but to improve function is a lot harder than to just fix something that's broken, if that makes sense.

Theo: Do you think that most of the human improvements, transhumanist type things of the future will be bio stack or silicon stack? Like Neuralink, we have nanotechnology, we have fully immersive VR.

Razib: This is an argument that goes back to the 2000s: gray versus red, bio stack versus silicon stack. Some of this stuff, like nanotechnology, from what little I know, it's gonna be a really long time before we ever get nanotechnology as good as our molecular machinery. I don't know why, but it is how it is, so I'm not optimistic on that. Nanotechnology, like artificial intelligence and robotics, has always been 10 years in the future, like nuclear fusion. Now with artificial intelligence, that 10 years is getting legit; not so with robotics. Nuclear fusion is kind of in the middle, I think; from what I've heard from friends, it may actually be getting legit. But the point is, there are certain things, like nanotechnology, that I'm not super optimistic about.

Neuralink, we'll see. I hope it works, but I think they have a long way to go. I haven't looked at it in detail. They've been talking about these sorts of things for a long time, obviously, and early on it looked super unrealistic. In terms of cybernetics, I have some friends who are super into this, basically Alita: Battle Angel-type stuff, where you can imagine your arms and legs being replaced. I can see how organs could be replaced with stem cells; that's gonna be a big revolution, by the way. Organ-matching problems will disappear if you can use your own stem cells to grow organs. I can see how that would happen.

Replacing legs biologically, by growing another leg, seems less plausible to me than, say, an organ, which is less differentiated tissue. However, I can see how you could have artificial limbs. You would do that for people who got into accidents and whatnot, and you could connect the nerves, et cetera; there are already primitive forms of this, to my knowledge. That's gonna be the start, and then eventually some people may want to replace most of their body with these artificial limbs because they want to get into fights or whatever. You can imagine soldiers volunteering, special forces. I think that sort of stuff is going to happen. But in terms of the gray stuff, the silicon, integrating with humans, I think the core of humanity, our brain, is going to still be wetware for a while. That's my intuition.

Theo: Will there even be soldiers in this transhumanist cybernetic future or will we just use robots? Why send humans?

Razib: Robots right now have severe limitations. Robots sometimes go crazy; whatever heuristics they have, they'll jump into a wall. They'll be good 99% of the time, then they jump into a wall and the robot's destroyed. So yes, in theory we will have good robots, but it seems like that's just a harder problem than people anticipated. When I was a kid in the 80s, I read about robots in those little books, and it was like, by 1999 you will have a robot maid. We're really, really far from that. Roomba doesn't count.

Theo: Yeah, I guess Roomba is pretty cool anyway though, if you don't like vacuuming.

Razib: Robots and artificial intelligences with these heuristics are really good at doing precise, invariant things. Being a soldier is not a precise, invariant thing. For example, the average Special Forces soldier is about 5'10". The reason they're not huge, not some jacked giant, is that jacked giants have no endurance: if you're a very, very large man, it's harder to pull yourself up over obstacles. Obviously, if you're too small, you're not going to be strong enough. So the ideal Special Forces soldier is about 5'10", with a balance between strength, agility, and endurance. It's not even an average human; it's someone very specialized. So imagine having a robot that can balance all these things and doesn't have an irrational, spergy, spur-of-the-moment failure in the middle of a mission. That's not going to work, right? Even the robots that go to Mars, where there's nothing complicated, have problems. Which is fine: you can fix the problems with programming and engineering, or wait three months for the dust storm to clear or whatever. But in a war, you don't have time.

IQ (1:02:26)

Theo: So, going a bit into culture and politics. One of the most controversial ideas in social science is the whole bell curve thing: the observed finding that, A, IQ is highly correlated with a lot of important things, and B, different races have different average observed IQs. So what do you think about this?

Razib: Well, I mean, you've read my stuff. You know what I think about it [laughs]. No, I mean, it's obviously true and predicts a lot of things. I reviewed my friend Charles Murray's book, Facing Reality. I kind of gave it a middling review because I was like, don't we all know this? But it depends. A lot of people are pretty stupid. They don't know any of this.

Theo: How would you explain this to an average woke person, though?

Razib: You wouldn't. They're not going to be able to take it in. How do you explain it to an average woke person?

Theo: I don't, but some people I know used to buy into the idea that all traits are distributed equally. And then over time came to realize maybe not.

Razib: Yeah. So they have to come to it themselves. This is an issue where you can argue with them and kind of get them further, though. So, for example, SAT scores: their predictive validity is pretty good across groups. If the test were actually discriminatory and biased, its predictive validity wouldn't hold up across groups. The SAT score of a black person predicts their grades just like it predicts a white person's grades. If the test were biased, it wouldn't predict the grades well; its predictive validity would vary by group. That's how ETS, the Educational Testing Service, checks for discrimination and bias.

There used to be old stuff like, oh, only upper-class white people know what a yacht is, or something like that. Well, items like that reduce predictive validity, so they try to get away from them; they want items with generalizable predictive value. You can explain stuff like that to people. But a lot of people are just going to refuse to believe that the groups differ in any way, and they think it's bias. And that's fine. I'm not going to spend a lot of time arguing about that, because people just need to... I don't know.
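The check described above, whether a test predicts outcomes the same way in every group, can be illustrated with a toy simulation. To be clear, this is not ETS's actual procedure or data; all the numbers and group labels below are invented purely to show the logic: if one shared scoring rule generates everyone's grades, separate per-group regressions recover roughly the same line even when group means differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: grades depend on test score by ONE shared rule for everyone,
# i.e. the test is unbiased by construction, even though group means differ.
def simulate_group(mean_score, n=10_000):
    scores = rng.normal(mean_score, 100, n)
    grades = 1.0 + 0.002 * scores + rng.normal(0, 0.3, n)
    return scores, grades

fits = {}
for name, mean_score in [("group A", 1100), ("group B", 1000)]:
    scores, grades = simulate_group(mean_score)
    slope, intercept = np.polyfit(scores, grades, 1)  # per-group regression
    fits[name] = slope
    print(f"{name}: slope = {slope:.4f}, intercept = {intercept:.2f}")

# Near-identical per-group regression lines are what "no differential
# prediction" looks like; a biased test would show systematically
# diverging slopes or intercepts between groups.
```

Running this, both groups' fitted slopes land very close to the true shared value of 0.002, which is the pattern the bias check looks for.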

I mean, look at the reality. For example, you're Jewish. Jewish people have higher IQs than Gentile whites. I was talking about this the other day: I think about 15% to 20% of my closest friends are Jewish at any given time, while about 2% to 3% of the American population is Jewish. So Jews are highly over-represented. Why is that? I mean, you know why it is.

They’re part of the professional managerial class, et cetera, et cetera.

Theo: The People of the Book?

Razib: Yeah, I'm in social circles where there are a lot of Jewish people, and why is that? I don't know. I think most people would be more open to agreeing that Jewish people are, whether it's genetic or not, smarter. There are two steps. One step is: are there group differences? A lot of people do not know, by the way, that there are differences. They don't know anything about the bell curve chart anymore. For your generation, that's been erased from a lot of the record. When I was growing up, people would acknowledge that there were average outcome differences on the tests but attribute them to discrimination. Now people don't even know that there are outcome differences.

Theo: I think they basically know that.

Razib: No, a lot of people don't. I was in graduate school; I can tell you a lot of people don't. You do because you have a different social milieu. But I was in graduate school, and a lot of people do not. They are shocked. I would encounter people who would not understand why we didn't have certain ethnic groups in our program, and I said, well, we filter at a certain GRE score, and they didn't understand what I was getting at. So I had to look it up for them, and they were shocked. They just don't know that fact. That fact has to be acknowledged, and then after that, you can talk about what it predicts, about predictive validity and psychometrics. And then, of course, you get to the third rail of whether the group differences could be somewhat hardwired, and that's really difficult, because then you need to talk about genetics. So people have to get to it themselves most of the time. I don't really try to explain it.

As far as woke people go, no. It's like arguing that God doesn't exist. Look, I do it, but I'm weird. I'm not going to recommend it to other people. I still have an upper-middle-class income, and I haven't been totally canceled, but I'm very weird. I'm very disagreeable. I'm very aggressive. Most people are not going to be able to handle what I've been through, so I just don't recommend it; I don't think you're going to be able to handle it. I have friends who have tried it, and they got really burned. They're stressed and traumatized. I'm like, okay, whatever. It's only a few people who can go out there and say God is dead, or something like that.

So I'm not going to tell them. Most of my religious friends know I'm an atheist, but I don't argue with them about it. What's the point? They're not going to be convinced; they have to come to it themselves. I think I'm right and they're wrong. They think I'm wrong and they're right. But there's no benefit in us arguing about it. The same goes for some of these things with woke people or people on the far left: there's no benefit in arguing, because they're not going to be convinced through an argument. Maybe you can expose them to some facts and just let it be, and then they have different explanations for those facts, and that's fine for them.

I will tell you, and I have to be a little vague because I'm trying to protect the innocent, but I know someone who is pretty conventionally liberal. Not really woke, because they're a little too old to be woke, but not based either. There are based liberal people; Peter Singer is based left, right? He's not one of those people. He's just a normal liberal guy, and he's—

Theo: Like a 2008 Obama liberal type?

Razib: Yeah, maybe that, but maybe a little bit more liberal. He's an academic, a math guy, and he tried to do the DEI stuff. He was the department chair, and they admitted people from underrepresented groups below their average test scores, and it was a disaster, because math is hard. You can't fake it, and basically it was a disaster for everybody, because all of those students had to withdraw after the first year. They just did not have the ability. I don't have the ability to be a math graduate student either, so no hate on them. But he was pretty depressed about it, because it destroyed their self-esteem, wasted a year of their life, et cetera. It was really stressful for them.

A lot of people in the department were trying to help them out, and it was just a waste, because they lost them as graduate students, lost them as future academics. They just didn't have that possibility. And it made him, I guess I just told you the person's gender, I'm not going to say he's a Charles Murray believer, but at some point he basically said, we can't ever do that again. We can't ever go through that again. I don't know what that says, but that's not a person who sits around like Charles Murray, or like Roko, or somebody who's obsessed with these hate facts or whatever you want to call them. He's not one of those people.

He's just a normal academic liberal person, but he straight-up said, we can never do that again because they don't want to – it was a harrowing experience because they have this theory that, oh, if we just expose them, they will rise up. It crushed the people that they admitted. It crushed their spirit. It crushed a lot of the academic spirit to try to help them out, and they just couldn't measure up, etc. The person also told me that a very, very high fraction of the admitted female graduate students were trans women, and – I guess I can say this.

Basically, this is a math faculty at a Research 1 university, not one of the top, top ones, but Research 1, not trivial. They have totally different standards for males and females, totally different. So for a female that they hire, usually there are going to be 30 males that they would rank higher.

Theo: MIT has more than twice as many male applicants as female applicants, and roughly similar numbers of men and women get in.

Razib: It's difficult. I don't know what to do about it. I'm not psychologically normal, and I have a lot of experience talking to people about it. A lot of people give me advice. I'm like, you’re a pussy, you've never said anything controversial in public in your life, so don't give me advice. Seriously. People are like, oh, you should do this. You should do that. I'm like, you’re a pussy, you've never given any clue to anybody that you have any based views. This is all theory for you while I've had to do this for 20 years.

Theo: Praxis.

Razib: I've never said anything on the internet that's a lie. There are people out there that just straight-up lie, like famous people that I know, and they've done well lying, and that's fine. But I'm not going to take any advice from other people about that sort of stuff because if there's anything I know, it's being able to say – and this is my personality. It's worked for me. Most people are not cut out for it. I think that's true, and you just have to have a really, really high tolerance for people throwing shit at you from all different directions. But it's fine. I'm not – people cannot touch me financially right now, so I say what I want to say.

The issue is most people that become super rich, they're also pussies because they want to be richer. They're like – I'm not rich, but I know people that are like 10 millionaires. They're like, well, I want to get to 100 million. Once they get to 100 million, they want to get to a billion.

Theo: Well, some people who are really rich are based – Marc Andreessen, Peter Thiel, Elon Musk.

Razib: I know them. I don't know – I know Marc really well. The others, like I've met Peter. I've been in the same room as Elon, but they don't say what their real views are. You can guess what they're—

Theo: Marc is like my best friend. He followed me on Twitter once.

Razib: Yeah, but I'm saying you don't know what his real based views are. He's not totally candid. He's a billionaire, and nobody could do anything to him. Well, he fears the social consequences – it would be social. You can't go to those parties. I mean I'm not dissing him. I'm just trying to say he doesn't show his power level, and he's worth billions.

Theo: They could ruin the reputation of the fund his name is associated with.

Razib: Yeah, but he doesn't need that money.

Theo: I guess.

Razib: The fund could change its name.

Theo: a16z.

Razib: It's all about positionality. Peter – everyone knows what Peter believes, who he's had to his dinners, but even he's never explicitly said a lot of things, and he's at a different level than Marc. As for Elon, you could guess, but Elon has been very careful – well, kind of careful I guess, but anyway, whatever. I'm not saying anything specific about what any of these guys have said, and I know them to various degrees, and I know various things that they've said, and they're billionaires, all of them. Elon's the richest person in the world right now depending on what day it is, and yet even they hold back – although I do have to say Elon has really put hate facts out there on X, and I think it takes being the richest man in the world, someone extremely weird and disagreeable, to go there. But it just shows you how difficult it is for most people, right? Because if he wasn't so weird, he wouldn't be doing that. Jeff Bezos never did that, you know?

Theo: Is Jeff Bezos just less weird?

Razib: Oh yeah, he's certainly less weird.

Theo: I guess he took the conventional billionaire path of donating to Democrat causes, and then he, you know, decided he was rich enough and retired to go party on yachts with his hot girlfriend.

Razib: Elon's definitely a weird person. I don't know him personally, but I know people that know him. He's very charismatic. He's very bizarre. He's pronatalist. He's really concerned. Like, okay, I got to go now, but like I will tell you here's the thing that I know about Elon. He fucking wants the Kwisatz Haderach now. He is scared of the thinking machines. Elon is motivated by extreme fear of artificial general intelligence, and people try to read into some of the things he says, but that is what you need to know about him. That is what he's concerned about, and that's what I think I can say candidly from what I've heard. That's legit, you know? That is his lodestar.

Theo: All right. Well, thank you so much for coming on. I really enjoyed talking to you.

Razib: All right. My pleasure, bro.

Theo: Bye.

Theo: Thanks for listening to this episode with Razib Khan. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. All of these are linked in the description. Thank you again, and I’ll see you in the next episode.
