Theo's Substack
Theo Jaffee Podcast
#4: Rohit Krishnan


Developing Genius, Investing, AI Optimism, and the Future

Intro (0:00)

Theo: Welcome to episode 4 of the Theo Jaffee Podcast. Today I had the pleasure of speaking with Rohit Krishnan. Rohit is a venture capitalist, economist, engineer, former hedge fund manager, and essayist. On Twitter, @krishnanrohit, and on his Substack, Strange Loop Canon, at strangeloopcanon.com, he writes about AI, business, investing, complex systems, and more, all topics we discuss in this episode. This is the Theo Jaffee Podcast, thank you for listening, and now, here’s Rohit Krishnan.

Comparing Countries (0:33)

Theo: Welcome back to episode four of the Theo Jaffee Podcast. Today, I'm interviewing Rohit Krishnan.

Rohit: Hey, thanks for having me.

Theo: Yeah, absolutely. So first question, from what I understand, you grew up in India, you went to college in Singapore, and then you moved to the UK. And now that you're in the US, what do you think about each of these locations? Culture? Which one's your favorite?

Rohit: Good question. It's hard to answer. They're all fairly different. Maybe just to qualify: I grew up in India, but I went through eight schools in 12 years, so I moved around quite a lot. In a weird way, the longest I've ever lived anywhere is actually London, which impacts how I see the world a little bit. They all have different pluses and minuses.

India, I left when I was 17. So my impressions from before that are very impressionistic, shall we say. I don't have a well-thought-out point of view about living in India, because I was never really living in India. I was living at my parents' house, going to school, and then I left. And the India that I left in 2002 is dramatically different from the India of today, in every way imaginable. The culture, food, people, spending habits are different, everything is different.

Theo: The culture is different?

Rohit: It's just gotten much more prosperous, the cities have gotten much bigger, it's gotten much more internationalized.

When I left, pretty much anybody who was anybody would think about becoming an engineer or a doctor; that was kind of the dream. It still is the case, but entrepreneurship has spiked up like crazy. The number of people who go abroad and come back is enormous. One of my best friends moved back to India after college because he could see that the opportunities actually existed, enough that you can have a wonderful lifestyle. So if you actually map all of those things out, life is fairly different today from what it was like 20 years ago.

Singapore is wonderful. Singapore is incredibly convenient. It's like living inside a giant shopping mall. That's the way I describe it, because it's uber-convenient and exceptionally clean. It feels a little soulless, because, well, you're living inside a giant shopping mall. But the things that you do not appreciate as much when you're in your 20s, I now appreciate a lot more in my 30s. Landing at Changi is about the best experience you can have in an airport, which is not an easy thing to say.

Moving to London, the UK made me realize where India learned its bureaucracy from. It is pretty spectacular, the amount of paperwork you need to do for all sorts of random stuff. I mean, the US is no slouch in that regard either. But it's an interesting place. It's exceptionally multicultural. And London in many ways is different from the UK as a whole, because it's a cosmopolitan city with people from all over the world, and you kind of get used to that.

And the US, I mean, I'm on the West Coast at the moment, the US is a little bit like a mixture of all of the above, right? In a weird way, it's much less diverse here, if I can say that, compared to London, for example. Yeah, much less. I mean, it's also much fewer people, but it is much less diverse, whether it's in demographics or population or work; everybody seems to be doing roughly similar-ish stuff. Gross overgeneralization, but I'm just giving you impressions.

Food is, I don't know, maybe on par with or slightly worse than London, I find here. The best food is in India. Maybe second best is Singapore, third best is London, and the Bay Area comes somewhere around fourth. Infrastructure here is, I mean, this is well talked about, it's terrible. I despise driving to get anywhere, but I understand why that's the only way you can do anything here, because the infrastructure is just absurdly bad.

I think American culture is interesting. I mean, I've traveled here so much that I have difficulty seeing it with new eyes, but I find that like, I'm still sort of trying to get to grips with how people like to have lives here. The one good thing that I can say about moving here is that people are much more outdoorsy, or maybe I'm much more outdoorsy here than anywhere else, because the weather is lovely. So I actually feel like going for a run.

Theo: The Bay Area probably has the best weather of anywhere on the planet. I mean, I can see why you experience lots of new things moving to the Bay Area, because the Bay Area is different, even by American standards. Like I live in Florida. I just went to the Bay Area recently, and it was like profoundly different, more different than any other place I've been to in America.

Rohit: Right. It is. Yeah. It's a weird place. I mean, New York feels to me very close to London in almost all ways. And it kind of works. Large cities. I lived in DC in the middle for a while. DC is probably closest to, I don't know, I felt it was a very typical American experience in some ways: large, small, clustered, not clustered, et cetera. I love that city. The Bay Area is very different. Very few things here make sense to me. I remember, sort of 15 years ago, whenever I first came here, driving and seeing these giant showrooms, you know, Volkswagen, Porsche, Toyota, whomever. And I said, okay, so I've seen three now, one in every town, and each town has like 30,000 people. How the hell are these guys surviving? I still don't get it. This is one of those big questions that doesn't make sense to me about how the place operates. Or the house prices make no sense to me, because I look around and it's empty land as far as the eye can see, no matter where you look. Very few things here, actually, I look at and understand how these circumstances came about. It's an interesting place.

Reading (6:50)

Theo: As a child, did you read? And if so, what would you say your most foundational books were?

Rohit: Oh man. I read a lot. A lot, lot, lot. With foundational books there's hindsight bias, because the ones I remember now might not have been the foundational books in the first place. I started reading quite a lot. I think I started with the usual: Enid Blyton was quite big, Terry Pratchett was quite big, Agatha Christie, all of those spy novels. Then a lot of books that my mom and dad used to have around, a lot of encyclopedias of all sorts, space stuff, very little dinosaur stuff. Now that I think back about it, a lot of space and a lot of science and lots of math.

Foundational books, I don't know. I'm going to give you an answer, and I don't know if it's true. Maybe Douglas Adams or Terry Pratchett comes closest, because they combine a few things that I really like, right? They're irreverent, which is good. They're hilarious, which is great. They tackle ideas, which is really, really important. And they tackle really complicated subjects with a level of lightheartedness that I feel is important to actually dealing with them. There's a trend where serious ideas have to be discussed seriously, and I liked the fact that they went against the grain. I don't know that they're foundational, but I definitely think about them quite a lot.

Theo: Douglas Adams is foundational to me too. Foundational to Elon Musk as well, according to Walter Isaacson's upcoming book.

Rohit: Yeah. I mean, there are very few novels like that. So there's this thing where you talk about novels of ideas, and there are novels of ideas, but the problem that I've always found is that novels of ideas are written as novels of ideas, which makes them turgid, and I don't think they should be. One of my beliefs is that books should be readable. It's one of the few things that I hold strongly to. Quite often people say, "Oh no, books should be difficult; you should struggle through them." And I kind of disagree with that. I feel like books should be readable.

Theo: So rather than as a child, what would you say your current favorite books and blogs and podcasts and Twitter accounts and other information sources are?

Rohit: I stopped actively tracking these maybe a few years ago, which has weirdly made it better. With that caveat, I'll tell you what I regularly read. I look at Arts & Letters Daily every morning; I'm looking for new sources, new places to read stuff. There's a bunch of Substacks that I read. I read Erik Hoel. I occasionally read ACX. I read Marginal Revolution. Together, I think they give me roughly enough of the zeitgeist of whatever is going on in the world.

Books are harder. There are a few books that I keep going back to. Herman Hesse I go back to quite often, the Glass Bead Game specifically, because I really like it. And Hofstadter, whom I first read in college, I go back to quite a lot. I'm rereading his I Am a Strange Loop at the moment because, again, they have an interesting way of playing with ideas that I find is lacking elsewhere. Fiction, I mean, my friend published a sci-fi book recently that is still on my mind, called Exadelic, which is, again, a little bit of a novel of ideas, and I really like it. I think it's done quite well. You said podcasts; I don't actually listen to too many podcasts. I read the transcripts of Tyler [Cowen]'s podcasts relatively often. And I listen to No Such Thing as a Fish. If anybody doesn't know it: there was a TV show in the UK called QI, Quite Interesting, hosted by Stephen Fry, which was about interesting facts that exist in the world. They would talk about them and make jokes. Long-running TV show. He left a few years ago, handing it over to Sandi Toksvig. The researchers on that show started a podcast discussing four facts that they love from the last seven days, roughly. They've been going for ages, maybe eight or nine years. That's one that I listen to because it's funny and it's about facts. Coming back to the Adams-Pratchett theme, I can recommend it if you don't know it.

Developing Genius (12:36)

Theo: So your day job is a VC, correct?

Rohit: It was until recently. I left a little while ago to try and figure out if I want to build something, which is relatively under wraps, but that's one of the things that I'm working on at the moment. I invest a little bit personally, but not enough to call myself a VC.

Theo: Well, this year is like the year of the builder. I mean, I'm relatively new to this part of Twitter, but would you say that the discourse around building and being a builder is more prominent now than it has been in the last decade or so?

Rohit: Definitely. Yes. With maybe two thoughts. Over the last decade there's been a crescendo slowly building, where building is seen as a sensible, good career move for a larger and larger number of people, both through direct evidence of people having done so and done well, and through indirect encouragement from people who basically tell them that this is how you should think about your life and spend your time. Put those two things together and there's been a slow buildup where you're steeped in this thought process, where if you are building, it's not seen as a weird thing to do all of a sudden. It's still at the margins, but that's definitely the case, which means that over time, there have been more and more people interested in creating something or building something.

The funny thing that I think about quite often is that when you consider all of the large, prestigious jobs that want to encourage people to join them, they often adopt some of this language. They'll say, "Oh, you'll get freedom to pursue your own thing," or "you'll be able to build stuff inside the organization." So clearly, it exists in the zeitgeist as a positive thing for people to have done. It kind of exists regardless of the fact that it's still not the default path for most people to take.

So put those things together. Yes, the number of builders is probably now at the highest it's ever been. I'm sure there were peaks before, where everybody jumped on the SaaS train and tried to build, or everybody jumped on the mobile train and tried to build. But the number of people who consider doing it, the bottom of the pyramid, if I can call it that, was narrower in those times compared to today, when it seems to be getting broader and broader. You can argue there are a lot of outcomes or eventualities from that, but it definitely seems like the way these things play out.

Theo: In your Substack, Strange Loop Canon, you write a lot about building and different types of building, and different types of creative work in general. So yesterday or the day before, Avi Schiffman tweeted, "I would pay so much money for a single service that completely handled my basic needs. So I could focus." He listed laundry, three-times-a-day meal prep, personal trainer, cleaner, therapist. He said, "Turn my house into a monastery. This should be VC value add. Founders should treat themselves as athletes." You've written a lot about grants as well, in From Medici to Thiel. So do you think that genius grants should go more in this direction of a summer camp for founders, or should it just be an unconditional cash transfer?

Rohit: I was having this conversation with somebody, I think yesterday, where they were asking: if the Thiel grant were actually an investment, would it have had the same ROI, so to speak? My instinct was yes, but I've been thinking about it over the day and I'm not so sure anymore. The thing with these grants is that ultimately their heterogeneity is what gives them any kind of power at this point, because they're a search function: trying to figure out whether there are people missing from the traditional modes of doing whatever, and rescuing them from that so they can go off and try something else.

The question in my mind is, I'm exceptionally suspicious of summer-camp-ish things, because those things exist insofar as you say either "we will teach you something," so there's a curriculum or essays or lectures or something to impart a piece of information, or "all of you need to come together, stay in one place, and learn from each other." The latter has more possibility of success for some of these things than the former, because the former only works in highly legible fields, where the outcome is measurable and you can push people in the right directions at the right time.

So I think an unconditional transfer is still probably the best way to follow through with anything, just because the entire premise behind the selection process today is that there is something there that they will figure out, and we just need to remove the obstacles in the way. If you say that there's possibly something there and they might figure it out, but we will have to help them and remove the obstacles, that's an exponentially harder problem to solve. And I'm not entirely sure that those two things are quite equivalent.

Theo: With the Thiel Fellowship, is a hundred thousand dollars enough in this day and age? That will get you like one year of middle-class living in San Francisco.

Rohit: I mean, for his demographic, a hundred percent, right? A hundred thousand dollars is a lot. It's easy to pooh-pooh it by saying, yeah, you know, if you live in the middle of SF, your rent would be four grand or whatever. But on the other hand, he's giving it to 18-year-olds who are primarily interested in doing something else with their time. So, I don't know, schlep out like 30 minutes in any direction and choose your living conditions a bit differently. I feel like that is an artificial problem that you create. It's a little bit like being in New York when somebody says, "Hey, you only get a hundred thousand," and replying, "A hundred thousand is not enough, because I want to live in this part of Brooklyn or the Upper East Side." But you don't need to. The money is a way to ease your problem of getting into this. If you want the same level of prestige living, and I realize this sounds pejorative, you're kind of selecting yourself out of it. Think about what a hundred thousand dollars is supposed to do: cover your costs, give you enough to buy a laptop or something, and meet your basic needs for a year. It's still pretty high for that. I remember when I got a scholarship to go to college, my scholarship was about 500 bucks a month on top of room and board. Inflation is a thing, but it's not a 10X thing. This is fine. I think a hundred thousand dollars is actually quite generous. Don't you think so?

Theo: In your article Rest, you talk about sabbaticals, how there have been many people throughout history who have taken sabbaticals off from their boring job to produce some extraordinary creative work, whether it's Newton discovering calculus or Einstein discovering the photoelectric effect. So what should the day of someone who's on sabbatical look like? Should they be actively pushing themselves constantly to work on their project or should they kind of just be resting and seeing if the work will come to them naturally or a combination on different days?

Rohit: I think it's impossible to be prescriptive here. The entire point of a sabbatical is that different people will do it differently. And we don't know what works. If you take a six-month sabbatical and chill out for five months, is that better or worse? It's impossible to measure, because we don't really have enough counterfactuals. We have to rely on the individual to make that decision themselves. Occasionally poke them, have a conversation, but it's unnecessary to be prescriptive. The entire pressure that they might be getting away from in the sabbatical period might be one where you're required to produce a large amount of output in a small period of time or under tight deadlines. That is the kind of yoke that you're running away from. Some people might already have a very clear idea and an agenda of something that they want to go toward, in which case a sabbatical just becomes cleaned-out time in which they can go after that. Whereas a lot of people might just not know. This might be the first time that they have ever had time to actually sit back and read a book or think or relax.

I don't know about you, but when you talk to friends, the number of folks who say they do not have time to do X—for any value of X: work out, read, travel—is tremendously high. And some of these might be artificial constraints, in the sense that they could have made it happen if they had prioritized it enough to put it in their top two or whatever. But the entire purpose is to take away that stack-ranking necessity, and for you to be able to do what you think is most productive or useful or interesting or just restful in a short period of time. And maybe at the end of it, you go back to your old life. That's fine. You just end up rejuvenated, so to speak. Academics do this all the time. Sometimes it's for new projects. Sometimes a lot of those projects might not work out. Sometimes it's for rest.

Think about people who go into business school. A proportion of them go back to their old careers. A proportion of them—I knew at least a few people who probably had almost no real benefit, if I can call it that, from coming to business school. You spend a couple hundred thousand dollars, and you're going back to the life that you could have had without coming here. But they value the experience. They value what they did. They value the network in an intangible way, if not a tangible way. And that's good enough because it gave them a breathing space of a couple of years to think about a variety of things that they would not have been able to otherwise. I think that's sufficient. I don't think we need to be prescriptive about this.

Investing (24:08)

Theo: On the topic of business and specifically finance and investing, in your recent article, The Big Sort, you talked about how individual investors will no longer be able to get rich by simply buying and holding big tech companies. So what do you think would be the best thing for individual investors to do over the next few years?

Rohit: Oh, man. I wish I had a good answer for this. This is something I'm still working on for myself. I still think the time of value investing is coming back into vogue. And in some ways, what that means is that you can still probably get rich over the long term by doing the sensible things: hold a diversified ETF, some in bonds and some in stocks. There's a well-laid-out path that you can mix and match in any combination, and it will take you to roughly the same place, with mid-to-high single digit IRR returns over a large enough period of time, which is good enough for everyone. That's how life used to work. My point there was that there was a world where you could buy Apple and expect it to 10X. And that easy trajectory, where you have a path laid out into the future, does not really seem to exist anymore. There aren't that many, call it, hundred billion dollar companies that I can easily see becoming trillion dollar companies, or $500 billion companies that I can see becoming $3 trillion companies.
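The gap between "mid-to-high single digit IRR" and a 10X stock is easy to make concrete with compounding arithmetic. A minimal sketch; the rates and horizons below are illustrative, not figures from the conversation:

```python
def compound_growth(annual_rate: float, years: int) -> float:
    """Growth multiple of an investment compounding at annual_rate for years."""
    return (1 + annual_rate) ** years

# Illustrative: sensible index-style returns vs. the rate a 10X decade implies.
for rate, years in [(0.05, 30), (0.07, 30), (0.26, 10)]:
    print(f"{rate:.0%} for {years} years -> {compound_growth(rate, years):.1f}x")
```

Compounding at 7% for 30 years gives roughly a 7.6x multiple, so the "sensible things" do get you rich, just slowly; 10X-ing in a single decade requires roughly 26% a year, the kind of trajectory he argues no longer sits in plain sight.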

Theo: Tesla?

Rohit: Tesla was already pretty large. First of all, they're incredibly overvalued. Their margins are under attack. Their sales are growing, but Tesla is a 20-year-old company that has already had its success story, and every single other automaker is gunning for it. Maybe it'll still succeed, but I don't know. It doesn't seem as easy as it was to buy Google in the early 2010s or Facebook in the mid-2010s.

Theo: Charlie Munger said something about Tesla to the effect of, it's a wonderful company and Elon Musk is a genius, but people think that he can just cure cancer. And the stock seems wildly overpriced. Keep in mind, this was years ago that he said this. And since then it's exploded. And he said something like, I would never bet on it. I would never buy Tesla stock. I sure as hell would never short it.

Rohit: I don't short stocks anyway, because unless you're a professional, it takes too much time, effort, and energy to short properly. You can lose your shirt doing that. One of my first really stupid trading mistakes was to short a currency pair, I forget which one. This was back in college, and I lost a huge sum of money trying to do it, more than I had made in the previous two months of trading. Way more. I thought I knew the theory, and I did, but the practical experience of seeing it happen, where in a few minutes you get wiped out, is something new. Since then, I've been much more careful.

Investing is one of those things where your job is not necessarily to maximize your returns. Your job is to try to make sure that you get enough returns according to the risk tolerance that you have over whatever time period applies. There are always going to be companies, stocks, investments, themes, bonds, currency pairs, commodities that slip through the radar because you don't know exactly how to handle them. And that will cause you to—"lose money" is not the right word—to leave better trades elsewhere on the table. Your job is to say: I know these things relatively well, or I don't care about these things at all. And within this universe, considering the amount of time and effort I'm putting in, how much return am I trying to get? You can't solve the world. You can't look at large caps, small caps, private, public, fixed income, global stocks for that matter, macro moves, currencies, commodities. There's too much. You can't follow all of these things simultaneously. The best you can do is to say, I'm a specialist in this because I understand it a little bit better. Like the circle of competence that Buffett and Munger talk about. And within the circle of competence, you can say, oh, okay, here are some bets I want to make. I mean, Munger said they invested in BYD, right? And that did pretty well, despite China, despite the macro risks, et cetera.

So I don't know. I feel like for an individual investor, the question is almost going to be: you want to put a bit of money in Tesla, that's fine. I don't know what Tesla's market cap is today, but is it going to be a $5 trillion company? I'm not sure. They've missed a huge number of milestones that they set for themselves, which were ridiculous in the first place. They're clearly the leader in EVs here, but on the other hand, I know the Renault, Ford, and Chevy offerings work really well. So there's enough competition now across the price points that it is not hard to imagine someone else building a commanding lead and presence in what is essentially not the highest margin business in the world. So you have to bet on the fact that they will continually maintain their lead in the automotive sector, continually make inroads into the new-sales pie chart, however you want to put it, make a ton of money off the Superchargers, do autonomous ride-hailing and make a ton of money off that. I don't know. There are too many ifs there. And all of the assumption sets that I've seen people lay out, including the dumb one from ARK, just do not pass the smell test. That is way too many assumptions. At that number of assumptions, we can talk about 15 other companies that could also go from 300 billion to, whatever, 3 trillion.

Theo: So there’s stocks and then there’s Bitcoin. Bitcoin, despite falling off significantly in the last couple of years, is actually up 55% year to date, which is more than all but 20 companies in the S&P 500. Some people, most notably Michael Saylor, who's the CEO of MicroStrategy, are still very bullish on Bitcoin because they think that in the near future, it will become the hedge against inflation or it will become much easier for individuals to use as currency. So what do you think about Bitcoin in 2023?

Rohit: Every narrative about Bitcoin is hopelessly confused. If you want to buy it and hold it, that's fine by me. But it's not been a hedge against inflation, which we saw. It's not used as a currency. It might be at some point; people say that with the Lightning Network it'll end up being utilized. Okay, I don't know, show me. I want to be able to pay somebody with Bitcoin easily and see that happening the world over. Why doesn't it happen? Because as an asset, its price is up 55% year to date. That's not a currency. If I needed to send you X value, I wouldn't send the value in Apple stock, because it's volatile and can move around, and it's not easy to do. I would do it in dollars, which is what a currency is. A lot of the conversations elide the differences between what a currency is and what purpose it serves, versus what an asset is and what purpose it serves. They try to mix these things together in the hope that if you mix them enough, something new and interesting will come out. I haven't seen it.

Clearly, Bitcoin has captured enough market share in people's minds. It's sufficiently a Schelling point that there is enough money roughly floating in and around and through it. So overall, for the people who purchased at the right time, it's good. But A, I don't hold any, and I don't plan on buying any at any point, at least as of now. And B, I'm not entirely sure what I would be buying if I were buying. I don't have a point of view as to what the benefit to my portfolio is of holding Bitcoin. There is an asset allocation answer that says if you model it out, it's uncorrelated, but that just turns out not to be true. It's fairly well correlated with high tech, or at least it was until very recently. I haven't looked at it in the last few months.
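The "uncorrelated" claim is checkable rather than a matter of opinion: take two price series, convert them to returns, and compute the correlation of the returns. A minimal sketch; the price series below are made-up numbers standing in for real market data:

```python
import math

def daily_returns(prices):
    """Convert a price series into day-over-day log returns."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def correlation(xs, ys):
    """Pearson correlation of two equal-length return series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Hypothetical prices: an asset and a tech index that tend to move together.
btc  = [100, 104, 101, 107, 103, 110, 108]
tech = [50, 51.5, 50.8, 52.9, 51.9, 54.0, 53.5]

r = correlation(daily_returns(btc), daily_returns(tech))
```

With these made-up series the correlation comes out strongly positive; swapping in actual return data is how you would test the diversification story for yourself.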

I can make no fundamental case for why it is useful beyond the fact that a lot of other people think it's useful. That feels like a fairly flimsy foundation to stand on. Maybe it's efficient. Maybe it ends up continuing to exist as a digital gold equivalent that a lot of people buy and hold, but I don't know why I would do it. There's no utility here. Even gold has some utility, even though I disagree with a large part of why that's considered useful as well. Maybe once it has existed for 200 years, it becomes like gold, and you say, "Oh yeah, Bitcoin, you're going to have to have that, because it exhibits deep enough liquidity in the markets, and the holders are diversified enough that you don't need to worry about it." Once it exhibits a lot of those qualities, I'd be more comfortable buying into it as a commodity, but I don't think it's there yet.

Contra AI Doom (34:27)

Theo: On the topic of AI, you wrote this article, "Artificial General Intelligence and How Much to Worry About It: presenting the strange equation, the AI analog of the Drake equation," which I think is your magnum opus and the best argument against doomers on Twitter. But Eliezer Yudkowsky did not think so. When he saw it, he reacted dismissively, saying, quote, "Sigh, good old multiple stage fallacy. Did you really just assign an only 80% probability to superintelligence being possible?" So why do you think he reacted so dismissively? And if you had a format longer than a tweet to respond to him, what would you say?

Rohit: Eliezer has written in the past about this thing called the multiple stage fallacy, which effectively means that if you break anything down into a sufficient number of parts and assign each of them a less than hundred percent probability, then multiplying all of those together will get you a much smaller number. Take any event X, break it down into 10 components, assign each component a 90% probability of happening, and 0.9 raised to the power of 10 will be much less than 0.9 itself, which is true. But I never understood why that's a fallacy in the first place. It's a fact that things do have multiple stages.

You can argue whether there need to be more or fewer stages. If you think stages are more correlated with each other, then you should increase their combined probability. Some can be as high as a hundred percent, as I wrote in the article. So I think he's fundamentally mistaken about this, as he is about other things. Just because he calls it a multiple stage fallacy does not make it a fallacy. Think about any engineering problem you might solve: if you're building a wind turbine, or building jet engines, or doing software for that matter, you do have probabilities accumulating over multiple stages on the way to whatever the end result is likely to be.
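The arithmetic under dispute, and the correlation adjustment Rohit describes, fits in a few lines. A sketch; the stage probabilities below are illustrative, not the ones from his article:

```python
from math import prod

def chain_probability(stage_probs):
    """Probability that every stage of a multi-stage event succeeds,
    assuming the stages are independent."""
    return prod(stage_probs)

# Ten independent stages at 90% each: the product falls far below any one stage.
ten_stages = chain_probability([0.9] * 10)   # roughly 0.349

# Treating correlated stages as near-certain given the others (probability 1.0)
# raises the combined number, which is the adjustment he describes.
adjusted = chain_probability([0.9, 0.9, 1.0, 1.0, 0.95])
```

Whether this counts as a fallacy turns entirely on whether the stages are genuinely independent and genuinely necessary; the code just makes explicit what each modeling choice does to the product.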

Now, I think part of the issue is that Eliezer has a particular form of doom that he's incredibly invested in for logical reasons, starting from his assumptions. Because his assumption is that a sufficiently optimizing, more powerful AI will eventually result in FOOM and basically all of us getting killed by some version of diamondoid bacteria. Once that is your core assumption, then obviously you dislike arguments saying that in order to get there, you need to solve these three, four things. And I don't see why. If he wants to dismiss it, that's fine by me, but he's wrong. There's no question about it.

Theo: This is similar to the arguments George Hotz brought up.

Rohit: I haven't listened to it because it was too long and life's too short. But at the same time, if it is, great. To me, it's like, guys, listen, you had a theoretical argument starting from whatever, 15 years ago, saying sufficient optimization in this particular format will lead to doom. It will lead to the emergence of some version of sentience, or at the very least goal-oriented, independent action. None of it has happened. So if none of that has happened, and the closest you can do is to point to GPT-4 and say, "no, no, no, there's a shoggoth inside it," surely you should be revisiting your assumptions. It's a question of iterations and recognizing when something is wrong. It's not about simply optimizing. It's not a case of 4 might not have it, but 5 will, or 5 might not have it, but 6 will. That's moving the goalposts. I feel like you need to break it down a little bit and actually look at the components rather than thinking of it as a mathematical theorem that you have to prove once and for all. I think that's just a category error. It's thinking about it wrongly.

Theo: I think Eliezer doesn't see it that way. I think he sees it more as a convergent outcome: the laws of the universe favor more intelligent creatures, which are better at optimization, to defeat, destroy, and replace less intelligent creatures. He always likes to say, you can't predict the path, but you can predict the outcome. This was basically the crux of the George Hotz versus Eliezer debate. Eliezer was saying this will happen, and Hotz was insisting that you can't just skip over the implementation details, which are critical to how it will end up happening.

Rohit: I disagree with the premise, and I agree with Hotz that implementation matters tremendously. Both of those are true. You can't just say the universe prefers intelligent creatures. That's as close to a theistic belief as it gets. Look around: that's not true. Humans are the one data point that he has to support this particular theorem. That's it. It's not exactly like there are thousands of data points about how intelligence is preferred, from which we could actually understand it. And even in this one data point, humans did not optimize just for intelligence. We optimized for intelligence insofar as it is helpful for survival. It's not a direct march of becoming more and more intelligent over time in order to defeat everybody and everything around us. That's just not how it actually worked.

Even if that's how it worked, there is no reason to suppose that the same optimization that happened in the natural world will be the same process that happens when we train algorithms intentionally to do certain things. You can believe it, you can claim it, but it doesn't make it true.

Theo: Eliezer would say, it doesn't matter. It's instrumentally convergent.

Rohit: I don't know what that means. Instrumentally convergent is one of those phrases that gets thrown around as if it's a trump card, but it's undefined in any meaningful sense. It's like the claim that the moral arc of the universe bends in a particular direction. Again, you can believe it, but do we know that? I don't think so.

The most powerful AI that we have ever created so far, call it GPT-4 in the language model category, do we feel like it's instrumentally convergent? Look at its behavior. If anything, it's too human. It refuses to answer a bunch of questions. It acts like it has a moral high ground, which is kind of frustrating.

As a supposition, I'm totally for it. You should believe whatever you want to believe. I'm for plurality of belief in that particular instance. But it doesn't mean it's true. As I said, all of these are based on a bunch of axioms and then you're building on top of the axioms. All the hard work is being done by the axioms. If you believe in instrumental convergence, and if you believe that sufficient optimization will create something of sufficient intelligence, then almost by definition, you've guaranteed the end outcome that you will have a super intelligence that you cannot control. It's a simple logic that once you accept the axioms, the outcome comes through. I'm saying that there is no reason to accept those axioms as true. And considering what we have seen so far, in fact, every piece of evidence from our efforts shows that the axiom is not true.

The closest it comes to is to say that Sydney got snippy with some humans as an LLM, or that we do see goal-seeking algorithms trying to find shortcuts in order to achieve their goals. There are these very small things that we have been able to find. So what? There's no perfect software that we'll get right on the first try ever. I don't see why we would expect this one to be different.

Theo: But what if, if you don't get it right on the first try, everybody dies?

Rohit: But we haven't died. That only works if you accept the premise that if you don't get it right, it'll recursively self-improve and go FOOM. The entire point of writing the strange equation was exactly to lay that out. If you don't think it's going to be fast, you don't need to worry, because you can see it, you can stop it, you can intervene. You can do whatever you want to do in the middle. You can nuke data centers, like Eliezer wanted to. If you don't believe it's going to recursively self-improve, it doesn't matter. You can look at it, see its behavior, and wait until the next one is trained to go after it. You have to accept all of the premises; if even one premise does not hold, you don't need to worry about this FOOM thing at all. And if there's no worry about FOOM, then we don't need to worry that we accidentally made something and it'll kill us all. That's not how anything works. We would have to create something of sufficient power, knowledge, intelligence, and utility that it is able to intentionally or unintentionally intercept every part of the world economy or economic system or biological system. The number of things it needs to be able to do is so astronomically high that assuming it away by saying, "Oh, superintelligence will figure it out," is an enormous hand wave.

Magnus Carlsen is way smarter than me in chess as well as other things, but he won't be able to persuade me to do something dumb or commit suicide or figure out how to dupe a gene lab somewhere to create COVID plus plus plus. That's not how anything works.

Theo: I’ve talked about this with Bryan Caplan before, where he told me he had lunch with Eliezer Yudkowsky, and Eliezer was trying to convince him that a superintelligence would be able to persuade him to kill himself. Bryan said, "I don't think that there is a single combination of words in the English language that a superintelligence would be able to say to make me kill myself." I mean, is there? It sure doesn’t seem like it.

Rohit: If you wanted to do it, you would have to do something like, I don't know, take someone else hostage or you have to do a bunch of other things in order for those words to have an effect, right? Like if you do not kill yourself, I will end up nuking the Eastern seaboard. Now we are kind of getting somewhere, but you better have the ability to back it up because otherwise, what are we talking about here?

The Future of AI (46:26)

Theo: So, do you think there will be some kind of true human-level AI by the end of the decade or maybe the next 20 years?

Rohit: I find timelines hard, because I don’t think they’re particularly useful, but I think we'll get there. I wrote an essay called "Building God" where I pointed towards a way whereby we might have human-level intelligence—or intelligence is a bad word, we’ll say capability—in an AI system that will enable it to do a bunch of the things that humans can do. Whether having that is equivalent to it developing a personality, sentience, consciousness, I am unclear because I don't see how that just automatically emerges if you're using the same parameters that we are using today or the same processes and methods that we are using today. But capability wise, I think we should be driving pretty close to getting there, even if purely through better scale, better optimization, better memory, better ways of doing inference, better ways of doing tuning, better ways of self-learning recursively, self-improving, etc. I think we might have a good shot.

Theo: You clearly like Douglas Hofstadter, you mentioned him earlier. You named both this blog post and your blog itself after him. So have you seen the fairly recent interview where he said, pretty much, "I changed my mind about neural networks. I'm now freaked out about AI, either about AI killing everyone, or just replacing everyone and making us like cockroaches to humans"? So what do you think about this sudden flip?

Rohit: I think it's an interesting one. I appreciated that quite a bit, because it's rare to have people who change their minds in the first place. I think part of the reason for his flip is that the idea of the strange loop, the recursive loop that enables us to create a self from within, is what he's transposing from his own theories, as well as the old cybernetic theories, to today's neural networks. He's assuming that autoregressive networks with some level of backpropagation are equivalent in some way to creating that strange loop inside. I don't know if he's right or wrong, quite frankly. He might be right. If he is right that that is the way consciousness is supposed to emerge, then in a way, that's great, because we will have evidence of it pretty soon, if not already. We have no evidence today that a consciousness has emerged in any meaningful sense in any of the GPT-4 variants we have seen so far.

I understand where he's coming from. I appreciate that it's logically consistent with his idea of what a strange loop is. I disagree that, because it comes about, it actually predicts anything else. There are two leaps being made there. Leap number one is that the recursive nature of existing neural networks means they are able to develop something akin to consciousness. Leap number two is that once you develop some form of consciousness, plus heightened abilities, there will be a natural kind of warring: the Yudkowsky argument that more intelligent beings will supplant less intelligent beings, and therefore take over from humanity. I'm not sure I buy either of these arguments. The latter has more validity than the former, in my mind at least.

Because tomorrow, if, for example, an alien civilization visits Earth, and they're much smarter and more capable than us, would I think the percentage probability of human extinction goes higher or lower than it is today? Higher. That makes logical sense. But that's assuming a bunch of things about their intelligence, their capabilities, their ability to execute, et cetera; there are enough questions in there to make even that uncertain. I mean, countless movies have gone off that exact premise. But then you do need to break down the term "intelligence," which combines all of these different things. The type of intelligence you see in GPT-4 is different from the type you see in AlphaGo, which is different from the type you see in a cat or a tiger, which is different from the type we see in us, or in dolphins, whatever. These are not always commensurable in the same way. So to place them on one scale and extrapolate is probably a pretty bad way to make any kind of decision. But yeah, from Hofstadter's interview, those are the two leaps he is implicitly making that I'm fairly uncomfortable with.

Theo: Do you think the world will look radically different by 2030?

Rohit: No, I don't think so. Not radically different. The world has inertia. It's slow to change. Most of the world still looks the same way it looked in the 1990s. So I'd probably bet on the base rate on that one. The world for us might look radically different, though.

Theo: What does that mean?

Rohit: If you're hyper-networked, tech-savvy, living in a large city, then yeah, your lives can look radically different. But does your life look radically different in 2024 than it does in 2014? I don't have a good answer to that question. In some ways, intuitively, yes. I mean, you have faster internet, cell phones are better, social media is there. You can work from home. In other ways, it's like, eh. I mean, cars are better. You still drive. Trains are better. Flights are better. You still fly. You go to the same vacation spots. You eat the same kind of food. You drink the same Mai Tai.

Theo: Flights might be the unprincipled exception there. I think they've probably gotten worse in the last 10 or 50 years. They've gotten cheaper and coverage has gotten better, but the experience itself.

Rohit: I think the planes are better. Having been on a couple of new planes, they're better. But yeah, the experience is much worse, though we should be thankful that airlines operate at such low margins. And there's heightened demand, which doesn't help us as passengers. Will we all be living like the Jetsons? I doubt it.

Theo: What about by 2050?

Rohit: I hope so, man. The one thing that I think will make a radical delta is the introduction of robots into regular life. I am optimistic that that will happen. And by 2050, hopefully, it should start percolating enough that, just like electric vehicles had a slow rollout through the world, robots will probably, hopefully, have a slow rollout through the world during this decade as well as the next. And by 2050, we should be in a decent-ish place. I'm highly hopeful for that, because I feel like a lot of the things you would want them to do, they can actually start doing, provided the cost works out. But that's a mass-market production question, and cost can come down over a period of time.

Theo: Do you think Tesla is poised to win humanoid robotics? Or is it just way too early to tell?

Rohit: It's way too early to tell. And I wouldn't particularly bet on them, only because they have a day job, and it's not building humanoid robots. I feel like it's more likely going to be a startup that came out of nowhere, that we haven't seen yet, that wins. I mean, in some ways, it doesn't need to be humanoid robots either. If I think about my house and the things I want them to do, whatever, laundry, dishes, all that kind of stuff, does it need to be humanoid? It'd be nice, but only because it gives it more mobility through the spaces, and maybe it's less freaky. But I feel like we should be able to get there in the next 10 years.

Theo: Do you think AI will be able to get to real intelligence, quote unquote, whatever that means, through scaling? Or do you think it needs some kind of fundamental breakthrough?

Rohit: I think it probably needs some kind of fundamental breakthrough. I feel like we're already reaching the point where scaling helps, but the way to make truly useful things out of it is actually to combine multiple experts together to do specific tasks far better than the largest model can. I was talking to someone about this yesterday: I used to use GPT-4 for everything, but for specific tasks, if I want to go really deep, it's starting to get better to train particular models. My bet is not on one model to rule them all in the future, but more likely multiple models working together to do a range of tasks properly. It's a supposition, though; I don't have empirical evidence to prove that it is the case, beyond the fact that people say GPT-4 is actually an MoE (mixture of experts) in the first place. But we'll see how that goes.
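[Editor's note: a toy sketch of the "multiple models working together" idea. The specialist names and keyword routing here are invented for illustration; real mixture-of-experts systems route between experts inside the network with a learned gate, not with keywords.]

```python
# Hypothetical specialist "models": stand-ins for separately trained models
# that each handle one kind of task better than a single general model.
SPECIALISTS = {
    "code": lambda task: f"[code-model] {task}",
    "math": lambda task: f"[math-model] {task}",
}

def route(task: str) -> str:
    """Dispatch the task to a matching specialist, else fall back to a generalist."""
    for domain, model in SPECIALISTS.items():
        if domain in task.lower():
            return model(task)
    return f"[generalist] {task}"

print(route("Fix this code bug"))  # handled by the code specialist
print(route("Plan a vacation"))    # falls through to the generalist
```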

Theo: Do you think AGI and eventually super intelligence will, in some sense, obsolete human labor, human creativity, human creation in general?

Rohit: Leaving aside the superintelligence part of it, hopefully. I mean, labor for sure. Creativity and our urge to create is an innate desire. We don't create necessarily because we're the only ones who can do it. We create because we feel like we can create something amazing for ourselves and others. So labor, I hope so. Because what's the point of progress if we still have to keep doing the drudge work we had to do before? None of us like plowing our own fields or planting our own corn or building our own furniture or houses, for that matter. I don't see why clicking through an Excel sheet is the exception to this particular progress curve. I'd rather it catch on everywhere.

Theo: But if you have an AI that can, let's say, make a painting better than a human painter, would the human painter still want to learn all the techniques behind painting and spend time painting, coming up with new strategies?

Rohit: It's a different arms race. Human painters compete with other human painters, even though they're not as good as them. And not all of it is competition in the first place. Some of it they create just because they like it. I'm sure you have friends, I have friends who love painting, not because they're going to be the world's best painters, but because they like painting. They like creating something by themselves. It's the IKEA furniture effect. Sometimes you like it more because you made it. So I don't particularly see that going away.

And if AI becomes able to do paintings in a way that surpasses the artists of our day, then what that means is the artists of our day will have to find new methods and mediums of creation that the AI cannot easily match. To give an instance, if AI is able to create digital art, or art in general, that is much better than what people can create as of now, then perhaps the painters primarily focused on creating, I don't know, neoclassical sorts of paintings are probably the ones that are out of a job. At the same time, you might want to go towards more abstract paintings. You might want to create not just paintings, but works of art with 3D layering, using multiple different types of materials. There are ways to expand the horizon here. And I would say that would be really good, because this is how humanity evolves.

I mean, we don't all sit around painting the same versions of the Virgin Mary and the Christ anymore, because it used to be that the church was the primary patron, and that's the kind of painting they wanted. And as an individual, it didn't matter if you were Da Vinci or Michelangelo; those were the paintings that were given to you to paint, and then you had to put your individual stamp on top of them. Whereas now you're able to paint your own thing. Will that go away? I'm optimistic, but this is a hypothetical.

Theo: Meaning you're optimistic that it won't go away and people will continue painting?

Rohit: I'm optimistic that the fact that a computer can do it better is not sufficient reason for the entire field to vanish, but rather for the field to change.

Theo: Yeah, that makes sense. What do you think about open source for AI? Do you think we'll have an open source model that can beat GPT-4 by the end of the year?

Rohit: Yeah, later this year, I think so. Not super high confidence, but I think so. I think the biggest boost for a lot of the research so far has been that open source models, code, and discussion actually exist and are incredibly vibrant. And this stands for advances in capabilities: building larger models, smaller models, more capable models, specialized models, as well as figuring out what type of training is actually better. Do you want to do pre-training versus fine-tuning versus LoRA versus QLoRA? There are a lot of questions here, and the only way to answer them is for people to actually get in and explore. And I think open source is a fantastic way of doing that.

I am a reasonably big fan of open source software as well. I think it makes sense that in a situation like this, you'd want to have as much of the code, weights, whatever, open source as possible, so that we can play with it and understand it. Regardless of where you stand on the spectrum, you should want it, because ultimately, you should want more people trying to understand this thing which everybody seems to claim is not understood as of today. That's the only way you can actually get to grips with it and build better solutions.

Theo: Do you think that brain-computer interfaces will allow humans to eventually compete with sufficiently advanced AIs?

Rohit: I don't know. I think I don't know at all about what it would take to create a strong enough, good enough brain-computer interface, which makes me confused as to what an answer for that might look like. I mean, there's a theoretical answer saying yes. But practically, I don't think the brain is like an easy computer in the sense that you figure out how to merge it. I think it's a fairly complicated piece of kit. And I'm not entirely sure what a BCI might look like. That is beyond a very bare minimum of helping people with disabilities kind of get back to normal or make minor adjustments to their lives or have them see colors or something, as opposed to sort of directly interfacing in the sense that you can have complex, high-bandwidth discussions and conversations that both boost you as well as a computer and work together.

Theo: Yeah, makes sense.

Theo: All right, so I guess that wraps it up. So thank you so much, Rohit Krishnan, for coming on the podcast.

Rohit: My pleasure. Thank you for having me, and sorry I have to dash.

Theo: Thanks for listening to this episode with Rohit Krishnan. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. Thank you again, and I’ll see you in the next episode.
