
#19: Samo Burja

Superintelligence and History, Ideology, and 21st Century Philosophy

Samo Burja is a writer, historian, and political scientist, the founder of civilizational consulting firm Bismarck Analysis, and the editor-in-chief of governance futurism magazine Palladium.

Chapters

0:00 - Intro

1:06 - Implications of OpenAI o1

10:21 - Implications of superintelligence on history

35:06 - Palladium, Chinese technocracy, ideology, and media

1:00:44 - Best ideas, philosophers, and works of the past 20-30 years

Links

Samo’s Website: https://samoburja.com/

Bismarck Analysis: https://www.bismarckanalysis.com/

Palladium: https://www.palladiummag.com/

Bismarck’s Twitter: https://x.com/bismarckanlys

Palladium’s Twitter: https://x.com/palladiummag

Samo’s Twitter: https://x.com/samoburja

More Episodes

YouTube: https://tinyurl.com/57jr42wk

Spotify: https://tinyurl.com/mrxkkhb4

Apple Podcasts: https://tinyurl.com/yck8pnmf

My Twitter: https://x.com/theojaffee

My Substack:

Transcript

Theo Jaffee (00:00)

Welcome back to episode 19 of the Theo Jaffee Podcast. Today I had the pleasure of speaking with Samo Burja. Samo is a writer, historian, and political scientist, and he’s done a lot. He developed Great Founder Theory, the idea that societal change is often primarily driven by institutions shaped by the choices of powerful individuals. He founded Bismarck Analysis, a consulting firm that publishes detailed research on companies, industries, nations, and other large-scale societal organizations. He chairs the editorial board of Palladium, a magazine focused on “governance futurism”, and with, in my opinion, immaculate taste and aesthetics. He previously did research at the Long Now Foundation, and his Twitter bio reads “There’s never been an immortal society. Figuring out why.” In this episode, we talk about the meaning of AI on the trajectory of history, how we can get the best of Chinese technocracy while avoiding the worst, and some of the interesting new intellectual movements breaking the stagnation of the past few decades. This is the Theo Jaffee Podcast, thank you for listening, and now, here’s Samo Burja.

Theo Jaffee (01:07)

Hi, welcome back to episode 19 of the Theo Jaffee Podcast. We're here today with Samo Burja. So we should start the conversation today with the massive news that just came out yesterday, where OpenAI announced o1, which is their new reasoning system. And they've proven for the first time that reinforcement learning can scale just like pre-training. And a lot of people are seeing this as, you know, the golden path towards AGI. So...

Samo Burja (01:12)

Great to be here.

Theo Jaffee (01:37)

What do you think about current day AI right now in terms of the kind of research that you do at Bismarck? How helpful is it?

Samo Burja (01:47)

First off, I do think it's an impressive result from OpenAI that they have managed to reduce hallucination in the mathematics portion especially. Infamously, that had been a problem, because most white collar professions are actually professions where reliability is the key foundation that makes something worth buying. You don't want a doctor that

is right 95% of the time and wrong 5% of the time, or honestly even a lawyer, right? So I think many people are actually in professions where they are paid for consistency and reliability of a certain intellectual level.

I think that, to answer your question, I personally don't actually use it that much, but perhaps I will use it more with this new generation. I have heard people use it very effectively to find search terms in literature review. So basically you ask the AI what something is called in a specialist field like medicine, energy, law, finance, et cetera, and you will usually get a

pretty decent explanation. And I think the latest launch had sort of a promotion, or we could call it a demonstration, a little promotion video with Tyler Cowen demonstrating this capability for economics. So until this very model I have not found that much use for it in my work,

because it kind of generated a high school essay. But let's say if this is now achieving college essay levels, perhaps there's some use there and I'll certainly be playing around with it and experimenting.

Theo Jaffee (03:37)

What specific capabilities would you want to see it have before you would consider it to be genuinely useful? Aside from, you know, just...

Samo Burja (03:45)

Well,

Genuinely useful for different people might mean different things. It's already obviously genuinely useful. I think it's done a great humanitarian service to the world by automating homework, which we probably should have actually abolished long ago, since the educational statistics show it doesn't make much of a difference. So it's kind of a strange, busy, make-work thing that we've imposed on the children and young people of the world for no real benefit, which actually

is a shocking amount of the economy. So I think that it has been already very good at fulfilling many of the roles that a literate citizen or literate employee has historically fulfilled, right? It could be used to do data entry. I honestly hope they make it easier and easier to have it do data entry because the amount of paperwork we all deal with has radically increased over the last few years.

We don't think of it as paperwork, but every time you have to re-enter your passport, your password, your date of birth,

your credit card number, your zip code, your address: every time you do that, that is actually paperwork. Every time you have to use two-factor authentication. Now, of course, there are password managers that are supposed to handle this, but they're brittle. I think an AI would be excellent at parsing UI. I personally never want my phone to ask me for my date of birth again. Like, I'm just done with that. Every time you install an

app, it wants you to sign up to everything. So in the war on paperwork, where bureaucracies offload the work they're supposed to be doing onto the user, onto the citizen, I'm hoping that AI, generative AI as such and especially the text incarnation, will allow us to spam the bureaucracies with as many pages of replies and paperwork as they spam us with. So you know, that's my big hope, and structurally I think this

will make society richer, because it lets human beings go do what human beings do best, which currently is various forms of physical labor and certain kinds of creative, original thinking. Which actually brings me to sort of this point where, you know, I mentioned Tyler Cowen earlier; one of his books, Average Is Over, feels oddly prescient in the aftermath of AI, because what AI has done best is automate

the median white collar profession. So in other words, if your job is done by millions of people, it can probably generate the data necessary for these models to learn what your job is. But if your job is much smaller, if it only has 20 people or so in it, if actually maybe your firm is the only firm that does something, then I think your white collar work is going to stay intact.

The rarer you are, the more differentiated you are, the less of a training set your field of study produces, the harder it will be to automate. Now, medicine and law, two examples I raised, are actually areas with huge datasets. So I actually expect that, if not this version of OpenAI's model, then let's say a version within the next five years will achieve the reliability of an excellent doctor. However, finance,

law and medicine have political power that protects them.

They will mandate the use of a human and the oversight of a human expert, be it a human lawyer, a human doctor, a human financial advisor. By the way, you know that you actually, as a normal citizen, cannot just invest in random stocks without an intermediary financial priesthood, right? Like, you can't actually do that. You can do it in crypto. You can't do it in traditional finance. Oddly non-democratic. There's no good economic theory for it. It's basically just paternalism to prevent people from gambling

Theo Jaffee (07:49)

Mm-hmm.

Samo Burja (08:01)

on stocks. But if we're doing paternalism that way, maybe we should protect people from the consumer market, etc. So I claim finance is gate-kept and is in that protected category as well. So once these jobs are automated... any job with political protection, with a structural guild-like lock on credentials, those jobs will actually not be automated by AI. Let me explain what I mean.

The substantive work that they do will be fully automated. But you can't automate fake jobs. So since you can't automate fake jobs, instead of it being a 20% self-serving job with 80% drudgery, it'll become 100% self-serving. If you can spend 90% of the time, or 100% of your time, lobbying for the existence of your job, oof, in a big bureaucracy, that's pretty powerful. And in a society, it's pretty powerful.

Busy bureaucrats are, at the end of the day, actually politically not that powerful. It's lazy, well-rested bureaucrats that are powerful. So on the other side of this, any job that does not have such protection, that is open to market forces, well, it'll be partially obsolete. It will increase economic productivity. So in my opinion, the real race in our society is: will generative AI

empower new productive jobs, by automating old productive jobs, faster than it will empower the rent-seeking jobs of our society, by giving them more time to basically pursue rent-seeking. And never underestimate the ability of an extractive class to really, like, lock down and crash economic growth. I think this is the default of human history, and economic growth is the exception.

Theo Jaffee (10:03)

So speaking of human history and AI, on the grand...

Samo Burja (10:06)

And by the way, that's why I emphasize so strongly that I hope the AI helps us beat the bureaucracies. You know, I don't think it'll eliminate them. I think we should use it as a weapon against bureaucracy. Yeah.

Theo Jaffee (10:18)

I agree. So on the grand, you know, millennium-scale arc of human history, like, what does AI mean? Will it lead to more of an end state once we reach AGI? Will the post-AGI world be a kind of, you know, epilogue to human history, or will it be something entirely different? Will it lead to new eras of human history?

Samo Burja (10:41)

Well, it really depends on what you mean by AGI. The term has something like six distinct uses. There's the official OpenAI definition, which is a little bit circular.

Even if you read their documents, their official corporate definition is something like: AI that automates all jobs. Which, by the way, mind you, I think most jobs do not actually require agentic, human, general-purpose intelligence. I think most jobs are actually just fairly complicated scripts; I think you could honestly automate most jobs with a sufficient amount of spaghetti code.

If we did not have this transformer architecture revolution and we had 10,000 years more to fiddle about with coding and programming, I actually think just non-AI computer programs could handle 95 to 99 percent of the jobs out there, including with robotics and so on, even without learning, without machine learning. And I think that's because our economy is actually shockingly primitive.

I can give some examples of how our economies are shockingly primitive. Australia, which we think of as a first world country, is a resource-based economy: dig up rocks, raise sheep, sell this. What is this, Minecraft? How can a first world country achieve such a surplus by basically selling sheep, selling copper, selling various minerals? It's kind of hilarious. It shouldn't happen in

2024. Yet it does, because the other economies are not that advanced either. At the end of the day, objectively speaking, a car, be it an internal combustion engine or an electric vehicle, is not that complicated a machine. You can explain it to a smart high schooler over the course of a week, every single component in that car. Metallurgy is a bit trickier. But at this point, when we're talking machine tools, metallurgy, robotics, cars, what is that?

That's the economy of Germany and Japan. Possibly the most complicated thing ever is something that can be handled quite well by a small island nation. How is it possible that Taiwan produces so much of the world's semiconductors? It's a country of 16 million people. What are the other 8 billion people doing? And the answer is the other 8 billion people on this planet are actually doing stuff not too dissimilar from Australia. Like they're digging up stuff, they're growing stuff,

About 300 million or so are engaged in making plastics, making steel.

100 million or so are busy making cars, busy manufacturing, et cetera, et cetera. And then we have all the lawyering and the bureaucracy, et cetera, et cetera. And you know, let's say a million people, let's say 2 million people, are directly involved in the manufacturing of semiconductors. If we added up the labor force of TSMC, and like the labor force of, you know, Foxconn, and the labor force of Tokyo Electron, and ARM

in Britain, and maybe let's count ASML too, maybe if we jury-rig this number we can get it to, like, 20 million people who are maybe involved directly in the manufacture of semiconductors, which are almost the most complicated machines in existence, other than something, you know, singular like the Large Hadron Collider. Like, the chip fabs are immensely complicated. So if I break down the world's economy as to what these eight billion humans are doing, you realize we kind of don't need artificial intelligence

to automate these jobs. I don't know, Theo, if you were immortal and I gave you 200 years to write a program that doesn't use machine learning, that knows how to herd cattle, I bet you could do it. Right? I bet you could do it. I don't think it's that hard. Well, 200 years haven't passed, okay? We've barely automated spreadsheets in the 80s, right? We've barely figured out how to send money, a made-up thing that could be easily represented with electrons

Theo Jaffee (14:46)

Why hasn't it been done then?

Samo Burja (15:02)

all over the place. So I really, you know, I really do think that

the world's economy is almost, well, okay, it's a bit more complicated. If you look at the history of automation, automation tends to happen right next to an automated field. So as soon as you automate something, you have made it machine -like predictable, you have eliminated variance, and then whatever output you produce there is now so regular that you can automate whatever is

taking that as an input. It is difficult to introduce automation into a system where everything is custom, unique, intermittent, following natural cycles of day and night, etc. It's easy to automate something when you are working with the high predictability of a machine. So because of this, and there have been good economics papers on this, I recommend people read some of the economist Robin Hanson's writing on it.

Theo Jaffee (16:06)

Love Robin Hanson.

Samo Burja (16:07)

He's great. So if you read his econ papers and some of the papers that he cites, he points out that, in fact, you have this almost wave-like spreading of automation through the economy, where the easiest thing to automate is something that can take machine inputs. The hardest thing to automate is something that requires inputs from the natural world, like the behavior of sheep in this example, or the chaos

of geology. We still actually don't have very good geological theories. If you dig down two miles, it's very hard, with our sensors and our theory, to predict what exactly you would find at any point in the world. Mining always comes with surprises, right? They kind of are doing exploratory digging; that's why it's so expensive. Even for something as well understood as oil, there are of course surprises.

And then you have, you know... let alone the sort of chaotic needs of, like, when exactly do you need a divorce lawyer or something like that, or when someone dies and there has to be a will interpreted, et cetera, et cetera. Like, if we go to the white collar world, it becomes very natural. The service economy is supposed to be human-oriented. And when we see automation in the service economy, it's almost like a form of rationing, right? At the airport, you are supposed to learn how to, you know, scan

your tag or get your tag printed, you know, stick it yourself onto your luggage, put it out there. And there's still a human being walking around working out various issues. Like, for example, maybe your ticket was booked through a different airline and the silly terminal doesn't understand or allow the input of the code of the other airline that's partnering with your airline. Trivial things like that, fragilities that happen because of, you know,

cases that haven't been exhausted. So I think that there's like a combinatorics problem here, where there's just an explosive number of cases. When you automate something, you're actually reducing the number of different outcomes. A robot putting a door onto a car, onto a car frame, will do it exactly the same every single time unless it breaks. If it breaks, it breaks totally. Someone comes, fixes it, then it puts the door back on exactly the same.

I don't think a human worker does it exactly the same way, or at least, even if a human worker does it exactly the same way, a different worker will do it a slightly different way. And you know, that's the Industrial Revolution. It's actually been artificial simplicity. We have been producing artificial simplicity since the start of the Industrial Revolution, by making every, you know, every teacup, every mug the same.

We have used economies of scale to grow vastly wealthier. So if we then joke about the definition that OpenAI uses for general intelligence, to loop back to your original question, you know, a machine that automates most existing work:

When was AGI achieved? Well, James Watt achieved it in the 18th century, right? The steam engine already achieves that. But of course, new jobs showed up and our economy complexified. So it's really my hope.

that this kind of significant, machine-learning, transformer-architecture-based AI, whether or not we think of it as AGI, I think it will automate vast amounts. It will automate vast amounts of work. But

Hopefully it'll make our economy more complex and will create more jobs, and they will be things that it can't yet do. Now, with regard to true general intelligence: say the difference between me learning to play a game of chess and AI learning to play a game of chess. Chess is kind of an easy example, because there's an exhaustive rule set; in a way, chess is also artificial simplicity.

Basically, the machine can play millions of games and I cannot. And what is the difference between me learning to write an email and the AI learning to write an email, like Google's?

Well, how many emails does the Google machine get to read? A billion? Two billion? Ten billion? Fifty billion? I don't know. It's definitely somewhere in the billions. How many emails have I read in my entire life? Well, it might feel like a billion. It's certainly not. It's like maybe a hundred thousand. Maybe a million. A million is too generous, I think, if I count all the spam that's deleted. So let me just, you know, quickly estimate and say a hundred thousand.

Like on the spot, if you push me, how many emails have I read in my entire life? Or skimmed, probably a hundred thousand. So...

What does this mean? Anywhere where there are a billion emails, or where there is a rule set like in chess that can generate exhaustively all the cases, or at least as many cases as the machine can ingest, big data will be sort of victoriously succeeding at performing at peak human capability, or even modestly superhuman, right?
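[Editor's note: a quick back-of-the-envelope version of the scale gap described above, using only the rough figures Samo gives in this passage; both numbers are conversational guesses, not measurements.]

```python
# Fermi comparison of training-data exposure, using the ballpark numbers
# from the conversation above (both are rough guesses, not measurements).
emails_seen_by_model = 10_000_000_000  # "definitely somewhere in the billions"
emails_read_by_human = 100_000         # "probably a hundred thousand"

ratio = emails_seen_by_model / emails_read_by_human
print(f"The model has seen roughly {ratio:,.0f}x more emails than one person.")
# Prints: The model has seen roughly 100,000x more emails than one person.
```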

But what about cases where there are not a billion examples? What if there are only 10,000 data points just in existence for a problem?

I actually don't think the AI will be very good at learning that. And I think that illustrates the sort of difference in what I think is happening with scaling. Let's remember, it's not just the scaling of compute, it's the scaling of data. Either works super well. I'm sure OpenAI scraped the entire internet, as have the other AI companies. And, you know, within the bounds of legality, presumably.

But I just don't know. Presumably. Move fast and break things, that's what they say. So...

Theo Jaffee (22:32)

Presumably.

Samo Burja (22:45)

I think that we will see some surprising differences between human intelligence, which learns from few examples and few data points, and the current generation, the transformer architecture. And of course, let's not forget diffusion, right? Diffusion is what is actually generating all the pretty images. And by the way, isn't that interesting? Why are transformers worse at generating the images? If we presume intelligence is a single thing,

and humans have that single thing, surely it's the same skill I use to paint a picture as I use to write an essay or to solve an equation or perhaps even to throw a basketball. There are lots of people who are betting on the transformer architecture in the physical world. Yet, defeat. Yeah?

Theo Jaffee (23:34)

Well, in the GPT-4o blog post, they showed examples of how they used it to generate images that were very, very good. And they were very good in...

Samo Burja (23:44)

Is that a case where that's a function call, like it was in the previous generation, or are they claiming that it is the same architecture? Okay, so it is native. Okay, cool.

Theo Jaffee (23:49)

No, it's native. It's native, yeah.

And it's good in a different way from, like, Midjourney, for example. Midjourney is very kind of artistic, and it has, like, taste in a way that 4o doesn't yet, I guess, but 4o is able to have, you know, more precise text and, you know, image persistence and stuff. So I think that this is probably something that's solvable by just making the models more multimodal and training them on more kinds of data.

Samo Burja (24:24)

Possibly. I still think that it is notable that transformer and diffusion architectures are comparably good, let's say. I will read the paper. I'll also ask my AI friends, because I feel often people take a chimeric approach that's not visible to the user. But I believe you. I believe you, I believe the paper.

The point being, the fact that completely different architectures are competitive at all, at a similar level of compute, suggests to me that in the near future we will see a Cambrian explosion of different forms of intelligence, and that actually intelligence isn't one thing but is almost like a family of radically different things. We have just only been exposed to human intelligence by a quirk of evolution. Though of course,

even when we're looking at human intelligence and we interact with the animals we've domesticated, at times these much dumber animals really outperform us at tasks we would consider cognitive, or g-loaded, or intelligent, or...

borderline magical, right? Like, primitive peoples considered animals to have forms of magic. So even in the natural world, between mammals and birds, let's say, if those are the two smartest broad branches of life, I think that we perhaps already did see multiple types of something that could be called intelligence. And I think that in the next hundred years, we will be continuously surprised, as architectures

change and scales increase, by all the amazing things that humans could never do that the different forms of artificial intelligence will be able to do. And also shocked and confused by all the things they can't do. So I actually think that the patterns of what different forms of intelligence cover will end up being radically different. Now, if I'm wrong,

this will be sort of disproven in the next five or 10 years. But I suspect there's going to be something very surprising waiting for us when we interrogate our primitive philosophical concept of intelligence. And you know, there's like a way in which, if we reframe machine learning as industrial-scale mathematics or industrial-scale statistics, we get very different intuitions of what it can do and how far it can go. And of course, I'm

not denying the deep socially transformative impact of it. At the end of the day, does a submarine swim? Does a plane fly? It certainly doesn't fly the same way as a bird does. A submarine doesn't swim the same way a dolphin or a human does. But obviously those are extremely useful things. But it's good to remember that until the most recent quadcopter revolution, birds could do things that jet aircraft never could.

They could land in tight spaces, leave tight spaces, hover a certain way, you know, pick up pollen from a flower. And, you know, of course, jet aircraft in the 1960s could fly up in the stratosphere at Mach 5. And no bird can do that, OK? No bird can do that. So I think that that is like a surprisingly deep analogy, where if we apply this to movement, if we apply the same thing to intelligence, we

will learn surprising things. I think a lot of my friends, and maybe they were naive, a lot of my software engineer friends were genuinely confused when ChatGPT went viral. They were like, but if you wrote a for loop, then this would be an agent. Do you remember all the agent startups that popped up?

Theo Jaffee (28:25)

Mm-hmm. Yep.

Samo Burja (28:26)

They didn't work. They basically didn't. It kind of decoheres, right? If you, like, loop it on itself without a human input, it kind of decoheres and doesn't really pursue agentic actions in the world. That's surprising, because even if it's not multimodal, even if it's just text, dude, text can be an input for other things. It can have actuators, it can have sensors that represent the data as text. Maybe all you need is text. That kind of should have worked. And I think we used to equate intelligence and agency,

and right now we're seeing the two decohere in an interesting way. People right now are not confused, but they were confused in 2022. And I think this is one of those things where, as soon as we are less confused, or our concept of intelligence is enriched, either the popular concept or the philosophical concept or the engineering concept, we almost don't remember what it was like before.
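[Editor's note: the "for loop" agent pattern referred to above looks roughly like the sketch below. This is a minimal illustration, not any particular startup's code; `call_llm` is a hypothetical stand-in for whatever text-completion API you have on hand.]

```python
# Minimal sketch of the naive "for loop agent": feed the model's own output
# back to it as the next prompt and hope it keeps pursuing the goal.
# In practice, as discussed above, such loops tend to drift ("decohere").

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call. Here it just returns a
    # canned reply so the sketch runs end to end without any API.
    return "Thought: still working on it. DONE"

def naive_agent(goal: str, max_steps: int = 10) -> list[str]:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):               # the "for loop" in question
        step = call_llm("\n".join(history))  # model only sees its own prior text
        history.append(step)
        if "DONE" in step:                   # crude stopping condition
            break
    return history

print(naive_agent("book a flight to Taipei"))
```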

Whenever your model of the world becomes more complicated, it can be hard to remember what people don't know. If you want a reminder of this, try talking about your field of expertise with someone who's not in your field. You will assume they understand far more than they do. And when you ask them for their concepts, you realize it's not there. And I think if we could talk to ourselves in 2020, almost everyone alive today could blow the minds of people in 2020.

When discussing intelligence in machines and so on, they would say the Turing test was passed, but we don't know how to have the AI pursue an agenda, and we don't know how to have, you know, the AI not just lie and make up things. Let's say maybe with o1, sorry, with 4o, maybe with Strawberry, it's, like, actually solved, and I think that's a great achievement. I have to test it first before I can say that with confidence.

But still, we would surprise people in 2020. And I think we'll find ourselves perpetually surprised. I think we should stop expecting the AI to fly like a bird or swim like a dolphin. And it will, in fact, go very fast and very, very far. And certain unusual things will be left to us humans for a long time to come.

And I'm not sure when exactly we will exhaust this Cambrian explosion of intelligence. But there will be radically different AI systems. They will come to pick up more and more of the economy. They will eventually, once the will problem is solved, once we figure out how to give them will and agency, become politically powerful. They will very quickly become more politically powerful than humans. If there is any resource scarcity on the margin, they will immediately use their political power to pull the plug on

any sort of UBI or environmental regulation that the humans need. The atmosphere has to be made of oxygen, say the puny little humans, but they don't matter. So then humans go extinct and that explosion continues and eventually we have a world of completely new life forms. Now, I think that is at the extreme, but up until the point where the value of human intelligence isn't exhausted, humans will keep getting richer and richer.

Though they might start becoming politically disempowered once machine agency enters the picture. I think we're pretty lucky that the AI has not gone political; as soon as the AI is politically powerful, we will be in trouble. I'm actually happy with OpenAI or Anthropic or these big companies being very politically powerful, because at the end of the day, they're still humans. They want the atmosphere to be composed mostly of nitrogen and oxygen. They want the temperature to be in a habitable range.

Maybe there's mild disagreement on the margin about how many parts per million of CO2 we want, but like it's broadly all okay.

Yeah, so I don't know, you know, humans are very power-hungry. So that's sort of my optimistic vision for the future: that we ride this Cambrian explosion of intelligence. We ride it much further than it is right now, because I have a lot of faith that particular kinds of human intelligence will have an advantage. And then at some point our monkey brain, like, freaks out and we're like, the machines are too powerful. And then we just stop,

and then we maintain political power and we just enjoy our multi-planetary, high-intelligence, high-wealth civilization, and perhaps expand horizontally across the galaxy with slow, light-speed-limited ships, rather than go all the way to being politically replaced and disempowered. So, there. That's kind of my projection. My projection is, yeah.

Theo Jaffee (33:16)

Hmm, so, almost.

Almost like the Iain Banks Culture series.

Samo Burja (33:25)

Not quite. In that case, the humans are kept as pets by the very advanced intelligences, and clearly the motherships are much more powerful than the humans are. I'm sort of relying on man being a political animal, and that we're going to have, like, a primitive animal-like cunning that will keep us one level ahead of a lot of the superintelligences that in theory should be able to think circles around us but are going to have extreme difficulties. And you know, there's fun science fiction of this type. There's, like, you know, science fiction where

the machines don't know how to lie and the humans know how to lie, for example. Though I don't think that's the case here. Clearly we have trained ChatGPT to lie to us very well, right? But anyway.

I think that it is difficult to reconcile the existence of human beings with sufficiently advanced AI. However, that might not happen. And I think we have a far more interesting history ahead of us for the next few hundred years. I don't think it's going to be the Eliezer Yudkowsky sort of rapid takeoff scenario. I think it's going to be much weirder than that. It's going to be like an explosion of colour or

shapes or... we will find the cognitive environment much, much diversified. The Cambrian explosion comes first, and then eventually comes a mass extinction, where one of the forms of intelligence just outcompetes all the others. But I think we're going to enjoy this Cambrian explosion of different forms of intelligence for a very long time.

Theo Jaffee (35:02)

Yeah, I hope you're right. So, switching topics a bit. A couple months ago, someone tweeted, "Palladium just wants Chinese technocracy with American characteristics." And I thought this was really interesting, because this seems to be a common thread of critiques of this kind of Palladium ideology, which is basically: Palladium wants America to become more like China. So...

Samo Burja (35:28)

No, it's just false. It's just butthurt libertarians, bro. It's just butthurt libertarians. They got triggered by a thread that one of my employees wrote, which honestly was a great thread because it pointed out that China is a consumerist capitalist society. I don't know why this is controversial in 2024. I don't understand it, but...

I think it's cope, right? I think we want China to be like the Soviet Union, because we know how to beat the Soviet Union. We just grow our economy better, right? And the claim that GDP going up is the same thing as ship, steel, and drone production going up? Well, that was kind of true in 1945, when America won a world war. It's not true now. So really, I think, you know, if I were to give a critique, I would say that I actually want America to be more like itself.


I want the government to be able to build a bridge. I want the taxes to be lower. I want the inflation to be lower. But Palladium has no single ideological position. We publish writers with a wide range of perspectives. There are, of course, many very smart libertarian friends who have written for us. We're nonpartisan. We've had people who have written immigration-skeptical and immigration-positive pieces. The tagline is governance futurism. And governance futurism presumes

that government and society and culture in the future will be different than they are now. So do we want America to change, to develop? Yeah, but we're not advocating for any specific thing. We are examining what happens around the world. And I refuse to take this, like, false dichotomy where I'm supposed to pretend China's gonna run out of food in five minutes, or I'm supposed to pretend it doesn't matter that China builds five times more ships than South Korea, which builds five times more ships than we do. I refuse

to pretend that that's the world we live in, and I refuse to be stupid and jingoistic. I would actually... here's the thing, I will never fire someone for tweeting or disagreeing with me in any way. I believe intellectual diversity is important. But, you know, I would fire someone for being an idiot. So I really refuse to hire idiots. And by refusing to hire idiots,

I sometimes rub people the wrong way, because anyone with a brain who is a genius or a smart, even original thinker, they will rub simple categories the wrong way. So let me challenge you right back: did you read Vitalik's piece on Zuzalu in Palladium magazine?

Theo Jaffee (38:15)

I don't think I read it.

Samo Burja (38:16)

Why I Built Zuzalu is a Vitalik Buterin piece where he talks about creating a pop-up city. Or there's another piece, on how cryptocurrency will transform migration, where it actually argues that populations will become much more mobile around the world

and state power over individuals will decrease. Or I could name a dozen other pieces. Look, I think people are just stupid about China, and they want to hear "America yay, China boo." And I'm like, hey, let's not ignore that China is destroying us industrially. We don't have to industrialize the same way, but we do need industry. We need to build chips. We need to build ships. We need to build EVs. Not even America,

actually. Like, it's fine if the French or if Germany builds stuff. Oops, the German economy is tanking. It's fine if South Korea goes and builds stuff for us. Oops, South Korea is going extinct because their TFR is 0.7. I'm tired of pretending we don't have big problems, because I like our civilization. I want it to do super well. And "all is well,

sort of, let's go back to grilling, let me just code," whatever, man. Politics is already harassing the coder; you need to think about politics back. And that's why I think Palladium is really the first magazine of the 21st century, because it refuses to do this, like, left-right thing. It refuses to do this kind of, like, blind, very narrow,

"yay our team, boo the other team" thing. So if people want to read that as pro-China, I think that just tells you that in their mind, the only alternative to our dysfunction is China. And you know what? The Chinese agree.

The Chinese government actually agrees that the only alternative to American dysfunction is China. And I think we should blow up that dichotomy, because that's a dichotomy that ends with us censoring our Internet to protect democracy. It ends with us tracking the movement of all Americans. It ends with us continuing to buy all Chinese products but slapping tariffs on them, to save the Boeings and the Intels of the world rather than

the SpaceXes and the Andurils of the world. So yeah, that would be my response. And I got quite animated, because I'm just like, you know, it's like you can spend 10 years giving nuanced commentary and then a person on Twitter gives, like, a little dunk. Whatever. I disregard. I disregard. If you'd not asked me, I wouldn't have even... I've not thought of it twice since. It's just, you know, someone's an asshole and blocks me, I'll block them back. And it's super funny because

I don't really think that anyone remembers that a magazine is supposed to be an intellectual culture with many different views. I think we're so used to the hyper-partisan propaganda environment that we've lost the social technology. So it actually goes back to the view that I stated, that Western civilization has almost completely lost the infrastructure for complex and nuanced thought.

I think everyone is simplified and stereotyped. In politics as well as industry, we have produced artificial simplicity, making us artificially dumber than we actually are.

Theo Jaffee (41:47)

Okay, can we drill down on that a bit? Western civilization has lost the infrastructure for intellectual complexity and nuance.

Samo Burja (41:54)

For complex thought. Well, you know, this is actually a way in which... since we're talking about Palladium, there was a very excellent piece by my friend Ben Landau-Taylor on the academic culture of fraud, which documents and discusses the prevalence

of people not only p-hacking or statistically massaging the data, but outright fabricating datasets. And note, this is in fields like medicine, where that costs lives, where people die. And Ben proposes

the radical but sensible solution that actually academic fraud should be not just a fireable offense; it should risk jail time, because you really are causing harm to others. With financial fraud, we have this, and with academic fraud, we should have some of this as well. The academic institutions today mostly hush up and protect proven instances of fraud. So I really recommend the

audience go and read the article. It was shocking to me and revelatory; it was a revelation as to what extent, basically, an academic department will not want the reputational damage of having, you know, demonstrable fraud there. So strike one for academia: academia is failing to sustain the culture of science. Let's go for strike two, the media environment. Most social media networks in the Western world,

and this is the way in which I wish we were more different from China, I want us to be radically different from China, get straight statements of suggested censorship from governments. They will give statements to social media companies, you know, Meta slash Facebook, like TikTok and so on, and they will suggest you take this down.

And in places like Britain, we saw recently, they're not even averse to mass imprisoning citizens. The United States is lucky within the Western world to have the First Amendment. It protects us from state-mandated censorship. But I do think that there is state-suggested censorship. We have plenty of evidence from old Twitter, the Twitter Files that Elon encouraged people to read but no one read. I don't know why,

possibly because we know that you're going to end up having different views and you're going to feel emotionally disconnected from people who have conventional views. There's plenty of evidence of the White House, the State Department, DOJ,

sending basically threatening emails to big social media companies, telling them to ban people, pull content. So calling this state-suggested censorship is a big deal, and I think Elon Musk is doing the country a great service by opening up a freer discourse environment. So that's number two: public discourse is threatened. X is, like, the only... X.com is the only website that is closer to the internet

of 2001, you know, adjusted for the IQ of the general population, but still closer to the freedom of the internet of 2001 than the extremely gated, curated, manicured, and fake internet of 2018. I still remember when the YouTube comment sections first became much more polite, and then they became much more stupid. Because if you enforce, you know, censorship in the name of, you know, fighting hate or whatever,

you're going to lop off both sides of the distribution, and you're gonna have a chilling effect, and then of course it'll get stupid, right? So that's, you know, sort of the next point of artificial stupidity. And then perhaps the most important one:

We have metabolized so much of our assumptions of what it means to be a citizen.

in a free country, of what level of education and agency and individuality we are supposed to accept. We have burned through it. Every single political race of the last 50 years has weaponized more parts of individuals' identities and individual feelings. Did you ever read that study that compared the reading level

of the State of the Union address over the last 200 years. Okay, it's going down, right? Exactly, it's very generalizable. And if you look at a televised debate, not a presidential debate, mind you, just a debate between intellectuals in like 1960s or 70s television,

Theo Jaffee (46:52)

Yes. Very generalizable,

Samo Burja (47:10)

My God, these people would... each of them would have a Jordan Peterson-type-sized audience, but we somehow don't have as many of them. And I think it's because if you don't bat for your team a hundred percent of the time in a modern democracy, I think people assume you're a bad person, people on your team. So if you're a Democrat and you have a conservative opinion, or if you're a conservative and you have a progressive opinion,

I think you're kind of considered a bad person or not totally reliable. People have gone extremely moralistic. Pardon?

Theo Jaffee (47:43)

Arguments are soldiers. Arguments are soldiers.

Samo Burja (47:48)

Yeah, I mean, but they didn't always used to be. And Eliezer Yudkowsky actually writes about this, right? You know, he coined, I think... did he coin the phrase "arguments as soldiers," or was it someone else? I remember an essay. Yeah, yeah, yeah. Well, he points out that, like, just the tone of a 1940s PSA is treating the citizens, the viewers, as adults.

Theo Jaffee (47:59)

Pretty sure it was him, but it was on LessWrong.

Samo Burja (48:14)

And a PSA today would never do that. It would just appeal directly to feeling. It would not try to invoke reason. It wouldn't try to invoke this concept that we should restrain our emotions and we should be more broadly aware.

Because the political race has sort of ground this down over time. Over the last 70 years we've had an erosion of the concept of a citizen, where new pieces are chipped off every single presidential election to be used as fuel to win, for our team or the other team, right? Because of that, it has become not

in anyone's interest to educate people in the Aristotelian sense. Aristotle defined an educated mind as a mind that can consider opinions different from their own, like consider an opinion without accepting it. And I think right now, the cognitive barriers

and cognitive sophistication have been broken down so much that, even though our IQ is probably just as high as in the 1960s, maybe a little bit higher due to the Flynn effect, though the Flynn effect's been going away since the 1990s...

It's like we immediately ingest the information. It immediately goes into our opinion. If we notice that it disagrees with our team, we get angry and we immediately morally disown the person that gave us that information. And then we go on believing what we believed before. We've been hardened.

And in that situation, no dialogue is possible. But that situation also means that groupthink is more powerful. Like, one way to think about this is an analogy with superconductivity. You know, if you could get the resistance to drop to zero, no current is ever lost, right? If we reduce this mental resistance in people on our team, whatever our team is... I'm like, you know, I honestly don't even care who wins this election. That's another way in which I'm such a heretic. I don't

care if it's a Harris administration or a Trump administration.

They will be bad in different and unique ways, and it's totally fair to have strong feelings about how each will be bad. But I think it's such a small part of our system and our problem that no one who is president could possibly fix these more basic ones. But let's say on our team, if you don't have high intellectual resistance, or the ability to view a different position without immediately adopting it and repeating it, or immediately rejecting it and then refusing

to hear anything more of it,

parties get stupider. So it's not just two smart teams fighting each other; it's that each team will be dumber, because the selection filter on coherent ideas is gone. So in the process of two sides fighting each other, we have ground down our expectations of what it is to be a citizen. We have not educated people how to be citizens. And as a result, each of the groupthinks on its own is much stupider. Like,

you know, you compare the Democratic Party in 1995 and 2025, and it's like no question which is the stupider party. And you compare the Republican Party of 1995 and the Republican Party of 2025, so next year. And I guarantee you, in the 1995 one, they'll just be smarter people on average, with more nuanced arguments and more nuanced points. And we can even make the same comparison between 1995 and 1985. And note, I'm not talking here about

their socially conservative views. I'm just talking about how they speak to each other, how they come to consensus, how they organize things like party platforms. I know this is going to shock Gen Z, but even 10 or 20 years ago, politicians were not known for bangers. They were known for pieces of legislation they pushed through. And 30 or 40 years ago, people would actually read the party platform and care about it, like normal people, not even Noah Smith-tier political monks.

So, I don't know. I think, I think we need to reset our expectations of the cognitive sophistication of the citizens to a much higher level. And we need to viciously shame all attempts and pushes to simplify things and

pursue group strategies rather than individual strategies because that's the only hope to make something like a parliamentary system or representative system work.

Otherwise, the democracy aspect will be reduced, and arguably has been reduced, to being no more powerful in the American system than the Queen of England, or the King of England now, is in the British system. Arguably, Britain is a bureaucracy pretending to be... sorry, it is a...

It is a bureaucracy pretending to be a democracy, pretending to be a republic, pretending to be a monarchy. So they have several layers of political dissimulation. In theory, the king is sovereign, but oops, parliamentary supremacy. But actually the people have immense power, but actually, you know, populism is bad and we should have experts decide things. So in reality, our system of government has shifted from democracy to bureaucracy to varying extents. And America has the most democracy

Theo Jaffee (53:55)

Mm-hmm.

Samo Burja (54:05)

of any Western country except maybe Switzerland, and that's why it is so disturbing and dangerous to see this erosion of citizens' capabilities to work within it. So in other words, I wish these citizens were much more politically sophisticated, and I want them to hold their political opinions and convictions strongly, and I want them to know how to disagree civilly.

Theo Jaffee (54:29)

Is that really true, by the way, that the US has more democracy than any other Western country except Switzerland? You know, it seems like... what about Sweden? Norway? France?

Samo Burja (54:35)

Who would you name? Which country is more democratic? I feel Sweden is an extremely well-run country.

I think Sweden is a very well-run bureaucracy. What do I mean? Swedish civil servants received international world health guidelines for COVID, and instead, they looked at the data and they very autistically said, this doesn't quite make sense. We're going to lock down the old people, because they die of COVID, and we're not going to have general lockdowns to lock down young people. And the result has been lower deaths. For example, Sweden also decided to pursue different

economic policies. Sweden actually is a surprisingly capitalist country simultaneously with being a social democracy; this is kind of the Nordic model. But I think in Sweden, decisions are mostly not made through elections; they're made by experts. And both Sweden and France, mind you... Sweden and France have very much,

like, severe limits on speech. Perhaps in practice not as many people are imprisoned and become political prisoners as in the modern United Kingdom, but certainly some. And in the case of France, like, you know, the individual

liberties are much reduced. Now, the French do have a right that Americans have much less of, so the French can show up, protest, and have the whole country be locked down, because in their mythology of liberty, their mythological version of liberty is: the people gathered together, stormed the Bastille, and beheaded the aristocracy. So that's why in France it's kind of illegitimate to suppress a farmers' protest or a rail strike

or a union strike or something like that, because you're going against the foundational myth of liberty. It would be kind of comparable if in America you seized all the guns, because in the American mythology of freedom, it was, you know, people with guns shot at the government, the British government, until it went away. And both of these stories are kind of true, and kind of dumb and false, in their depiction of the American Revolution and the French Revolution, but the myth is very important for political legitimacy. So there is a way in

which that is democratic. So the fact that the French can just go on strike over any random thing they want, that is democratic power in action. But I think you'd be hard-pressed to deny that by any measure, France is, like, more regulated, there are more laws, citizens in most ways have fewer rights, and there's less free speech.

I think France is actually a surprisingly good elective monarchy because when it has a strong president, the president of France has like very significant powers, not only a longer presidential term.

But actually, the bureaucracy mostly listens to the French president. So you could argue that that's a democratic, monarchical aspect to the government. But just the sheer number of departments, regulations... like, try starting a startup there, right? Economic freedom, but also political freedom, is much more constrained. So yeah, I would claim France is more of a bureaucracy than the United States. I mean, would you disagree with that?

Theo Jaffee (58:04)

No, I would not.

Samo Burja (58:05)

Okay, well, perhaps the disagreement then could be... the US, you know, the US might have a mix of bureaucracy, plutocracy, and democracy. And I think my center-left friends would say, maybe Europe is more bureaucratic, but it's still more democratic because it's not plutocratic. But that argument doesn't really work for a place like Sweden or France either, because, you know, let's remember, the second richest man in the world is a Frenchman who owns a bunch of luxury brands.

Luxury brands are the ultimate fake job. You are riding on incumbency. Actually, high taxation would destroy you. So in France, Sweden, and a lot of European countries, the social democratic pact is the following: if you have money, you can inherit it through loopholes, and old money persists. If you don't have money, your income will be taxed and it will be hard for you to make more money. So incomes are very harshly taxed, but you can have a family foundation

that owns your company, and you can be in charge of your family company, and you can be in charge of the family foundation, and you basically have a 0% tax rate.

That's true of Austria, Germany, Sweden. This does a few good things. It preserves the Mittelstand economies, but an economically equal society this is not. So I would say that Europe is plutocratic in a different way than America. In Europe, old money is supreme, and the government approaches its old companies and asks them, what can we do for you? And in America, new money is supreme, and companies show up and ask the government, hey, what can we do for you? Because we're just getting started,

and don't you want to buy our much cheaper drones, et cetera, et cetera, instead of the ones provided by the old companies. But that's only directionally true, right? Both have elements of new and old money power.

And generally speaking, I think bureaucracy is much stronger than plutocracy. And in America, I would say democracy is very strong, because it is possible to build a base of popular support and launch your political career. And by the way, on the left as much as the right. You know, people... my conservative friends might not like this, but AOC is an example of democratic power.

She's speaking directly to the voters and a significant set of voters really like what AOC has to say. So AOC is a champion of democracy. Donald Trump is a champion of democracy. When you hear populism, that usually just means someone doesn't like democracy in action.

Theo Jaffee (1:00:39)

So.

I think we have time for one more question. So we talked about how complex political thought has gotten worse over the last few decades. And it seems like philosophy and a lot of fields have reached almost a kind of stasis. So, you know, aside from Palladium, the first magazine of the 21st century, what are some of your favorite ideologies and philosophers and works, specifically from the last 20 to 30 years, the 21st century?

Samo Burja (1:01:10)

I think a lot of the people who got their start from blogging, and some have migrated to Substacks, some have not, have written very insightful stuff. I think that Paul Graham, with his early essays and even some of his more recent essays, is going to be understood as a significant writer of the last 30 years. I think that...

A lot of the mainstream polished pop intellectuals are actually overrated. There are a few that I think are decent. I think Steven Pinker's least popular works are his best and his most popular works are his worst. So Steven Pinker I think is actually a more serious intellectual than you would believe from his public profile.

Theo Jaffee (1:01:50)

Like who?

Samo Burja (1:02:07)

I think that... I think that Nick Land...

will prove to be a much more important and subversive influence on both, like, far-left and far-right subcultures than is currently acknowledged. I think ancestrally he has shaped a lot of the strands of accelerationism, and you know, there's sort of the left-wing version of that and then there's the right-wing version. And I think people are just now remembering that he wrote really bizarre things in the 1990s while working with, you know, this informal group, the

Cybernetic Culture Research Unit at the University of Warwick, which, you know, according to the University of Warwick never existed, because of course universities don't allow unique or weird social or intellectual clubs. It has to be underground. It has to be unofficial. So I think he will prove to be a significant thinker because of his thesis; he laid out this sort of basic thesis of techno-capital, right?

Which is this idea that capitalism itself was a form of intelligence. And I'm not sure if he's absolutely the first person to make this analogy, but he definitely made it forcefully and interestingly in the 1990s, long before the current machine intelligence explosion, right?

We could continue listing more thinkers. I'm going to say... it's cringe to say, but Eliezer Yudkowsky is a more significant philosopher than people would like to give him credit for, because he single-handedly wrote the orthodoxy of the rationalist movement and the effective altruism movement. Say what you will, those are very influential movements. He was not dumb. He wrote very clearly.

Theo Jaffee (1:03:48)

I completely agree.

Samo Burja (1:03:58)

One of the best stylists, honestly. I think among his acknowledged influences was George Orwell, whose essay "Politics and the English Language" I warmly recommend. It's a non-fiction essay. So I think Yudkowsky is also a significant thinker. And I think that because we live in a period where the Cambrian explosion of intelligence has happened,

we will tend to regard the thinkers who commented on topics related to artificial intelligence more highly than some of the other commentators. So as a last one here, I would say Robin Hanson is very much underrated. I sort of feel... you know, I know he came up with the whole prediction market thing. It's pretty cool. But I honestly find his, like, cosmology, human nature, and culture commentary

to be much more interesting than just the mechanism of prediction markets. I feel like, you know, insurance schemes are neat and fun to think about, but you can only hear about them so many times before you lose interest. Yeah.

Theo Jaffee (1:04:57)

Yeah, absolutely.

Yeah, I mean, just to go on Instagram and see the like, lowbrow slop that they have and to see these slop accounts posting about like, presidential prediction markets and it's like, wow, I met the guy who invented this thing. Like, how cool is that? Yeah.

Samo Burja (1:05:21)

Exactly. That's a big influence. I could see prediction markets being actually very important in 10 years, in even determining the election. But that will be their big test: when there's an incentive to, like, rig the market one way or the other, how much money will go into politics, right? Like, I think people are already trying manipulation in these very low-liquidity markets, because they are very low liquidity for now.

But yeah, I think if they're not outlawed, they will ratchet up, and hopefully the result is more accurate information and not just another information battlefield.

Theo Jaffee (1:06:02)

So I think that's a good place to wrap it up. Thank you so much, Samo Burja, for coming on the show.

Samo Burja (1:06:06)

Yeah, thank you, Theo, and thanks for the provocative questions.

Theo Jaffee (1:06:08)

Thanks for listening to this episode with Samo Burja. If you liked this episode, be sure to subscribe to the Theo Jaffee Podcast on YouTube, Spotify, and Apple Podcasts, follow me on Twitter @theojaffee, and subscribe to my Substack at theojaffee.com. Also be sure to check out Samo's Twitter @SamoBurja and his website samoburja.com, Bismarck Analysis at bismarckanalysis.com, and Palladium Magazine at palladiummag.com or @palladiummag on Twitter. Thank you again, and I'll see you in the next episode.
