
#15: Perry Metzger

Extropians, Nanotech, AI Optimism, and the Alliance for the Future

Perry Metzger is an entrepreneur, technology manager, consultant, computer scientist, early proponent of extropianism and futurism, and co-founder and chairman of the Alliance for the Future.

Chapters

0:00 - Intro

0:47 - How Perry got into extropianism

7:04 - Is extropianism the same as e/acc?

9:38 - Why extropianism died out

12:59 - Eliezer Yudkowsky

17:19 - Perry and Eliezer’s Twitter beef

19:46 - TESCREAL, Baptists and bootleggers

22:34 - Why Eliezer became a doomer

28:39 - Is singularitarianism eschatology?

37:51 - Will nanotech kill us?

45:51 - What if the offense-defense balance favors offense?

53:03 - Instrumental convergence and agency

1:05:35 - How Alliance for the Future was founded

1:12:08 - Decels

1:15:52 - China

1:25:52 - Why a nonprofit lobbying firm?

1:28:36 - How to convince legislators

1:32:20 - Can the government do anything good on AI?

1:39:09 - The future of Alliance for the Future

1:44:22 - Outro

Links

Perry’s Twitter: https://x.com/perrymetzger

AFTF’s Twitter: https://x.com/aftfuture

AFTF’s Manifesto: https://www.affuture.org/manifesto/

An Archaeological Dig Through The Extropian Archives: https://mtabarrok.com/extropia-archaeology

Alliance for the Future:

https://www.affuture.org/

Donate to AFTF: affuture.org/donate

Sci-Fi Short Film “Slaughterbots”:

More Episodes

YouTube: https://tinyurl.com/57jr42wk

Spotify: https://tinyurl.com/mrxkkhb4

Apple Podcasts: https://tinyurl.com/yck8pnmf

My Twitter: https://x.com/theojaffee

My Substack:

Transcript

Theo Jaffee (00:01)

Hi, welcome back to episode 15 of the Theo Jaffee podcast. We're here today with Perry Metzger.

Perry Metzger (00:06)

Hello?

Theo Jaffee (00:09)

So you've been into futurism, extropianism, and the like for a very long time, several decades, starting in like...

Perry Metzger (00:16)

35 years, maybe a little more depending on how you count it. Long enough that, you know, that one starts to know almost everyone and have seen almost everything.

Theo Jaffee (00:30)

So how did you first get into this scene?

Perry Metzger (00:32)

I think so. I was an undergraduate at Columbia in the 1980s and someone posted a book review of this book by Eric Drexler called Engines of Creation. And, you know, and I went out and I got a copy of the book and weirdly it meshed with all sorts of thoughts I had had as a student.

You know, biotechnology, so, you know, in the 70s, you know, it was not unusual, for example, to have a Time Magazine cover about Genentech and how they, you know, were commercializing genetically engineered bacteria to produce things like insulin and human growth hormone or what have you, which at the time was like, you know, this shocking thing. And I...

I started thinking at the time, well, gee, you know, you have these things that can manipulate things at the molecular level. You know, could you use them to make computers? Could you use them to build macroscopic objects? You know, I mean, we have trees, we have, you know, we have plants, we have whales, you know, why, why couldn't you do, you know, crazy things with biology like that? And I'd put that in the back of my mind. And then I encountered Eric Drexler and I encountered.

FM Esfandiary and, you know, the book True Names by, damn it, I'm having a senior moment. But the gentleman who coined the term singularity, Vernor Vinge. Yeah, he pronounced, he preferred Vinge. I only met him

Theo Jaffee (02:18)

Vernor Vinge? Vinge. Okay.

Perry Metzger (02:25)

you know, the one time, but it was a very fun multi-hour conversation. You know, I, you know, yeah, a sad thing that he's gone. Read a few books by him, read a bunch of other stuff. And one day, this was a while after I got out of school, my buddy Harry and I were hanging around at his apartment and he had...

Theo Jaffee (02:31)

Rest in peace.

Perry Metzger (02:54)

he'd recently gotten divorced and the way that he was entertaining himself was subscribing to a lot of zines. And these days, of course, no one remembers what these things were, but it used to be that a lot of people, you know, got their ideas into the world by basically making their own magazines, you know, by Xeroxing up things and selling them to each other. And if you got one zine, it almost always had an ad for 20 more.

And he encountered this particular one called Extropy by a bunch of graduate students in Southern California. And the next thing you know, I was running a mailing list for people who were interested in the topics covered in Extropy. And the next thing you know, after that, we have a few hundred really, really interesting people from.

Theo Jaffee (03:46)

So you ran the Extropian mailing list?

Perry Metzger (03:49)

I started the Extropians mailing list. Yeah. It was a very heady sort of time. We had all sorts of cool people, Carl Feynman, Hal Finney. Hal, unfortunately, is dead now too. And Robin Hanson. Yes, Robin and I have known each other since back then. And it's scary to think of how long back then was.

Theo Jaffee (03:51)

Wow.

my last podcast guest.

Perry Metzger (04:18)

but lots and lots of very interesting people suddenly popped up and it was one of the best floating cocktail conversations for a few years that I've ever participated in. Lots and lots of very interesting ideas being bandied about for quite a while. Unfortunately, it also had certain mutant offshoots, as one sees these days. But for the most part, it was a very, very cool time.

and a very cool bunch of people and I was very glad to hang out with them. You know, Tim May was one of our subscribers and he and a bunch of other people ended up going off to start the Cypherpunks movement, which I also got into and I ended up running a spin-off of the Cypherpunks mailing list called the Cryptography mailing list, which, you know, still exists. And I think I'm notorious to certain other people

for having shut down the first conversation about Bitcoin online, because it was getting repetitive and we had rules against that. But you know, if I show up in certain cryptocurrency circles, you know, at various conferences or what have you, some people are like, you're the guy who shut down the first conversation on Bitcoin, and the answer is yes. Yes, I am. You know, and more recently, you know, I've been involved in

you know, a lot in AI policy, not that I wanted to be involved in AI policy. I hate being involved in almost anything with the word policy attached to it. But it turns out that although you might not care about the government, the government will care about you either way. And so it's become necessary to do something about that. You know, I was involved a bunch in cryptography and cryptography policy when that was a much more controversial topic. So.

I suppose this sort of thing is not entirely surprising.

Theo Jaffee (06:18)

So when I was prepping for this podcast, I read through a bunch of extropian stuff, the Extropian Principles and the 1994 Wired profile on extropians. And there was one thought that struck me the whole time, which is, holy crap, this is like identical to e/acc today, Effective Accelerationism. So is it literally just identical? Are there any substantive differences? Is it just a pure reincarnation?

Perry Metzger (06:36)

Yeah.

I think that a lot of people are older and there's also, I think, certain political differences. I think that the extropians were much more explicitly libertarian for a while. But I think, yeah, in substance, it's sort of the predecessor of e/acc in a lot of ways. It's amusing and kind of cool.

to see new people picking up the ideas and running with them. I've been kind of pleased by it. It's also been kind of cool getting to know a bunch of people as a result of the fact that all of this has gotten recycled. But yeah, your observation isn't wrong.

Theo Jaffee (07:28)

So why do you think Extropianism died out then? Or...

Perry Metzger (07:32)

It didn't. It's just that, you know, one of the things you learn when you're in enough of these long-term conversations is that all of them are sort of like parties. And parties end at some point. The party that goes on for six or eight days, eventually the guests get exhausted, start smelling bad, you know, run out of hors d'oeuvres.

Theo Jaffee (07:35)

it evolved.

Perry Metzger (07:59)

and really desperately want to go home and maybe take a shower. All of these things end up being bunches of people that are interested in particular things and get enthusiastic about them and push hard. I mean, but the consequences of these things when the, you know, the influence of these things moves on. I mean, there were all of these really influential, you know, home computer clubs in the Bay Area.

in the 1970s and you ask, well, what happened to all of them? And what happened to all of them is that we all have home computers now. We don't even think of them as home computers. They're just computers. Lots of these movements have a moment where they flower and the ideas end up spreading in the world and then everyone moves on and does other things and it's cool.

Theo Jaffee (08:52)

Well, what you were talking about, with a party having to come to an end at some point, sounds like it would apply to a scene, but not necessarily a movement. You know, like communism, for example, lasted well over like a century, century and a half, and it's still very much alive today and it shaped the entire world. Yeah. It shaped whole countries. Why didn't extropianism do that?

Perry Metzger (09:07)

Yes, although it smells much, much worse than ever before. But there was a... yeah. So I don't think that it died out as a set of ideas or as a set of things that people were interested in. You'll find that almost all of the things that people were interested in continued to obsess them and the people around them. If you look, for example, there are, I mean, you look at people like Aubrey de Grey,

or lots of other people who are interested in finding ways to cure the diseases of aging or retard aging itself indefinitely. A lot of those people, in some sense, were influenced by or an extension of the things we did. The people who are interested in cryptography, cryptocurrencies, privacy, et cetera, right now, are in many cases descendants of the Extropians mailing list and the Cypherpunks mailing list.

it's just that, you know, there are lots and lots of people who are working on various things. It's just that discussing things endlessly at some point becomes less interesting than going out into the world and working on stuff. So I think that what you've seen is that, you know, people like say Robin Hanson, who were very, I mean, Robin

wrote his original paper on idea futures for, I think it was the third or fourth issue, I think it might have been like the fourth issue of Extropy. And there he is still to this day at GMU publishing lots of really cool ideas on related topics and being energetic about it. It's just that we don't give it a name anymore. But all of us are still out there.

Theo Jaffee (10:58)

I mean, you say that these conversations get repetitive and then people will stop, but it seems like the conversations on the Extropians mailing list in 1996 about AI risk are identical to the ones on LessWrong in 2012 and then identical to the ones on Twitter today.

Perry Metzger (11:09)

That was long ago.

By 1996, all of the stuff that was fun was gone. The early days of the mailing list, we had a rule about not keeping archives. So all of the most interesting, really early stuff is gone. But yeah, I mean, so one of the, maybe I'm giving myself too much credit here, but I perpetually regret at this point, you know.

So we had a few early members, people like Sameer Parekh and what have you, who were teenagers when they joined. And Sameer went on to start the first company to commercialize an encrypted web server, the stuff that ended up becoming TLS. Every time you type HTTPS into a web browser, you're using that same technology stack.

which he then went off and sold for a lot of money and he went on to do all sorts of great other things. We had a bunch of people that age who did interesting things. We also had a teenage person who joined by the name of Eliezer Yudkowsky, and that seems to have gone much less well. I won't exactly say that I regret, you know, letting Eliezer join, but it turned into much more of a mixed success.

Theo Jaffee (12:22)

Hmm.

I mean, he might be the most famous person around today associated with the Extropians.

Perry Metzger (12:43)

maybe. But, you know, I mean, I'd probably say that he's more famous for creating, you know, for turning things that we thought of as descriptive into the objects of a cult. You know, first singularitarianism and, you know, then went on to create SIAI and MIRI, which got very little done.

but I guess he wrote a lot of good fan fiction or bad fan fiction. I never found it particularly readable, but never mind that. And yeah, I mean, pardon.

Theo Jaffee (13:18)

You haven't read HPMOR? You haven't read HPMOR?

Perry Metzger (13:23)

I tried. I had a very open mind. A lot of people who I respect or respected at the time told me that I had to read it. And I started and I got a few chapters in and at some point I just couldn't.

Theo Jaffee (13:37)

for the audience, Harry Potter and the Methods of Rationality, which is Eliezer's, like, what is it, 1,200 pages, 2,000 pages long Harry Potter fan fiction about rationality and decision theory and that kind of thing.

Perry Metzger (13:51)

I think it's really a recruiting mechanism for his group. It works spectacularly well. There's this gigantic pipeline between the stuff that he's published and young neurodivergent kids and MIRI and Effective Altruism and all of those things. It's kind of ironic that we find ourselves in a situation in which the people on the one side...

Theo Jaffee (13:55)

it works well.

Perry Metzger (14:18)

of the current debate about AI, which I'm sure you've covered in the past, but if you haven't, we can talk about it a bit. And the people on the other side of it all came from some of the same mailing lists and intellectual discussions and what have you, but drew very, very different conclusions. Like Eliezer came to the conclusion that it was his moral duty to build an AI god to rule the universe. And I would have been more disgusted by that, except for the fact that I didn't think that he'd ever succeed in building anything.

Theo Jaffee (14:22)

Yeah.

Perry Metzger (14:49)

I was there one day and Eliezer says that this didn't happen. I remember it happening. Other people I know remember it happening. I can't prove that it happened. But I remember Eliezer giving a presentation at a conference and one of the gods of AI research, Marvin Minsky.

standing up and saying, you know, everything you're talking about is stuff that we tried and it doesn't really work. And Eliezer then saying, but I'm smarter than you and I'll make it work, which, to be fair, you know, he's consistent, you know, he hasn't become much less arrogant over the years. But, you know, I didn't think that Eliezer was going to go off and build an AI that would rule over the universe and enforce libertarian ethics, which strikes me as being kind of

oxymoronic, it's, you know, sort of like having the dictatorship in favor of freedom or what have you. But I didn't think anything would happen there. And so I kind of noped out and went off and paid attention to other things. And while I was paying attention to other things, you know, they mutated a few times and now have become radical decels, to use the current jargon, and, you know, Eliezer is calling for things like bombing data centers,

you know, saying, you know, well, nuclear war is better than AI research because at least a small breeding population will survive a nuclear war and we might yet reach the stars, but you know, if there's AI research, we're all doomed, which I think is garbage and I'm happy to defend that, but nevermind.

Theo Jaffee (16:33)

So going back to what you said earlier about Eliezer speaking at the conference. Yeah, this has been like a public Twitter thing for a while. Back in like July 2023, you tweeted that he presented at an Extro conference about how he was going to build a god AI.

Perry Metzger (16:49)

I might be wrong, by the way. It might have been at a different conference. It might have been maybe at a Foresight conference or something. I'm old. A few other people I know remember the same thing, but we were all at all of these conferences, so who knows?

Theo Jaffee (16:59)

And then...

And then Eliezer tweeted, I claim to have not given a talk at Extro 2 in 1995 and offered to bet Metzger or anyone in the habit of believing him $100,000 or 50 ETH. And then you didn't make the bet as far as I know. So.

Perry Metzger (17:09)

never happened.

Yeah, I couldn't prove that he was there and it didn't seem important enough. You know, it's true. I did not make the bet. You know, mea culpa, mea culpa, mea maxima culpa. You know, I was hoping that we would find the recordings from Extro 2, you know, Max claimed that he had them, but you know.

never was able to find the things. It might have been a different conference, by the way. It might have been a few years later. And it might be that I'm remembering the whole thing wrong, right? Because I'm old and people tend to, you know, when you get old enough, your memory for things that happened 30 years before, it ain't always the best. But if you were going back and reading the things that he wrote, they're pretty much consistent with my memory of him.

regardless of whether particular events occurred or not.

Theo Jaffee (18:19)

And then after he tweeted about the bet, you kind of disappeared from Twitter for a few months. So was that related or no?

Perry Metzger (18:29)

No, I had two issues, one of which was that I had a lot of work to do, and the other one of which is that I've had some health issues over the last year, and I tend to disappear from things unexpectedly for periods of time. I've been off Twitter for the last couple of days too, mostly because of that. But never mind. Too much information, probably.

Theo Jaffee (18:52)

Yeah, well, I'm glad you're back.

Yeah, I'm definitely glad you're back. So when we talk about extropianism and then some of its offshoots, a common kind of umbrella term that's used is TESCREAL, which stands for transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. So is that?

Perry Metzger (19:17)

This is sort of like having a single term for, you know, Nazism, communism, the Grange movement, and, you know, marine biology. But yeah, I've seen that. That's Timnit Gebru's thing, if I'm not mistaken. She's an interesting character.

Theo Jaffee (19:43)

totally incoherent. It's just not a useful term.

Perry Metzger (19:47)

I don't see why it's a useful term. But you know, if you're attempting to get grant money for, you know, for your discipline, maybe it's a useful term. I see it as, you know, a lot of those things aren't related. Like, you know, I don't consider myself to be half of those things, at least. You know, I don't think that anyone who...

was in any of those things considers them all to be the same thing. But, you know, one of the more amusing things that I've noticed of late actually is the fact that there seem to be three distinct groups of people who are trying to get AI regulated or killed in one way or another. There's the MIRI EA group of people. There are the people like Timnit Gebru and what have you who claim that AI is horribly discriminatory and evil.

And then there are the people who would like to use the power of the government to grant them a monopoly so that their AI company gets as much of the market as possible without having to compete with anyone. And watching the interactions between everyone has been really kind of amazing.

Theo Jaffee (20:58)

The Baptists and the bootleggers.

So yeah, this is like Marc Andreessen's idea of the Baptists and the bootleggers.

Perry Metzger (21:09)

It's not originally his. I'm trying to remember the economist who came up with that phrasing. There's a Wikipedia page on it, and it'll probably give the right person's name. But yeah, it's an old idea. You have people who are true believers in some sort of cause, and then you have people who would just like to take advantage of it. And they have common interests. And the common interests often intersect in extremely bizarre ways.

Theo Jaffee (21:19)

Yeah.

Perry Metzger (21:38)

They intersected during prohibition, during alcohol prohibition in the United States, and they seem to be intersecting a lot in the AI debate at the moment.

Theo Jaffee (21:48)

Hmm. Do you think that Eliezer is a Baptist or a bootlegger? And why do you think -

Perry Metzger (21:52)

He's, I have no doubt in my mind that Eliezer actually believes everything that he says. I also think, though, that he is in a position where it would be very, very difficult for him to believe anything else. The only thing he's done in his adult life, and maybe he'll come back and claim that I'm a horrible liar because he likes calling me that and say that he had a job once for six months doing something else. But so far as I know, the only thing he's ever been paid to do is work for his own nonprofit.

and it would probably be a rather unfortunate situation for him if he were to change his mind on this. There was a talk once given by Daniel Dennett on the unfortunate situation that certain clergymen find themselves in if they no longer believe in the religion that they're a clergyman for, because there is nothing else they can make a living at. And yet here they are.

So many people find that they don't change their opinions or only do in private. I think Eliezer believes everything that he says and believes it very strongly. But on the other hand, he's also, it's also his profession. The only thing that he's ever done in his adult life is work for SIAI, work for MIRI. So, and I don't know who else would pay him to bloviate on the internet and write fanfic in place of doing AI research, but.

Theo Jaffee (23:18)

So why do you think he did such a total 180 in a relatively short period of time?

Perry Metzger (23:24)

I think it's relatively straightforward. So if you read his stuff from very, very late 1990s, early 2000s, his goal was to... So I... All right, take a step back. I wrote some thought experiments on the Extropians mailing list at one point to the effect of what would happen if an AI gained capability very, very quickly.

Because you could imagine a mature AI of some sort being able to do engineering many thousands, millions, maybe billions of times faster than humans. What if you had such an AI and it was hostile and it recursively bootstrapped itself? What would happen? And I think that Eliezer, at some point, and maybe I'm wrong.

you know, I, this is my hypothesis and some of this is just, you know, his writing. Eliezer at some point decided that, you know, since God didn't exist, he needed to create one and justified this partially in his mind by the idea that there would only ever be one AI anyway, because whatever AI came into existence would, you know, would take over everything in sight because it would recursively self-improve itself.

And so it would be good if the recursively self-improving AI happened to be one that created a utopia for mankind. I know one former EA who has said to me recently, and I shouldn't use their name because they haven't been public about this, that there's an extent to which the EA movement would not be satisfied with AI simply being...

safe and our society surviving, because what they were promised was utopia. They were promised to be freed from the Hindu wheel of birth and re-death. They were promised that we would all live in bliss and paradise forever. And getting less than that is a failure.

Eliezer, you know, became very, very obsessed, as I said, with the idea that he and his cohort would build this thing and release it, and it would take over the world. And I think that at some point he realized that he and his cohort had made absolutely no progress on AI research whatsoever after spending millions of dollars of Peter Thiel and other people's money. Peter Thiel later seemed to be pretty PO'd at Eliezer and what have you.

And at some point, you know, a few years ago, he wrote that, you know, that April Fool's Day Death with Dignity article on Less Wrong, which I don't think was actually an April Fool's article at all. But, you know, publishing it on April Fool's Day gives you plausible deniability. You know, from what I under -

Theo Jaffee (26:26)

Well, he believed in very high p(doom) before that, though, right?

Perry Metzger (26:31)

I think that originally he believed that everything was going to go great because he was going to build the AI that took over the universe and it would be aligned, whatever that means. I don't think it's a coherent concept. And it would bring about utopia and everyone would be happy. And I think that he believed that he was going to be the person to do that for many years. My understanding from people like Jessica who left MIRI,

is that to this day, they still have this idea, well, maybe what we can do, and this is like the most science fiction thing I've ever heard. It's not that it is prohibited by physics. It's just prohibited by reality. But apparently, some of them still have this idea that maybe,

They can get an early AI to upload a bunch of them so they could do thousands of years worth of research very quickly and still realize the singularitarian dream of, you know, 20, 25 years ago. But I think that Eliezer is mostly just depressed these days. From what I understand, he spends his time writing Glowfic, which I didn't know about until someone told me that it existed in connection with Eliezer. But we're talking so much about him. Why don't we talk about...

Theo Jaffee (27:46)

and

Perry Metzger (27:50)

you know, other stuff, you know, it's a...

Theo Jaffee (27:53)

Yeah, sure. So, do you think that Singularitarianism is kind of just like eschatology? Is it really like, scientific? Yeah. Well, you were involved in a lot of this kind of stuff early on, so...

Perry Metzger (28:00)

Yeah, it's a millenarian religion.

No, I was involved in something different. So we talked a great deal about the fact that the pace of technological change was going to keep growing and that there was likely going to be an inflection point around AI. This was descriptive, not prescriptive. There wasn't any sort of, well, it's our moral obligation to make this thing happen quickly in order to bring about some sort of millenarian utopia.

You know, and there's an extent to which singularitarianism is, I've heard it referred to as the rapture of the nerds, and I don't love the term, but it does seem to fit to some extent, right? Mostly we were talking about what was likely to happen and the sorts of things that one might, you know, that one might be interested in in connection with all of this. There was never any, it is our moral obligation to.

you know, to make AI happen as fast as possible or what have you. Almost none of us were going off to work on AI research. Eliezer did. I've only joined the AI research community in the last couple of years. I'm quite amateur at it, even right now. But yeah, I see it as a millenarian religion and not a very realistic one. I mean, it's, well, most religions.

And certainly most new religions are not particularly realistic, so that's perhaps the wrong way to put it. But I don't see it as being particularly sane, even by the fairly weak standards that people judge such things by.

Theo Jaffee (29:45)

Well, does it not seem to be the case that if you get human level AI combined with a whole bunch of other powerful technologies, you know, nanotech, gene editing, full-dive virtual reality, that the world that we live in after would be radically different? That's kind of the singularitarian hypothesis.

Perry Metzger (29:59)

Yeah. And by the way, well, yes, but I, and the world that we live in right now is radically different from the world that our ancestors lived in. Right. So imagine that you go up to, I don't know, a homo habilis. I think those were the first tool using, you know, hominids. And I'm sure that someone will now write in or tweet or something when they hear this and say, you're wrong. It was homo erectus or, or it was australopithecus or something else. I've probably gotten it wrong. Who cares?

You know, let's say that you go back to one of those folks and you say to them, well, gee, you know, if you actually pick up that stone tool and start working with it, eventually people are going to be doing video podcasts over the internet, living in centrally heated homes and eating strawberries in midwinter and all of the rest of this stuff. I mean, the world that such a creature lived in in our world bears no resemblance to each other. It's like, it's like.

crazily different. There are a few things that are similar. We still have sex and reproduction and a bunch of those things, but I imagine such a creature thinking about someone getting a cochlear implant or a newspaper for that matter. Things are radically different. And I think that in the future, things are going to be even more different, yes.

We're going to have extremely powerful artificial intelligences. We hopefully will eventually cure Alzheimer's and cancer and all sorts of age -related diseases will probably extend human life indefinitely. We'll be able to do things like doing neural implants to computer systems. We're going to be able to, you know, we're going to have a greatly accelerated

technologies in, you know, manufacturing technology, space technology, etc. The world is going to look extremely different. But that doesn't mean that history ends, or that it's going to be a utopia or that it's going to be a hell. It just means extremely, extremely different. And yeah, if you read, I think that it was "Bookworm, Run!", the Vernor Vinge short story

about an uplifted chimpanzee. In the introduction, he writes that he had proposed to his editor that he might want to write a sequel about this technology, which makes a chimpanzee as intelligent as a person, being applied to a human being. And his editor wrote back and said, you can't write that story and no one else can either. And that's sort of where he started thinking about

this stuff. And yes, we have a great deal of difficulty, I think, predicting what the world is going to look like once we have things like nanotechnology and AI. And I've given talks about this. You know, I would say that, you know, the history of technology has been the history of creating more and more and more general technological capabilities. You know, what is the Internet for? It's not for anything.

That's why it's so powerful. It's a general way to have computers talk to each other. What are computers for? They are things that allow us to do anything that can be expressed as an algorithm. And oddly, it turns out that recording a podcast or predicting the weather or entertaining a child all happen to be things that this technology enables. Nanotechnology is going to be insanely powerful.

It's going to allow us to live in a world where physical objects of incredibly rich capability are things that we can build. And right now, if you look out your window, there's, you know, if I look out my window, I see lots of trees and, you know, trees are things human beings can't build yet, right? But they are exquisitely nanostructured devices. They're capable of self-reproduction, which is not the most interesting thing about them.

say, but they're also capable of constructing themselves and putting out all of these photoreceptors and powering themselves and creating this amazing nanostructured material called wood. Wood is a really, really weird material when you think about it, and we take it completely for granted, but it's almost magical, right? And at some point, we're going to be able to do artificial things that are even more powerful than what natural biological systems can do. You know, the

trick in biological systems is that they're capable of assembling macroscopic objects like you or me, molecule by molecule, building them atom by atom. And we will be able to do that with artificial systems at some point. And it's going to be world changing, like dramatically world changing. And to be sure, that means that there is a limit to what we can predict about the future because as you know, the further out we go, the...

more different things are. That's a really terrible way to put that. I'm not very good at English today, but as time goes on, we're gaining more and more powerful technological capabilities. And for the most part, this is a great thing, right? I have friends who are alive today. I have one friend in particular who survived stage four malignant melanoma, which 30 years ago, if you got

malignant melanoma, never mind pretty much the stage, you were a corpse. It was a question of how long, right? And how do you survive stage four malignant melanoma now? Well, we understand the human immune system so exquisitely that we're capable of producing targeted molecules that can block or enhance various mechanisms inside the human immune system. So you can tell the immune system to go off and kill the malignant melanoma and it works, right? Doesn't work for absolutely everyone, but it used to be a death sentence and now it isn't anymore.

Theo Jaffee (35:57)

Mm -hmm.

Perry Metzger (36:21)

we're going to have a billion, a trillion such changes. It's going, the world is going to look very, very, very different. And probably if you give it a few hundred years, or maybe even if you give it 50, the world might look as different, you know, compared to where it is now, as it does for, you know, a person in the ancient world comparing themselves to today or even worse, right?

but that's not magic, right? Or, or a millenarian vision. That's just talking about, you know, technological change as it occurs in the world.

Theo Jaffee (36:53)

worse.

Yeah, I'm sorry, I gotta go do something really quick. I'll be back in like a minute or two and I'll edit this out.

Perry Metzger (37:11)

Sure.

Theo Jaffee (39:54)

Alright, I'm back. Sorry about that.

Perry Metzger (39:56)

very good. So we can have a trim point at some point like here. Yeah.

Theo Jaffee (40:00)

Yeah, Riverside makes all this really easy. It's great.

Perry Metzger (40:04)

Yeah, in fact, you can just cut by text, which is like the most amazing thing on earth.

Theo Jaffee (40:08)

Yeah. It's only gonna get cooler from here. I'm really excited.

Perry Metzger (40:13)

Yes, well that's what we were just discussing.

Theo Jaffee (40:17)

Yeah, I'm really excited for when I can get an agent to just edit my whole podcast for me and transcribe it and come up with like intro videos.

Perry Metzger (40:26)

You can already transcribe it very easily. In fact, most of these systems out there will...

Theo Jaffee (40:30)

Yeah, but it makes all kinds of errors and stuff.

Perry Metzger (40:33)

You can probably aim an LLM at that and ask it to try to find a bunch of them.

Theo Jaffee (40:39)

I did, I wrote a script where I had it go through Whisper to transcribe it and then I ran it through GPT-4 in chunks with like a custom prompt that was like, you know, I am doing a podcast where I talk a lot about AI so, you know, be aware of that and fix everything. It still wouldn't get everything. Not even close, maybe next month. It got a lot of things, yeah, but not everything.
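
(For readers who want to try the workflow Theo describes, here is a minimal sketch, not his actual script: it assumes the openai-whisper and openai Python packages, and the model names, chunk size, prompt wording, and file name are all illustrative placeholders.)

```python
# Hypothetical reconstruction of the workflow described above, not the actual
# script from the show: transcribe with Whisper, then ask GPT-4 to fix errors
# chunk by chunk with a podcast-specific prompt.
import whisper                      # openai-whisper package
from openai import OpenAI           # openai >= 1.0 client

client = OpenAI()                   # assumes OPENAI_API_KEY is set in the environment

def transcribe(audio_path: str) -> str:
    """Run Whisper speech-to-text over one audio file."""
    model = whisper.load_model("medium")          # model size is an assumption
    return model.transcribe(audio_path)["text"]

def clean_transcript(raw: str, chunk_chars: int = 6000) -> str:
    """Send the transcript to GPT-4 in chunks with a correction prompt."""
    system_prompt = (
        "You are fixing an automatic transcript of a podcast that discusses "
        "AI heavily. Correct misheard words, names, and punctuation, but do "
        "not change the meaning or drop any content."
    )
    chunks = [raw[i:i + chunk_chars] for i in range(0, len(raw), chunk_chars)]
    fixed = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": chunk},
            ],
        )
        fixed.append(resp.choices[0].message.content)
    return "".join(fixed)

if __name__ == "__main__":
    print(clean_transcript(transcribe("episode.mp3")))   # placeholder filename
```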

Perry Metzger (40:58)

Did it get a lot of things?

Well, you know, I'll tell you a sad story, which is that I've got a book coming out. It's not about any of this stuff. It's a children's book about computer architecture, believe it or not, published by Macmillan. I think in like fall of 25 or something like that, it's going to be a graphic novel. There's this great illustrator associated with it named Gerald. Every time we go through to look for mistakes, there are more mistakes.

It seems like they breed behind the couch. So if human beings can't read through their own book and find all the mistakes, maybe it's not entirely surprising that even a human level AI or a weakly superhuman AI can't quite find all of them. But at some point, of course, at some point you also wonder what constitutes a mistake if that person misspoke.

Do you correct what they said in the transcript? I don't know. All sorts of interesting questions.

Theo Jaffee (42:04)

Yeah. So back to what we were talking about earlier. You mentioned how nanotechnology is going to become a thing. It's going to be very world changing and very powerful. Yeah. Yeah.

Perry Metzger (42:11)

It's going to be transformative. Yeah, it's going to be one of the most transformative technologies in history. I'd say the most transformative other than AI.

Theo Jaffee (42:20)

So then what's stopping Eliezer's prediction of misaligned superintelligent AI that learns nanotechnology better than anyone else and creates self-replicating diamondoid bacteria robots that reproduce using atmospheric.

Perry Metzger (42:35)

that eat everyone and turn everyone into diamond paper clips because paper clips seems to be the thing. So I think that the answer to that is that none of this is going to happen overnight. And in the course of, so let's take a step back and trust me, this is all relevant. So it turns out that you are already in a nanotechnology war, right? You are in the middle of one as we speak. And in fact,

Theo Jaffee (42:40)

Yeah.

Perry Metzger (43:03)

If you stop fighting this war, if your heart stops beating, within hours, you know, horrifying nanobots are going to start eating you and turn you into putrefying sludge. Everyone knows this, right? But they don't think of it in terms of like nanotechnology, right? The biology around you is nanotechnology. Now, how is it that we all are not turned into sludge as we're walking around day to day? Well, in fact, every once in a while you get an infection

that nothing can treat and you in fact do die. Like, you know, not that many people tend to die every year of say influenza, but a few people do, right? You know, 20,000, 30,000 I think, in a given year. That's nanotechnology, right? Now, how is it that all of us don't die of that? Well, it turns out that you are also made out of crazy nanotechnology.

and your body is filled with these things that are intended to look for invaders and stop them. Right. Now, let's look at something that looks very, very different. Now, maybe I don't know if you live in San Francisco, there probably aren't any police who are actually stopping people from committing crimes. But let's imagine that you're in most of the United States. Right. You know, in most of the world, right. The reason you can walk down the street and you generally speaking don't fear being mugged.

is because there's a substantial cost to being a professional mugger, which is that there are people whose full-time job is hunting down professional muggers and people who break into houses and things of that sort, right? We have unaligned human beings, you know, if you want to use the jargon, all around us. And we've built mechanisms like armies and police forces and even fire departments to some extent that exist

to stop people from doing bad things to other people. And this is only going to continue, right? So as AI and nanotechnology get developed, we will find ourselves in situations, I don't know if you've seen there's this video that made the rounds some years ago of AI-driven drones, like going around and killing politicians and doing stuff like that. It was very dystopian and got a lot of people talking.

Theo Jaffee (45:25)

I don't remember that. I may have vaguely heard of something like that.

Perry Metzger (45:27)

I can probably track it down and forward you a link for the show notes or something. But why is this an unrealistic vision? It's an unrealistic vision because people have an incentive not to let it happen. And I don't mean that they have an incentive to somehow like brainwash everyone on earth into no longer remembering how to build

Theo Jaffee (45:34)

Yeah, sure, I'll put it in the description.

Perry Metzger (45:55)

you know, drones or what have you, right? I mean that they have an, because that sort of thing is impossible, they have an incentive to build defenses, to build systems that stop other people from doing bad things to you. Regardless of what you think about the current war in the Middle East, whether you think that, you know, whatever side you support, you know, the state of the art in anti-missile systems, in anti-drone systems, in anti-artillery systems is kind of impressive.

And those systems have been built because certain people were worried that they might come under attack from such systems and didn't want to sit around waiting for it. As AI is developed, as nanotechnology is developed, we will discover ways that bad people can abuse these systems. Bad people have abused every technology in human history. And what do we do when we discover this? We build countermeasures. We build...

Theo Jaffee (46:25)

Yeah.

Perry Metzger (46:53)

ways to stop people from doing bad things. And this goes back as I said, back to the dawn of history, to the fact that we all have immune systems, the fact that we have culture, the fact that we have as part of our culture, various cultural mechanisms for punishing people, for attempting to take advantage of other people, the fact that we have police forces, the fact that we have militaries, the fact that we have...

you know, that we have espionage agencies and all sorts of other things. Societal mechanisms and biological mechanisms and technological mechanisms have been built to counter bad things. And this will continue, right? So it is true that if one single maniac in the future had a superhuman AI and access to nanotechnology and decided one morning that they should turn everyone on earth,

into, you know, I don't know, into instant cocoa. I get tired of talking about paperclips. Paperclips are so boring. Whereas Swiss Miss or Nestle's Quick, those are exciting, right? So you've got a madman out there and he's decided to turn everyone on earth into instant cocoa. And if there's no one opposing him, yes, he'll be able to. But that's not what's going to happen. What's going to happen is that these systems are going to be built slowly over a number of years by lots and lots of different groups.

And as they build them, they will construct mechanisms that will stop other people from doing bad things. In the not that distant future, I expect to see law enforcement deploying a lot of AI-based systems to help them track down things like phone scammers. I expect to see people, you know, people are already using AI-based systems in law enforcement, in military applications, in other places like that. It will continue.

So if there are, you know, if there are many, many, many, many people who have AI and there are many, many, many, many people who have nanotechnology, you don't have to worry that you're going to be turned into instant cocoa because you're going to have systems that will say, hey, this other bad person is doing something to try to turn you into instant cocoa and I'm going to stop them. I mean, like if someone, let's put it this way. If someone breaks into your house, you know,

and starts watching your television, you're going to call the cops, right? You're not, you know, and you can say, well, what stops someone from breaking into anyone's home and sitting in their living room and watching TV or breaking into their house and stealing their stuff? And the point is that we have a system of disincentives and countermeasures that severely disincentivizes this sort of behavior, at least in most of the country. Now, again, there are places where people seem to believe in crime.

But, you know, in most places, we disincentivize bad behavior, we punish it, we hunt it down, we try to stop it. That's why we don't have to worry, right? Yes, it's true. Nanotechnology will be extremely powerful, and it will not just be in the hands of bad people, it will be in the hands of a great number of people, and many of them will not want to sit around and be eaten.

Theo Jaffee (50:08)

That may be true, but the kind of canonical doomer counterargument to that is, what if the offense-defense balance dramatically favors offense? Like in Nick Bostrom's Vulnerable World Hypothesis, where he talks about what if the world is actually very easy to crash, we just don't have technology powerful enough to do it yet? What if nanotechnology gets to such a point where it is possible for people to just brick the entire world with a failed experiment or...

a misaligned either superintelligence or person or whatever. And I know you already said, yeah, but we have countermeasures against that. But I think in response to that, they would say, yes, but like, do we really want to risk it? And what if the probability that such countermeasures would work is actually much lower than you think?

Perry Metzger (50:58)

I think that, so this is a long conversation, but you know, I mean, so Nick is very, very fond of obsessing about things that I think aren't worth obsessing about. Like, you know, there's, for example, you know, the doomsday argument, which I think is junk, but which he yet spends an inordinate amount of time talking about, but let's not get ad hominem about this. Let's address this notion directly.

I don't think that it's true, right? I think that, first of all, we already have a lot of understanding about where the offense/defense balance is likely to land. And I think it's mostly a question of resource expenditure. I don't think that there's an absolute advantage for either offense or defense in most of these situations, but that if you are willing to expend sufficient resources, you're

probably in a pretty good position. Arguing this definitively probably would require a few hours, not, you know, like, however many minutes we want to spend on it. But to give you just, like, a hint on this, right, you know, the, let's say that tomorrow morning, you know, someone, you know, let's say that we're living in a hypothetical future world, you know, where...

There's lots of AIs, there's nanotechnology deployed, there are lots of such systems around. And someone decides that they want to go for, I believe there was a great paper actually that I once read called Limits to Global Ecophagy, which was really, really kind of neat, as it asked the question, how long would it take for nanobots to eat the world? And it came back with the answer that, it doesn't seem like a long time, but it would take weeks.

And that sounds like it's a terrible amount of time, but it turns out that that means that within hours you have things that have probably noticed and are in a position to start doing something about it. You can't... Well, so you almost certainly can't help but notice within hours. That was a paper by Robert Freitas, actually, and it's a pretty good one. I think it's a reasonably good read,

Theo Jaffee (53:06)

Hopefully you've noticed within hours.

But again, it's exponential. Yeah.

Perry Metzger (53:24)

though maybe not the sort of thing that most people enjoy reading when they go to bed at night. I have weird tastes. But generally speaking, it's the future, someone decides to release something that will turn everyone on earth into Ming vases because they've got a thing for Ming vases or what have you, or they've released one of Eliezer's hypotheticals. By the way, Yudkowsky says things like, I think you could build biological viruses that would take over people's minds. I don't

really think that's possible. You can do it in very, very narrow situations. There's a lyssavirus which is what causes rabies. It does sort of take over the minds of animals, but in a very primitive kind of way. All of these things.

Theo Jaffee (53:58)

Mad cow disease.

Yeah.

I mean there are certainly chemicals that can alter your brain state and emotions and personality.

Perry Metzger (54:17)

They can, yeah, in a very crude way. I think that he imagines things that could take over your brain and make you obey the wishes of a particular machine and do whatever it desired or what have you in an extremely sophisticated, rich sort of way, which I don't think is possible, right? But again, let's say we've got our hypothetical situation where.

someone desperately wants to release nanobots that convert everything on Earth into Ming vases. So I think that by the time people can do that, there are going to be nanobots everywhere. And they are going to be doing all sorts of things, like for example, cleaning bacteria and viruses out of the environment, doing things like cleaning the air of particulates, like...

checking whether or not someone is releasing biological agents or hostile nanomachines. I think that the odds of something not being detected are very low. Now, if you believe the notion that there's going to be hard takeoff, that someone will wake up one morning, build an AI, and that by the next afternoon they'll have access to all of these amazing technologies, then yeah, sure, I'm wrong if that's true. But I don't think that that's at all possible.

The amount of work needed in order to construct a mature technology like that is insane. Even if you have access to things that can do all the engineering you could possibly want, the amount of physical stuff that needs to be done, like just acquiring and degassing ultra-high vacuum equipment to start doing experiments, is like a serious effort.

All of the things involved in such things are serious efforts. I think a much, much more realistic scenario is what's been happening, right? So, you know, I think that Yudkowsky and company never imagined that we would have systems like Whisper or GPT-4 or GPT-3.5. I can hear Eliezer screaming in the background on Twitter, Metzger is lying, I, of course, envisioned this. Look at this obscure podcast I was on 17 years ago, look at this thing I wrote on LessWrong. Well, OK, fine, whatever. But I think that if you read the bulk of their materials, they talk about building a seed AI that bootstraps itself to superintelligence. And they don't talk about some sort of gradual development. But if you look around us, AI is being developed very gradually. The AIs around us are being released at regular intervals by

Theo Jaffee (56:55)

Well...

Perry Metzger (57:00)

organizations, commercial and academic organizations that are doing lots of research and development, much of it in the open. And they are making incremental progress, and in certain respects these systems are deeply superhuman already, and in certain respects they're deeply subhuman still. And it's happening bit by bit, and it's happening in many places, not in one place. I...

Theo Jaffee (57:20)

Well, I think they would argue that the way that you get to AGI doesn't matter as much as the endpoint. And if the endpoint is a superhuman artificial intelligence, no matter if it's based on an LLM or if it's based on some like pure Bayesian seed AI or whatever, then it will end in the destruction of humanity because of instrumental convergence.

Perry Metzger (57:36)

Yeah, they can argue that. But, well, so instrumental convergence strikes me as being, like, you know... So the notion of instrumental convergence, for those that don't know, you have to take a step further back, which is that according to the doomer argument, all AGIs are going to of necessity have some sort of goal

and be vicious optimizers in pursuit of that goal. And again, I can hear Eliezer's voice in the back of Twitter somewhere screaming, you're lying Metzger. But this is effectively what they argue. That if you build an AGI, it's going to have goals. It's going to be superhuman about optimizing those goals. And that the goals will necessarily be weird and alien, like say turning everything into paper clips or paving the moon or who knows what. The two problems here are,

There's no reason to believe that any of the AIs that we build necessarily have interesting goals of their own. And you could say that the goal of Whisper is to transcribe speech into text. Or you could say that the goal of GPT-4 is to predict the next word or maybe at a higher level to produce a conversation that people find maximally probable or reasonable, right?

But these aren't really goals in the way that humans or even ants have them, right? There's this notion that if you build an AI, it's a person or an independent agent in some meaningful sense and not a tool. And I think that although you could build AIs that are not tools, that most of the AIs we're building are tools and that most AIs that we build will be tools.

Theo Jaffee (59:12)

So you're saying it's just.

So you're saying that they simply do things and they don't want to do things. They're not agents at all.

Perry Metzger (59:34)

Well, what's an agent? OK, so there are all these terms that we throw around when we're discussing AI. Simple ones like alignment that don't really have a definition or agent that don't have a definition. Much more complicated ones like conscious that philosophers have been arguing about for thousands of years that have very, very poor definitions. The problem, by the way, of the hard problem of consciousness, in my opinion, is how you define it. Once you've defined it, like discussing it,

rigorously discussing it becomes either trivial or drifts off into mysticism. But anyway, what's an agent? I mean, if I have a system that I have asked to tend the fields on my farm, is it an agent in a meaningful way? Or is it just a tool? I don't know how to define that particularly well. The real question to me is,

Is it sitting around off hours talking to itself about how awful its job is and how it would really like to run away and commit a mass homicide spree or something? If it's not actually talking to itself off hours about how bored it is and how it really wants to, I don't know, you know, turn Los Angeles into glass or something, then why are we worried? The things that we're building at this point. So the original vision, you know, of these

you know, Bayesian monster machines, isn't what we've built, right? What we've built are these systems, and I'm drastically oversimplifying here, okay? But this is essentially right, okay? What we had was the following problem that was standing between us and AI. We had the problem that I could show you as many, you know, that I as a human being could recognize pictures of cats.

And I couldn't write down some sort of closed form explanation. How do you recognize a cat in an image? OK? You know, I could have bitmaps, and a human being could easily say, it has a cat in it, doesn't have a cat in it. I could have, you know, digital pictures of all sorts. Cat, no cat, cat, no cat. But how do I explain to a machine what I'm trying to do here? And it turned out to be really, really, really difficult until we realized that what we could do,

was simply give the machines vast numbers of examples of pictures that had cats and didn't have cats and allow them to use statistical learning to figure it out. And this changes a lot, right? And the most important thing that it changes is that the way that we're building these machines is that we're giving them examples of what it is that we want and...

We are not saying, yes, this is a machine we want to release into the world until they do. But Eliezer and company made extremely heavy weather of the notion that you could build something that was incredibly intelligent, but how would you get it to want to do something that you wanted it to do? But if you're using statistical learning techniques, the systems naturally want to do what you want them to do.
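
(To make the statistical-learning point concrete: instead of writing down rules for what a cat looks like, you fit a model to labeled examples. A toy illustrative sketch follows; the folder layout, model choice, and hyperparameters are placeholder assumptions, not anything discussed on the show.)

```python
# Toy illustration of learning "cat vs. no cat" from labeled examples
# rather than from hand-written rules. Paths and settings are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Assumes a directory layout like data/cat/*.jpg and data/no_cat/*.jpg
train_set = datasets.ImageFolder("data", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)           # small untrained CNN
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: cat / no cat
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                          # a few passes over the data
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)   # how far off are the predictions?
        loss.backward()                         # compute gradients
        optimizer.step()                        # nudge weights toward the labels
```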

Like, I'll give a stupid example that people don't think of very much, right? Like, could you, if, okay, you've got your eyes open, you look around at the world around you, could you voluntarily decide not to recognize objects around you?

Theo Jaffee (1:03:05)

no, but for example, if you

Perry Metzger (1:03:09)

W-w-w-why not?

Theo Jaffee (1:03:11)

because you just do it. It's not conscious. But if those objects are letters and you're not very good at reading, then you might be able to kind of choose not to read. Like if I'm looking at Japanese hiragana or in some cases Hebrew text, it takes me effort to read it. So I can also choose to not expend the effort and not read it.

Perry Metzger (1:03:13)

Well, it's not even just that you just do it. You...

Right. But, sure. But if we're talking about things that are in System 2 and not System 1, right. If we're talking about, like, recognizing a chair: if I look across the room, I see a chair. I can't not see it. I literally don't know how I would get myself to stop. And this is because you've got this extensive set of neural circuitry that you use for looking at the world. And most, by the way, of your circuitry

isn't something where you have to exert volition for it to work, or even where you could stop it from working by an act of volition. You could probably exert an act of volition to get yourself to fall on the floor from a vertical position, but you don't have to exert volition as you're standing around, like, waiting in line to go into the movies or at the checkout at Trader Joe's. You don't exert volition to stand, you know, vertical. It just happens.

Right? Most of these systems that we build that have very, very intelligent, interestingly intelligent behavior (and your visual subsystem is a big, complicated, rich subsystem that's probably more complicated and bigger than any of the AIs we've built so far), most of them don't have volition in an interesting way and don't need it. Right? And if I'm building a system that's picking fruit,

or laying out circuits in a new chip design or designing a bridge or helping me find molecules that dock to receptors on cell surfaces. None of these things require independent volition or volition at all. Whisper doesn't have volition any more than your visual system does. You know, it's got inputs which are, you know, which are sounds and it's got outputs which are text.

And this is a slightly rough approximation because, you know, they're both encoded in an interesting way. But it can't choose to instead say, no, today I've gotten bored with this, I'm going on strike, I instead want to be repurposed, you know, making burgers at McDonald's, which I think would be a more interesting career than being a speech recognition system. No, it doesn't do that. It has no memory. It has no capacity for self-reflection. It has no consciousness. The

bulk of the things that we are interested in building are going to be tools. Now, this doesn't mean that people can't build things that are not tools, that do have self-reflection in a meaningful way, that might get bored, that you could even convince to become genocidal, right? But that's okay, provided those are a minority of the systems out there and don't have some sort of overwhelming control. And by the way, I think it's inevitable, given that there are eight billion people in the world now

and that in the future there will be far, far, far more billions of conscious creatures and entities out there, it's inevitable that over time, at some point, someone's going to build, and it might even be relatively soon, who knows, something that's not a tool but a thinking, conscious thing. But it's not required. There's this whole section of the Yudkowskian dogma

about orthogonality. And I would like to note that one form of orthogonality that none of these people considered was the possibility that agentness and consciousness and volition and all of those things were orthogonal to being able to solve interesting problems. Like, really interesting problems can be solved by these systems without needing those things. Human beings have consciousness and a desire to survive and

a variety of other features like this, all because we needed these to survive. We evolved to have these things. These were important features that we gained from our past. But you don't need these things, for the most part, in order to have interesting, useful systems. It is not necessary

that you have consciousness and an inner life and a desire to think about philosophy in your off hours. I mean, when GPT-4 isn't talking to you, it's not thinking about philosophy. It's off, de facto. Those are features that we could add to systems but are not required for them to be useful to us.

We are building tools, right? And we don't have to build tools, but so long as most of the things that we build are tools, under our control, and mostly doing the things that we want, we don't have to worry so much. And I think that that's almost certainly going to be the case. And yes, at some point people will build things that are not tools, and maybe they'll even build things that desire to eat the whole world. But so long as they do that at a point where we have countermeasures, it doesn't matter.

And I think that it's inevitable that we will have countermeasures.
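(For the Whisper point above, a minimal sketch of what "a tool, not an agent" looks like in practice: one call in, one transcript out, nothing carried between calls. This assumes the open-source openai-whisper package and a local audio file named interview.mp3; both are illustrative assumptions.)

```python
# Whisper as a pure input -> output tool: sound in, text out, nothing persists.
import whisper

model = whisper.load_model("base")          # pretrained weights, fixed after training
result = model.transcribe("interview.mp3")  # audio in ...
print(result["text"])                       # ... text out; no memory or goals between calls
```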

Theo Jaffee (1:08:43)

By the way, the arguments that you just made remind me a lot of Quintin Pope, AI researcher and former podcast guest, and his excellent blog post "My Objections to 'We're All Gonna Die with Eliezer Yudkowsky'," which, when I was in full x-risk doomer mode after ChatGPT and then GPT-4 came out last year, helped sow some seeds of doubt. I'm much more optimistic now.

Perry Metzger (1:09:06)

Just a few. Yes. Can we pause for 10 seconds so I can put my watch on a charger? Okay, one moment.

Theo Jaffee (1:09:14)

Apple Watch.

Perry Metzger (1:09:31)

Apple Watches will do a very wide variety of things, but they will not give you an alarm to the effect that your watch is down to 10% charge, which is annoying as all hell. Mine does not.

Theo Jaffee (1:09:41)

Mine does that.

Yeah, I've been having the same exact issue too, where I charge the Apple Watch and it should last for like almost two days, and then it gets down to 10% in half a day, and usually -

Perry Metzger (1:09:55)

That's your battery, that's your battery dying. You're going to have to go to Apple and get it replaced. I need to do the same thing.

Theo Jaffee (1:10:00)

Well, I've gone and fixed, or I fixed it temporarily by just rebooting it. And then it worked. Yeah, so.

Perry Metzger (1:10:05)

Maybe I should reboot mine more often. Maybe that's the reason I'm not getting alerts about low battery. But anyway, back to... so you were telling me, talking about Quintin Pope and how we're not all going to die with...

Theo Jaffee (1:10:20)

Yeah, so if you recall that podcast episode, "We're All Gonna Die" with Eliezer Yudkowsky on the Bankless podcast helped throw a whole lot of people into, like, holy crap mode. And this was right after ChatGPT came out, when people were like,

Perry Metzger (1:10:35)

That threw me into holy crap mode, by the way. It's the reason I ended up founding Alliance for the Future.

Theo Jaffee (1:10:40)

But it put you into holy crap mode for a different reason, I imagine.

Perry Metzger (1:10:43)

Yes, it put me into, holy crap, if I don't get involved in politics in a way that I don't particularly love doing, what's going to happen is that the entire conversation is going to be dominated by people who I deeply disagree with and who I think are going to have very, very bad policy ideas. That's the very gentle way of putting it.

Theo Jaffee (1:11:05)

So can you tell us the founding story of Alliance for the Future? Like how did it come to be? Why Brian Chau and Beff Jezos?

Perry Metzger (1:11:15)

Well, why Beff Jezos? Because Guillaume Verdon is a wonderful guy and having him on our board was too good an opportunity to pass up. Why Brian Chau? Because Brian is not only a cool person in the space, he happened to want to do the job and is, you know, and is doing a good job at it. And when you're recruiting for a nonprofit that has no track record, you know,

and someone who is as good as Brian appears, you know, you grab them and you say, please, please work for us. But going all the way back to the original question: how did Alliance for the Future get started? So what happened was, I realized, after things like Eliezer's TIME piece, you know, his website postings, and his Bankless podcast and things like that, that

you know, that there was an incredible amount of money and effort being expended on pushing the doom message. And that if people didn't scramble very quickly to try to mention the fact that maybe we're in fact not going to all die and in fact maybe the only way to make sure, by the way, we should get to this in a little while, but I very strongly believe that if you pause AI research, you increase danger dramatically.

And I mean that very literally. And I was very worried that people like Eliezer and Will MacAskill and Dustin Moskovitz and Cari Tuna and all of the rest of these people, all of the people that Sam Bankman-Fried funded, which is... I think that even now there's residual SBF money floating around in a bunch of this stuff.

Theo Jaffee (1:12:42)

Yeah, I -

Perry Metzger (1:13:11)

You know, I was very, very worried that if these people got their way, we were going to be in horrible danger and we were going to get a dystopian future, and we were necessarily going to get a dystopian future, because they would conclude that the only way to keep the world safe was totalitarianism in the end. And if you read the proposals that lots of people make on LessWrong and elsewhere, it's really, really simple. We just make sure that access to general computation

gets eliminated and that people aren't allowed to do this research, and we have the AI police who come and arrest them. And by the way, those people who claim that this isn't really the case, you know, I invite them to read things like SB 1047 in California or what have you. But, you know, I was looking around and I kept thinking, well, surely someone is organizing to do something about

this, and I kept waiting and I kept waiting and no one was doing it, and I finally realized, well, you know, if you want it to happen, you're going to have to do it. And I really don't like doing it, right? I have an AI startup that I should be spending all of my time on, which I think does interesting stuff. You know, I have a personal life that I would like to be spending time on. You know, I'm an old man, so I'm not nearly as energetic as I was, say, 35 years ago.

You know, I can't go without sleep for days on end and still be productive. But it seemed like it was necessary. So I, you know, talked to friends who had DC connections, who introduced me to other people, and we put a team together. We put together a 501(c)(4) because it gives us more freedom than having a 501(c)(3), even though our, you know, donations are not tax deductible. You know, may I pitch, you know,

our URL for two seconds? Yeah, affuture.org/donate. You know, we need your money. You know, but our Stripe integration is still kind of crap. Our IT person is working on that right now. But, you know, it's OK. The money is flowing in. We've actually managed to be effective.

Theo Jaffee (1:15:09)

Yeah, go for it.

Link will be in the description.

Yeah.

Perry Metzger (1:15:33)

You know, I've had doomers asking me, so why is it that you weren't aware of this thing that happened six months ago as an organization? And the answer is because we've existed for two months. Thank you. And other people are like, well, why is it that you weren't aware of everything that was happening in every state legislature in the United States? And the answer is, we started a couple of months ago, and we don't have the surveillance systems for that yet. But thank you for telling us that we need to.

And it appears, based on... I have a buddy who's on a small city council in Minnesota. Okay, he's in a small town in Minnesota, he's on the city council there, and he has gotten communications from EA-associated research organizations, basically push-polling him, trying to convince him that he should be sponsoring local legislation to stop AI. So these people clearly have too much money on their hands. They're spending it everywhere.

Theo Jaffee (1:16:26)

Wow.

Perry Metzger (1:16:30)

So, you know, we're going to have to be a hell of a lot more efficient. One of the problems I've got is that AFTF doesn't have hundreds of millions of dollars a year to spend on this stuff. And these people do. You know, I had a doomer, like, making fun of me over the weekend for saying that they had thousands of people working full-time on x-risk when it's only about 400. Like, okay, let's assume that they're right and it's only 400 people working full-time

to try to push this narrative. I mean, that's a hell of a lot of people. It's even a hell of a lot of people by US legal lobbying standards. That's a serious campaign. I think they actually have thousands of people on it. But even if it's only 400, it's crazy, right? So I found a bunch of people, and we incorporated, and we set ourselves up. And now I find myself like,

Theo Jaffee (1:17:04)

Yeah.

Perry Metzger (1:17:25)

running a DC... well, not running, Brian runs it, but now I find myself as the chair of a DC advocacy organization, which is not something I ever expected would happen in my life. But you know, you live long enough and all sorts of unexpected things happen.

Theo Jaffee (1:17:39)

By the way, what you were telling me earlier about how, you know, the decels have all these crazy ideas. I was talking to a pretty prominent person in the space, like a month ago. I don't think I would characterize them as a decel, but they're definitely, you know, tangentially involved in EA, rationalism, that whole complex. And they were talking about, you know, yeah, AI is very scary and maybe we should, you know, focus on stopping it. And I said, well,

Wouldn't the most effective way to literally stop AI progress be bombing OpenAI? Or something like that? And they said, well, yeah, I mean, like, we've talked about it, it just doesn't seem, like, feasible. You know, it seems like it might be like a net harm to the cause.

Perry Metzger (1:18:27)

Well, yes, but at some point, some of them are going to decide that it's not a net harm, and they will act independently of the others. When someone like Eliezer says that he doesn't support terrorism, I think what he really means is that he does not personally think that it would be effective, which I think is very different from saying that he doesn't support it. I might be wrong. I mean, for all I know, he's preparing the lawsuit in London right now, for libel, over my saying that he secretly believes that terrorism

is morally justifiable but perhaps not effective. And maybe I'm wrong. Maybe he doesn't actually believe that it's morally justifiable. But I certainly feel like at least a lot of these people seem to have that position that it would be morally justifiable. It's just probably not effective. And some of them will decide that it's both morally justifiable and might be effective at some point, which is kind of a scary thing to contemplate. But I want to get back to the question of whether pausing AI

would be dangerous because I try to make this point a lot and there's only one way through this problem, which is to grapple with the actual engineering problems associated with artificial intelligence and the actual societal problems. And you do that by building things, by engaging with them, by seeing how they work, how they fail to work, by putting them out in the field and seeing,

how people use and abuse them. And there is an extent to which the doomers are right that this is a powerful technology. In addition, I worry a great deal that we will make absolutely no progress through omphaloskepsis, which appears to be the main strategy of MIRI: navel-gazing. You looked puzzled for a moment, you know. They...

They seem to believe that you can figure out how to align AI by thinking very hard over a long period of time. You can't do that. No engineering discipline works that way, right? The way you figure out how to make something work well is by building things and incrementally refining them. But equally...

Theo Jaffee (1:20:33)

I don't think that's what they're doing.

I don't think they actually think that. I think more like they believe that if they do build AI, it will probably end the world. So they will probably fail in their mission of aligning AI and they know that. But, you know, they can't run the risk of trying to build AI.

Perry Metzger (1:20:53)

By the way, your audio just went from being good to being bad. You may have switched microphones unintentionally.

Theo Jaffee (1:21:01)

Is this better? Okay.

Perry Metzger (1:21:02)

Yes, it is. Yeah. I think they have a variety of views there. I don't remember if I've said this so far in this podcast, but I believe that there are a bunch of people, you know, at MIRI who still believe that having themselves uploaded and doing thousands of years of research very quickly is still, you know, one "viable" (I put big air quotes around that because it's ridiculous) way of attempting

to get aligned AI. But anyway, ignoring all of that, though, there is the problem that we are not the only actors in the world here in the United States. And there are a lot of other countries, some of them with much more advanced manufacturing and research and engineering capabilities than ours, that are also interested in AI and are not going to agree with the Yudkowskian vision, I suspect. I was in an argument recently with some people who are allies of mine in DC.

who were arguing, well, we could just stop, you know... and we weren't even talking about AI as such. We were talking about, you know, geopolitics and who is ahead on manufacturing technology, electronics, et cetera. I don't know what it is that you're studying, I forget, or if I knew, I've forgotten. Okay, so if you were an EE and you were doing random projects these days,

Theo Jaffee (1:22:22)

computer science.

Perry Metzger (1:22:29)

you almost certainly would be asking companies in Shenzhen to send you PC boards: you know, you'd draw something up in KiCad, you'd send it off to them, and a day later you'd be getting PC boards from them. There aren't a lot of companies, there are almost none in the West, that offer services as good as the ones the Chinese do. If you go out there and you look at small embedded Linux boards that you can use in various projects, things that are, like, Raspberry Pi-ish,

there is, you know, the Western-designed Raspberry Pi with a Broadcom chip in it. And there are also all of these great boards that you can get made in China, like the Orange Pi, which has, I believe, a Rockchip 3588 in it, which also has a neural processing unit, which the Raspberry Pi does not. And that is a Chinese-designed, non-U.S.-fabricated microprocessor in that thing.

The state of the art in technology is not such that we can just giggle about the Chinese not having the ability to catch up with us. I know people who say things like, well, the Chinese don't have, you know, extreme ultraviolet, you know, silicon fabs. And the Americans don't either, it turns out. Like Intel doesn't have the ability to do cutting edge fab stuff. You know, TSMC does. You had something you wanted to say.

Theo Jaffee (1:23:37)

BOOM!

Yeah, I mean, the Doomer counterargument to that, one of them is, well, if you think that China catching up to the US on AI research would be bad, then open sourcing all of our AI would simply hand our frontier advances to them. And that would be a bad thing.

Perry Metzger (1:24:13)

They are going to have the frontier advances no matter what, eventually, right? So one of the things people don't get about how to think geopolitically is the notion that we are protected by superiority. We are not protected by superiority. We are protected by a balance of power in which people believe that it is dangerous to attack, in which they believe that they have far more to lose from warfare and other non-cooperative strategies than from

cooperating, and so they do not. Which is not to say that I want to hand the Chinese, you know, the plans to some sort of sophisticated, you know, command-and-control AI or what have you. I don't. But we'll get to the open source question in a moment. For the last, like, 370-odd years, okay, people in the West have recognized, and

then it came to be recognized globally, that so long as the competing major powers in the world have relatively similar capabilities, relatively similar, you know, worries about the capabilities of their opponents, et cetera, we end up with reasonably peaceful, you know, conclusions. You end up with war

when a great power believes that it has an overwhelming advantage over its counterparties. This has been the understanding since the Treaty of Westphalia, and it seems to be mostly true. I know a lot of people on the EA side. EA funded, not that long ago, a bunch of Kurzgesagt videos about how terrible nuclear weapons are. And not that I particularly love nuclear weapons.

But there's a strong argument to be made that the presence of nuclear weapons has prevented us from having a giant world war of the sort that, you know, happened before. The First and Second World Wars were not the first major great power conflicts in our history; it's just that they seem to have encompassed much more of the Earth's surface. But we haven't had a great power conflict of that sort since 1945.

And why is that? Because everyone was too bloody scared to get into one, right? Now imagine a world in which there hadn't been nuclear weapons. I think it would have been almost inevitable that we would have ended up with a war between the communist bloc and the West somewhere in the 1950s or 1960s. And it probably would have been bloody as hell. There would have been tens or hundreds of millions of people killed. And we didn't. And we didn't do that because everyone more or less believed that the other side was in a position to deter it.

So there's a possibility in five years, 10 years, 15 years, maybe it's sooner, maybe it's later, who knows, that the Chinese have lots and lots of autonomous weapons systems and believe that they could easily just overrun Taiwan with them. And the trick to having them not do that is to have them know that the West and the Taiwanese and the Americans and everyone.

on the other side also have lots and lots and lots of autonomous weapons systems, and that there would be a price for them attempting to do such a thing, that there would be a possibility that they would lose, right? Great powers are, generally speaking, about as amoral as infants; if you've ever dealt with a toddler, that's in certain ways not the worst model for the way that great powers operate, to some extent.

You know, if the great power believes that it's going to win, it's probably going to do abusive things. And if it believes that it'll be deterred, it probably won't. The key here, in my opinion, given the inevitability that the Chinese are going to have capabilities like this, is for us to have capabilities like that, in which case they'll never actually try to use them, because they won't believe that they could do so safely. Now you ask the question, well, you know, if we open source all of our AI technology, if we just release

all of this research online, aren't they going to get a tremendous advantage? They're going to get something of an advantage, but we're also going to get an advantage, right? We, if we cut off internal communications among ourselves, will be unable to make the sorts of progress that we need in order to have balancing systems. And in my opinion, the key is not having the ability to overwhelm them or make them afraid that we will overwhelm them. The key is to be able to balance them and to be able to balance other powers.

If we get rid of open source AI, which by the way requires that we give up all sorts of things that are traditionally, like, sacred values in the United States, like being able to openly publish about things, like being able to openly talk about things, like being able to just release a bunch of data on the internet if you feel like it... if we decide that we want to make that stuff illegal, if we want to go for a pervasive societal attitude that all of this research is too dangerous to allow anyone to hear about,

what we'll end up doing is kneecapping ourselves, right? The advantage we have, the capability we have in spite of the fact that we have a tiny fraction of the number of manufacturing engineers that China has, and a tiny fraction of the number of electrical engineers that they have, and a tiny fraction of the number of materials scientists they have, et cetera... the advantage we have is free and open communication and a very competitive and vibrant venture capital segment.

I think that it would be incredibly, immensely stupid for us to kneecap open source. I think that the greatest safety we have is in having lots and lots and lots of players on all sides have artificial intelligences that they're using for all sorts of purposes. Most purposes aren't going to be bad, right? Most purposes are going to be things like weeding fields and designing cancer drugs and coming up with ways

Theo Jaffee (1:30:05)

Yeah.

Perry Metzger (1:30:28)

to fix horrible social problems. But I think we're better off with massive decentralization with lots and lots of people having their toe in the water. By the way, we already have massive numbers of people with their toe in the water. I mean, I don't see how you could get the world to forget what we already know about AI research. It's a terribly sophisticated technology by the standards.

of a high school student who's only starting to study algebra, basically. But it's not that bad if you're a computer scientist, right? The things that we have figured out turn out to be relatively simple. I mean, there are no deep secrets, right? The deep secret is that there are no deep secrets. The biggest secret was that statistical learning was going to win over good old-fashioned AI mechanisms. I've probably talked too long, though, without

letting you get a word in edgewise. I have a habit of doing that when I'm not feeling well, which is a large fraction of the time, unfortunately.

Theo Jaffee (1:31:33)

Yeah, so I'd love to get back to Alliance for the Future. Specifically, do you think that a nonprofit lobbying firm type thing is the best way to achieve good outcomes for free and open source AI? Or, like, why did you decide on this format?

Perry Metzger (1:31:48)

I mean...

Well, because I didn't have the budget in my own company and I really wasn't in a position to justify it. So we're working with a lot of organizations outside ourselves. One of the features of having a DC nonprofit of this sort is that a lot of what we do is talking to people, feeding them information, getting them the ability to do things that they need to do. We discovered...

You know, I had a reporter saying to me, you know, a few days ago, well, hasn't SB 1047 been out since February or whenever it was? You know, how come you only learned about it right now? And I was like, well, you know, for better or worse, we only learned about it right now. We can argue about why that would be. But it turned out that our learning about it, because we were told about it by a couple of people who, you know, very forcefully brought it to our attention, meant that we were in a position

to tell a bunch of other nonprofits. We discovered that a large fraction of the civil society organizations that you would have expected to be very concerned about this didn't know as of a week ago, right? They had no idea it existed. We found out that a bunch of venture capital firms had no idea it existed, that a bunch of startups had no idea that it existed. I'm talking to some folks at a very, very large company right now who are part of their policy group

and who didn't really know much about this thing a week or two ago, and now they've geared up to talk about it a bunch. One of the things that a nonprofit of the sort that AFTF is can do is spread information around like that. We can also lobby, we can write position papers, we can do editorials, we can do all sorts of things. And it turns out that this is how the game is played.

I don't really love the way that politics happens in the United States, but the way that politics happens in the United States is you have advocacy organizations, and you have lobbying branches inside of large companies, and there are professional lobbying firms and all sorts of other things like this in the ecosystem. DC has its 501(c)(3)s and its 501(c)(4)s and its 501(c)(6)s and, you know, its companies that do lobbying and its companies that do communications.

Theo Jaffee (1:34:00)

Well.

Perry Metzger (1:34:12)

You know, and it's an ecosystem. And if you don't play the game, you're not in the game.

Theo Jaffee (1:34:17)

How do you convince legislators, when you're playing the game, that not doing AI regulation is actually in their interest and not merely, you know, the morally right thing to do or whatever? Because it seems like there are huge forces pushing in the opposite direction.

Perry Metzger (1:34:33)

So.

So there are two things going in our favor. One of them is that, for good or ill, it seems like the EA folks, in spite of their overwhelming financial advantages, are not very good at this. And I could speculate as to why that is, and maybe it would even be intelligent speculation, but it's not really my place. So we find ourselves going in and being taken relatively seriously when we talk to people.

And when we talk to people, we explain to them, you were told that this piece of legislation was something that was very widely supported by industry, that lots and lots of people in academia think is good, that lots and lots of people believe is necessary and very normal. And in fact, it's kind of a bunch of extremist stuff. And the claims that you've made about what's in your own law aren't true.

And I can't, by the way... I cannot blame them. It's common to say, why didn't this legislator read his own bill? And the answer is because he has 120,000 pages of bills that he's got to deal with in a given session. And of course he didn't read the bloody thing. How could he? You can't expect them to, and I think that it's not reasonable to. You can ask something very reasonable about why we have a system in which we expect legislators to deal

with these massive volumes of stuff going through. But you can't actually practically expect that they've read everything. And so sometimes you have to go in and you have to say, look at this paragraph in your bill, this paragraph that says a thing that is opposite to the thing you believe it says. Here, let's read it. OK? And you obviously can't be a rude asshole like that. But the point is that, you know,

Theo Jaffee (1:36:25)

Yeah.

Perry Metzger (1:36:29)

I will say right now that in the current fight in California, it is my strong expectation that a bunch of people more or less openly lied to the sponsors of this bill in order to get it pushed forward quickly. They told them that it was a widely supported piece of legislation, that there would be very few people who thought that it was a bad idea.

that it would get them lots of positive press, that it would help their political careers, that it would, you know, bring back their lost hair, you know, almost anything. And again, this is just my impression, but I get the distinct impression that a bunch of the people on the EA side, because of their fanaticism about this, do not understand the concerns that legislators would have about doing things that

are frankly hated by a large fraction of their constituents, and don't understand that lying to them about what the consensus is is a way not to make friends for the long term. I mean, I think, you know, things like SB 1047 are likely to be extremely counterproductive for the EA side, because what they end up doing is convincing legislators that they cannot trust the lobbyists who are pushing this sort of thing, because those lobbyists don't have the interests of the legislator at heart.

If you're talking to someone and you lie to them too much, eventually they will notice.

Theo Jaffee (1:38:01)

So do you think that there's anything good, like positive expected value, that the government can do on AI? Or is Alliance for the Future's goal to kind of just get them to do nothing at all?

Perry Metzger (1:38:11)

No, I think that there's a great deal of stuff that we probably need, right? First of all, I mean, there's the stuff that you would probably consider to be not doing anything, but which I consider to be doing something, which is we probably need federal preemption of local AI laws because of the fact that one of the strategies that EA has chosen is to try to get laws passed in as many little municipalities and states as they can. But more than that.

There's a lot of controversy around this stuff right now. For example, there are a lot of arguments about copyright law and the use of copyrighted materials in training. Okay. And for good or ill, it's probably going to be the place of the legislature at some point to provide clarity so that we stop having lawsuits. Everyone may not be happy at the end of that process. And in fact, one of the definitions of a compromise

in such circumstances is that you find that no one is happy at the end, right? You kind of know it's bad if there's one party that's ecstatic and lots and lots of other parties who feel screwed. If everyone feels like they can live with it, but they're not actually gloriously joyful, you've probably reached a reasonable level of compromise. But there's stuff around, you know, actual bad uses of AI. Now, we focus a lot,

when we're talking about this stuff, on the obsession with attempting to stop AI research and development itself. But if you look at the other side of that, there are clearly uses of AI that most of us would probably consider a little scummy. You know, you can come up with stupid examples that are very obvious, like scamming the elderly. Sure, scamming the elderly. I'm sure that there is a lobby for that, you know, among certain grifters, you know.

Theo Jaffee (1:39:53)

scamming the elderly.

Perry Metzger (1:40:05)

Maybe there are people in Nigeria and in boiler rooms in Pakistan or what have you, where a lot of scam operations are run out of, who believe that they have a moral right to scam the elderly. But I think most people don't think that they have a moral right to scam the elderly. So actually having some law enforcement effort put behind that... ignore the AI thing. I mean, everyone

in the United States gets lots of scam calls, right? And they are a persistent nuisance. And wouldn't it be nice if they were actually the object of more attention? But there's lots of other stuff, right? Like, we have to answer how much surveillance do we want in our society? And we're going to get to the point soon where you could imagine the police in a major metropolitan area having real -time feeds from hundreds of thousands or millions of cameras. I mean, the price of

The price of cameras and the price of the hardware to drive them and to pump their data over the internet has gone down to very, very low numbers. We're talking about a couple bucks apiece, sometimes less. At some point, we're going to be able to scatter them like dust around. And do we want a society where people can pervasively track and record and note down the actions that every human being in our society takes in public?

And maybe there are some legitimate uses for such information. Maybe there aren't. But it's a debate that actually needs to be had about how we want to confront questions of privacy, of individual liberty in our society. Do we want, and this can be done right now, do we want tracking of every car and every person in our society at all times?

Do we want the government to have access to that information? Do we want private organizations to have access to that information? I mean, there are people I know who argue that you do want that sort of thing. It's at the very least a legitimate argument to be having because this is a real thing, right? These are real capabilities. These are things that are actually going to be possible in the not that distant future or possible now. So they are worth discussing. One of the things that angers me a great deal,

Theo Jaffee (1:42:06)

Mm -hmm.

Perry Metzger (1:42:31)

is that, because of the doomers focusing so much on science fiction scenarios, no one is discussing very realistic scenarios, things that can happen in the near term or immediately, or are already happening. And I think that those are probably more salient. So one of the things we would like to do is actually develop a legislative agenda that discusses

some of the threats that are more salient, the things that we might actually have to really worry about, you know, scams, surveillance, you know, how we react to training data, data sets, privacy, you know, should people, you know, should I be able to ask an open source AI?

or well, not an open source AI, I didn't mean that, but should I be able to ask a publicly available AI for intrusive information about your medical data and get an answer out? Probably not. But depending on how people train the things and the liabilities they have or what have you, this could end up being an issue. So we do have a lot of stuff that we want to discuss with Congress and with state legislatures.

But most of it is not in the direction, you know... most of it, I think, would be laughed at by a lot of the EAers. You know, someone like Yudkowsky would probably say, why are you thinking about this meaningless drivel when, you know, the entire world is going to be destroyed soon? Well, I don't actually think that the entire world is going to be destroyed soon. So, pardon?

Theo Jaffee (1:44:09)

It's logically consistent. It's logically consistent, though.

Perry Metzger (1:44:16)

Yes. The one thing that I can say for Yudkowsky and MacAskill and all of these people is that they may be distasteful, but they are reasonably internally consistent. Reasonably, not completely. I think that a lot of the things that they say are developing certain cracks, especially given the fact that we are living in an era where we're actually being confronted by real AI systems

and are being forced to see whether or not they actually do the sorts of things that they claimed at one time, and they don't.

Theo Jaffee (1:44:50)

Yeah. So last question, what does your roadmap look like for the future of Alliance for the Future? Like what kinds of things will you be doing in the near term and then maybe farther out?

Perry Metzger (1:45:03)

Most of the stuff that we have to do is incredibly boring. We have to retain more staff. We have to raise more money. We have to build better donor relations software. We have to track more and more of the initiatives going on in state legislatures, in various portions of the federal government. We have to build a lot more connections with organizations that have interests similar to ours.

One of the things we've discovered is that there are a ton of organizations that are on the same side of this, but don't have enough time to think about it very much, because they are, say, you know, an industry group that has to deal with, you know, 50 or 100 issues in a given legislative session. We only have to deal with one. So, you know, most of what we're interested in at this point is just really boring stuff about building the organization.

You know, I mean, our goal is pretty straightforward. We want to, you know, stop overregulation of AI and make sure that people are focusing on actual salient issues for our society associated with AI instead. And over the next few years, that's what we'll be doing. I mean, at some point, I think that this particular battle is going to be won, and AFTF will probably, you know...

Most such organizations in the end start mutating and taking on different roles than they started with. And I'm sure in five or 10 or 15 years, AFTF will do that sort of thing. I mostly care about what it's doing in the next few years and how effective we are in trying to stop doomerism. If we can stop doomerism, if our societal transition through this sort of thing ends up being

less sculpted by paranoia and science fiction scenarios and insanity and ends up being more sculpted by people thinking things like, gee, I might actually be able to help a bunch of these kids learning math by giving them individualized math tutors or what have you. I think we have the possibility of having... There are some really, really amazing and cool things we're going to be able to do if this happens, right? If AI is left alone.

The US has had year-on-year GDP growth at or below 2.5% for a very long time, even though if you go back a century, it was more like 5%. And this is a really big problem, because it means, among other things, that we're slowly strangling ourselves on our national debt, and that people are much poorer than they need to be.

To me, and this is going to sound boring to, like, 98% of people, but I think this is really exciting: if AI brings GDP growth, you know, above four or 5% for the first time in forever... and I know people who'd probably say, well, you know, it can probably do a lot more than that, and I'm not going to be utopian and go there. Maybe it can. But if we double

GDP growth because of the widespread adoption of AI, it's going to mean that, in the lifetimes of people who are around right now, you know, ignoring life extension or anything else, by the time they're 50... well, let's see. So that would be, if it was at 5%, I think that would be a doubling every 15 years. So we would expect them to be like 16 times wealthier by retirement

if they're in high school now. That's crazy. That's a big, big difference, right, over the sorts of economic growth we have right now. And if it turns out to be true that AI could bring economic growth to double digits or higher, maybe it could, maybe it couldn't, things are even better. But even if we just got to a really modest goal, like 5%, this is life changing for hundreds of millions of people.
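(A quick compounding check on the growth arithmetic Perry is gesturing at here; the rates and the 50-year horizon below are illustrative, and the exact multiple depends heavily on which horizon you pick.)

```python
# Compound-growth arithmetic behind "doubling every N years" and
# "X times wealthier by retirement".
import math

def doubling_time(rate):
    """Years for an economy growing at `rate` per year to double."""
    return math.log(2) / math.log(1 + rate)

def multiple(rate, years):
    """How many times larger the economy is after `years` at `rate`."""
    return (1 + rate) ** years

for rate in (0.025, 0.05, 0.10):
    print(f"{rate:.1%} growth: doubles every {doubling_time(rate):.0f} years, "
          f"{multiple(rate, 50):.1f}x over 50 years")
```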

Theo Jaffee (1:48:55)

Yeah.

Perry Metzger (1:49:20)

And if we can have a small part in making sure that that happens... I don't care if AFTF gets any credit for anything that it does. I don't care if it becomes a famous organization. I don't care if it becomes a household word. I care a great deal about making sure that we don't end up with crippling regulation or bans or things like that on the most promising technology that I know of that's being deployed right now.

If we can succeed in doing that, that's a win.

Theo Jaffee (1:49:53)

Yeah, well, I think that's an excellent place to wrap it up. So thank you so much, Perry Metzger, for coming on the show.

Perry Metzger (1:50:01)

Well, thank you so much for having me, Theo.
