Theo's Substack
Theo Jaffee Podcast

#1: Greg Fodor

AI, knowledge acceleration, aliens, and VR.

Transcript

Note: This transcript was transcribed from audio with OpenAI’s Whisper, and edited for clarity with GPT-4. There may be typos or other errors that I didn’t catch.

Intro (0:00)

Welcome to Episode 1 of the Theo Jaffee Podcast. Today, I have the pleasure of speaking with Greg Fodor. Greg is a software engineer who's been involved in augmented and virtual reality for over a decade. He co-founded AltspaceVR, a virtual events company acquired by Microsoft, worked on Mozilla Hubs, a browser-based VR community platform, built Jell, a virtual environment video game for working, and is currently working on a new extended reality hardware stealth startup. On his Twitter account, @gfodor, he tweets about AI, VR and AR, unidentified aerial phenomena, aliens, and the philosophy of knowledge, all topics we discuss in this episode. This is the Theo Jaffee Podcast. Thanks for listening. And now, here's Greg Fodor.

Superconductors (0:46)

Theo Jaffee: Welcome to the first ever episode of the Theo Jaffee Podcast. Today, we have Greg Fodor. So, I guess we'll start off with today's big news elephant in the room, which is room temperature ambient pressure superconductors. Do you think this whole thing is legit? And if so, what applications are you most excited about for it?

Greg Fodor: Well, I'm not someone who knows too much about this stuff. I'm really just like everyone else in the public discourse, hoping that this is a real thing. Certainly, there will be some amazing breakthroughs if this is the case. But, I don't really have too many thoughts on this. I'm just like everyone else, hoping that it gets replicated quickly, so we can get clarity on if it's the case.

Theo: Yeah, it would be pretty great. I saw a tweet today about what if we had handheld railguns that used superconductors instead of using gunpowder for projectiles? That might be kind of cool.

Greg: I think this is one of those things where everyone hopes that this would happen at some point. It's like anything else with new knowledge like this. You can't really know a priori whether or not it's going to happen tomorrow, a month from now, ten years from now, or never. There are reasons for people to be skeptical of such things, since you couldn't really know beforehand when it was going to happen. But when these things do happen, they happen, and now you've got pure inductive learning from there.

Certainly if this is the case that these folks have made this breakthrough, I think what you would expect is, you'd see a variety of follow-on inductive kind of hill climbing going on on all fronts around that. Not just on the application side, but also, the mechanism of how it works, like understanding maybe some deeper causes that maybe we didn't even know about because we didn't really have the tools we needed to have a replicated environment for such things.

It's really easy to understate something like this if it turns out to be true, because it does feel to me, at least as a layman, like that kind of breakthrough, if it did occur. It'll unlock many, many doors. But we'll see what happens.

Theo: Yeah. I mean, it is kind of interesting that they had this decades-defining Nobel Prize-deserving scientific discovery, and they kind of just threw it up on arXiv. Like, here you go. No fanfare. No gigantic announcement. Just this is it. I mean, it's kind of cool watching science happen in real time, assuming that it's real, of course.

Greg: Sure. I mean, replication for this also sounds like it would be relatively easy, right? I mean, again, I don't really know much about this, but it seems like, up front, it could have turned out that, assuming again this is true, you would need some really esoteric or highly costly setup to be able to replicate this result. But the universe may have gifted us another gift on top of the gift, which is that perhaps it's also quite easy to replicate, and maybe even manufacture.

The Turing test (3:51)

Theo: Yeah. So that's the elephant in the room for today. The elephant in the room for the last year, or at least the last eight months or so, has been AI. So you've been very interested in AI on Twitter. What are your personal AGI timelines?

Greg: Well, I think I kind of feel that we'll pass the Kurzweil-style Turing test probably within three to four years. That's just my personal belief based upon what I imagine might be happening with neural networks, kind of where they'll top out. And that test, according to him, is basically you empanel a large number of humans from diverse backgrounds, various levels of expertise, various people that may not have a specific domain expertise. And they have the ability to kind of have an unbounded, sustained challenge against the AI in the classic Turing test. But basically, it's parallel happening with many people. And ultimately, you're looking for a resolution where they all kind of flip in a way that they can't tell the difference between the AI and the humans.

So I think that that will probably happen. I think that the small margin there really is going to boil down to whether or not neural networks have this capacity for what David Deutsch calls the ability to generate new good explanations. And I think it's a little bit fuzzy to me if the Kurzweil Turing test will really tease that capacity out or not. Like the Kurzweil Turing test certainly might pass even if neural networks cannot do that. And my prediction is constrained on the idea that the network could pass even if it lacks that capacity.

So specifically, what that means is the neural network being able to create new risky conjectures that aren't really inductive out of its training set. And so I think it seems possible that the Kurzweil Turing test would pass even if it couldn't do that per se. Because it can generate a lot of stuff that is going to be close enough to that from the perspective of a human judge. I imagine that it would pass in a competitive environment versus humans. It's actually really hard to get a human to do that kind of thing in a conceivably repeatable way. So that's where we end up with that question. Now, I don't know if you would really consider that AGI. That's a separate question. But I think once you're at that point, you really are in a regime where unless you're physically with a person, unless you're asserting their physical presence of some kind, you now really do have a situation where you don't know if you're dealing with a person or an AI if you're interacting with them through any kind of mediated communication tool through the internet or anything like that.

That'll be a pretty insane phase shift. I mean, I think it's already kind of happening to a certain degree. You're arguing with bots on Twitter and so on. And perhaps they are AIs. I feel like I might have been fooled a few times already by these LLMs into arguing with someone that actually turns out to just be some neural network. But I think once that Turing test passes, that specific kind, you've really crossed the frontier where, for every person, every interaction in most of daily life is one where, if you're not physically asserting the person's biological presence with you, you really have no way to develop a prior on whether they're a human or not. So that's kind of where I think that's going to fall in about three or four years.

Theo: So by Kurzweil Turing test, do you mean a group of laymen who are asking the AI or just having a normal conversation with it? Or do you mean like adversarial prompt engineering experts who know all the flaws in the AI systems and who will try to exploit them?

Greg: I mean, my understanding is this is basically the classic Turing test, but kind of expanded to include a large body of humans of all backgrounds kind of selected for a broad swath of the kinds of things you'd want to kind of adversarially test the AI against through a natural language process. So it will certainly need to include experts of various kind of scientific intellectual domains, but you really would want it to include many, many people from all different backgrounds that might be able to carve out some kind of basic fundamental capacity of the AI that others might miss. Everything from just the way it feels to talk to it, from the way the responses communicate to you, that it's actually understanding what you're saying. People of all different backgrounds would have to come in to try to sum over a set of things that we think would basically reduce the margin down to the point where we ought to expect if you added one more person to that test, they would 100% still fall into the category of not being able to classify it as an AI or a human.

If I had to guess, and this is really rough speculation, it's something on the order of a thousand people. There's going to be some population where, if you get the right mix and a large enough number of people, you ought to expect to eventually converge on a situation where adding more people is not going to change the outcome anymore. It's going to just basically be like, okay, yeah, we can't tell the difference anymore. And whenever that threshold is reached, you've kind of by definition made it so that that's true for all people.

Theo: Right. So this Turing test is about natural language. When do you think we might pass a Turing test with imagery? So like you say, generate an image of someone holding their hands up in the air like this, two hands out, palms open facing the camera, and it will just do it and know what you're talking about and not mess up the hands and get the details in the background right. That would probably take some better world modeling than current language models or diffusion models have now.

Greg: You know, I think it's a tricky question because I don't actually know. I mean, really, at the end of the day, I think what you're talking about here is an avatar embodiment style of interaction with an AI, right? And so to actually convey an avatar based AI versus one that simply is talking to you through a text box, I'm not fully convinced that that's a huge gap. I think if you have a thing that can spit out text basically instantaneously that passes the Kurzweil Turing test, to get it from there to something that feels like you're interacting with a person via some kind of avatar embodiment, I think is a very small step and maybe not even one that requires AI neural networks per se to do at that point, especially three or four years from now.

Ultimately what you would need to be able to do is when you're interacting with an avatar, there's a couple of things you need to get right. You need to get the nonverbal communication right, which is based on their body language, whether or not they're looking at you the right way, their eye motion, their hand and arm motion and that kind of thing. The thing about this kind of social cueing is that sometimes it's really, really sensitive. If you do it wrong, people notice very quickly. If somebody looks at you and they're talking to you and they're not making eye contact with you at all, then you kind of get a totally different sense that they're really not a human. But there's a very large area where you have a lot of leniency in terms of how well you have to get it. You really just need to convince the person on the receiving side that the sender is not lying about their social cues. So I think that that kind of thing is already pretty close. You certainly should be able to string together some various mocap things. If you have a neural network that can stream text and answer things in a way that passes the Kurzweil Turing test, you certainly can wire that up to one that would effectively stream commands to an avatar puppeting system that would leverage a variety of hooks to express what kind of body language you want to convey moment by moment. You can then drive that using traditional algorithms.

The other thing with avatars is that there's actually a lot of ability for you to dial this down. If you just have a sphere with two eyes and a mouth, that's doing lip syncing off of the audio stream, you can get a sense of social presence through that, that's surprising. That will be another degree of freedom to reduce the need for these Kurzweil passing neural networks to communicate with you in a way that feels natural, as if they're embodied as a person in front of you.
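Note: below is a minimal illustrative sketch, in TypeScript, of the kind of wiring described above: a streaming model emits dialogue plus coarse body-language cues, and a conventional puppeting layer (lip sync, gaze, canned gestures) handles the embodiment. Every interface and name here is hypothetical, added for illustration, not taken from any actual product or API.

```typescript
// Hypothetical sketch: a streaming language model emits both dialogue and
// coarse body-language cues, and a conventional (non-neural) puppeting layer
// maps those cues onto an avatar. All names are made up for illustration.

type BodyCue = "idle" | "nod" | "lean_in" | "gesture" | "look_away";

interface PuppetCommand {
  text: string;        // next chunk of dialogue to speak / lip-sync
  cue: BodyCue;        // coarse nonverbal cue chosen by the model
  gazeTarget?: string; // e.g. "camera" or another participant's id
}

// Stand-in for a streaming LLM client; in practice this would wrap whatever
// streaming chat API you are actually using.
interface DialogueModel {
  stream(prompt: string): AsyncIterable<PuppetCommand>;
}

// Stand-in for the avatar rig: lip sync, gaze, and gesture playback are all
// driven by traditional animation code, not by the neural network itself.
interface AvatarRig {
  speak(text: string): Promise<void>; // TTS + audio-driven lip sync
  playGesture(cue: BodyCue): void;    // canned or procedural animation
  lookAt(target: string): void;       // IK / gaze controller
}

async function puppetAvatar(model: DialogueModel, rig: AvatarRig, prompt: string) {
  for await (const cmd of model.stream(prompt)) {
    // Nonverbal channel: cheap and forgiving, handled by classic algorithms.
    if (cmd.gazeTarget) rig.lookAt(cmd.gazeTarget);
    rig.playGesture(cmd.cue);
    // Verbal channel: the part that actually has to pass a Turing-style test.
    await rig.speak(cmd.text);
  }
}
```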

I think all of that's going to fall into place, honestly, the technology will probably be in place before the Kurzweil test pass. You'll already have avatars interacting with you that aren't Turing passing neural networks. You'll know that they're robots by what they say, but not by how they appear. They'll basically be failing for the reasons of what they say. The last frontier will be when they start to say things that you just can't tell anymore, that they're not human.

Theo: When we do pass the Turing test, how long is it just a straight linear path from an AI that can reliably sound like a human and pass some tests to the kind of world-altering transformative AI that automates decades of scientific discovery, or will that take something entirely different? Because David Deutsch seems to take the view that it would require something totally different.

Greg: Well, I mean, I think there are a lot of elements to that. Trying to guess things that are a few foundational breakthroughs ahead is really hard to do. Trying to guess things that are one foundational breakthrough ahead is also really hard to do. The question is, are these kinds of basic agentic AIs going to require any kind of new foundational, transformative understanding of how things work in our minds or things like this?

Now, where I fall on that right now is that I don't think that it really is going to require that for us to unlock significantly capable agents out of the existing architectures and existing paradigm of neural networks. This is just an intuition thing, but I really do think there's enough smart people focused on this right now, and there's enough demand, and there's enough incentives to figure it out that you'll see some kind of breakthrough.

I mean, somebody just posted something that I think is maybe getting closer. You can't know for sure, but I do think you'll see something that feels closer to a working agent that can do reasonably cool things with API access to the internet relatively soon. I mean, within a year or something, people will start to pay more attention to this stuff as it works.

I don't think any of this requires it forming risky conjectures. I don't think it requires it trying to come up with explanations for things it doesn't understand that it can't fish out of its training set. I think this is really just about being able to understand what facilities are available to it to achieve its dictated goals, and then constructing a plan to execute and validate each step and not mess it up along the way.

And then, where you end up with that is that you've got this little agent that can start to do very basic things by interacting with the internet, but from an actions perspective instead of a generation perspective.

Theo: So, what was the project that you saw that you think is closer to a working agent? Because so far, the agents that we've seen have not been very good at all.

Greg: I don't know if it actually was an agent that I'm thinking about. I can share it with you after, but I think somebody was just posting something they did with some code generation stuff.

So I mean, obviously, code generation is somewhat adjacent to agents, because if you can generate code, maybe you actually can generate atomic actions and so on that you can execute. So maybe it's kind of an adjacent problem.
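Note: below is a minimal illustrative sketch of the plan-execute-validate loop described above, where the model picks from a fixed set of facilities and each step is checked before the next one runs. The interfaces and names are hypothetical, not from any specific agent framework.

```typescript
// Hypothetical sketch of a plan / execute / validate loop: a planner (the
// model) proposes one step at a time, a tool runs it, and the result is
// checked before moving on. All names are made up for illustration.

interface Step {
  tool: string;  // which facility to use, e.g. "http_get"
  input: string; // argument for that facility
}

interface Planner {
  nextStep(goal: string, history: string[]): Promise<Step | null>; // null = done
}

type Tool = (input: string) => Promise<string>;

async function runAgent(
  goal: string,
  planner: Planner,
  tools: Record<string, Tool>,
  validate: (step: Step, result: string) => boolean,
  maxSteps = 10
): Promise<string[]> {
  const history: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = await planner.nextStep(goal, history);
    if (!step) break; // planner says the goal is met
    const tool = tools[step.tool];
    if (!tool) {
      // The agent asked for a facility it doesn't actually have.
      history.push(`unknown tool: ${step.tool}`);
      continue;
    }
    const result = await tool(step.input);
    // Validate each step instead of blindly chaining actions, so a bad
    // intermediate result doesn't mess things up along the way.
    if (!validate(step, result)) {
      history.push(`step failed validation: ${step.tool}(${step.input})`);
      continue;
    }
    history.push(`${step.tool}(${step.input}) -> ${result}`);
  }
  return history;
}
```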

AI risks, alignment, and knowledge acceleration (16:21)

Greg: But anyway, I mean, I think in general, when I think about it, I mean, I wrote this long essay about this. And so, I really genuinely think AI risk due to agents is a significant near term risk. There are a lot of people talking about super intelligence risk, like all these rationalist theories around goals and convergence around various outcomes. And I just really think that a lot of this, I mean, it may be correct, it may not be correct, right? I can argue either way, depending on what assumptions you bake into it. But the biggest thing that I feel is true, and that I feel not enough people are talking about, is that most of this stuff is motivating really two key things, right? It's motivating, A, do we need to solve the alignment problem quickly, or at all, I guess, but quickly being the more relevant question. And then the second is, do we need to slow down AI development somehow, through regulation or things like this? When I look at those two questions, most of the discourse around whether or not those things are the case revolves around conjectures about the nature of super intelligence, whether or not we're going to pass the Turing test, or whether agents are going to become certain levels of superhuman. I really don't think you need any of that to motivate the answers to those two questions.

On the question of the alignment problem, you really have to just dial in and ask what's the most near term situation where you have at least a serious crisis, not necessarily an existential situation where humanity can get wiped out, which people seem to love to talk about, but at the end of the day, that's the end of the book. I think there are definitely situations where you have something akin to a global scale crisis, or even a country size crisis, like during the pandemic, where it's affecting everybody, adding a drag on society, really slowing things down and making the world less safe for everybody. I think those kinds of scenarios can be conjured up based on some very minimal assumptions about the nature of the evolution of agent-based AI systems over the next couple of years.

If you put all that together, what you end up with is you say, we definitely need to solve the alignment problem, we need to solve it a lot sooner than we thought, because we need to actually solve it before something like a COVID pandemic style crisis occurs. Obviously, the species would survive such a thing in this analogy, but we really should not want that to happen. It would be phenomenal if we at least had some kind of breakthrough that would ensure such a thing would not be possible.

Now, the alignment problem in general is a dicey thing to talk about, it's not really well defined. That all is a problem unto itself. But at the end of the day, the question about agents is really going to be about could agents plunge us into some kind of pandemic level crisis? I think the answer is absolutely yes. And I think absolutely, probably within a few years, I could see that happening. That alone motivates the first order approximation of the alignment problem, which is basically, can you get an AI to refuse an order? Can you get an AI to not do things that will be reasonably understood to lead to such a crisis in the minds of most, if not all humans, other than the person commanding it to do so?

So I think that motivates the alignment problem on its own, without having to argue about super intelligence, and the orthogonality thesis and all these things. I think you can just get there and get to an agreement that we need to solve this problem quickly because of this risk from agents.

Then the second pillar of a lot of the discourse is how do we regulate AI? How do we slow down and all of this stuff? This is obviously a very complicated topic, but the way I come down on that is that you really have a few assumptions you have to start with. The first question is, can you actually put a global pause on this trend? Can you actually get to a point where you're meaningfully slowing down all actors? In this case, all actors would be all the US AI companies, all the governments of the world who basically, as they one by one flip on the idea that this technology can lead to an outsized amount of sovereign power are going to continue to pursue it. Can you actually pause all the actors? My answer to that is no, I don't think that you can pause all the relevant actors in this. You really only need to say that you can't pause all of them. If a single actor continues to make progress at a large scale advantage to the others, a lot of your solution space quickly evaporates because now if we're talking about trying to regulate it to minimize existential risk, your single actor just now is the frontier and is basically going to get there way before everybody else.

Theo: So you're saying it's a Molochian coordination problem.

Greg: What do you mean?

Theo: Have you heard of Scott Alexander's Meditations on Moloch?

Greg: Oh! Right. That makes sense to me. I mean, it's certainly like one of these situations where you're kind of putting all of your eggs in one basket, in the sense that you're assuming we have such a killer pause mechanism, a silver bullet that's going to pause all the actors, even though all the actors have counter incentives to keep going in their own way. So I think that's one big hole. That one, I think, has been pointed out by a lot of people, and I think it's completely correct. But the other big hole that I noticed, and I think I'm not the only one, I think OpenAI noticed it too, honestly, is that we're in this really strange situation.

If you assume that the alignment problem needs to be solved quickly, and if you assume that it requires new foundational knowledge to solve it, not just incremental knowledge, but actually understanding something fundamental about the universe if we want to have a hope of solving it in any durable way, then we're in a situation where we're really gated on fundamental research. We're gated on the capacity for humans to rapidly come up with, test, and validate very unconventional hypotheses. The strange situation we find ourselves in, though, specifically now, is that the entire conversation about this is happening around a technology that is fundamentally an accelerant for that very same process.

If we need to create new knowledge, new foundational knowledge quickly through fundamental research to protect ourselves from AI risk, like through solving the alignment problem, we actually need to ensure that we have the best tools to create new knowledge. It just so happens that it turns out it looks to be the case that one of the best, if not the best tools to create new knowledge quickly is the very same technology that we're trying to de-risk. So it's this very strange circular reference in that if we don't have state of the art artificial intelligence in hand, we may be in a situation where we will never solve the alignment problem absent having a certain level of AI technology in hand to do it.

Theo: This is the fundamental problem with MIRI, with what Eliezer Yudkowsky tried to do back 20 years ago before there was anything resembling functional general AI, where he tried to solve the alignment problem using abstract logic and decision theory. Earlier we were talking about how neural networks might not be able to conjecture and create new explanations in the Deutschian sense. So how do you square that with neural networks being able to create new foundational knowledge to solve the alignment problem? Is it that they can take care of everything else and leave alignment researchers to create the foundational knowledge?

Greg: The idea would be that the neural networks wouldn't necessarily be doing that, assuming they can't, which is still an open question. But the idea is that if you have a bunch of humans with a set of tools, they can produce knowledge at some rate. And then if you add artificial intelligence technologies, pre-AGI AI technologies, for example, depending on how capable those technologies are, you get some multiplier out of their ability to do that because the AI can effectively help them think through things better. It can help them do research better. It can help them construct experimental structures, anything that's inductive in the chain. The AI of a certain caliber will be able to rapidly accelerate the process of creating new knowledge. It just means though that perhaps the neural networks couldn't do the entire thing by themselves, but there's obviously going to be some kind of multiplier effect on our ability to generate that kind of knowledge.

I think that's really the weird situation we're in, where you could really imagine a scenario where, let's say, let's call where we are right now as Generation Alpha for where we are with just neural networks in general. And let's call where we are a year from now Generation Beta and then Gamma, Delta, etc. It could turn out that the alignment problem only yields once we get to, let's say, Gamma. Now, let's say we hit the brakes at beta. What that means is we have a situation where all of the good actors, the ones that would have pursued the alignment problem with Gamma, are now boxed out. They are never going to solve the alignment problem until they're handed generation Gamma. Meanwhile, assuming you weren't able to stop all other actors, you've basically now guaranteed death.

This is obviously conditional on a lot of assumptions here so I could be completely wrong, but if it turns out that AI technology above a certain level of capability is a necessary precondition to solving the alignment problem, then pausing AI itself in this way that's only a partial pause on actors would actually be the determining effect that kills everybody. This is insane, but it's actually the case that under that regime that would be the case because if we had not paused, we could have gotten to gamma, we could have handed it to the good actors, and they could have solved alignment before it was too late.

So, that really strange regime is the one that I'm very fearful of, because it does seem to me that it's highly likely that we need foundational knowledge to solve alignment. It's highly likely to me that we need to solve alignment quickly because of the risk of agentic AI, not because we basically need to just front run how a super intelligence is going to work. Eliezer and everybody might be right about how that's going to end up. Maybe it turns out they're all right, but it doesn't actually matter in terms of motivating the alignment problem. So if it turns out that we try to do this kind of crazy regime where we're pausing half of the actors and not the other half, and then it turns out the other half are rogue actors, or don't buy into the problem, or aren't interested in aligning AI, then that move is actually the determining move of the whole game and we just lost.

Theo: Just in case anybody watching is interested, Gfodor wrote a great article called “To De-risk AI, the Government Must Accelerate Knowledge Production”, which pretty much talks about exactly what he's been talking about for the last few minutes. Essentially, it's this list of policy recommendations to create some kind of almost like a decentralized Manhattan project. It's a challenge to solve alignment. I think it's a great article. I quote tweeted it and I agree with it pretty much entirely. Since the article came out, OpenAI has announced their Superalignment initiative. And just today, OpenAI, Anthropic, Google, and Microsoft announced a new frontier model forum to coordinate work on their most capable models. So, how does that affect your picture? Do you think they're implementing the plan you suggested?

Greg: I'd have to look at that. The big tell of whether somebody is implementing what I'm writing about in there: a lot of people have tried to frame research institutions around this and so on. Some people have proposed giving these institutions access to AI technology, obviously for the purpose of working on it to try to align it. It's kind of like if you were a lab and you were working on training animals to experiment on. But the tell that they're thinking about it in the same way I'm thinking about it is that you're not just giving them the AI as the test subjects, although in a certain sense they would be the test subjects, you're also simultaneously giving them the AI as an essential component of how they work, and as an opportunity for them to succeed.

It's the strange situation we're in, where you're doing some kind of animal testing, and you need to use the animals themselves to do the testing, while you're testing on the animals at the same time. It's not really that obvious. If your framing is on what AI is actually capable of doing for people, especially if it continues to accelerate the way it looks like it is, you really need the initial conditions to be set up so that these people, across the entire earth, are basically the most over-leveraged on using AI itself, at the very closest to the frontier you can get them.

The way I structured my argument is that you need this group of people to basically have carte blanche to go to the government and say, we need inference cycles on this cluster at Google, we need to be able to access the GPT-4.5 training run from April or whatever. They need to be able to have the government do the blocking and tackling if it's necessary. Hopefully, the government's never involved, but you need them to basically be at the frontier, not because they need the animals to test on, but because they need to actually ride the animals to get to the finish line. They need that stuff to ensure that they win before some rogue actor accidentally unleashes some hellscape for all of us to live through. You need them leveraging the best technology that we have across all of the actors. Maybe one day Google's a little bit ahead, maybe one day OpenAI is ahead, maybe one day John Carmack's startup is ahead of everybody else. The point, though, is that these researchers need to be at the very frontier of everybody, and that's the essential component.

Theo: So, I think the world is kind of shaping up to be as you described, where you have lots of frontier alignment researchers who have access to all the super frontier models. It does look like Anthropic, OpenAI, Google, and the others are going to be sharing their top findings with each other. I think in this case, they may care more about the survival of humanity than they do about their own short term competitive advantages, but we'll see.

What is alignment? (33:22)

Theo: When it comes to the alignment problem itself, what do you picture when you think of alignment? Some people, like Yudkowsky, picture it as an abstract decision theory, a mathematical proof that comes from formalizing human values. Some people, like Yann LeCun, picture it as basically just reliability engineering, like you engineer a rocket to not blow up. And some people still have entirely different views on it, because the alignment problem means alignment with human values, and then there's a question of whose human values, and it kind of falls apart from there.

Greg: Well, I think the problem is obviously underdefined in a general sense. The only way I've ever been able to think about it is to start from the most extreme cases. Ultimately, from what I can tell, it really boils down to a situation where there's got to be some surface area of genie wishes that people are going to be able to make of these things that just simply cannot be manifested. You can kind of start from the things that everybody agrees ought not to be manifested, and work backwards from there just to motivate the idea that the problem is coherent.

Theo: So invert the problem, basically?

Greg: Just motivate that the problem is actually real, because I think there's a big segment of people that think this whole thing is just a bunch of nonsense. When people say "whose human values", it usually feels like a way to dismiss the problem in general. It's basically saying the problem is incoherent, there is no problem. This is a bad way to think about it. The way to think about it is, there absolutely is a problem. There has to be some subset of things that we all agree on. For example, if an arbitrary human decided they had a death wish for the species and knew a priori that if they typed in a certain command, it would kill everybody, we ought to all agree that all AI systems should not be able to do that. So, really, at the end of the day, if you zoom out, what does that boil down to? There are really only two possibilities: either you preempt it, or you handle it after the fact. Either you set things up so no human would ever make such a request, or you set things up so that after that request is made, it's never actually executed on, it's never actually possible to run.

Now, both of those seem impossible to solve, which is why I basically think of this fundamentally as about finding new foundational knowledge. It's not that I say I know they can be solved, or that I know what the knowledge is that will solve it. It's just the fact that they both seem to be impossible to solve tells me that there's some foundational knowledge that we need to find if we intend to solve it.

Now, certainly, it's really good to have these intermediate measures where we rely on centralizing forces such that their models are baked in, and that everything that goes over the fence is somehow irreversibly lobotomized so that you can't do certain things. That also has obviously its own set of problems. It's not a solve, it's not anything. It's basically just a little bit of a friction force that slows some of this risk down. But really, at the end of the day, it does boil down to this fundamentally seemingly impossible problem, which is why I immediately go to the idea that we need to develop some kind of foundational research program.

A good foundational research program comes in with basically no assumptions about what the solution space is like. One of my biggest problems is that if you front-load your solution space to be a very specific set of potential solutions, and it turns out you're wrong, you've just wasted a decade or two on something. So, at the end of the day, if there is going to be this kind of research program, a huge unforced error would be to smuggle in some kind of assumptions, even if we don't do it on purpose. It's about actively hedging yourself to avoid making assumptions, because we are talking about something that seems to be impossible.

One analogy I make is, go back before we understood electromagnetism. If you told somebody, "I want to be able to talk to a person on the other side of the country, instantly. I want to say a word and I want them to hear it in their ear," that would sound impossible. That would sound insane. When you say it out loud, it sounds crazy that we can do that. And that's the situation with the alignment problem. It's the kind of thing where, if you just sit down and frame it at the very base case, no human should ever be able to submit a command that we all know a priori would kill everybody, and that the AI could even understand a priori would kill everybody. We need to solve it so that that command doesn't get executed, across all humans, across all AIs. That seems impossible to solve.

Theo: Roon tweeted recently that “it's obvious that we live in an alignment by default universe but nobody wants to talk about it”. And of course, it's hard to know what Roon is thinking. Really, it's hard to know what OpenAI is thinking. His Twitter account is like the tiniest, blurriest window into whatever is going on inside that place. But it seems to be that there's a decent chance that just kind of RLHFing and bootstrapping a model will effectively solve the problem, which of course could be very wrong.

Greg: I think alignment by default is a little bit of a limited thing. I think there's a very good chance that we're in a universe where unalignment is very unlikely. I think that's certainly possible, and it would be somewhat coherent with a lot of the UFO stuff, which we could talk about. I think at the end of the day, the alignment problem must be solved, especially if we're going to just get to a case that we can all agree on. There's got to be at least one case that we can all agree on, and as soon as you have one case, that motivates the problem. You have a situation where the base case is motivated, the problem is motivated, it must be solved.

Yeah, maybe it just converges onto a solve by itself, but if it doesn't converge by itself, it looks to me like the kind of thing that we need new foundational knowledge to solve. If physical reality just naturally converges on a solve, that's excellent, but at the end of the day, that's inert. We can't do anything with that. You can just kind of sit in a box and pray, but if you're going to talk about optionality, you have to actually start with a kind of grounding assumption about what it will take to solve it. So to me, I sit down and consider what it's going to take to solve this problem. In terms of the difference between what we can do versus what we didn't do counterfactually, we need to find the optimal system to accelerate the generation of any foundational knowledge that could possibly lead to a solution. The only way I know that happens is by investing people and capital into a research institution, and in this particular case, by ensuring that we don't make the unforced error of not giving them the state of the art AI systems to accelerate them. That's where I come down on it.

I don't particularly like the argument around the risk of regulation. I would love to sit and talk to an advisor or whoever. I want to understand why that's wrong, because I really don't understand why it's wrong. Hopefully, I'll interview him soon. You have to start and say, okay, is there any regulatory ... Because it's not about just slowing things down. There are two conjoined assumptions. If you can't slow down every actor, if you can't stop every actor, and if it ... The big question is, a lot of people think, well, we just need more time. We just need to work the problem. We need more time, so we're just buying time.

But the problem is, if it turns out that you don't need more time, if you actually need AI technology of a certain capacity, on top of time, to solve alignment, the regulation argument is a disaster. So really, what you need to be able to do is to really try to figure out beforehand, is it the case that we need AI of a certain capacity to solve alignment? I think we probably do, because it's a foundational knowledge-based problem, and AI is an accelerant towards that.

Theo: To steelman Eliezer's case, I would say he thinks that once you get to a certain level of capability, we're all dead, it's all over. And that has to be prevented above all else. The way we can do that: we have some precedent with international nuclear nonproliferation treaties and drug import and export agreements and such to regulate compute, track all the GPUs sold, add trackers to all the latest NVIDIA drivers, and drone strike the AI centers. Yudkowsky himself admits this. He says this is a bad plan. He expects that we all die, but it's the best that we have. He still does expect AI alignment research to go on, but in very trusted, secure labs, which I think actually worsens the risk, and you probably agree with that.

Greg: Well, I mean, I don't know if I would say I agree or not. I'm just saying that conditional on these two assumptions, and if those assumptions are wrong, that's great. I pop champagne, hooray, we're in some other regime where I don't have to worry about this. Where I stand right now, those assumptions seem pretty damn good. Assumption A being that you can't pause AI development across all actors. If you try to clamp down research on this technology that stands to cure cancer, cure the energy crisis, fix climate change, get us onto Mars, what you're going to end up having is a counterforce. And the counterforce is going to be some segment of people who have the will, capacity, belief that that was wrong and that we need to keep going.

There are plenty of people today that will come out and say that. What percentage of self-identifying AI accelerationists will set up the cruise ship that is seasteading, doing GPU training off the grid, or go move their family and their network to some North Korean province to do underground AI training? There are going to be people trying to do that.

Theo: Bomb them. Drone strike them.

Greg: You're not going to know. This is the whole idea. There's no way when you're talking about something that has this kind of incentive and this kind of capacity to be done, there's no way to stop it from all actors. The incentives are just too high.

The incentives are basically world domination in the hands of them, or, on the more techno-optimist side, I could steelman the whole argument, okay: every day that we slow down AI, some number of people die because we timed it wrong. Say we were perfectly safe for another decade, but because these people were so scared, we didn't cure cancer five years in advance. So there are going to be plenty of people that, on principle, just decide that the regulation is wrong, and are going to have a sense of moral duty to continue the train up to the point where they feel it's not safe anymore. So that will happen. You're going to see the pressure move somewhere else. You might slow it down a little bit in some way somewhere, but it's not going to be this singular thing that just clamps it. The pressure is going to flow out and go somewhere else.

AI doom (46:50)

Theo: So, how worried are you really in the conventional Nick Bostrom, Eliezer Yudkowsky, orthogonality, plus instrumental convergence, plus capabilities kind of paperclip AI doom scenario, or really just any scenario where AI leads to the extinction of the human race?

Greg: I'm very much not worried about recursive self-improvement, or this kind of scenario where someone plugs together a few things, and lo and behold, they unleash the demon. From an engineering standpoint, I don't think that's a huge risk. In practice, there are limiting factors that will get you 80% of the way there. People are gonna poo their pants, thinking they're about to unleash the demon, but we'll know. It's not going to be this thing where we're just tinkering and then suddenly, the demon pops out. I believe there will be some rate limiter that will warn us when we're about to get our hand burned on the stove. That's just the nature of any recurrent process. It takes a lot of work to get it to close the loop, and as you get closer to closing the loop, the danger will become apparent.

Theo: I think that also, a lot of the early AI safety writings assumed that advanced AI, the first AGI, would be primarily written out of code, like some kind of small computer program, and not giant inscrutable matrices of floating point numbers. When you have a computer program that's written like a Python script, that’s 150 IQ, it can rewrite itself to be 155 IQ and then the next version it could be 160 and then, before you know it, you accidentally unleash the demon. I think with neural networks, it would be a lot harder. You have to train entirely new networks. It's compute limited. Sam Altman talks about how it just takes time to build new data centers, for the concrete to dry and the wiring to be added in.

Greg: There's tons and tons of inertia across every dimension of this foom scenario. If you want this scenario to happen, it's going to take a lot of effort to push it. You're going to need a lot of people pushing as hard as they can to make foom happen. And then, if you get a lot of people pushing and the demon is right there telling them, "I'm about to kill everybody," that's the regime you're in. It's going to require a sustained effort to push it over the line for that specific kind of doomsday scenario.

As for whether neural networks can have super intelligent capacity, I think that seems pretty likely. There are definitely arrangements of neural network substrate running things that you could theoretically make today that would result in outputs similar to a high IQ or higher IQ human. So I think the first principles on that are okay. As far as the risky conjectures, I’m not so sure, but a lot of intelligence is induction and synthesis. So I think that part seems sound.

Where things fall apart rapidly for me beyond that is trying to guess the outcomes or decision-making. I haven't read everything there is to read about this, there’s like reams and reams of things that have been written on it. I know there's a lot of theories that Bostrom and others have put forward. They really kind of come at this in a vacuum where they're trying to reason from first principles about how these things will act. That was an okay thing to do for most of the history of AI, when we were just getting our foothold and we didn't know what technology was going to do it, how it was going to work, or what the base cases were going to look like. But now we actually have a pretty good idea of what everything is going to be conditioned on.

Everything is probably going to be conditioned on a situation where these are neural networks. They're running on silicon. They have been trained on a large amount of human generated data, like human writing, human thinking out loud. I think that's basically given now. There's always something out in left field. Maybe somebody has a totally separate AI development track and we're going to get to AGI on some new hill climb happening somewhere else. But it certainly seems like the base of the hill is that you train it on the internet and on all the books, images, and stuff humans have ever done.

If that's the base case, I think a lot of this reasoning and inductive reasoning about super intelligence goes completely out the window because it's not conditioned on all that. So if you want to start doing that process, fine, but you really do have to go back and start from a much more refined base case. Assuming it's neural networks, assuming it's on GPUs, assuming it takes this amount of energy, it's going to have this kind of FLOPs, it's going to be trained on the internet, it's going to have this kind of latent space encoding style of all these things. Assuming there's an ecosystem of the internet software, all these things around it, what kind of super intelligence do we expect to pop out of that? What kind of constraints will it have on its goals and all of these things? I think that's where we're really in the unknowable territory.

I mean, certainly someone could maybe make an argument. Maybe I'll be persuaded about it. But for me, I don't really feel any reason to say anything about it because I just don't think there's any way to get enough of a foothold to make any kind of sane predictions about the nature of such a super intelligence. What it's going to do, what the effect is going to be on us, how we're going to interact with it, what it's going to want to do or not want to do. I just feel like we're so outside of the realm of being able to make reasonable predictions about that that I just kind of blank out. I'm just like, okay, let's not worry about that right now. Let's worry about the things that we can have much more certainty about that, by the way, basically motivate everything exactly the same, right? It's like, instead of fighting over things in this very unknowable regime, let's like look at the things that most people can see are coming and like realize, oh my God, we actually have to do the same thing we were gonna do if that was true, but we need to do it a lot sooner actually if we wanna avoid some like legit crisis this decade. So that's kind of how I come down on it. I mean, yeah, I don't think it really animates too much in terms of human action, right? I think it actually might just be undermining human action right now.

Theo: I understand that this is a little bit of a Bayesian LARP, but if you had to come up with a number to represent your aggregate probability of doom from AI before the year 2100, doom meaning human extinction or something very close to it, what do you think you would say?

Greg: Obviously it's impossible to know. I mean, I think right now my probability of the chain of stuff leading to human extinction is extraordinarily low, right? And I'll explain why, okay? So the first reason is that, conditional on the idea that there's literally any civilization within reach of us, within our light cone or whatever, some margin of that, I think we will be stopped from unleashing any kind of existential risk to anybody off of earth. I think that is basically guaranteed, okay? So maybe we're completely alone within that region, whatever the fastest is that you can get some kind of interventionary mechanism to us, but conditional on us not being the only life form of advanced intelligence, we will be stopped by definition, because it's in their self-interest to stop us. So I think that already cuts down a lot of my probability.

Now, of course, there's scenarios where we build some AI system that blows up earth but doesn't risk anybody else. But like, I don't know that from like an interventional standpoint, if you're gonna really like mess around with that. Like if we present any x-risk to anybody else in the galaxy or outside the galaxy within that region, like they're gonna stop us from touching anything that could be close to unleashing that. And anything that would kill everybody on earth I think would be close enough that they probably would like intervene before then, right? So that's just like one big cut down.

The second cut down is, I think the alignment by default thing certainly has some merit. So it might mean that alignment is actually relatively easy, not hard. So that's another big cut down. The third is that I do think there might basically be some limiter, right? There might be some intelligence ceiling that comes along, where it basically just turns out we can't get past a certain ceiling. That's another one. Then the foom takeoff scenario is obviously the one that really is rattling in terms of this kind of x-risk scenario, since we have no time to react and adjust to what we're doing. And that scenario I think is very, very low likelihood. So I think that also cuts down my risk.

So I don't know. I mean, the scenario where I think we could go extinct from this is if the foom thing is way more likely than I thought, if we're completely alone and there's no conscious life anywhere near us that we pose any risk to at all, and there is some very clean path to getting some kind of runaway thing in the next couple of years that'll surprise us all very quickly. I don't think that's very likely. I think it's much more likely that any of those other things are true. And then basically we might have a situation where some really bad stuff happens, like there's a crisis, and certainly maybe there are some large scale cataclysmic type things going on locally, where we get a virus outbreak or something of this type of scale. But I think as long as there's a margin for us to course correct and react to it as a species... I mean, to be completely clear, we're talking about complete eradication of humanity. So that's a very specific condition.

I really do think there's gonna be like some seriously massively bad shit that's gonna happen like pretty soon. And like, that is very not like this. I'm not happy about that. But as far as like the very specific condition where like the AI just like destroys everything, that to me is pretty tough. Like I think it's pretty damn low, like less than a percent.

e/acc (58:01)

Theo: So one last question that I had before we move on. Effective accelerationism, e/acc is really having its moment right now. All kinds of people on tech Twitter, Garry Tan, who's the head of Y Combinator, Marc Andreessen, who's the head of Andreessen Horowitz, and lots and lots of other people are putting e/acc in their bios. So three months ago, long, long before it was as famous, you tweeted “e/acc is a dumb philosophy that is going to generate as many zealots as Yuddism. Throw it into the dustbin.” So do you still think this and can you elaborate on that a little?

Greg: Yeah, so I mean, I think fundamentally, right, what motivates e/acc, I mean, you know, I don't wanna speak too generally, but the thing that disturbs me about it is, you know, it's fundamentally rooted in a reactionary response to this idea that we're gonna regulate technology and slow it down, right? And particularly AI, but you know, it's also kind of response to like degrowthism and all of this. I think there's two major things that concern me about it. The first is that any kind of reactionary philosophy that is basically conjured up to quell what is perceived as a major, tangible risk that specific humans are doing to like our future. This is just like a recipe for zealotry and radicalization, right?

Now, a lot of these philosophical debates going on in AI, they're very narrowly happening on Twitter among, you know, technical types and these kinds of people. And there's really no risk of these kinds of like seriously dangerous consequences of this debate at this stage. But, you know, I think one of the things I worry about, you know, having grown up through, you know, 30 years, 40 years of internet stuff now, like there's a tendency for like things that started on the internet among like geeks to once in a while explode out and become part of the international zeitgeist. And e/acc as a moral philosophy, as a kind of motivating principle, reactionary philosophy is the kind of thing that if it does end up getting taken seriously by the mainstream and then it does become a kind of way of living, it'll harden up a little bit, but it is the kind of thing that I could imagine some radical segments kind of adopting and acting on like their worst tendencies to kind of harm others and things like this.

If you're gonna position yourself as a philosophy that is trying to prevent an existentially bad outcome for humanity, and you're saying that it's life or death, like if an AI doomer has basically caused the country to regulate AI, they're basically dooming humanity, that is not the recipe for clear headed thinking. And it's not the recipe for preventing zealots from infiltrating your movement and hijacking it away from you to basically turn it into a violent movement. That to me is very much not a good recipe. And I think part of the reason that I also reacted that way is because I really do think that, you know, this techno-optimist point of view isn't necessarily one that needs a reactionary philosophy. I mean, you know, we've talked about knowledge accelerationism and things like this. And so I think that does provide a more, if you want to make the analogy, kind of liberalizing approach, where you're trying to make it into a positive-sum type of situation, not necessarily an adversarial situation, right? You're not really trying to defeat some opposing force through that philosophy. You're trying to basically build up some kind of positive-sum situation where all things are getting better. We all have kind of a long-run goal of improving our understanding of the world and all that kind of thing.

And, you know, I've seen the e/acc philosophy trying to dialectically figure out what it actually is and what it actually isn't. I kind of reacted negatively to it from the beginning because I saw that kind of reactionary streak. You know, I think there's this recipe of animating people through a combination of fear and righteousness, right? That's kind of going on right now on both sides of the AI debate, right? You have doomers who are obviously animated by fear. And with some of them, I saw one person, I won't name who it is...

They're taking up the mantle that theirs is the scientific approach, and if you don't agree with their conclusions, you're not taking a scientific approach to the problem; you're anti-science. I saw a twinge of that coming out. I don't think this is happening on a large scale, to be clear, but that's where you can see a sense of righteousness come into the doomer side of things, which I haven't seen before: we know we are right, and there's a certain entitlement that comes along with that. I think there's still a good amount of humility in the bulk of them, so I don't have a lot of concern about that, but it's that fear-and-righteousness combination. e/acc has both. It has a sense of fear, because there's a fear that if the decelerationists succeed, you're going to end up in a dystopian hellhole where they're regulating all GPUs and you can't even do matrix multiplies in your living room anymore, because the government is going to come and take your GPUs away. That's an animating sense of fear. Then there's a sense of righteousness: we're on the side of causing humanity to succeed and explore the stars, and you guys want to drag us down, slow everything down, and prevent our children from inheriting their cosmic endowment, or whatever. So you do have this recipe for zealotry and radicalization. It's just an ugly mix of things.

I was disappointed to see it gain momentum, because it just doesn't feel like where the really smart technical people I've known for a long time are coming from. They are really good-natured techno-optimists; they really want people to be happy and fulfilled and flourishing. The e/acc stuff feels like it has some kind of latent vileness to it that I just never liked from the beginning.

Theo: Actually, I think e/acc has gotten tempered recently because they used to be all about, we need to serve the thermodynamic God and increase the entropy of the universe and fulfill our destiny and all that. Now, with such mainstream people embracing it, it's just cooled down to a general sense of techno-optimism. I'll admit, I bought an e/acc shirt with the Effective Accelerationism logo with the exponential on it because it looks so cool.

Greg: There's always a lot of dimensions to this stuff, right? When I talk about critiquing this stuff, I'm coming at it from the point of moral philosophy and thinking about how it can inform you and how you ought to live and what you ought to prioritize and how you ought to think about your place in terms of what animates you to do what you do. If you're coming in and you have a philosophy of an adversarial nature where it's like, I get up because I want this other thing to stop. I get up because I want this other person to not do what they're doing. From a moral philosophical sense, I'm not a fan of those kinds of moral philosophies.

OG Marxism was like, we need to stop the capitalists from doing what they're doing because it's an evil force. That kind of moral philosophy, personally, I don't like, and I also think it tends to bring out the worst in people. When I'm critiquing e/acc, I'm not critiquing the individuals, the people who are trying to figure this out.

Setting aside the discourse that comes out of a lot of it, I really do think that as soon as you position something as a moral philosophy, an ideological movement, you're telling me that I need to take it seriously. Okay, well, if you want me to take it seriously, I'm going to take it seriously. Look, I have 3D NFT guys in my feed, I have all these ridiculous hashtags, but very few of those things are in the bucket of people telling me how we ought to live. So if you're going to say that that's what you're actually trying to do, now you're operating at the top of the stack. Moral philosophy drives everything: how people think about the meaning of their lives, what they ought to do, their place in the world, and the purpose of the human experience. That is the highest lever we have, the highest lever we've seen, for driving humanity.

And so, if you're going to enter and put a chip on the board that says, okay, this is a new chip for this domain, you're going to get my most critical feedback. You're not going to get me shitposting or whatever. You're going to get me saying, no, this is a dead end, you should stop doing this, you should think about these other ways to do it. Because if you actually let your little experiment get out of the lab, what you're going to get is a massacre. You're going to get bombings. You're going to get violence. You're going to get war. So you don't mess around with moral philosophy unless you're really, really serious, you've really thought about it, and you've really run the simulation in your head.

So, what if 99% of humans adopted your philosophy? What does that look like? That's where I really start to come in hard on these kinds of things, and in this case, that's kind of where I am. This one is just not a good idea. It should be accelerated to its endpoint sooner. If I'm wrong and it turns out this moral philosophy is really great for humanity, then fine, let's accelerate it either way. But if we accelerate it, I think it accelerates to the point where it just implodes on itself, because that's where I think it ends up.

Now, maybe it'll implode and turn into something else. People will dialectically evolve it into something. Maybe they'll find, oh yeah, it turns out David Deutsch figured this out a while ago; that guy actually had it figured out, so let's go do that. Maybe that's how it goes. I don't know. But I think you want to hit the gas on it so that it just peters out, if that's its destination, versus a slow boil up to the point where people are using it as a tribal signal to start with. You raise the flag, you sing the songs, you know the handshakes. But then everything behind that becomes everyone reading the book together, everyone reading the manifesto. That hasn't been written yet. But when you get to a certain scale, people demand to know what the moral philosophy is. How should I get up in the morning? What should I work on? Should I go to the gym or not? Should I tie my shoes or not? This stuff is serious stuff. So that's why I think it needs to be critically talked about. Let's run it down to its inductive end as fast as we can.

Theo: In a way, the whole e/acc versus Doomer thing reminds me a lot of reactionary anti-wokes against people who they perceive to be woke. In both cases, you have a group of people who just cannot stand being talked down to by people who they perceive as condescending and moralizing and standing in the way of progress. And so, they go too far in the opposite direction.

Greg: There's another latent variable to this. There are definitely ideological groups that are effectively bad actors: they're subversive, they're trying to gain power. I genuinely don't see AI doomers in this category, and that's a large factor in how much tolerance I have for some kind of reactionary counterforce to them. For me to get behind that kind of mentality, I need to be very much convinced that the people we're talking about are legitimate enemies of humanity, that they're literally subversive, that they have a hidden agenda. And I'm just really not there with a lot of the people who are cautioning us on AI.

I think they have some fundamentally wrong assumptions, and some fundamentally reasonable ones. But their tactics and their strategy have been horrible if they actually want to de-risk this whole situation. I know people think they're making progress, but the reactionary forces they've conjured up are part of the consequences of their actions. You can look and say, okay, they're making progress on the regulatory front, they're getting people to believe what they're saying. But you're also getting a reactionary backlash. That's an unforced error.

I really do think there was a scenario, a year ago, where the people on the side of AI being extremely risky could have constructed a whole different approach, one that would have cleaved off many of the people who instead ended up in these extreme reactionary points of view. That would have made this way less contentious. Some of what I was talking about earlier applies here: start with the things people can agree on and really acknowledge them, instead of framing everything around the most extreme outcomes. So I think they've made a lot of unforced errors like that. But I don't think they're bad actors, and I don't think they're subversive or any of that.

Theo: It is interesting that Eliezer Yudkowsky was the original e/acc. If you go back to his pre-2001 writings, “Staring into the Singularity”, “The Meaning of Life”, his original goal with MIRI was to create and bootstrap an AGI by himself, a formally aligned Singleton that would basically take over the world and save it. Of course, he ended up not doing that; he totally pivoted. But it is interesting that he set out to do it in the first place.

Greg: I understand the chain of reasoning. I think there are really just some fundamental assumptions that are different. And I might be wrong; maybe I haven't thought about it enough. But ultimately, I still generally think that if you run all the simulations down, the odds of these kinds of extreme cataclysmic events are very, very low. And you have to zoom out and really think about things on a galactic scale to get there.

I'm kind of one of these weird people where, if I talk to AI doomers, they're going to think I'm an accelerationist, and if I talk to accelerationists, they're going to think I'm an AI doomer. I've seen that. So I have this very synthesized view, which happens to be a little bit of both. But the reason I'm there is that I want to motivate a way of talking about things that can actually solve the problems we all ought to know we need to solve. It's frustrating when you see people who really should know that certain problems exist and need to be solved, and they just don't, or they dismiss them because they're jumping ahead. A common thing I've noticed is this tribal war of sorts. If you ask someone to acknowledge the significant risks from AI in certain ways, they may feel that admitting to those risks will be exploited. There's a fear that if they admit there's a significant risk of a really bad agentic AI scenario, their enemies will use it as an excuse to regulate GPUs, for example. That is not a path to problem-solving and clear thinking. So I've been trying to construct a framing of the argument, and proposals, engineered not just on what I actually think, but on what can fit into the mental models of both sides. The goal is to find consensus around what makes sense to do or avoid.

UAPs and aliens (1:16:28)

Theo: On a different topic, you seem to be relatively convinced that there are aliens, extraterrestrial life, relatively close to Earth, that they visited Earth before, that the government is aware of this, and that the government is concealing this from the general public. Can you explain some of your reasoning on that?

Greg: Well, that's not exactly what I think. Here's what I'll say that I think is almost certainly true. I'm basically near 100% certain that there are metallic objects moving around in the atmosphere that have been seen by pilots and aviators for a long time. These objects exhibit extreme accelerations that cannot be easily explained as human-created technology. I'm fairly convinced that's true. The second thing I'm convinced of is that there are some actors in the US government who believe they've proven this is the case. That's the limit of where I'm really certain. I do think that these things are flying around up there. They're not Chinese drones or geese flying around. I think there are definitely some objects up there that have demonstrated really insane accelerations.

Theo: In today's congressional hearing, a whistleblower said that the US government for sure has a craft of non-human origin. Naturally, many people, myself included, are skeptical, because of Sagan’s rule that extraordinary claims require extraordinary evidence. So what brought you to the conclusion that this is real, and not people seeing it wrong or lying for clout?

Greg: The reason I became convinced of that is just the preponderance of expert witnesses, journalism, and evidence showing that there are pilots who have seen things they describe in this way. There's not really a lot of room, across all of that, for there to be no incidents where an actual object was doing some crazy stuff in front of these aviators. I don't incorporate any ground sightings into my priors. I'm looking at the baseline minimum of what I think is true: there are objects moving around up there doing insane accelerations that aviators with many years of experience and high-level security clearances have come to believe they've seen. There are enough of those that I'm convinced there's something to it.

Theo: So if there are advanced aliens out there who would be sending some kind of spacecraft to earth, why would they be sending giant detectable objects and not some tiny thing the size of a bacterium that can just quietly surveil us without us ever knowing?

Greg: Right. So, that's my first belief that I think is very much true. My second belief is that there are people within the government who have seen the evidence, seen the data on that specifically, and have been convinced that it demonstrates non-human technology. Those are the two things I'm really sure of. So now the question is, if that's true, what's going on? I don't have a really good explanation for this yet. But if you try to explain this situation, the best explanations are the ones that seem like they're obviously the case, with a certain amount of induction from very obvious first principles. And the first principle here ends up being about AI. I'm not the first to think of this, but I thought of it independently and it made it click for me. I believe this is what's going on.

At a minimum, this doesn't explain everything; it's just the tip of the iceberg. But if you take as an assumption that neural network systems can generate intelligence and have a certain capacity, then the risk that this technology, developed on earth, has an impact beyond earth seems non-zero. Now, there's a big difference between saying we're all going to die because of this, with something close to 100% certainty, and the inverse case I'm describing, where the risk is at least an epsilon. It's not zero.

From the standpoint of an external observer, if a planet can produce any risk at all of harming you at a distance, that planet is not just of interest; now you have the entire problem of all the planets posing a non-zero risk to you. And I think that's actually the case. From a human standpoint right now, we can say that all planets within some reasonable light-speed radius expose us to some non-zero risk of harming us, because if they create some kind of intelligence in their systems that can self-replicate and decides it wants to destroy us, it could. So I think we do know today that there is non-zero risk presented to us from other planets if there's any chance they have any life on them at all.
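A toy back-of-the-envelope, just to make the shape of that argument concrete; the numbers below are made up for illustration, not figures Greg cites:

```python
# Illustrative only: a tiny per-planet risk still aggregates once you count
# every candidate planet within reach. Both numbers below are invented.
per_planet_risk = 1e-6        # hypothetical "epsilon" chance a given planet ever harms you
candidate_planets = 100_000   # hypothetical count within some light-speed radius

# Probability that at least one candidate planet eventually harms you,
# assuming the risks are independent.
aggregate_risk = 1 - (1 - per_planet_risk) ** candidate_planets
print(f"Aggregate risk across all candidates: {aggregate_risk:.1%}")  # ~9.5%
```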

Theo: Earlier you said that your priors for AI doom are very low because other civilizations in our local area would know that and try to take measures to prevent it. But what if the measures to prevent it are they send over the Death Star and blow us up? That seems to be the easiest way to do it.

Greg: Well, that's the thing people will say, but I think the trick there is that you smuggled in the word easy. If you're an actual advanced civilization, there is no easy or hard. You just choose what you want to do, and when you're choosing, your optimization isn't about whether it's going to take effort; you just say what you want. So you're stuck in a scenario where you're trying to guess the ethics or motives of an external civilization with regard to external x-risk.

I think what's going to happen is that if humans cross the chasm and end up building these AI systems, and we manage to align them and they all work, we immediately have to do this thing, which I think has already happened elsewhere. We need to build a system that goes out and monitors every single place in the galaxy where life could be, places we couldn't otherwise reach fast enough to stop anything. We're going to need to build those things and send them out, and we're going to need to do it within a hundred years, tops. We absolutely need to do this.

Theo: That kind of sounds like Yudkowsky's idea of a pivotal act.

Greg: Maybe. I think it's just a species-level demand. We actually have to do it if we expect to survive. It's like another form of risk mitigation. We need to build an anti-asteroid system. We need to be able to get off the earth. We need to be able to avoid the sun eventually consuming us by getting away from it. We also need to deploy a system, hopefully early enough, and maybe if we're first, we can get to every single planet in time to monitor it and prevent them from unleashing some kind of feedback cycle, whether it's AI or something else, that would kill us.

Now, the question is whether we're the first to do that or not. I think the odds of us being the first are pretty slim. But if we aren't the first, then you ought to expect to find things flying around, probably things of some kind that are keeping an eye on stuff now.

Theo: So, the Fermi paradox.

Greg: Well, the Fermi paradox goes away. I mean, if you have evidence of aliens on Earth, then it turns out the Fermi paradox was just a bunch of bullshit.

Theo: It seems like the most plausible answer to the Fermi paradox that most people agree on (by the way, I'm kind of agnostic, since there's no evidence one way or the other, just competing explanations) is that there are no aliens, or that if there are, they're just bacteria billions of light years away and we're super early.

Greg: I mean, the Fermi paradox is conditional on the idea that you don't observe aliens. If you observe aliens, you have the solution. You're not first. If you don't observe aliens, the question is, why could that be? And I think, yeah, it could be the case that you're early and you can make an inductive argument like with grabby aliens that we should assume we're early because, given we don't see aliens, these kinds of chains of probabilistic things would imply that we are among the earliest within our lightcone. But if we're not the first to be able to do things, even if a lot of people think, "Well, if there were aliens, we would see all this other stuff," I don't know that I have any reason to think we'd see anything. But what we should see at a minimum is something here on earth, which has the goal of ensuring it has full observability on us to make sure that we don't ever cross some threshold that could confer harm to anybody else in the galaxy or within this region. That is the minimum thing.

You asked why they're not smaller, and all this other stuff; I can't really answer that. There are a lot of second-order things I don't have an explanation for, and everything I said could be completely wrong. But where I came down on this issue is that there is at least one minimal motivating cause for why you ought to expect to see some non-human technology at earth. A lot of people think, "Oh, they're tourists. They're coming to visit us. They're coming to look at us." That all sounded silly to me until I realized there is a strong motivating thing that would cause non-human technology to be at earth: it's a self-preservation motivation, and we will develop technology to do this to somebody else very soon. So that's why none of this stuff surprised me.

Now, the thing is, though, a lot of the stuff that's come out in the last couple of days has blown up all of my priors on this. I have no particular reason to believe there are alien bodies, and I have no reason to believe there are crashed UFOs; none of that falls out naturally from these assumptions. So all of that could be true, or it could not be true.

Theo: It does sound weird that the aliens would send actual squishy biological aliens instead of a drone. Why would they do that?

Greg: I can make up reasons, but none of them are as strong to me as the base case assumption, which is that if we are not the very first civilization, not in the galaxy per se, but within the radius at which we could project harm to other actors, we ought to expect to find evidence of non-human technology, unless you think there's some way it would be completely hidden from us.

Now, here's the other thing. This does oversimplify stuff because that's the beginning of the causal chain. If it turns out we ought to expect there to be non-human technology at earth, that doesn't necessarily mean that all the observables, all the things we're talking about like large UFO type things or biological aliens or any of that are those things. That doesn't mean that those are the same things, but it does imply that perhaps what's going on is that yes, the base case is true. There's non-human technology at earth. Maybe we can't even see it. That is here for this purpose. That to me is inductive from those very basic assumptions. So that to me seems strongly likely to be the case. Everything else might be a result of that consequence played out over millions of years with countless actors that incidentally looks chaotic to us. It looks like it makes no sense, but there might be some both parochial and inductive things that led to that kind of stability point.

It could turn out that if every planet in the galaxy has some type of non-visible observation system on it, it could turn out that a consequential corollary of that is you get large anti-gravity orbs with humanoids flying around in them. Maybe there's some chain of causality there. I don't know. But as soon as you've kind of broken through and you've said, yes, the situation should be that there's non-human technology at every planet that has life and there's non-human technology on earth. As soon as you've crashed through that window now, you should expect all kinds of insane things. And so maybe it turns out there's some very weird causal chain that causes those other weird things to come along for the ride. And maybe at first that wasn't how it was, but then maybe over time events happened and that came along for the ride. There's a whole iceberg of consequences that would come if it turns out at the minimum, there's some non-human technology at earth.

Risks from aliens (1:32:05)

Theo: So you seem to be a bit worried about UAPs, unidentified aerial phenomena. When you tweet about it, I think you said something today along the lines of, we should treat the results of the UAP hearings the way we would treat the results of a biopsy. So if they're here to solve the alignment problem for us and not to blow us up, then where exactly does the worry come from? Simply the unknown?

Greg: Well, I think what I would say is that it's not necessarily one dynamic. There's probably some latent capacity for this not necessarily to solve the alignment problem for us, but to de-risk us from everybody else. The minimum demand of my hypothesis is that there's some non-human technology at earth to de-risk us from everybody else. So now the question is, what should we expect things to be like if that turns out to be true, and it turns out there's even more going on? Building on that, there are some inductive elements that may not necessarily be part of that system. The way I think about this is by pulling in all the other allegations and assigning some non-zero probability to them, some low, some high. But I view this from a different angle. Regardless of whether what's going on is entirely about de-risking us, or only partly about that with a bunch of other surprising corollaries, the elephant in the room is that if this stuff does come spilling out, the government has done an incredibly good job of covering it up using a wide variety of methods.

So they've used this kind of half-disinformation approach, where they've let a lot of things out. This is all conditional on it turning out to be true. If it does, then a lot of the stuff we were told was true and a bunch of it was nonsense, all mixed together into an amorphous blob where we can't make sense of anything. That's one thing they would have done if this turns out to be true.

Another thing is that there are a bunch of people who didn't talk. If a bunch of people kept quiet for 80 years, the low-level people probably didn't talk because they feared consequences from their superiors, or allegedly there were death threats and things like that; we'll see if that turns out to be true. But there are also a lot of people all the way up the chain who didn't talk about this, or who only ever talked about it at glancing angles.

So the unfortunate thing I conclude from this is that that kind of insane cover-up is not about conspiracy. I think Naval put it best: you don't need a conspiracy when you have incentives. You can hold together a short-term conspiracy among a small number of actors, but to run this kind of insanely effective, consistent, deep, and long campaign, keeping so many people in the dark and scrambling to make sense of everything, it leaves people in a state of mania.

I mean, these people who have been following this stuff and really living it don't know what to believe about anything. And it can completely turn some of them into crazy people.

Theo: Tinfoil hats?

Greg: Well, it's just that you don't know what to make of anything. You're being fed lies, you're being fed true things, and you don't know which is which anymore.

Theo: This is where the original tinfoil hat meme came from. People wearing a tinfoil hat to protect themselves from aliens, or something.

Greg: No, the way I look at it, though, is that if you have this kind of unparalleled, successful cover-up, it totally blows the doors off anything else, because think about it: it's not like the JFK thing or any of these other classic conspiracy theories. You have a situation where the entire public has a legitimate capacity to collect their own evidence. So if this turns out to be true, a chunk of those cell phone videos are real, maybe a chunk of the abduction stories are real, a chunk of everything you've heard may turn out to be really true.

This is the kind of cover-up they needed to pull off where it isn't just something that happened and is over. It's ongoing, the public has some capacity for evidence collection, and you need a massive amount of incentive for people to be really hardcore about executing operations to keep it contained.

So anyway, that's just to tee up my argument, which is that if this stuff turns out to be true, and you had this kind of insanely successful cover-up that did this to everybody for so long, then in light of how difficult that is, you really need some insanely good incentives. And the only kinds of incentives I can imagine leading to this outcome are not ones where somebody is fearful for their own life, or even for the lives of the people they care about.

Over 80 years, if you're talking about this kind of secret, some person is going to decide: you know what, they're lying, I don't believe they're going to kidnap my kids or whatever, I'm going to just tell everybody and roll the dice, because this is too important. Well, is that what's happening now with the whistleblowers? I'll get to that in a second.

But I think for this kind of cover-up to have worked, you needed everybody who was read in on it to actually believe that if it leaked, everybody was going to die. Or not that everybody was going to die exactly, but that it was going to be really bad for everybody if it leaked. Maybe everybody was going to die, maybe society was going to collapse. And they couldn't just half believe it; they needed to be sure.

Now, look, I hope I'm wrong. I hope the cover-up is just bureaucrats and people in the military saying, we don't want the Russians to find out first, or we don't want our national security to be at risk. But if I'm right, what it was was: no, this is no joke; if this gets out, everything I know, everything I love, everyone I know, it's all going to end in some existential way. That's what I think.

Theo: Like the world’s biggest infohazard?

Greg: Exactly. That's the kind of thing you would need to get this kind of outcome, if it's true and it's not some crazy CIA psyop. If it's true, that's the kind of incentive you needed for not a single person to blow the doors off of it and make it their thing. Now, the question is, why is this changing now? That isn't good news either. If it turns out I'm right, and what motivated this was a sense of existential dread about what would occur if it came out, you have to ask yourself: was the existential dread mitigated? Is it now safe to talk about it? I hope so. I hope that's what it is. That's the best case scenario: it turns out they were going to vaporize us, or we were in a zoo or something, and then we just cracked superconductors or whatever, and now we're cool. We can say this is real and we're all going to be happy. I hope that's the case.

But what I think is actually going on is that the game theory changed, and the people who were weighing whether we needed to learn what was really going on with all this decided that time now matters. A timetable has suddenly appeared, and it now makes more sense, from the standpoint of maximizing human wellness, to actually tell us.

Theo: If the aliens were concerned about us developing AI soon or some kind of other very risky technologies, and the aliens were some years away from us, why would they start worrying now, and not like 50 years ago or 500 or 5000 years ago? Maybe, if we're leaning into some alien conspiracies, maybe Roswell actually was aliens and maybe the aliens actually did help build the pyramids.

Greg: I don't think they worry about us at all. If they're out there, I think they know exactly what we're going to do, and they're just letting things roll out the way they expect. It's not too hard to predict what we're going to figure out and when, if they've seen the whole chain of it. We can't predict a priori what knowledge we're going to generate, but somebody who's already done all the work and knows what the preconditions are could probably predict our tech tree to within months.

So what I think is that they're not worried about us, but there's probably some sequence of events playing out, and there's some kind of urgency right now. I don't know what the urgency would be. The urgency is probably not about them doing anything per se on their worry scale; they already have everything planned out. The urgency is on our side. There's something about this current moment, and again I have to keep saying that I don't want people to think I'm certain of any of this, I'm just reasoning from my assumptions, but the urgency now is on our side. Something is causing our government to decide that there's now a timetable on which the public needs to become aware of this.

There's a whole spectrum of possibilities I can speculate about. For example, suppose it turns out there are certain things we're not allowed to do. People are not going to accept the government just saying, "Hey, Elon Musk, you know that Mars mission you want to do? Yeah, you can't do that. We said so." People are not going to tolerate that. But if it turns out we're not allowed to go to Mars because it's against the plan, then they actually need to tell us, because it's like a war if we're told we can't send rockets to Mars and put people there. That's just a guess, so I don't think that's really what's going on, but it's the kind of scenario that boxes us into our planet rather than the solar system. I'm just framing the structure of the situation.

Alien zoo hypothesis (1:43:11)

Theo: What do you believe about the alien zoo hypothesis?

Greg: I think that's the most likely one to me right now; it's consistent with all of the evidence. The most likely scenario is that there's some urgency around the fact that we need to know we can't do certain things, because we're basically a subservient species. We are either being protected, or we're being kept inside some kind of safety zone. I don't know which it is, but I think there's a progression where a species is doing stuff on their planet, and then they start to do more.

I'm not talking about the AI side of x-risk to other civilizations here; I'm talking about a scenario where there's an envelope of shielding around the earth by a higher power. So the species is just doing its thing, then it starts to hurt itself quite a bit, and that's kind of interesting; we want to make sure they don't completely destroy their planet. But then they start to get to a point where they can really mess things up for everybody else, or get out past the point where they can be reined in, where we can know where they all are and keep them bounded in what they're doing. And it feels like, on a variety of fronts, we're poking holes in that kind of regime. As soon as we're moving toward anything extra-planetary, in terms of projecting risk, projecting force, projecting expansionism, projecting anything like that, if we're in any kind of enclosed, planetary-scale bounding situation, that would be when you'd expect this kind of news to become necessary to tell the public. Humans are going to be limited in their agency at that point, so they need to know why.

Theo: How is the alien zoo hypothesis falsifiable? What kind of test could you formulate to prove that we're not in the zoo? Is it falsifiable?

Greg: Well, I think for me, at this current point we are in, I've not been a big person to get out and make really hard predictions about UFOs for most of my life. I really only started getting concrete about what I thought was going on in the last few years. If you find yourself in 2028, four or five years from now, and there hasn't been disclosure about this at a basic level, and we're not looking at some kind of significant AI phase shift in our technology and our ability as a species, then I completely blow all of my probabilities away. I've said on the record on Twitter, if we don't get some kind of extraterrestrial oriented confirmation in the next three years, I'm perfectly willing to say, okay, we're by ourselves. There's nobody else around. We're totally cool. This is great. I mean, I'm excited if that's true. But it really feels to me like a lot of these things, like the zoo hypothesis, all this stuff, the next couple of years, it all is going to just fall out.

Theo: Would you be excited if that was true? Because you did say that one of your reasons for having low probability of AI doom is that aliens would prevent it in one way or another.

Greg: Oh, I would increase my odds that we're going to hurt ourselves. But I think they're connected. What I want is for us to be alone, and for us to be able to navigate the AI problem without a foom. That's my best case scenario. A scenario I don't want is that we aren't alone. If we didn't have this cover-up and all this other stuff, I would be neutral on whether it's good news or bad news if it turns out we're not alone. But particularly on the other side of this cover-up, if we aren't alone, I feel really strongly that it's bad news. So right out of the gate, I just don't want us to not be alone. Now, if we're not alone, then yeah, that solves our foom problem, but it creates our zoo problem.

What I actually think is that the foom problem isn't really a real problem regardless of aliens, because the argument is problematic for other reasons. But I think it's particularly not a problem because I don't think we're alone. I do think we're probably under some kind of planned bounding, or, hopefully, if things go the right way, maybe it's the kind of thing some people are hoping for, where we've unlocked the door and turned the key, and now we've graduated to being a serious species, and everyone comes and says hello and welcomes us to the club. That's the optimistic view, the Galactic Federation.

When mankind zapped into sentience, we found ourselves in the state of nature. The world was really harsh; it didn't care about us. If it turns out that we're not somehow, against all odds, alone, that grabby aliens is wrong and we're not first, and we're basically sitting inside of a larger-scale state of nature, then I just don't think the odds are good that this is a good scenario for us. I agree with David Deutsch in the sense that I do think there's a cross-cutting thread of personhood that we would share with any intelligent extraterrestrial life; I think they would recognize us as people in a certain sense. But that's not enough to get me over the line, particularly in light of the cover-up, because the cover-up is basically my confirmation. Again, if it's all true, if it's true enough that we have to conclude we're not alone, it's confirmation that we're living in a state-of-nature situation. We're not living in some galactic Star Trek Federation thing. If the government couldn't tell us about the Federation for 80 years, then we're probably in more of a state-of-nature situation, which is bad news.

I hope I'm wrong. I really do. I'm trying to think of the steel man for the idea that we're going to get this graduation ceremony. An AI takeoff is really special; getting to the point where you can confer risk on other people in the universe is a significant event. But is it an event where you've proven yourself a competent, morally worthy species? I don't know. If we knew there was some planet that had just managed to cobble together some neural networks, and we were hosting their protection for them, would we celebrate them and bring them into the fold when they pulled it off? I don't think so. I think we would basically just say: you guys can't do that, sorry. Keep living your life on your unique planet of octopuses or whatever; you guys weren't first, we were, so we're keeping you where you're at, enjoy. We're not going to let you blow yourselves up or anything. Maybe they'd pluck the smartest humans out and let them go live with them. But I just don't think it's going to be the kind of thing where we graduate. I don't know. Maybe.

Virtual reality (1:50:50)

Theo: You posted a lot about augmented reality and virtual reality, especially in the lead-up to the Apple Vision Pro announcement, which I'm excited about. You even changed your profile picture from the little dude with the VR headset to the little dude with the eyes visible through the headset, the pass-through. So far, with what we have in 2023 VR, which headset do you think is coolest? We've got the Apple Vision Pro, the Quest 3, the Quest Pro, Bigscreen Beyond, whatever Valve has cooking. So what's your favorite right now, and why?

Greg: Well, I think right now, it's kind of a cliche, but I think Apple is basically setting the standard for everybody in terms of the frontier. Bigscreen Beyond looks amazing. I haven't used it, but I think that's going to be a significant win because there's this really important need to make these things really comfortable to wear and really light. The head weight is such a huge variable. I'm optimistic about those two.

What I'm working on is still kind of in the XR realm. I can't go into too much detail, but one of the things I've come to believe is that this whole idea of spatial computing and XR is well on its way to becoming the primary paradigm for computing, or at least sitting alongside the other ones as a peer. That being the case, you run into a bit of a catch-22, because the current form factors and paradigms, where you're doing a full photonic override in front of the person's eyes on their head, are definitely not a solution that will let every human participate in this paradigm at all times.

So I think there's going to be room for additional device categories that allow participation in this spatial computing paradigm in various ways, and I'm currently working on new concepts for giving people access to this stuff that isn't necessarily something they wear on their head. A lot of what's going on with the Vision Pro is pulling headsets in one direction, and Bigscreen Beyond is pushing on how comfortable we can make something you'd wear all day for great VR. Then there's this orthogonal vector: what can you do that doesn't require the person to wear something on their face to participate? Even if they have a headset, there are going to be modalities of using this medium that don't require it. That's what I've been exploring recently, the go-left-when-everyone-else-is-going-right type of thing.

Theo: What do you think is the kind of minimum viable product that people would actually wear all day? Because, you know, wearing glasses like this, I barely even notice that I have them on most of the time, except for when I'm working out. And even then I barely notice. So what would it take for people to wear VR?

Greg: Well, I think it depends on the value proposition; these things are all interconnected. There's a certain baseline where it has to be comfortable enough that you feel okay wearing it, and I think that's part of the reason I've felt like people haven't really been focused on that for a while; it's been less of a priority. But form-factor-wise, I'm sure Apple is a generation or two out from taking a shot at that.

I thought that you could kind of leapfrog that a bit by making a flip-up AR pass-through headset. Going into it, I thought that's what Apple was going to try to pull out of their hat for getting us there sooner. I figured they would have some crazy industrial designers trying to crack some kind of slide up or flip up pass-through device that was super light, but would fail through to being transparent in a certain sense, because you could flip it up. Whereas the current one, the problem is that as soon as it runs out of juice, you can't see anything. And so, you know, if you want to get there sooner, you need something that's going to fail through to transparent.

I think the fail-through-to-transparent approach they're probably taking is going to be a fancier, display-oriented one, but that's just me speculating. I think failing through to transparency is the key thing that gets you what I call all-day wear, not all-day use. A lot of people think, okay, if you're wearing a headset, it's on and you're using it all the time. I think the first real all-day pass-through AR headset is going to be the one that, when it runs out of juice, you still want to wear, or at least tolerate wearing enough to keep it accessible in your life habitually all day long. I thought Apple's play was going to be more like the Apple Watch paradigm, where it would not be in use most of the time and would elegantly sit idle the way a watch does when its face is off. I thought that's where they were going to go. What it probably boiled down to is they decided that wasn't the right trade-off, so they started with a dev-kit-style thing and will push on the transparent OLED fail-through later.

Theo: I really wish I could talk and comment more about it, but I haven't even seen an Apple Vision Pro, let alone used one, and I won't for the next six months. So from now until then, it's all just speculation.

Aside from being fashionable when turned off, what other hardware and software do you think would be necessary to get universal adoption to the point where people are replacing their phones and maybe even replacing their computers?

Greg: Well, I've had a few thoughts about this; it's hard to know for sure. One thing that's underemphasized is the potential for hearing enhancement. I had some hearing issues a while back, and with AirPods in, you can turn up the volume and so on, so you can hear better.

I think there are some very simple things you can do for people with an all-day pass-through AR headset that will generally just make visual perception more enjoyable. I recently got glasses and was surprised by how much of a life improvement it was, because my vision wasn't exactly amazing; it was okay, but it wasn't great. I think there's an analogy there. If you have a pass-through AR headset on, it can correct for farsightedness. Beyond that, you can add some light image-based processing that increases fidelity and contrast, highlights things, maybe does some spectral sliding around the hue, and just makes it more enjoyable to look at stuff.
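As a rough sketch of the kind of light, per-frame processing being gestured at here (a toy example, not any real headset pipeline; the contrast factor, hue shift, and file names are made up):

```python
# A minimal sketch of lightweight frame enhancement for a pass-through view:
# slightly boost contrast and nudge the hue before display. Values are arbitrary.
from PIL import Image, ImageEnhance  # assumes Pillow is installed

def enhance_frame(frame: Image.Image, contrast: float = 1.2, hue_shift: int = 8) -> Image.Image:
    """Return a lightly enhanced copy of a captured camera frame."""
    # Boost contrast a little so edges and text pop.
    frame = ImageEnhance.Contrast(frame).enhance(contrast)
    # "Spectral sliding": rotate the hue channel by a few steps.
    h, s, v = frame.convert("HSV").split()
    h = h.point(lambda x: (x + hue_shift) % 256)
    return Image.merge("HSV", (h, s, v)).convert("RGB")

if __name__ == "__main__":
    frame = Image.open("camera_frame.jpg")        # hypothetical captured frame
    enhance_frame(frame).save("enhanced_frame.jpg")
```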

I think that's the thing I expect to be a decent needle-mover for getting people to wear these things all day. Just like the world starts to suck a little bit when you take your glasses off and can't see, there's a chance the level of enhancement these pass-through devices can afford will have a similar effect when you take them off: it just feels not as good.

Theo: The best analogy I've seen for that is that eventually we'll get to a point with AR and VR where taking it off will be like going to the bathroom without your phone. Before phones, people took going to the bathroom for granted, and now you can't do anything without your phone. I bet VR will have the same kind of effect on people.

Greg: The big challenge is whether you can really get the value proposition high enough and the weight low enough that you're going to feel okay having it around all day. But I think it'll get there. I had a pretty different conception; I wrote a long prediction thread for Apple's announcement, and I got some of the things right. I probably did better than average, just because I've been following this for a while. But they really didn't go for it on the first version to the degree I thought they would, in terms of having something that could theoretically hit consumer escape velocity.

They obviously went for it on a million other things, so this isn't underplaying what they achieved. But I thought they were going to put something out that was in a position to go vertical if it turned out they had product-market fit, even if they thought they didn't. What they did put out, I think, has a ceiling right now on how far it can really go, given what it is, just because it doesn't have all-day wear, and again I mean all-day wear, not all-day use. Once you unlock that, you at least have the potential to get to mass-adoption all-day wear. Without it, you've kind of guaranteed you won't. But that's okay; they'll figure it out.

Theo: So what software would you want to see on a phone-replacing VR headset? Personally, I would want something like Jarvis from Iron Man. That would be awesome: Jarvis, call my friends; Jarvis, schedule a meeting for this time and this date. It's kind of what people pictured Siri being, but actually usable, functional, almost totally built into your cognition.

Greg: There's a couple of things people have thought about. The contrarian stuff I find interesting is that a lot of people think adding stuff is going to be really cool, and it will be, but I think there's also a big opportunity to take stuff away. An analogy: when you're playing first-person shooters, you'll often dial the graphics quality down so you can focus on the essentials of what's going on. You can see the rocket, you can see the other player, you can see the walls; you don't get distracted by the texture maps. In life, we don't have that knob. We often want to turn things down, and pass-through AR will let us turn it in both directions. For instance, when I'm having a conversation with someone, it would be great if it could not just tune out the noise audio-wise, but also make us feel like we're in a more intimate environment than we actually are. Or if I'm looking at a room, I would love to be able to dial down various elements to make me feel more at ease. I have two young kids; they're making a mess all the time, and there's always stuff strewn everywhere. I certainly wouldn't mind if my AR headset removed the things from my field of view that it knew posed no danger to me. For example, my kid dumped all of his Legos on the coffee table. I'm not near the coffee table, so just get rid of those, so that while I'm looking over there, I don't see them. That's just a rough example, but there are all these degrees of freedom for dialing down visual noise that I think are going to be a big value-add of these things once they get AR in them.

The other one is social presence. That's the one you can't really do until you have mass adoption. So you have the single-player killer app, which I think might be something like that. And then the multiplayer killer app, something I thought would be here by now if you'd asked me a while back, is basically attention on tap: any human on earth can be with any other human on earth at any time by clicking a button. Zoom is not that. That capability is not science fiction; we know how you could do it, and it's a transformative capability. So I think that will be the other killer app for this stuff, but it has to go ubiquitous first.

The future of VR (2:05:37)

Theo: So what do you think are some of the problems inherent to AR? You talked about this a little bit on Twitter. You said there are AI doomers and crypto doomers and doomers for every area of technology so far, except for AR and VR. I guess the classic example is that men will spawn an AI-powered Ana de Armas girlfriend in VR and just not interact with people in the real world. And this is already happening with chatbots on the level of BERT.

Greg: The AI trend is happening sooner. And so a lot of people are thinking about these dystopian scenarios with AR and VR now through the lens of having really sophisticated AIs. And I think those are definitely legitimate risks.

A lot of the stuff I was thinking about a few years ago I encapsulated in this idea of avatarism, which I still think is going to happen; it's now going to be really intermixed with this AI curve as well. But ultimately, what it boils down to is that there is going to be an epistemological crisis around who has authority over how we are seen. When I put these glasses on and I call you on Zoom, you could theoretically remove the glasses from my picture, but you don't really do that; there's no readily available way right now. And in person, you definitely don't have any way to override how I choose to appear to you. What I call that is freedom of form: when we're meeting in person, I have the ability to choose how I appear to you. I take advantage of the way light bounces off of me, and I put stuff on my body so that I have high confidence I'm going to appear to you in a certain way.

So I think that whole idea of freedom of form is going to be a major discussion, and I think it could get quite intense. I haven't fully updated on it recently, but I still think it's a legitimate phenomenon that's going to occur. The reason I know this is going to be very contentious is because of what happens if you put forward a hypothetical question about it. Specifically: if I'm hanging out with you virtually as an avatar, beamed into your living room, who gets to decide how I look to you? Do I get to decide, do you get to decide, and in what ways? If you propose this question to people, they'll all say, what a stupid question. Half of them will say it's a stupid question because obviously I get to decide, and the other half will say it's a stupid question because obviously you get to decide. That, to me, means this is going to be a really big problem, because the capacity will exist, the agency will exist, and people are going to fight over who gets to decide in the cases where there's a conflict. We never really had that before in a large-scale way that's deeply interleaved in our lives. It certainly seems to be the case that if you have ubiquitous AR and VR, you're going to be in situations where you're hanging out with five people, talking to them, and two of them are physically co-present with you and the others are not, but you don't actually know offhand who's physically with you most of the time. It's interesting, because once everyone's physical appearance is being mediated by technology, even the people who are with you physically can be altered. I had this experience years ago using VR. I worked at a VR company, and we would do meetings where we were all in VR together simultaneously; some of us were in one big office and a few people were remote. During these meetings, I wouldn't know whether the person I was talking to on my left was literally sitting next to me or on the other side of the country.

Theo: Even bad VR systems can be very disorienting, almost making you forget where you really are. I did a demo for a school project where I tried on different people's Meta Quest 2s for different projects. Every once in a while, I would almost think to myself, am I really standing in a house next to the beach? Then I'd take it off and be like, whoa, what?

Greg: Oh yeah, you can totally lose yourself in this stuff. The specific thing I'm referring to is whether or not a person you're engaged with as an avatar happens to be with you physically. In a situation where people are all wearing pass-through AR headsets, and some physically present humans are being swapped out for avatars while other humans are being beamed in as avatars, your mind is not really going to know whether a person is with you or not, and most of the time you won't be thinking about it.

So now you've got this weird thing where this freedom of form question starts to inject itself into every human-to-human interaction. Once every human-to-human interaction is by definition running up against this principle, and half the people believe it should go one way while the other half think it should go the other way, you've got a recipe for a big problem. That's one thing I think is a major risk there. I think it's still fine, we're going to figure it out, but it's one that I think is going to happen and that we should be mindful of.

Theo: The solution I would propose to that problem is that you would decide how you want to appear, but the final decision for how you appear on my headset would be mine. And does it really make a difference in some cases? If person B doesn't know that their appearance has been changed, then what exactly is the difference? If person A wants to make all unattractive people attractive, or something like that, while walking down the street, how would they be stopped from doing that?
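Note: purely as an illustration of the precedence Theo is describing, and not any real XR API, a viewer-side resolution rule might look something like this rough TypeScript sketch (names like resolveAppearance, SenderPreference, and ViewerPolicy are hypothetical):

```typescript
// Hypothetical sketch (not a real XR API): the sender declares how they
// want to appear, but the viewer's headset has the final say.

type Appearance = { avatarId: string; label: string };

interface SenderPreference {
  defaultAppearance: Appearance; // how the sender asks to be seen
}

interface ViewerPolicy {
  // Per-sender overrides the viewer has chosen for their own headset.
  overrides: Map<string, Appearance>;
}

function resolveAppearance(
  senderId: string,
  sender: SenderPreference,
  viewer: ViewerPolicy
): Appearance {
  // The viewer wins when they've set an override; otherwise the sender's
  // declared appearance is respected.
  return viewer.overrides.get(senderId) ?? sender.defaultAppearance;
}

// Example: the viewer chooses to see "greg" as a dragon even though
// Greg asked to appear as an elf.
const greg: SenderPreference = {
  defaultAppearance: { avatarId: "elf-01", label: "elf" },
};
const viewerPolicy: ViewerPolicy = {
  overrides: new Map([["greg", { avatarId: "dragon-07", label: "dragon" }]]),
};

console.log(resolveAppearance("greg", greg, viewerPolicy).label); // "dragon"
```

The rest of this exchange is essentially about whether that last-word override should exist at all, and who gets to set it.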

Greg: I think you're coming at it and concluding certain things about how it ought to go. What I'm telling you, though, is that if you talk to other people, you'll find they have a similarly strong intuition about how it should go that isn't that, which can be surprising. For example, there are people who put a lot of stock in how they appear to others, and who bake a lot of identity into intentionally appearing the way they do, for various reasons. We don't have to get into the various ways this manifests today, but there are people who really do care that they're seen how they want to be seen, and they would take this as deeply offensive, a personal affront at a minimum.

Here's a thought exercise for you. Let's say there was an AR filter for your headset that just rendered the other person naked. Now, you can run down the idea maze of, okay, how does that work? Who would make it? Who would run it? What options would you have? And you can pop out the other side and say, okay, at the end of the day, some people are going to be really angry that this is being used on them. And the question is, where does the limit of their ability to stop others from using it actually lie?

Theo: Well, I would say there's not a clear distinction there between an AR filter and imagination. I could walk around the street and imagine everyone naked, and people might consider that disrespectful, but there's nothing that can really be done about it. There's no actual definable harm being done to people. Maybe these questions will become more serious in the future.

Greg: So just to be clear, I'm not personally crafting any particular views on this. I've tried not to front-run it, because I think it's something I need to think about myself. But it's not obvious to me that everyone's going to agree with that. I think there are going to be people who will actually want to ban those kinds of things. There are going to be people who would say, well, if you want to use your Apple headset, Apple has centralized control over that, and if you're running some of these unsanctioned filters on people, Apple has decided as a company not to endorse that, and so it's going to be impossible or hard for you to install those things. So it's just another example of who has agency over this. But in this case, the specific problem is that people are going to be able to override how other people appear, which is very uncharted territory.

Theo: I bet in some of the first implementations, it will just be banned by default for the same reason that getting ChatGPT to write a speech endorsing Hitler is banned by default, because they just don't want to deal with it. And it's easier to just ban it and forget about it than it is to allow it.

Greg: It gets very tricky. I mean, there's a whole bunch of unknowns about what kinds of avatars there are going to be. There's a certain point where you can't outrun culture. So let's say there are dragon avatars and elf avatars, and people who are dragon avatars get really upset if you view them as elf avatars. And now there's a special rule: if you're elf-avataring somebody who was a dragon, now you're a dragonist or something. I mean, this is just me spitballing. But there's a certain point where the culture is going to move a lot faster than the technology companies can, and it's just going to be a giant mess.

I mean, it's fine. It's not very high on my list of concerns. But you asked about AR risks, and that, to me, is a significant one that's going to cause a lot of cultural rifts and contention, and it's kind of off everybody's radar right now.

Greg’s Intellectual Journey (2:17:38)

Theo: So, last topic: how would you trace your intellectual journey over time? How did you come to be interested in VR, in aliens, in AI and programming?

Greg: I don't know. I mean, I've always been a software engineer, and I've always worked on computers and stuff. I had a past life as an engineer working on search infrastructure and systems and things like that. And then I went through a transition where I was like, okay, what should I work on? I tried an Oculus Rift DK1, and I was like, okay, this is obviously insanely cool. I want to work on that.

I started writing more aggressively on Twitter only in the last few years, and I've found it to be a really productive way to think out loud. I think writing is generally a good exercise if you want to think about things. So I've been using it as a way to increase the breadth of my mental models and how I come to any conclusions about anything. But I haven't really been talking too much about my current work. Most of my time is spent developing new XR hardware and software that I hope to be sharing more about soon. So I know I tweet a lot, but most of my actual intellectual energy is going into building something really cool right now. A lot of these intellectual pursuits are more of a side interest, but I enjoy them.

Theo: You also talk about how you read The Beginning of Infinity by David Deutsch, which is a book that a lot of very smart people have cited as a huge inspiration. Naval has cited it, and Brett Hall created a whole podcast about the theory of knowledge based on it, which I highly recommend and love. And you said something along the lines of, in reference to e/acc, that we don't need e/acc because there's already a better techno-optimist philosophy out there, and this is it. So when did you first discover The Beginning of Infinity? And how did it change your worldview?

Greg: I read it a long time ago, and I really had no idea that it was something other people got a lot out of until recently. I didn't know Naval had read it and taken a lot from it. It was one of those books that just left an imprint on me. A lot of it wasn't even obvious at the time; it's only obvious in retrospect how much it was informing how I thought about things. There's a certain intellectual consistency and intellectual honesty to it. It closes off a lot of the poor reasoning and really bad philosophy I've read over the years about how to think about things. Maybe there's selection bias: being a technologist and a techno-optimist by default, I was primed to latch onto it and adopt some of its principles in how I think about things. I've read it one or two more times since then. It just naturally became my favorite book, and then I found out many years later that I wasn't alone in that. I didn't even know anybody else knew about it; in fact, I don't even know how I found the book originally. It was such a good book that I read it and it immediately became my favorite book on anything close to philosophy. It's basically that simple.

I think over time, having now more directly engaged with people who have thought about it, I've found there are some elements where I've carved out my own interpretation of this stuff, which is a good sign of a healthy philosophy: there's slight nuance and internal discussion about it, but it's all really productive and good.

One thing I thought of the other day, in light of the e/acc stuff: I actually worked with Balaji Srinivasan a while back, helping him with his Network State book. That work made me think that The Beginning of Infinity deserves a first-class public introduction for everyone in this moment. I think it really is time for everyone to have read this book. My pitch to David Deutsch, if he's out there, is that the book deserves to be brought into the public domain. It deserves to be publicly available on the internet for anyone around the world to read, in any language, at any time. Maybe it could be a second edition, maybe he would do some revisions. It really feels like the time is right for that philosophical work to be put forward and packaged up for the world at large to genuinely engage with, in a way that so far has been limited to a fairly small set of people.

Now that these technology curves are really forcing us to reckon with a lot of important questions, how do we actually create knowledge? How do we think of ourselves as humans, or as people? Are we special or are we not? I think now is the time for this philosophy to be well understood and part of the global zeitgeist. I really hope that happens, and if there's any way I can help, I'd be happy to.

Theo: You can ping him on Twitter. He's constantly active there; I've pinged him before and he's actually responded to me a couple of times. Maybe that's something he would be open to. I did tweet once about what if somebody created a website like LessWrong, but instead of being based around Eliezer Yudkowsky and the Sequences, it was based around David Deutsch and The Beginning of Infinity. I still think that's a pretty good idea that I might be open to building at some point.

Greg: Certainly, there are plenty of conversations that could hang off of the book, for sure. I think there's going to be a lot of search for meaning, a lot of search for purpose, and a lot of search for how to think clearly if it turns out either that we're able to create things that look and feel like people in computers, or that we're not the only people and there are other people already out there who are now starting to actually interact with us.

I think for both of those things, regardless of whether it's good news or bad news, having at least a basic foundational understanding of how knowledge is created, what it is to be a person, what we ought to expect from these things, and how to actually know whether they're doing what we say they're doing, is essential. These are tools of literacy going forward as we start to run up against some of these big questions and realities.

Theo: Well, with that, I think that's a pretty good place to wrap it up. So thank you so much, Greg Fodor, for being the first guest ever on my podcast. This was a great talk. Stay optimistic, and we'll see you in the next episode.

Outro (2:26:15)

Thanks for listening to this episode with Greg Fodor. If you like this episode, be sure to subscribe to the Theo Jaffee podcast on YouTube, Spotify, and Apple Podcasts. Follow me on Twitter @theojaffee and subscribe to my Substack at theojaffee.com. Thank you again and I'll see you in the next episode.
