Why I'm (Mostly) Optimistic About AI
Prepare for a vastly different, and mostly better, world.
I was born in 2004 and grew up fully immersed in the era of technology. The iPhone was released when I was in preschool, I practically grew up on Wikipedia and YouTube, and I got my first phone (a Samsung Galaxy Core Prime) in sixth grade. I’ve followed technological and AI progress closely my whole life, including being an early adopter of GPT-3 (mostly for help with essays and copywriting). I’ve lived through and personally experienced the advent of the iPhone, Bitcoin, smartwatches, smart speakers, cloud computing, ride-hailing, online streaming, and much more. I currently study computer science in college. Yet when I checked Twitter on December 1, 2022, I was, for the first time in my life, completely and utterly blown away by a new piece of technology.
Almost every tweet on my timeline was one thing: screenshots of the then brand-new ChatGPT. Here we had an actually useful general-purpose AI capable of writing essays and code, answering questions on any topic imaginable, and even simulating a Linux terminal. Despite its tendency to make up answers and avoid hard questions, its responses were shockingly detailed and believable. I soon got the sinking feeling that everything was about to change.
When Bing Chat was released on February 7, Meta’s LLaMA on February 24 (which was leaked onto the internet almost immediately), and the ChatGPT API on March 1, I began to feel even more uneasy. It seemed as though AI advances were happening faster and faster. Then, in the last twelve days:
Stanford researchers used LLaMA and GPT-3 to create Alpaca, a model with similar performance to ChatGPT that cost only $600 to train and used 96% fewer parameters
Open-sourcers on the Internet managed to run LLaMA on cheaper and cheaper hardware: first brand-new MacBook Pros, then old smartphones, then Raspberry Pis
OpenAI released GPT-4, which is not only significantly more creative, intelligent, and accurate than GPT-3 but has an input limit nearly 10x that of ChatGPT and can process images as well as text
Google announced it will be releasing its PaLM model to the public, and Anthropic announced the same for its model Claude
Google and Microsoft both announced new AI integrations with their productivity apps
Midjourney released v5, which creates incredibly realistic digital art
PyTorch 2.0, the latest version of the most popular AI library, was released
Open-sourcers are actively figuring out how to give language models access to outside tools so they can act beyond their text-only environments, using technologies like ReAct and LangChain that can be implemented with tiny amounts of code (see the sketch after this list)
Runway announced Gen 2, a text-to-video model
Adobe announced Firefly, a suite of new AI tools integrated directly into their existing products
Google began releasing Bard, its AI chatbot
GitHub announced GitHub Copilot X, a massive upgrade to their already powerful AI coding assistant
Microsoft Research released a study showing the depth of GPT-4’s cognitive abilities
OpenAI announced new plugins for ChatGPT, including the ability to browse the web in real time, execute Python code, and connect to the Wolfram Language
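To make the tool-use item above concrete, here is a minimal hand-rolled sketch of the ReAct pattern: the model is prompted to interleave Thought/Action/Observation steps, and a small loop in our code runs the requested tool and feeds the result back. The prompt format, the toy calculator tool, and the step limit are illustrative assumptions of mine; it assumes the 2023-era openai Python client with an API key set, and libraries like LangChain wrap essentially this same loop in a few lines.

```python
# Illustrative ReAct-style tool loop (assumptions: 2023-era openai client,
# OPENAI_API_KEY configured, and a toy "calculator" tool defined below).
import openai

def calculator(expression: str) -> str:
    # Toy tool for this sketch only; never eval untrusted input in real code.
    return str(eval(expression))

PROMPT = """Answer the question. To use a tool, write a line of the form
'Action: calculator: <expression>'. You will then receive an 'Observation:'
line with the result. When done, write 'Final Answer: <answer>'.

Question: {question}
"""

def react(question: str, max_steps: int = 5) -> str:
    transcript = PROMPT.format(question=question)
    for _ in range(max_steps):
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": transcript}],
        )["choices"][0]["message"]["content"]
        transcript += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[-1].strip()
        if "Action: calculator:" in reply:
            expression = reply.split("Action: calculator:")[-1].splitlines()[0].strip()
            transcript += f"Observation: {calculator(expression)}\n"
    return "No answer within the step limit."

print(react("What is 17 * 23 + 4?"))
```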
As Marc Andreessen said, “I’ve seen some incredible weeks in my life, but this one is up there.”
This sent me into a spiral of existential angst: will AI destroy my career prospects as a software developer? Will it prevent me from ever being a successful entrepreneur or investor? Will it end the ability for people to have a career in general? What’s the point in doing anything productive if it will all be made irrelevant by AI? How can we live meaningful lives if AI overshadows everything we could hope to achieve? What if AI wipes out all life on Earth?
These are deep questions with no easy answers, and I still ponder them daily. Fears of AI misuse, mass unemployment, human irrelevance, or even human extinction are reasonable, and should be taken as seriously as possible. However, I believe there’s good reason to be optimistic. For reasons I describe in more detail below, mass unemployment and superintelligence seem unlikely to arrive imminently, and when they eventually do, the world will likely be a much better place.
Let’s start with how AI will impact jobs and the economy over the next few years.
AI (Probably) Won’t Take Your Job (Yet)
The explosion of language models onto the scene has tremendous implications for work, especially white-collar and/or knowledge work. Even before GPT-4 was released, preliminary studies found that GitHub Copilot, a non-state-of-the-art(!) coding assistant, more than doubled developer productivity and that ChatGPT could impact more than 50% of tasks for 19% of the US workforce. Understandably, this leaves a lot of people very worried about AI taking their jobs.
This worry has surfaced many times throughout history. Famously, in the early 1800s in England, the Luddites rebelled and destroyed textile machinery out of fear of losing their jobs. Not only is our world today vastly better than the world of the 1800s, but unemployment is at a near record low of 3.6%. There’s good reason to believe that in the short term, AI will augment knowledge work rather than replace it: further reducing the drudgery of certain tasks, allowing workers to focus on the more enjoyable and productive parts of their jobs, and enabling organizations to produce far more. It’s also likely that AI will create entirely new jobs that don’t exist today, just as a weaver in the 1800s could never have foreseen becoming a software engineer.
Even if you don’t accept this argument, there are three further obstacles standing between us and imminent mass unemployment from AI.
First, language models are flawed. They have significant issues with factuality, accuracy, and judgment, and no reliable way to recognize when they are wrong, let alone to avoid being wrong. Jobs that demand these qualities, such as doctors and investment managers, are safe in the short term. Similarly, although self-driving cars have been on the horizon for years, they need to be literally 99.9999% crash-free to replace human drivers. It’s usually okay if an email contains a poorly written line, but nobody would fly on a plane with a 1% (or even a 0.1%) chance of crashing. Language models are also limited in their scope. Engineers, scientists, entrepreneurs, and anyone handling complex and fragile systems or emerging knowledge are safe.
Second, robotics (AI in the physical world) is nowhere near AI in the virtual world. While virtual AI can now create nearly perfectly realistic-looking people and convincing rabbinical sermons, robots can barely grab things and move around, let alone build a house or cook a meal. Moravec’s Paradox observes that fine motor skills are actually much harder to compute than advanced reasoning, and contrary to the predictions of The Jetsons, we have AI video generation before something as “basic” as a robot maid. (It’s important to note, however, that just like we were shocked by ChatGPT, it’s possible that robotics could advance just as unexpectedly.)
Finally, as Marc Andreessen points out, many fields (government, education, healthcare, law, finance, and more) have heavily protected, regulated, and secured jobs. It’s common for technologists to forget how much inertia there is in certain areas: a full 7% of American adults, or more than 18 million people, reportedly still do not use the Internet. While creative destruction is a powerful force, it is not all-powerful.
We can think of current and near-future AI as a remote, polymathic intern. While it knows something about everything and can do almost anything (at least, anything that doesn’t require a physical body), its abilities are limited to those of an entry-level employee. The AI can’t think for you, only help you think. Given its limited reliability, it shouldn’t be trusted with important decisions. While AI is limited now, however, it will not be forever. When AI becomes as reliable, intelligent, and accurate as humans at any conceivable task, what happens to the economy and our day-to-day lives?
Existential angst
Although it won’t happen tomorrow, it’s highly likely that within most of our lifetimes, artificial general intelligence will become better than humans at almost everything, and replace nearly all economically productive human labor. Given that humans are used to being indisputably the smartest creatures in the world (and the known universe), this understandably causes people a lot of existential angst.
To anyone who feels this way about AI: you are not alone. Worrying about future uncertainty is a part of life, and everyone feels this way at times—even Sam Altman and Elon Musk. It’s important to remember three things: we live in the present, humans are remarkably bad at predicting the future, and humans are remarkably good at adapting to change.
Many people naturally fear a world with abundant superintelligence, a world that will almost certainly be vastly different from our own. To deal with this fear, imagine a medieval peasant who is suddenly transported into the 21st century. He would be shocked to see transportation in loud metal boxes, humans living in massive agglomerations of tremendous structures, humans spending much of the day on small metal slabs with glass panels that magically change what they display, the decline of religion and monarchy, and so on. For those of us who are adjusted to this life, however, it would be hard to imagine going back to the Middle Ages—a time with so much more death, destruction, poverty, tyranny, and illiteracy than our own.
The root of much of today’s existential angst comes from the thought of being replaced. Many artists and programmers in particular feel this way. They wonder, how will humans be creative if AI can do everything? How can I possibly compete?
Imagine that you are a talented mountain climber. A company releases a robot that can climb mountains faster and better than any human. Would you stop climbing mountains? Of course not, because it’s not about trying to climb mountains better than an arbitrary robot—it’s about loving what you do. Did landscape painters all quit as soon as photography became possible? No, they adapted to the new changes and enhanced their art. Traditional Egyptian tile-makers and Greek phyllo dough bakers didn’t quit when machinery and mass production “replaced” them, studio musicians didn’t quit when recorded audio and synthesizers “replaced” them, chess and Go players didn’t quit when AI “replaced” them. Humans adapted to living meaningful lives after learning we live in a vast and empty universe, and we will adapt to living meaningful lives in a world with tools better than ourselves at what we do.
Far from replacing human creativity, AI will enable an explosion of it the likes of which has never been seen before. Everyone will have their own personal expert tutor, artist, writer, programmer, and musician. People who love world-building and plot design will be able to write amazing fantasy books by getting an AI to help with writing style and syntax; non-coders will be able to write software that makes their everyday lives easier; product designers will be able to get AI to help with sales and marketing; and so much more. In fact, this is already happening. The leverage AI will bring to people is like the leverage the printing press brought to writers, but much more so.
People also worry about issues such as misinformation and spam on the Internet, and about people losing human connection. These worries are not new; they have been around as long as the Internet has (and probably much longer). Spam and misinformation are actually very solvable problems thanks to cryptography. Cameras and microphones can cryptographically sign their recordings at the moment of capture (and anchor those signatures in a public ledger so they can’t be forged after the fact), making it possible to verify the authenticity of most photos and videos. As for human connection, if COVID taught us anything, it’s that people need in-person interaction with other people. The rise of convincingly fake AI personas on the Internet could make in-person interaction even more important and meaningful; an AI can’t yet fool people in the physical world. While this could change in the far future, most humans simply cannot relate to a machine the way they relate to another person, meaning that worries about AI replacing humans as friends or lovers are far-fetched, at least for many years.
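To make the signing idea above concrete, here is a minimal sketch using Ed25519 signatures from the Python cryptography package. The specifics are illustrative assumptions: a real camera would hold the private key in secure hardware and the manufacturer would publish the matching public key, and the ledger/timestamping and key-management pieces are left out entirely.

```python
# A minimal sketch of content authenticity via signing at capture time.
# Assumptions: the camera holds an Ed25519 private key in secure hardware,
# and the manufacturer publishes the matching public key for verifiers.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

camera_key = Ed25519PrivateKey.generate()   # lives inside the device
public_key = camera_key.public_key()        # published for verification

photo_bytes = b"...raw image data..."
signature = camera_key.sign(photo_bytes)    # attached to the file's metadata

try:
    public_key.verify(signature, photo_bytes)
    print("Photo verified: signed by this camera and unmodified.")
except InvalidSignature:
    print("Photo was altered or was not signed by this camera.")
```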
But what about the biggest possible threat AI could pose in the long term?
Existential risk
What if AI overthrows humanity and kills us all, either because it was given evil goals or just because it was directed to make as many paperclips as possible? What if a superintelligent AI eternally tortures everyone who didn’t actively work to bring it into existence? What if these scenarios are literally a few years or even months away?
Prominent AI thinkers such as Nick Bostrom and Eliezer Yudkowsky take these scenarios very seriously. Yudkowsky believes that if humanity created a human-level AI (an “artificial general intelligence”, or AGI) unaligned with our goals and values, it would cause a “singularity” in which the AI rapidly improves itself until it becomes so unfathomably superintelligent that we’d stand no chance against it. Worse, this could happen even if we gave the AI a simple task such as making paperclips: due to instrumental convergence, we could expect the AI to expand its own cognitive capabilities, prevent itself from being turned off, aggressively acquire resources, and eventually use nanobots to rearrange all molecules in the universe (including the ones that make up humans) into paperclips. Worse still, on this view, AGI will happen very soon, and we have no plan for how to ensure that it is robustly aligned. In other words, says Yudkowsky, we’re screwed.
As with existential angst about being replaced, anyone feeling existential dread about death, whether individual or collective, is not alone. This dread is part of human nature and always has been. In fact, one of the oldest recorded stories in human history, the Epic of Gilgamesh, is about a king who embarks on an unsuccessful quest for eternal life. After 1945, the date of the first detonation of a nuclear weapon, many people suddenly became aware that humanity now had the power to wipe itself out entirely. For anyone worried about extinction, I highly recommend C.S. Lewis’ 1948 essay “On Living in an Atomic Age”.
In one way we think a great deal too much of the atomic bomb. “How are we to live in an atomic age?” I am tempted to reply: “Why, as you would have lived in the sixteenth century when the plague visited London almost every year, or as you would have lived in a Viking age when raiders from Scandinavia might land and cut your throat any night; or indeed, as you are already living in an age of cancer, an age of syphilis, an age of paralysis, an age of air raids, an age of railway accidents, an age of motor accidents.”
In other words, do not let us begin by exaggerating the novelty of our situation. Believe me, dear sir or madam, you and all whom you love were already sentenced to death before the atomic bomb was invented: and quite a high percentage of us were going to die in unpleasant ways. We had, indeed, one very great advantage over our ancestors — anesthetics; but we have that still. It is perfectly ridiculous to go about whimpering and drawing long faces because the scientists have added one more chance of painful and premature death to a world which already bristled with such chances and in which death itself was not a chance at all, but a certainty.
This is the first point to be made: and the first action to be taken is to pull ourselves together. If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things: praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs. They may break our bodies (a microbe can do that) but they need not dominate our minds.
Both OpenAI and Anthropic have written about their plans for impending AGI, and the forecasting platform Metaculus currently predicts AGI will arrive around 2032, with a quarter of forecasters expecting it before the end of 2026. But is AGI an impending certainty, due within the next few years? Although we can’t know for sure, I’m inclined to say no.
Most people who believe AGI is imminent believe in the scaling hypothesis: the idea that taking existing AI techniques and simply throwing more and more computing power and training data at them will eventually produce AGI. This has been bolstered by the fact that language models, which are essentially predictors of the next word in a text (explained here by Stephen Wolfram), have achieved all kinds of unexpected, emergent abilities that even their creators do not fully understand. There’s no question that the top AIs today have at least some degree of genuine intelligence. On this view, since ChatGPT and now GPT-4 are already so advanced, it’s only a matter of time before scaling carries them the rest of the way to AGI.
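To illustrate what “predict the next word” means, here is a toy bigram model in plain Python. It is only a caricature (real language models use neural networks with billions of parameters and far longer context), but the core loop is the same: estimate a distribution over the next token, sample from it, and repeat.

```python
import random
from collections import defaultdict

# Toy "next word predictor": count which word follows which in a tiny corpus,
# then generate text by repeatedly sampling from those counts.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word: str) -> str:
    followers = counts[word]
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

word, output = "the", ["the"]
for _ in range(8):
    word = sample_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug . the dog"
```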
Yet the scaling approach has many issues. Language models, by their very nature, are not just unreliable but expensive to run, which will hinder their economic adoption. There is good reason to believe that part of GPT-4’s surprising performance on exams comes down to contamination, whereby the AI’s training data includes either the actual exams or sufficiently similar problems, so the AI can either repeat the answer from memory or get by with very shallow reasoning. As David Deutsch points out, language models are not agentic (they do not have their own goals), and they have yet to create any truly new knowledge, such as proving unproven math theorems or making scientific advances. The top language models today are trained on nearly all human-written content, potentially creating a shortage of data for future models to be trained on. There could also be diminishing returns to simple scale: according to anonymous sources, GPT-4 has 1 trillion parameters, compared to GPT-3’s 175 billion (and GPT-2’s 1.5 billion), a roughly sixfold expansion for a ~20-40% improvement on most performance metrics. While simple scaling approaches have worked well for narrow tasks like chess and Go, they have yet to produce a reliable self-driving car, let alone a human-level brain. It’s also possible that government regulation could slow down AI progress, much as it has slowed the spread of nuclear weapons.
Finally, there’s the question of what will happen with AGI once it finally arrives. It’s important not to hand-wave all of the reasonable objections of the AI doomers: they are based on the fact that a superintelligent AI will be vastly more powerful than humanity in at least some areas, and that it could plausibly defeat us all if given the opportunity. Even with AGI, however, there is good reason for optimism.
First, we have no idea what will happen with AGI. This claim is used by the doomers to justify the precautionary principle in slowing down AI development, but it can be used in reverse. It’s true that it could kill us all, but it’s also true that there are substantial weaknesses in the standard Yudkowsky-Bostrom AI doom arguments (I recommend this post, though a bit technical and detailed). Historically, doomers have always been wrong. From Thomas Malthus’ prediction that there wouldn’t be enough resources for an exponentially growing population, to Edward Teller’s fear that the first nuclear weapons test would ignite the atmosphere and end all life on Earth, to Bertrand Russell’s prediction that nuclear weapons would lead either to extinction or an all-powerful world government, history is full of pessimists who have been wrong.
Even if an unaligned superintelligence manages to arise and escape, it might not be the end of the world. As Curtis Yarvin points out, there are likely diminishing returns to intelligence that would limit the power of a superintelligence, and thus its potential danger. Humans cannot understand or fully trust an AI, limiting its persuasive power. Humans control the world’s physical infrastructure, such as manufacturing and weaponry, and are unlikely to hand control of these to an AI to build nanobots or launch nuclear weapons; even if they did, such plans would likely be obvious to detect and prevent. Even a superintelligent AI would likely not be able to brute-force modern cryptography such as the SHA-256 hash function, which would require trying on the order of 2^256 combinations, a number comparable to the number of atoms in the observable universe. Certain problems are simply computationally intractable, and more intelligence cannot magically surpass that barrier.
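The arithmetic behind that claim is easy to check. (The 10^80 figure below is the commonly cited order-of-magnitude estimate for the number of atoms in the observable universe.)

```python
# Size of a 256-bit brute-force search versus the estimated number of atoms
# in the observable universe (~10^80, a standard order-of-magnitude figure).
keyspace = 2 ** 256
atoms = 10 ** 80

print(f"2^256 ≈ {keyspace:.2e}")   # ~1.16e+77
print(f"atoms ≈ {atoms:.2e}")      # 1.00e+80
print(f"atoms ≈ {atoms / keyspace:.0f}x the keyspace")  # within ~3 orders of magnitude
```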
Even if we assume unaligned AI to be powerful enough to defeat humanity, we still have the potential to make progress in AI alignment. Routes such as interpretability (where we attempt to understand how the underlying models work, essentially “reading the mind” of the AI to predict what it will do) and constitutional AI (where we fine-tune AI systems to align with certain principles) are far from formal and robust alignment, but they show promise. Companies like Anthropic and Conjecture, and labs like the Alignment Research Center, are hard at work on the AI alignment problem. Much as we’ve been surprised by the rapid pace of AI capabilities, we could be surprised by the rapid pace of AI alignment, especially if slightly superhuman AIs can help us align unfathomably superintelligent ones.
Finally, humanity is resilient and adaptable. Take nuclear weapons, for example. Since the first Soviet nuclear test in 1949, two enemy nations have always had access to nuclear weapons, and yet neither has used them in warfare since. In fact, no country other than North Korea has tested a nuclear weapon since 1998. Even in false-alarm situations, when they were under the impression of an imminent nuclear attack, people such as Vasily Arkhipov and Stanislav Petrov refused to initiate a nuclear launch. There are good arguments that the existence of nuclear weapons, through the principle of mutually assured destruction, has been largely responsible for the relative lack of violence since World War II. Furthermore, the same nuclear technology that has been used to develop bombs has been used to develop abundant, cheap, clean energy that is only getting better. This ability to avoid disaster extends to AI as well: after an early version of Bing’s chatbot went off the rails and threatened and/or professed its love for some users, Microsoft did the right thing by quickly fixing the situation, something that even Yudkowsky admitted made him more optimistic.
Utopia
If we can manage to create an aligned superintelligent AI, the benefits will be incredible. We’d likely be able to unlock all kinds of new technologies: life extension or even immortality, nanobots, immersive virtual reality, unlimited energy, full automation of the economy, space travel, and things we wouldn’t be able to dream of today. People would be freed from the tedium of their jobs and become able to pursue what they really want to do in life. AI-utopia doesn’t need to be an utterly foreign experience like uploading our consciousnesses to the cloud and merging with AI, nor full of hedonistic and self-indulgent pleasures with no challenges at all, but simply a better version of our existing lives—complete with new experiences, genuine human interaction, and challenging and rewarding work.
It’s becoming increasingly clear that not only will AI transform the world hugely in the future, it is already doing so. More so than any technology in human history, both the benefits and risks of AI are truly enormous. We are currently well on a path that could lead to either human extinction or boundless utopia. The stakes have never been higher. Yet if there’s anything we can learn from human history, it’s that we are remarkably good at adapting to change and preventing the worst outcomes. When superintelligence eventually arrives, there’s a very good chance we’ll be able to make the most out of it.