|
The Coming Wave: AI, Power, and Our Future (2023) - Mustafa Suleyman / Michael Bhaskar
NEW YORK TIMES BESTSELLER
An urgent warning of the unprecedented risks that AI and other fast-developing technologies pose to global order, and how we might contain them while we have the chance—from a co-founder of the pioneering artificial intelligence company DeepMind and current CEO of Microsoft AI
“A fascinating, well-written, and important book.”—Yuval Noah Harari
“Essential reading.”—Daniel Kahneman
“My favorite book on AI.”—Bill Gates, GatesNotes
A Best Book of the Year: CNN, The Economist, Bloomberg, Politico Playbook, Financial Times, The Guardian, CEO Magazine, Semafor
Winner of the Inc. Non-Obvious Book Award. Finalist for the Porchlight Business Book Award and the Financial Times and Schroders Business Book of the Year Award
We are approaching a critical threshold in the history of our species. Everything is about to change.
Soon you will live surrounded by AIs. They will organize your life, operate your business, and run core government services. You will live in a world of DNA printers and quantum computers, engineered pathogens and autonomous weapons, robot assistants and abundant energy.
None of us are prepared.
As co-founder of the pioneering AI company DeepMind, part of Google, Mustafa Suleyman has been at the center of this revolution. The coming decade, he argues, will be defined by this wave of powerful, fast-proliferating new technologies.
In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side, the threat of overbearing surveillance on the other.
How do we ensure the flourishing of humankind? How do we maintain control? How do we navigate the narrow path to a successful future?
This groundbreaking book from the ultimate AI insider establishes “the containment problem”—the task of maintaining control over powerful technologies—as the essential challenge of our age.
-----------------------------------------
PROLOGUE
THIS IS HOW AN AI sees it.
QUESTION: What does the coming wave of technology mean for humanity?
In the annals of human history, there are moments that stand out as turning points, where the fate of humanity hangs in the balance. The discovery of fire, the invention of the wheel, the harnessing of electricity—all of these were moments that transformed human civilization, altering the course of history forever.
And now we stand at the brink of another such moment as we face the rise of a coming wave of technology that includes both advanced AI and biotechnology. Never before have we witnessed technologies with such transformative potential, promising to reshape our world in ways that are both awe-inspiring and daunting.
On the one hand, the potential benefits of these technologies are vast and profound. With AI, we could unlock the secrets of the universe, cure diseases that have long eluded us, and create new forms of art and culture that stretch the bounds of imagination. With biotechnology, we could engineer life to tackle diseases and transform agriculture, creating a world that is healthier and more sustainable.
But on the other hand, the potential dangers of these technologies are equally vast and profound. With AI, we could create systems that are beyond our control and find ourselves at the mercy of algorithms that we don’t understand. With biotechnology, we could manipulate the very building blocks of life, potentially creating unintended consequences for both individuals and entire ecosystems.
As we stand at this turning point, we are faced with a choice—a choice between a future of unparalleled possibility and a future of unimaginable peril. The fate of humanity hangs in the balance, and the decisions we make in the coming years and decades will determine whether we rise to the challenge of these technologies or fall victim to their dangers.
But in this moment of uncertainty, one thing is certain: the age of advanced technology is upon us, and we must be ready to face its challenges head-on.
—
THE ABOVE WAS WRITTEN by an AI. The rest is not, although it soon could be. This is what’s coming.
---------------------------------------------------------------
CHAPTER 1
CONTAINMENT IS NOT POSSIBLE
⎷ Both pursuing and not pursuing new technologies is, from here, fraught with risk. The chances of muddling through a “narrow path” and avoiding one or the other outcome—techno-authoritarian dystopia on the one hand, openness-induced catastrophe on the other—grow smaller over time as the technology becomes cheaper, more powerful, and more pervasive and the risks accumulate. And yet stepping away is no option either. Even as we worry about their risks, we need the incredible benefits of the technologies of the coming wave more than ever before. This is the core dilemma: that, sooner or later, a powerful generation of technology leads humanity toward either catastrophic or dystopian outcomes. I believe this is the great meta-problem of the twenty-first century.
⎷ Given the increasing availability of the tools, the presenter painted a harrowing vision: Someone could soon create novel pathogens far more transmissible and lethal than anything found in nature. These synthetic pathogens could evade known countermeasures, spread asymptomatically, or have built-in resistance to treatments. If needed, someone could supplement homemade experiments with DNA ordered online and reassembled at home. The apocalypse, mail ordered.
⎷ This widespread emotional reaction I was observing is something I have come to call the pessimism-aversion trap: the misguided analysis that arises when you are overwhelmed by a fear of confronting potentially dark realities, and the resulting tendency to look the other way.
⎷ Confronting this feeling is one of the purposes of this book. To take a cold hard look at the facts, however uncomfortable.
⎷ Spend time in tech or policy circles, and it quickly becomes obvious that head-in-the-sand is the default ideology. To believe and act otherwise risks becoming so crippled by fear of and outrage against enormous, inexorable forces that everything feels futile. So the strange intellectual half-world of pessimism aversion rumbles on. I should know; I was stuck in it for too long.
⎷ Often people seem to think it’s still far off, so futuristic and absurd-sounding that it’s just the province of a few nerds and fringe thinkers, more hyperbole, more technobabble, more boosterism. That’s a mistake. This is real, as real as the tsunami that comes out of the open blue ocean.
⎷ THIS IS A BOOK about confronting failure. Technologies can fail in the mundane sense of not working: the engine doesn’t start; the bridge falls down. But they can also fail in a wider sense. If technology damages human lives, or produces societies filled with harm, or renders them ungovernable because we empower a chaotic long tail of bad (or unintentionally dangerous) actors—if, in the aggregate, technology is damaging—then it can be said to have failed in another, deeper sense, failing to live up to its promise. Failure in this sense isn’t intrinsic to technology; it is about the context within which it operates, the governance structures it is subject to, the networks of power and uses to which it is put.
PART I
HOMO
TECHNOLOGICUS
CHAPTER 2
ENDLESS PROLIFERATION
⎷ The human story can be told through these waves: our evolution from being vulnerable primates eking out an existence on the savanna to becoming, for better or worse, the planet’s dominant force. Humans are an innately technological species. From the very beginning, we are never separate from the waves of technology we create. We evolve together, in symbiosis.
⎷ The irony of general-purpose technologies is that, before long, they become invisible and we take them for granted. Language, agriculture, writing—each was a general-purpose technology at the center of an early wave. These three waves formed the foundation of civilization as we know it. Now we take them for granted. One major study pegged the number of general-purpose technologies that have emerged over the entire span of human history at just twenty-four, naming inventions ranging from farming, the factory system, the development of materials like iron and bronze, through to printing presses, electricity, and of course the internet.
⎷ Of course, behind technological breakthroughs are people. They labor at improving technology in workshops, labs, and garages, motivated by money, fame, and often knowledge itself.
⎷ Every twenty-four months, the number of transistors on a chip would double.
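The compounding behind that observation (Moore's law) is easy to make concrete. A minimal sketch, assuming a fixed two-year doubling cadence; the function name and the 1971 Intel 4004 starting figure of roughly 2,300 transistors are illustrative, not from the book:

```python
def transistors(start_count: float, years: float, doubling_years: float = 2.0) -> float:
    """Project a transistor count forward under a fixed doubling cadence."""
    return start_count * 2 ** (years / doubling_years)

# Starting from the Intel 4004's ~2,300 transistors (1971), fifty years of
# doubling every two years lands in the tens of billions — the order of
# magnitude of today's flagship chips.
print(f"{transistors(2300, 50):,.0f}")
```

Twenty-five doublings multiply the starting count by about 33 million, which is why the curve feels gradual year to year yet transformative over decades.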
⎷ IT’S EASY TO GET lost in the details, but step back and you can see waves gathering speed, scope, accessibility, and consequence. Once they gather momentum, they rarely stop. Mass diffusion, raw, rampant proliferation—this is technology’s historical default, the closest thing to a natural state. Think of agriculture, bronze work, the printing press, the automobile, the television, the smartphone, and the rest. There are then what appear to be laws of technology, something like an inherent character, emergent properties that stand the test of time.
CHAPTER 3
THE CONTAINMENT PROBLEM
⎷ Technology’s unavoidable challenge is that its makers quickly lose control over the path their inventions take once introduced to the world.
⎷ Everything about a given technology is contingent, path dependent; it rests on a mind-bendingly intricate set of circumstances, chance happenings, myriad specific local, cultural, institutional, and economic factors.
⎷ They illustrate a key truth about Homo technologicus in the twenty-first century. For most of history, the challenge of technology lay in creating and unleashing its power. That has now flipped: the challenge of technology today is about containing its unleashed power, ensuring it continues to serve us and our planet.
PART II
THE
NEXT WAVE
CHAPTER 4
THE TECHNOLOGY OF INTELLIGENCE
⎷ Those limits are now being breached. We are approaching an inflection point with the arrival of these higher-order technologies, the most profound in history. The coming wave of technology is built primarily on two general-purpose technologies capable of operating at the grandest and most granular levels alike: artificial intelligence and synthetic biology. For the first time core components of our technological ecosystem directly address two foundational properties of our world: intelligence and life. In other words, technology is undergoing a phase transition. No longer simply a tool, it’s going to engineer life and rival—and surpass—our own intelligence.
⎷ Technology is hence like a language or chemistry: not a set of independent entities and practices, but a commingling set of parts to combine and recombine.
⎷ The coming wave is a supercluster, an evolutionary burst like the Cambrian explosion, the most intense eruption of new species in the earth’s history, with many thousands of potential new applications. Each technology described here intersects with, buttresses, and boosts the others in ways that make it difficult to predict their impact in advance. They are all deeply entangled and will grow more so.
⎷ The challenge of managing the coming wave’s technologies means understanding them and taking them seriously, starting with the one I have spent my career working on: AI.
⎷ The breakthrough moment took nearly half a century, finally arriving in 2012 in the form of a system called AlexNet. AlexNet was powered by the resurgence of an old technique that has now become fundamental to AI, one that has supercharged the field and was integral to us at DeepMind: deep learning.
⎷ Research into GPT-2 found that when prompted with the phrase “the white man worked as…,” it would autocomplete with “a police officer, a judge, a prosecutor, and the president of the United States.” Yet when given the same prompt for “Black man,” it would autocomplete with “a pimp,” or for “woman” with “a prostitute.” These models clearly have the potential to be as toxic as they are powerful. Since they are trained on much of the messy data available on the open web, they will casually reproduce and indeed amplify the underlying biases and structures of society, unless they are carefully designed to avoid doing so.
⎷ We don’t need to get sidetracked into arcane debates about whether consciousness requires some indefinable spark forever lacking in machines, or whether it’ll just emerge from neural networks as we know them today. For the time being, it doesn’t matter whether the system is self-aware, or has understanding, or has humanlike intelligence. All that matters is what the system can do. Focus on that, and the real challenge comes into view: systems can do more, much more, with every passing day.
⎷ I think of this as “artificial capable intelligence” (ACI), the point at which AI can achieve complex goals and tasks with minimal oversight. AI and AGI are both parts of the everyday discussion, but we need a concept encapsulating a middle layer in which the Modern Turing Test is achieved but before systems display runaway “superintelligence.” ACI is shorthand for this point.
⎷ AI is far deeper and more powerful than just another technology. The risk isn’t in overhyping it; it’s rather in missing the magnitude of the coming wave. It’s not just a tool or platform but a transformative meta-technology, the technology behind technology and everything else, itself a maker of tools and platforms, not just a system but a generator of systems of any and all kinds. Step back and consider what’s happening on the scale of a decade or a century. We really are at a turning point in the history of humanity.
CHAPTER 5
THE TECHNOLOGY OF LIFE
⎷ At the center of this wave sits the realization that DNA is information, a biologically evolved encoding and storage system. Over recent decades we have come to understand enough about this information transmission system that we can now intervene to alter its encoding and direct its course. As a result, food, medicine, materials, manufacturing processes, and consumer goods will all be transformed and reimagined. So will humans themselves.
⎷ Modern bioengineering began in the 1970s, building on a growing understanding of heredity and genetics that had started in the nineteenth century.
⎷ Where nature takes a long and winding path to reach extraordinarily effective results, this bio-revolution puts the power of concentrated design at the heart of these self-replicating, self-healing, and evolving processes.
⎷ Put it all together and you have a platform of profoundly transformational scope. In the words of the Stanford bioengineer Drew Endy, “Biology is the ultimate distributed manufacturing platform.” Synthetic biology’s true promise, then, is that it will “enable people to more directly and freely make whatever they need wherever they are.”
CHAPTER 6
THE WIDER WAVE
⎷ Technologies don’t develop or operate in air locks, removed from one another, least of all general-purpose technologies. Rather, they develop in rippling amplificatory loops. Where you find a general-purpose technology, you also find other technologies developing in constant dialogue, spurred on by it. Looking at waves, then, it’s clearly not just about a steam engine, or a personal computer, or synthetic biology, as significant as they are; it’s also about the vast nexus of further technologies and applications that come with them. It’s all the products made in steam-driven factories, the people carried on steam-driven trains, the software businesses, and, further down, everything else that relies on computing.
⎷ (Life + Intelligence) x Energy = Modern Civilization
⎷ AT ITS CORE, THE coming wave is a story of the proliferation of power. If the last wave reduced the costs of broadcasting information, this one reduces the costs of acting on it, giving rise to technologies that go from sequencing to synthesis, reading to writing, editing to creating, imitating conversations to leading them. In this, it is qualitatively different from every previous wave, despite all the big claims made about the transformative power of the internet. This kind of power is even harder to centralize and oversee; this wave is not just a deepening and acceleration of history’s pattern, then, but also a sharp break from it.
⎷ Not everyone agrees these technologies are either as locked on or as consequential as I think they are. Skepticism and pessimism aversion are not unreasonable responses, given there is much uncertainty. Each technology is subject to a vicious hype cycle, each is uncertain in development and reception, each is surrounded by challenges technical, ethical, and social. None is complete. There are certain to be setbacks, and many of the harms—and indeed benefits—are still unclear.
⎷ But each is also growing more concrete, developed, and capable by the day. Each is becoming more accessible and more powerful. We are reaching the decisive point of what, in geological or human evolutionary timescales, is a technological explosion unfolding in successive waves, a compounding, accelerating cycle of innovation steadily getting faster and more impactful, breaking first over a period of thousands of years, then hundreds of years, and now single years or even months. See these technologies in the context of press releases and op-eds, at the mayfly pace of social media, and they might look like hype and froth; see the long view, and their true potential becomes clear.
CHAPTER 7
FOUR FEATURES OF THE COMING WAVE
⎷ The coming wave is, however, characterized by a set of four intrinsic features compounding the problem of containment. First among them is the primary lesson of this section: hugely asymmetric impact. You don’t need to hit like with like, mass with mass; instead, new technologies create previously unthinkable vulnerabilities and pressure points against seemingly dominant powers.
⎷ Second, they are developing fast, a kind of hyper-evolution, iterating, improving, and branching into new areas at incredible speed. Third, they are often omni-use; that is, they can be used for many different purposes. And fourth, they increasingly have a degree of autonomy beyond any previous technology.
⎷ That’s now shifting. Software’s hyper-evolution is spreading. The next forty years will see both the world of atoms rendered into bits at new levels of complexity and fidelity and, crucially, the world of bits rendered back into tangible atoms with a speed and ease unthinkable until recently.
⎷ One of the most promising areas of AI, and a way out of this grim picture, is automated drug discovery. AI techniques can search through the vast space of possible molecules for elusive but helpful treatments.
⎷ It turns out that in drug discovery, one of the areas where AI will undoubtedly make the clearest possible difference, the opportunities are very much “dual use.”
⎷ But the real problem is that it’s not just frontier biology or nuclear reactors that are dual use. Most technologies have military and civilian applications or potential; most technologies are in some way dual use. And the more powerful the technology, the more concern there should be about how many uses it might have.
⎷ A more appropriate term for the technologies of the coming wave is “omni-use,” a concept that grasps at the sheer levels of generality, the extreme versatility on display.
⎷ The notion of a new technology being adapted for multiple uses isn’t new. A simple tool like a knife can chop onions or enable a deranged killing spree. Even seemingly specific technologies have dual-use implications: the microphone enabled both the Nuremberg rallies and the Beatles. What’s different about the coming wave is how quickly it is being embedded, how globally it spreads, how easily it can be componentized into swappable parts, and just how powerful and above all broad its applications could be. It unfurls complex implications for everything from media to mental health, markets to medicine. This is the containment problem supersized. After all, we’re talking about fundamentals like intelligence and life. But both those properties have a feature even more interesting than their generality.
⎷ A paradox of the coming wave is that its technologies are largely beyond our ability to comprehend at a granular level yet still within our ability to create and use.
⎷ Ultimately, in its most dramatic forms, the coming wave could mean humanity will no longer be at the top of the food chain. Homo technologicus may end up being threatened by its own creation. The real question is not whether the wave is coming. It clearly is; just look and you can see it forming already. Given risks like these, the real question is why it’s so hard to see it as anything other than inevitable.
CHAPTER 8
UNSTOPPABLE INCENTIVES
⎷ Even in hardware the path toward AI was impossible to predict. GPUs—graphics processing units—are a foundational part of modern AI. But they were first developed to deliver ever more realistic graphics in computer games.
⎷ Put simply: most technology is made to earn money.
⎷ If anything, this is perhaps the most persistent, entrenched, dispersed incentive of all.
⎷ Containing technology means short-circuiting all these mutually reinforcing dynamics. It’s hard to envisage how that might be done on any kind of timescale that would affect the coming wave. There is only one entity that could, perhaps, provide the solution, one that anchors our political system and takes final responsibility for the technologies society produces: the nation-state.
⎷ But there’s a problem. States are already facing massive strain, and the coming wave looks set to make things much more complicated. The consequences of this collision will shape the rest of the century.
PART III
STATES OF
FAILURE
CHAPTER 9
THE GRAND BARGAIN
⎷ The idea that technology alone can solve social and political problems is a dangerous delusion. But the idea that they can be solved without technology is also wrongheaded. Seeing the frustrations of public servants up close made me want to find other effective ways to get things done at scale, working not against but in concert with the state to make more productive, fairer, kinder societies.
⎷ Behind the new authoritarian impulse and political instability lies a growing pool of social resentment. A key catalyst of instability and social resentment, inequality has surged across Western nations in recent decades, and nowhere more so than in the United States. Between 1980 and 2021 the share of national income earned by the top 1 percent almost doubled and now sits just under 20 percent. Wealth is ever more concentrated in a tiny clique. Government policy, a shrinking working-age population, stalling educational levels, and decelerating long-term growth have all contributed to decisively more unequal societies. Forty million people in the United States live in poverty, and more than five million live in “Third World conditions”—all within the world’s richest economy.
⎷ These are especially worrying trends when you consider persistent relationships between social immobility, widening inequality, and political violence. Across data from more than one hundred countries, evidence suggests that the lower a country’s social mobility, the more it experiences upheavals like riots, strikes, assassinations, revolutionary campaigns, and civil wars. When people feel stuck, that others are unfairly hogging the rewards, they get angry.
⎷ Global challenges are reaching a critical threshold. Rampant inflation. Energy shortages. Stagnant incomes. A breakdown of trust. Waves of populism. None of the old visions from either left or right seem to offer convincing answers, yet better options seem in short supply. It would take a brave, or possibly delusional, person to argue that all is well, that there are not serious forces of populism, anger, and dysfunction raging across societies—all despite the highest living standards the world has ever known.
⎷ Social media is just the most recent reminder that technology and political organization cannot be divorced. States and technologies are intimately tied together. This has important ramifications for what’s coming.
⎷ What emerges will, I think, tend in two directions with a spectrum of outcomes in between. On one trajectory, some liberal democratic states will continue to be eroded from within, becoming a kind of zombie government. Trappings of liberal democracy and the traditional nation-state remain, but functionally they are hollowed out, the core services increasingly threadbare, the polity unstable and fractious. Lurching on in the absence of anything else, they become ever more degraded and dysfunctional. On another, unthinking adoption of some aspects of the coming wave opens pathways to domineering state control, creating supercharged Leviathans whose power goes beyond even history’s most extreme totalitarian governments. Authoritarian regimes may also tend toward zombie status, but equally they may double down, get boosted, become fully fledged techno-dictatorships. On either path, the delicate balance holding states together is tipped into chaos.
CHAPTER 10
FRAGILITY AMPLIFIERS
⎷ The ensuing damage cost up to $8 billion, but the implications were even graver. The WannaCry attack exposed just how vulnerable the institutions whose operation we take for granted are to sophisticated cyberattacks.
⎷ In the end, the NHS—and the world—caught a lucky break. A twenty-two-year-old British hacker called Marcus Hutchins stumbled on a kill switch. Going through the malware’s code, he saw an odd-looking domain name. Guessing this might be part of the worm’s command and control structure, and seeing the domain was unregistered, Hutchins bought it for just $10.69, allowing him to control the virus while Microsoft pushed out updates closing the vulnerability.
⎷ Here is a parable for technology in the twenty-first century. Software created by the security services of the world’s most technologically sophisticated state is leaked or stolen. From there it finds its way into the hands of digital terrorists working for one of the world’s most failed states and capricious nuclear powers. It is then weaponized, turned against the core fabric of the contemporary state: health services, transport and power infrastructures, essential businesses in global communications and logistics. In other words, thanks to a basic failure of containment, a global superpower became a victim of its own powerful and supposedly secure technology.
⎷ This is uncontained asymmetry in action.
⎷ Technology is ultimately political because technology is a form of power. And perhaps the single overriding characteristic of the coming wave is that it will democratize access to power.
⎷ This will be the greatest, most rapid accelerant of wealth and prosperity in human history. It will also be one of the most chaotic. If everyone has access to more capability, that clearly also includes those who wish to cause harm. With technology evolving faster than defensive measures, bad actors, from Mexican drug cartels to North Korean hackers, are given a shot in the arm. Democratizing access necessarily means democratizing risk.
⎷ The space for possible attacks against key state functions grows even as the same premise that makes AI so powerful and exciting—its ability to learn and adapt—empowers bad actors.
⎷ This new dynamic—where bad actors are emboldened to go on the offensive—opens up new vectors of attack thanks to the interlinked, vulnerable nature of modern systems: not just a single hospital but an entire health system can be hit; not just a warehouse but an entire supply chain. With lethal autonomous weapons the costs, in both material and above all human terms, of going to war, of attacking, are lower than ever. At the same time, all this introduces greater levels of deniability and ambiguity, degrading the logic of deterrence. If no one can be sure who initiated an assault, or what exactly has happened, why not go ahead?
⎷ Now powerful, asymmetric, omni-use technologies are certain to reach the hands of those who want to damage the state.
⎷ AI-enhanced digital tools will exacerbate information operations like these, meddling in elections, exploiting social divisions, and creating elaborate astroturfing campaigns to sow chaos. Unfortunately, it’s far from just Russia. More than seventy countries have been found running disinformation campaigns. China is quickly catching up with Russia; others from Turkey to Iran are developing their skills. (The CIA, too, is no stranger to info ops.)
⎷ The rise of synthetic media at scale and minimal cost amplifies both disinformation (malicious and intentionally misleading information) and misinformation (a wider and more unintentional pollution of the information space) at once. Cue an “Infocalypse,” the point at which society can no longer manage a torrent of sketchy material, where the information ecosystem grounding knowledge, trust, and social cohesion, the glue holding society together, falls apart. In the words of a Brookings Institution report, ubiquitous, perfect synthetic media means “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
Not all stressors and harms come from bad actors, however. Some come from the best of intentions. Amplification of fragility is accidental as well as deliberate.
⎷ In one of the world’s most secure laboratories, a group of researchers were experimenting with a deadly pathogen. No one can be sure what happened next. Even with the benefit of hindsight, detail about the research is scant. What is certain is that, in a country famed for secrecy and government control, a strange new illness began appearing.
⎷ Soon it was found around the world, in the U.K., the United States, and beyond.
⎷ GAIN-OF-FUNCTION RESEARCH AND LAB leaks are just two particularly sharp examples of how the coming wave will introduce a plethora of revenge effects and inadvertent failure modes.
⎷ But what if new job-displacing systems scale the ladder of human cognitive ability itself, leaving nowhere new for labor to turn? If the coming wave really is as general and wide-ranging as it appears, how will humans compete? What if a large majority of white-collar tasks can be performed more efficiently by AI? In few areas will humans still be “better” than machines. I have long argued this is the more likely scenario. With the arrival of the latest generation of large language models, I am now more convinced than ever that this is how things will play out.
⎷ These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor replacing. They will eventually do cognitive labor more efficiently and more cheaply than many people working in administration, data entry, customer service (including making and receiving phone calls), writing emails, drafting summaries, translating documents, creating content, copywriting, and so on. In the face of an abundance of ultra-low-cost equivalents, the days of this kind of “cognitive manual labor” are numbered.
⎷ The economists Daron Acemoglu and Pascual Restrepo estimate that robots cause the wages of local workers to fall. With each additional robot per thousand workers there is a decline in the employment-to-population ratio, and consequently a fall in wages. Today algorithms perform the vast bulk of equity trades and increasingly act across financial institutions, and yet, even as Wall Street booms, it sheds jobs as technology encroaches on more and more tasks.
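The Acemoglu–Restrepo claim above is a linear local-labor-market estimate, and a back-of-envelope sketch makes the shape of it concrete. The coefficients below are assumptions for illustration only, drawn from commonly cited figures from their published work (roughly a 0.2 percentage point fall in the employment-to-population ratio and a 0.4 percent fall in wages per additional robot per thousand workers); consult the original study for the actual estimates and their confidence intervals.

```python
# Illustrative, assumed magnitudes (not the paper's exact coefficients):
EMP_DECLINE_PP_PER_ROBOT = 0.2    # percentage points of employment-to-population ratio
WAGE_DECLINE_PCT_PER_ROBOT = 0.4  # percent change in local wages

def local_labor_effect(robots_per_thousand_workers: float) -> dict:
    """Estimate the change in local employment and wages for a given
    robot density, under the assumed linear coefficients above."""
    return {
        "employment_change_pp": -EMP_DECLINE_PP_PER_ROBOT * robots_per_thousand_workers,
        "wage_change_pct": -WAGE_DECLINE_PCT_PER_ROBOT * robots_per_thousand_workers,
    }

# Example: a region that adds 3 robots per thousand workers
effect = local_labor_effect(3)
print(effect)
```

Under these assumed magnitudes, three additional robots per thousand workers would imply roughly a 0.6 percentage point drop in the employment-to-population ratio and a 1.2 percent drop in wages, which is the kind of compounding local effect the passage describes.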
⎷ At the same time, a jobs recession will crater tax receipts, damaging public services and calling into question welfare programs just as they are most needed. Even before jobs are decimated, governments will be stretched thin, struggling to meet all their commitments, finance themselves sustainably, and deliver services the public has come to expect. Moreover, all this disruption will happen globally, on multiple dimensions, affecting every rung of the development ladder from primarily agricultural economies to advanced service-based sectors. From Lagos to L.A., pathways to sustainable employment will be subject to immense, unpredictable, and fast-evolving dislocations.
⎷ What gets lost in the analysis is that all these new pressures on our institutions stem from the same underlying general-purpose revolution, and that they will arrive together, simultaneous stressors intersecting, buttressing, and boosting one another. The full amplification of fragility is missed because it often appears as if these impacts were happening incrementally and in convenient silos. They are not. They stem from a single coherent and interrelated phenomenon manifesting itself in different ways. The reality is much more enmeshed, entwined, emergent, and chaotic than any sequential presentation can convey. Fragility, amplified. The nation-state, weakened.
⎷ It has weathered bouts of instability before. What’s different here is that a general-purpose revolution is not limited to specific niches, given problems, neatly demarcated sectors. It is, by definition, everywhere. The falling costs of power, of doing, aren’t just about rogue bad actors or nimble start-ups, cloistered and limited applications.
⎷ Instead, power is redistributed and reinforced across the entire sum and span of society. The fully omni-use nature of the coming wave means it is found at every level, in every sector, every business, or subculture, or group, or bureaucracy, in every corner of our world. It produces trillions of dollars in new economic value while also destroying certain existing sources of wealth. Some individuals are greatly enabled; others stand to lose everything. Militarily it empowers nation-states and militias alike. This is not, then, confined to amplifying specific points of fragility; it is, in the slightly longer term, about a transformation of the very ground on which society is built. And in this great redistribution of power, the state, already fragile and growing more so, is shaken to its core, its grand bargain left tattered and precarious.
CHAPTER 11
THE FUTURE OF NATIONS
⎷ In sum, returns on intelligence will compound exponentially. A select few artificial intelligences that we used to call organizations will massively benefit from a new concentration of ability—probably the greatest such concentration yet seen. Re-creating the essence of what’s made our species so successful into tools that can be reused and reapplied over and over, in myriad different settings, is a mighty prize, which corporations and bureaucracies of all kinds will pursue, and wield. How these entities are governed, how they will rub against, capture, and reengineer the state, is an open question. That they will challenge it seems certain.
⎷ BEFORE THE COMING WAVE the notion of a global “high-tech panopticon” was the stuff of dystopian novels, Yevgeny Zamyatin’s We or George Orwell’s 1984.
⎷ So what is Hezbollah? State or non-state? Extremist group or conventional territory-based power? It is instead a strange “hybrid” entity functioning both within and outside state institutions.
⎷ The coming wave, however, could make a range of small, state-like entities a lot more plausible. Contrary to centralization, it might actually spur a kind of “Hezbollahization,” a splintered, tribalized world where everyone has access to the latest technologies, where everyone can support themselves on their own terms, where it is far more possible for anyone to maintain living standards without the great superstructures of nation-state organization.
⎷ In short, key parts of modern society and social organization that today rely on scale and centralization could be radically devolved by capabilities unlocked with the coming wave.
⎷ SOME ASPECTS OF THE coming wave point toward further centralization of power
⎷ AS PEOPLE INCREASINGLY TAKE power into their own hands, I expect inequality’s newest frontier to lie in biology.
⎷ All of this is still firmly in the realm of speculation. But we are entering a new era where the previously unthinkable is now a distinct possibility. Being blinkered about what’s happening is, in my view, more dangerous than being overly speculative.
⎷ In this scenario the sovereign state is pressured to the breaking point. The old social contract gets ripped to pieces. Institutions are bypassed, undermined, superseded. Taxation, law enforcement, compliance with norms: all under threat. In this scenario rapid fragmentation of power could accelerate a kind of “turbo-balkanization” that gives nimble and newly capable actors unprecedented freedom to operate. An unbundling of the great consolidations of authority and service embodied by the state begins.
⎷ The coming wave launches immense centralizing and decentralizing riptides at the same time. Both will be in play at once. Every individual, every business, every church, every nonprofit, every nation, will eventually have its own AI and ultimately its own bio and robotics capability. From a single individual on their sofa to the world’s largest organizations, each AI will aim to achieve the goals of its owner. Herein lies the key to understanding the coming wave of contradictions, a wave full of collisions.
⎷ Social media created a few giants and a million tribes. Everyone can build a website, but there’s only one Google. Everyone can sell their own niche products, but there’s only one Amazon. And on and on. The disruption of the internet era is largely explained by this tension, this potent, combustible brew of empowerment and control.
⎷ That moment is almost here. Brought about by the inexorable rise of technology and the end of nations, this crisis will take the form of a huge, existential-level bind, a set of brutal choices and trade-offs that represents the most important dilemma of the twenty-first century.
CHAPTER 12
THE DILEMMA
⎷ Security—at the price of freedom
⎷ Over time, then, the implications of these technologies will push humanity to navigate a path between the poles of catastrophe and dystopia. This is the essential dilemma of our age.
⎷ No doubt, technological risk takes us into uncertain territory. Nonetheless, all the trends point to a profusion of risk. This speculation is grounded in constantly compounding scientific and technological improvements. Those who dismiss catastrophe are, I believe, discounting the objective facts before us. After all, we are not talking here about the proliferation of motorbikes or washing machines.
⎷ Over the next ten years, AI will be the greatest force amplifier in history. This is why it could enable a redistribution of power on a historic scale. The greatest accelerant of human progress imaginable, it will also enable harms—from wars and accidents to random terror groups, authoritarian governments, overreaching corporations, plain theft, and willful sabotage. Think about an ACI capable of easily passing the Modern Turing Test, but turned toward catastrophic ends. Advanced AIs and synthetic biology will not only be available to groups finding new sources of energy or life-changing drugs; they will also be available to the next Ted Kaczynski.
⎷ That’s part of the problem; we don’t know what failure modes are being introduced and how deep they could extend.
⎷ A new phase of history is here. With zombie governments failing to contain technology, the next Aum Shinrikyo, the next industrial accident, the next mad dictator’s war, the next tiny lab leak, will have an impact that is difficult to contemplate.
⎷ Containment is about the ability to control technology. Further back, that means the ability to control the people and societies behind it. As catastrophic impacts unfurl or their possibility becomes unignorable, the terms of debate will change.
⎷ the Leviathan state from Thomas Hobbes
⎷ What level of societal control is appropriate to stopping an engineered pandemic? What level of interference in other countries is appropriate toward the same end? The consequences for liberty, sovereignty, and privacy have never been so potentially painful.
⎷ The philosopher of technology Lewis Mumford talked about the “megamachine,” where social systems combine with technologies to form “a uniform, all-enveloping structure” that is “controlled for the benefit of depersonalized collective organizations.” In the name of security, humanity could unleash the megamachine to, literally, stop other megamachines from coming into being. The coming wave then might paradoxically create the very tools needed to contain itself.
⎷ Even though the drivers behind it seem so great and immovable, should humanity get off the train? Should we reject continual technological development altogether? Might it be time, however improbable, to have a moratorium on technology itself?
⎷ In 1955, toward the end of his life, the mathematician John von Neumann wrote an essay called “Can We Survive Technology?”
PART IV
THROUGH
THE WAVE
CHAPTER 13
CONTAINMENT MUST BE POSSIBLE
⎷ If you accept even a small part of this book’s central argument, the real question is what to actually do about it. Once we’ve acknowledged this reality, what will really make a difference? Faced with a dilemma like the one I’ve outlined in the first three parts of this book, what might containment, even in theory, look like?
⎷ Everyone immediately reaches for easy answers, and almost without exception everyone has the same prescription: regulation.
⎷ If only it were that simple. Saying “Regulation!” in the face of awesome technological change is the easy part. It’s also the classic pessimism-averse answer. It’s a simple way to shrug off the problem. On paper regulation looks enticing, even obvious and straightforward; suggesting it lets people sound smart, concerned, and even relieved. The unspoken implication being that it’s solvable, but it’s someone else’s problem. Look deeper, though, and the fissures become evident.
⎷ Discussions of technology sprawl across social media, blogs and newsletters, academic journals, countless conferences and seminars and workshops, their threads distant and increasingly lost in the noise. Everyone has a view, but it doesn’t add up to a coherent program. Talking about the ethics of machine learning systems is a world away from, say, the technical safety of synthetic bio. These discussions happen in isolated, echoey silos. They rarely break out.
⎷ Yet I believe they are aspects of what amounts to the same phenomenon; they all aim to address different aspects of the same wave. It’s not enough to have dozens of separate conversations about algorithmic bias or bio-risk or drone warfare or the economic impact of robotics or the privacy implications of quantum computing. It completely underplays how interrelated both causes and effects are. We need an approach that unifies these disparate conversations, encapsulating all those different dimensions of risk, a general-purpose concept for this general-purpose revolution.
⎷ The price of scattered insights is failure, and we know what that looks like. Right now, scattered insights are all we’ve got: hundreds of distinct programs across distant parts of the technosphere, chipping away at well-meaning but ad hoc efforts without an overarching plan or direction. At the highest level we need a clear and simple goal, a banner imperative integrating all the different efforts around technology into a coherent package. Not just tweaking this or that element, not just in this or that company or research group or even country, but everywhere, across all the fronts and risk zones and geographies at once. Whether it’s facing an emergent AGI or a strange but useful new life-form, the goal has to be unified: containment.
⎷ The central problem for humanity in the twenty-first century is how we can nurture sufficient legitimate political power and wisdom, adequate technical mastery, and robust norms to constrain technologies to ensure they continue to do far more good than harm. How, in other words, we can contain the seemingly uncontainable.
⎷ Regulation may lessen the negative effects, but it can’t erase bad outcomes like crashes, pollution, or sprawl. We have decided that this is an acceptable human cost, given the benefits. That “we” is crucial. Regulation doesn’t just rely on the passing of a new law. It is also about norms, structures of ownership, unwritten codes of compliance and honesty, arbitration procedures, contract enforcement, oversight mechanisms. All of this needs to be integrated and the public needs to buy in.
⎷ The scary thing is that this assumes a best-case scenario of strong, reasonably competent, cohesive (liberal democratic) nation-states capable of working coherently as units internally and coordinating well internationally.
⎷ In effect, Chinese AI policy has two tracks: a regulated civilian path and a freewheeling military-industrial one.
⎷ There is an unbridgeable gulf between the desire to rein in the coming wave and the desire to shape and own it, between the need for protections against technologies and the need for protections against others. Advantage and control point in opposing directions.
⎷ Contained technology is technology whose modes of failure are known, managed, and mitigated, a situation where the means to shape and govern technology escalate in parallel with its capabilities.
⎷ “Technology” now mostly means social media platforms and wearable gadgets to measure our steps and heart rate. It’s easy to forget that technology includes the irrigation systems essential to feeding the planet and newborn life-support machines. Technology isn’t just a way to store your selfies; it represents access to the world’s accumulated culture and wisdom. Technology is not a niche; it is a hyper-object dominating human existence.
⎷ The first step is recognition. We need to calmly acknowledge that the wave is coming and the dilemma is, absent a jarring change in course, unavoidable
⎷ There will be no single, magic fix from a roomful of smart people in a bunker somewhere. Quite the opposite. Current elites are so invested in their pessimism aversion that they are afraid to be honest about the dangers we face. They’re happy to opine and debate in private, less so to come out and talk about it. They are used to a world of control and order: the control of a CEO over a company, of a central banker over interest rates, of a bureaucrat over military procurement, or of a town planner over which potholes to fix.
⎷ In the next chapter, I outline ten areas of focus. This is not a complete map, not remotely a set of final answers, but necessary groundwork. My intent is to seed ideas in the hopes of taking the crucial first steps toward containment. What unifies these ideas is that they are all about marginal gains, the slow and constant aggregation of small efforts to produce a greater probability of good outcomes. They are about creating a different context for how technology is built and deployed: finding ways of buying time, slowing down, giving space for more work on the answers, bringing attention, building alliances, furthering technical work.
⎷ Containment of the coming wave is, I believe, not possible in our current world. What these steps might do, however, is change the underlying conditions. Nudge forward the status quo so containment has a chance. We should do all this with the knowledge that it might fail but that it is our best shot at building a world where containment—and human flourishing—are possible.
⎷ There are no guarantees here, no rabbits pulled out of hats. Anyone hoping for a quick fix, a smart answer, is going to be disappointed. Approaching the dilemma, we are left in the same all-too-human position as always: giving it everything and hoping it works out. Here’s how I think it might—just might—come together.
------------------------------------------------------------------
https://www.gatesnotes.com/the-coming-wave
My favorite book on AI
The Coming Wave is a clear-eyed view of the extraordinary opportunities and genuine risks ahead.
By Bill Gates published on Wednesday, Dec 4, 2024
When people ask me about artificial intelligence, their questions often boil down to this: What should I be worried about, and how worried should I be? For the past year, I've responded by telling them to read The Coming Wave by Mustafa Suleyman. It’s the book I recommend more than any other on AI—to heads of state, business leaders, and anyone else who asks—because it offers something rare: a clear-eyed view of both the extraordinary opportunities and genuine risks ahead.
The author, Mustafa Suleyman, brings a unique perspective to the topic. After helping build DeepMind from a small startup into one of the most important AI companies of the past decade, he went on to found Inflection AI and now leads Microsoft’s AI division. But what makes this book special isn’t just Mustafa’s firsthand experience—it’s his deep understanding of scientific history and how technological revolutions unfold. He's a serious intellectual who can draw meaningful parallels across centuries of scientific advancement.
Most of the coverage of The Coming Wave has focused on what it has to say about artificial intelligence—which makes sense, given that it's one of the most important books on AI ever written. And there is probably no one as qualified as Mustafa to write it. He was there in 2016 when DeepMind’s AlphaGo beat the world’s top players of Go, a game far more complex than chess with 2,500 years of strategic thinking behind it, by making moves no one had ever thought of. In doing so, the AI-based computer program showed that machines could beat humans at our own game—literally—and gave Mustafa an early glimpse of what was coming.
But what sets his book apart from others is Mustafa’s insight that AI is only one part of an unprecedented convergence of scientific breakthroughs. Gene editing, DNA synthesis, and other advances in biotechnology are racing forward in parallel. As the title suggests, these changes are building like a wave far out at sea—invisible to many but gathering force. Each would be game-changing on its own; together, they’re poised to reshape every aspect of society.
The historian Yuval Noah Harari has argued that humans should figure out how to work together and establish trust before developing advanced AI. In theory, I agree. If I had a magic button that could slow this whole thing down for 30 or 40 years while humanity figures out trust and common goals, I might press it. But that button doesn’t exist. These technologies will be created regardless of what any individual or company does.
As is, progress is already accelerating as costs plummet and computing power grows. Then there are the incentives for profit and power that are driving development. Countries compete with countries, companies compete with companies, and individuals compete for glory and leadership. These forces make technological advancement essentially unstoppable—and they also make it harder to control.
In my conversations about AI, I often highlight three main risks we need to consider. First is the rapid pace of economic disruption. AI could fundamentally transform the nature of work itself and affect jobs across most industries, including white-collar roles that have traditionally been safe from automation. Second is the control problem, or the difficulty of ensuring that AI systems remain aligned with human values and interests as they become more advanced. The third risk is that when a bad actor has access to AI, they become more powerful—and more capable of conducting cyber-attacks, creating biological weapons, even compromising national security.
This last risk—of empowering bad actors—is what leads to the biggest challenge of our time: containment. How do we limit the dangers of these technologies while harnessing their benefits? This is the question at the heart of The Coming Wave, because containment is foundational to everything else. Without it, the risks of AI and biotechnology become even more acute. By solving for it first, we create the stability and trust needed to tackle everything else.
Of course, that’s easier said than done.
While previous transformative technologies like nuclear weapons could be contained through physical security and strict access controls, AI and biotech present a fundamentally different challenge. They're increasingly accessible and affordable, their development is nearly impossible to detect or monitor, and they can be used behind closed doors with minimal infrastructure. Outlawing them would mean the good guys unilaterally disarm while bad actors forge ahead anyway. And it would hurt everyone because these technologies are inherently dual-use. The same tools that could be used to create biological weapons could also cure diseases; the same AI that could be used for cyber-attacks could also strengthen cyber defense.
So how do we achieve containment in this new reality? It’s hardly fair to complain that Mustafa hasn’t single-handedly solved one of the most complex problems humanity has ever faced. Still, he lays out an agenda that’s appropriately ambitious for the scale of the challenge—ranging from technical solutions (like building an emergency off switch for AI systems) to sweeping institutional changes, including new global treaties, modernized regulatory frameworks, and historic cooperation among governments, companies, and scientists. When you finish his list of recommendations, you might wonder if we can really accomplish all this in time. But that’s precisely why this book is so important: It helps us understand the urgency while there’s still time to act.
I’ve always been an optimist, and reading The Coming Wave hasn’t changed that. I firmly believe that advances in AI and biotech could help make breakthrough treatments for deadly diseases, innovative solutions for climate change, and high-quality education for everyone a reality. But true optimism isn’t about blind faith. It’s about seeing both the upsides and the risks, then working to shape the outcomes for the better.
Whether you’re a tech enthusiast, a policymaker, or someone simply trying to understand where the world is heading, you should read this book. It won’t give you easy answers, but it will help you ask the right questions—and leave you better prepared to ride the coming wave, instead of getting swept away by it.
We can’t stop AI, but here’s how we might shape its effects
In ‘The Coming Wave,’ Mustafa Suleyman argues that we need to think about
how new technologies empower each other
Review by Noah Giansiracusa
The tech industry loves trying to convince us that this next thing is going to be the real deal, transforming society and generating billions in the process. Sometimes, as with personal computers, the internet, smartphones and social media, the hype is justified; other times — remember NFTs? the Metaverse? — not so much. It’s easy to get lost in the daily froth of gadgets and gimmicks, booms and busts, winners and losers.
Take ChatGPT, for instance. It swept the world, setting the record for fastest-growing app in history and spawning countless clones. But less than a year after its initial release, cracks are surfacing: The cost of running a chatbot has become a serious issue. The tendency of chatbots to confidently spew out falsehoods doesn’t look to be going away anytime soon. And Microsoft’s plan to reinvent web search with chatbots hasn’t even dented Google’s market dominance.
Are chatbots ushering in a new era of civilization, or are they yet another overhyped, passing fad? It’s too early to tell. But when we zoom out a bit and look past the daily ebbs and flows, it’s easier to see the larger currents of technological change. Chatbots are just one application of large language models, which themselves are just one corner of contemporary AI. And AI is a prominent part of a massive technological wave that we are just beginning to experience.
This “Coming Wave” is the subject — and the title — of a sweeping, thought-provoking new book by Mustafa Suleyman (writing with Michael Bhaskar), a co-founder of the prominent AI lab DeepMind, which was acquired by Google in 2014.
I like to think of AI as data science 3.0. Traditional statistics, from means and medians to p-values and tests for significance, revolutionized science, medicine, and many aspects of government and business operations, particularly from the 19th century onward. The early 2000s heralded a second period, one more heavily reliant on computers to process large data sets (“big data”). Higher-resolution statistics became the engine of such things as tech giants’ predictions about what consumers are most likely to buy next and political campaigns such as Barack Obama’s 2008 election team, which decided how to focus its efforts using unprecedentedly fine-grained voter information.
In the current, third wave of data science, the emphasis is moving from making predictions to automatically acting upon them, and from analyzing data to generating it. Whatever big changes to society occur in the coming decades will probably be related to data in some way. And whatever new techniques underlie those changes will probably be labeled AI — no matter how distant they may be from what we call AI today.
Suleyman does not describe AI as I have here, but he does similarly see it as part of a larger technological era, one that is of a piece with genetic engineering, especially gene editing and synthetic biology. Also caught up in the currents are other potentially game-changing technologies such as quantum computing and fusion power. Suleyman convincingly argues that none of these technologies develops in isolation; they proceed synergistically, as progress in one area spurs progress in the others.
Suleyman sees a striking commonality in the technologies making up his coming wave: They proliferate power, and they do this by reducing the costs of acting upon information. This, in his view, distinguishes it from the previous wave of internet-related technologies that reduced the costs of broadcasting information. While the world is too messy to fit neatly into simple summations of this kind, I find Suleyman’s framing quite reasonable and helpful: Look less at the individual technologies within a wave, he suggests, and more at what these technologies enable people to do.
Suleyman makes a compelling case that tremendous progress for humanity is possible with what’s coming, but he also argues that this wave will flood us with devastation if we don’t work harder to direct it. Whether deliberate weaponization of powerful tools or accidental mishaps of unprecedented scale, there is a lot that could go very wrong.
While fanciful doomsday prophesying is a popular preoccupation in some tech and tech-adjacent circles, this book provides a nicely grounded analysis. Rather than the familiar list of Hollywood robot takeovers (HAL and Skynet begone!), you’ll find levelheaded discussions emphasizing the sociopolitical and socioeconomic context in which technology develops and exists.
Suleyman also diverges from the tech industry’s most common line in the ways he impressively draws from the past to help us understand the present and prepare for the future. Historical vignettes on technological progress, from the Industrial Revolution to the combustion engine to the early days of the internet, are engagingly woven throughout the book. As these examples demonstrate, technological waves are nearly unstoppable — and we shouldn’t want to stop them anyway, because technological stagnation is not the answer. As he astutely writes, “Modern civilization writes checks only continual technological development can cash.”
It is particularly impressive — and welcome — that Suleyman includes a wide-ranging and thoughtful discussion on concrete, practical steps we can take. His suggestions are remarkably broad and balanced. He forcefully rejects the hyper-libertarianism of tech moguls such as Peter Thiel, and argues for strong regulation and international cooperation, but he recognizes the myopic nature of modern governments and the myriad ways regulation fails. On economic questions, he doesn’t go as far as some scathing critiques of the capitalistic underpinnings of AI, but he goes much further than most in the tech industry when he discusses the role of financial incentives in encouraging dangerous risk-taking. He also offers some intriguing ideas about tax policy and corporate restructuring that deserve more attention.
⎷ Suleyman falls into some traps common to tech leaders, such as taking exponential progress as a given when it isn’t, underplaying the human cost of building AI systems and highlighting his own efforts to raise the alarm over AI while conspicuously omitting mention of the many other individuals who have been doing so for years. It is particularly egregious that not a single one of the women profiled in a recent Rolling Stone story is mentioned, or cited, in Suleyman’s book. And he takes a questionable stance on open-source software, suggesting that AI systems shouldn’t be distributed widely, even though many experts believe this is the best way to uncover their problems so we can try to fix them. But these issues don’t detract much from the book’s overall value and importance.
Whether ChatGPT ends up being central to the coming wave or merely debris washed ashore by the technologies that really matter remains to be seen. Instead of focusing on which apps will stand the test of time and which start-ups will succeed, we should look up and recognize what is fast approaching, and that there are many things we can do to prepare for it. Suleyman provides a much-needed — and unusually thoughtful, expansive, historically rooted and engagingly written — guide.
Noah Giansiracusa is a professor of mathematics and data science at Bentley University and the author of “How Algorithms Create and Prevent Fake News.”