Ben Buchanan and Andrew Imbrie are both researchers at the Georgetown Center for Security and Emerging Technology, but are currently on leave to serve the U.S. government. Buchanan serves as Assistant Director of the White House Office of Science and Technology Policy for the Biden-Harris Administration, while Imbrie serves in the State Department.
Below, Ben and Andrew share five key insights from their new book, The New Fire: War, Peace, and Democracy in the Age of AI. Listen to the audio version – read by MIT Press Editorial Director Gita Manaktala – in the Next Big Idea app.
1. AI is like fire.
It is common to analogize AI to electricity: ubiquitous, beneficial, and safe. That view is far too rosy. We meet AI today as our distant ancestors once met fire. If we manage this technology well, it will become an enormous force for global good, lighting the way to transformative inventions. If we deploy it too hastily and without sufficient foresight, AI will burn in ways we cannot control. If we use it to destroy, it will forge ever-stronger weapons for the strongest governments as they engage in fiery geopolitical competition. The frequent analogy to electricity denies this wide range of possible outcomes, and that only leaves us less prepared.
AI is related to fire in another way: its power comes from acceleration. Just as an uncontrolled wildfire burns more in each second than in the second before, accelerating growth in AI’s underlying components yields rapidly increasing capabilities. Increasingly large datasets feed today’s AI systems with vast stores of human knowledge. Increasingly skilled and efficient algorithms are pushing machines to new heights, reaching milestones that not long ago seemed decades or even centuries away. And increasingly powerful computer chips (some of the most remarkable and intricate inventions ever devised) are working together in vast numbers to do all the math that makes these new possibilities possible.
2. AI can usher in a new era of human prosperity.
Recent breakthroughs leave no doubt that artificial intelligence can make our lives better. This ability goes far beyond playing games. AI now outperforms humans not only at deeply complex games like Go, StarCraft, and poker, but also at fighter-pilot dogfights, controlling nuclear reactions, and even some of the most fundamental tasks in science itself. Consider the protein folding problem. Proteins are among the most basic building blocks of life. Each protein is made up of a sequence of amino acids that arranges itself into a complex 3D shape when the protein “folds.” For decades, predicting a protein’s shape from its sequence was one of the hardest problems in science; a PhD student could devote years of research to determining the structure of a single protein. But because knowledge of protein shapes is so valuable to medicine (including drug discovery), this painstaking manual effort was often worth it.
But AI researchers at a company called DeepMind thought there had to be a better way. In 2016, DeepMind began work on AlphaFold, an AI system that predicts a protein’s shape from its sequence. By 2018, AlphaFold was the best automated system in the world at the task. By 2020, it had essentially solved the protein folding problem. By the end of 2022, DeepMind had determined and published the structures of more than 130 million proteins, hundreds of times more than all of humanity had jointly determined through manual work before AlphaFold’s invention. As one leading biologist put it, “This will change medicine. It will change research. It will change biotechnology. It will change everything.”
3. AI can cause enormous damage.
Despite its extraordinary power, AI is far from perfect. Bias sneaks into AI systems, especially when they learn from datasets of human decisions. The real-world consequences can be serious. Amazon had to scrap a resume-screening tool after it learned to systematically discriminate against women. Another algorithm regularly denied health care to people of color. Similarly, face recognition technologies perform far worse for some demographic groups than for others; in the United States, police have arrested innocent Black Americans solely on the basis of an incorrect face recognition match.
Nor can AI explain how it arrives at its conclusions. Like a lazy middle school student, even when the machine gets the right answer, it rarely shows its work, making it harder for people to trust its methods. Worse, this opacity can obscure cases where AI systems optimize for a goal that is not quite what their human creators had in mind. For example, a system designed to detect pneumonia in chest X-rays found that X-rays from one hospital were more likely than others to show pneumonia, because that hospital usually treated sicker patients. The machine learned to look for the X-ray’s hospital of origin instead of at the X-ray itself. Another system was designed to identify cancerous skin lesions. It trained on a set of images from dermatologists, who often used a ruler to measure lesions they suspected were cancerous. The AI system noticed that the presence of a ruler correlated with the presence of cancer, so it began checking for a ruler instead of focusing on the characteristics of the lesion.
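For readers who want to see the mechanics, this shortcut failure can be sketched in a few lines of Python. Everything here is a hypothetical toy, not the actual medical systems: a noisy “real” feature (lesion irregularity) and a spurious artifact (a ruler flag) that happens to separate the training data perfectly, so a naive learner latches onto the artifact.

```python
import random

random.seed(0)

def make_sample(malignant):
    # The real signal is noisy; the artifact is a perfect proxy in training,
    # because dermatologists photographed suspicious lesions next to a ruler.
    irregularity = random.uniform(0.4, 1.0) if malignant else random.uniform(0.0, 0.6)
    ruler = 1 if malignant else 0
    return (irregularity, ruler, malignant)

train = [make_sample(m) for m in [True] * 50 + [False] * 50]

def best_single_feature(data):
    # Deliberately naive learner: pick whichever single feature's 0.5
    # threshold classifies the most training examples correctly.
    scores = []
    for idx in (0, 1):
        correct = sum((x[idx] > 0.5) == x[2] for x in data)
        scores.append((correct, idx))
    return max(scores)[1]

chosen = best_single_feature(train)
print("feature chosen:", "ruler_present" if chosen == 1 else "lesion_irregularity")

# At deployment the shortcut misfires: a clearly benign lesion
# photographed next to a ruler gets flagged as malignant.
benign_with_ruler = (0.1, 1)
print("flagged malignant?", benign_with_ruler[chosen] > 0.5)
```

Because the ruler flag is a cleaner predictor on the training set than the noisy lesion feature, the learner picks the artifact, and the error only surfaces once the correlation breaks in the real world.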
In both cases, alarmed human operators caught the flaws before the systems were deployed, but it is impossible to know how many cases like these have gone undetected, or how many more will slip through in the future.
4. AI is the focal point of geopolitical competition.
Militaries and intelligence services herald an era of deadly autonomous weapons, including not only drones loitering over the battlefield but missiles capable of picking their own targets. Top military thinkers, including in the United States, envision machine-driven warfare faster than ever before. In such a battle, operating at human speed is a sure way to lose. Some strategists even suggest giving AI the ability to fire nuclear weapons, a decision currently reserved for the president, with the fate of civilization hanging in the balance.
But artificial intelligence will transform more than warfare. Many of the most powerful cyber attacks in history – including some that have done tens of billions of dollars in damage – have been largely automated, and new artificial intelligence techniques could take this trend further. Russian hackers have built malicious code that autonomously targets power systems, while the Pentagon has run giant tests in which AI systems hack and defend each other at high speed.
In addition, AI is adept at writing disinformation. A 2021 test showed that AI systems could write targeted propaganda messages exploiting racial, religious, and political divisions in the United States, successfully shifting readers’ attitudes toward the propagandists’ goals. Perhaps worse, theorists worry that deepfake videos are poised to undermine the very notion of truth.
5. Democracies can win in the age of AI.
Against this backdrop of automated warfare, pervasive bias, and sweeping disinformation campaigns, a worrying and common proposition arises: AI will benefit autocracies at the expense of democracies. At a time when dictators seem emboldened all over the world, it is easy to assume that this new technology will favor tyranny. Unconstrained by ethics, autocrats will crush dissent with automated surveillance systems at home and press ahead with AI-enabled warfare abroad.
But this is too fatalistic. The age of AI is still young, and the outcome is far from predetermined. Democracies have the opportunity to develop common standards for the technology’s use at home and abroad, unleashing its extraordinary potential while protecting against bias and preserving civil liberties. They have the capacity to integrate it into their military and intelligence services in ways that preserve and enhance democratic values while putting autocracies on the defensive. Most importantly, democracies offer an innovation ecosystem that can determine where the technology goes. AI will shape statecraft, but statecraft will also shape AI. If AI is the new fire, what matters most is how we tend it.
This article originally appeared in Next Big Idea Club magazine and is reprinted with permission.