Reimagining Democracy’s Defense

July 2023, Volume 34, Issue 3, pp. 173–77

Four Battlegrounds: Power in the Age of Artificial Intelligence. By Paul Scharre. New York: W.W. Norton & Company, 2023. 496 pp.

It seems that we are in the throes of an AI arms race. A recent open letter calling for a pause in artificial-intelligence research, with signatories from Elon Musk to Yoshua Bengio, states that “AI labs [are] locked in an out-of-control race to develop and deploy AI” that they cannot “understand, predict, or reliably control.” Responses to the letter included exhortations from U.S. senators and CEOs for the United States to instead “step up” its AI arms race against China for fear of being left behind, safety risks from corporate competition notwithstanding.

Paul Scharre, a vice-president and director of studies at the Center for a New American Security, takes the latter position, although he carefully sidesteps using the term “arms race.” In his comprehensive Four Battlegrounds: Power in the Age of Artificial Intelligence, Scharre makes the case that the United States is locked in a “race . . . to lead in AI and write the rules of the next century to control the future of global power and security” (p. 8). While the United States is currently in a favorable position, the stakes are too high and the outcome too uncertain for complacency. Scharre calls for a renewed program of AI investment, innovation, and diffusion to cement a U.S. lead.

About the Author

Divya Siddarth is the director of the Collective Intelligence Project. She was formerly associate political economist and social technologist at Microsoft, and also holds positions at the University of Oxford's Ethics in AI Institute, Stanford University's Institute for Human-Centered AI, and Harvard University's GETTING-Plurality Network.


Of course, races to control technology among rival powers are nothing new. The sixteenth century B.C.E. saw the first use of the chariot as a weapon, altering the balance of power in Egypt’s favor and sparking a centuries-long arms race from Anatolia to Mesopotamia. The Roman ballista, itself derived from earlier Greek designs, set off its own race among rival states in the ancient world. The modern era has seen its share of such races, each more destructive, from the Anglo-German naval arms race, which contributed to the tensions that sparked World War I, to the destruction unleashed and threatened by the development of nuclear weapons during the Cold War.

But as technology becomes more powerful, the consequences of these arms races rise commensurately, as does the risk—increasingly borne not only by combatants, but by the world at large. Nowhere is this clearer than in the race to lead in AI, both on and off the battlefield (but mostly on).

Scharre’s focus is on the risk that falling behind on AI would be a death blow to democracy. As Scharre puts it: “If the United States and other democracies do not work together to lead in AI and shape the rules for how it is used, they risk a creeping tide of techno-authoritarianism that undermines democracy and freedom around the globe” (p. 7). This is very much a Biden-era book, framing the global conflict as one between democracies and autocracies that the United States must focus on winning. If there were any doubts as to the purveyors of this techno-authoritarianism, Scharre puts them to bed early, opening the book with quotes from Xi Jinping and Vladimir Putin expressing their own ambitions to dominate in AI.

China, in particular, is the target of Scharre’s concern. If China becomes the world’s AI leader by 2030, as is its goal, Scharre foresees a future of widespread surveillance, human-rights abuses, and the erosion of global freedoms. Worryingly, according to Scharre, China may be poised to do just that. The country produces more AI papers than the United States does, collects and can utilize far more data than the privacy-concerned West can, and is on track to spend billions on research, training, and launching large-scale AI projects. If democracies do not propose an alternative model for AI governance, Scharre fears that the speed and scale of China’s action may bulldoze other efforts.

Scharre enumerates four battlegrounds on which the future of AI will be decided, each of which is described as a surface area of competition between the United States and China: data, compute (or computing hardware), talent, and institutions. Throughout the book, he ties proposed policies back to these four key battlegrounds, which together lay out a program of U.S. leadership in AI.

Data is the most familiar of the four, with the now-inescapable comparisons to oil cementing it as a crucial resource in the geopolitical balance of power. It has been posited that China has an inherent data advantage, given its larger population and allegedly more lax privacy laws, but Scharre points out that data quality, not quantity, reigns supreme. Further, while Beijing may collect more data on its citizens, China actually has more stringent privacy protections over corporate data collection than the United States does.

Next is computing hardware. Cutting-edge AI requires massive amounts of computing power, and the post-pandemic semiconductor shortage threw into sharp relief just how concentrated the supply chain is. As an example, one Taiwanese firm (TSMC) dominates the contract foundry market, while the Dutch firm ASML is the only supplier of extreme ultraviolet lithography tools. However, with a significant share of the semiconductor-manufacturing equipment market and dominance in chip-design software, the United States is in a far better position than China. Scharre argues that the United States should extend its lead through policies like the CHIPS Act, and by maintaining China’s foreign-chip dependence via multilateral export controls, re-shoring, and tighter information security on core components.

Scharre is a bit more forceful on the question of talent. The United States is still a global hub for the best and brightest, but it is standing in its own way—so much so that the National Security Commission on AI has called expanding high-skilled immigration a “national security imperative.” Scharre calls for immigration reform across the board for those with advanced degrees, again coupled with the imperative to crack down on Chinese espionage, recruitment, and intellectual-property theft.

Finally, Scharre makes the crucial point that success in AI comes down to deployment and diffusion, “transforming the raw elements of data, compute, and talent into practical applications” (p. 35). This brings us to the final and most important battleground: institutions, especially military institutions, which must be able to access cutting-edge AI technology. Roadblocks abound, from the Defense Department’s labyrinthine procurement processes to the highly publicized refusal of Google employees to work on the defense-associated Project Maven. Scharre recommends a multipronged approach to integrating AI into the military, centered on demystifying and accelerating the processes by which start-ups can work with the Pentagon. Some of the most interesting passages in the book chronicle the many dead ends that technology upstarts face in integrating with the military and the government more broadly; Scharre is right that we need greater links between private innovation and public benefit.

Four Battlegrounds is a wide-ranging and exhaustively cited work. Scharre lays out the interaction between advancements in AI and warfare: risks, opportunities, forecasts, and strategic implications for everything from AI-generated voices to game-playing algorithms that can analyze millions of combat scenarios a second. He expands a bit outside this frame as well, touching on the need to reduce bias and discrimination in deployed models, to lead multilateral coordination on technology standards with allies, and to put forward a more ethical approach to AI than China does.

However, the book falls short on one important count: in answering Scharre’s own call for democracies to “advance a positive agenda for AI governance” (p. 305).

Of course, such an agenda may lay bare many more battlegrounds than the four that Scharre has prioritized. Beyond building a pipeline between companies and the military, how will societies deal with market concentration in AI, with a few private corporations in charge of the most advanced existing models? In an age of democratic backsliding partly driven by economic frustrations arising from globalization, how will democracies support citizens through the labor impacts of AI? A positive agenda for AI governance will require grappling with core political-economic questions to steward the transition beyond the short term.

As with Four Battlegrounds, the core of any such agenda is its institutions. From the research labs and start-ups that Scharre relies on to produce AI advances to the public entities, universities, corporations, and international fora he relies on to diffuse them, our existing institutional models hinder us as often as they help us. Technological progress has far outpaced institutional innovation. Navigating the “Age of AI” will require correcting this gap.

This cuts both ways. AI can be leveraged to improve institutional effectiveness. This could look like expanding on experiments such as DeepMind’s Democratic AI, which used deep reinforcement learning to collectively determine economic policy, or, more prosaically, like using large language models to improve communication, collaboration, and granular decisionmaking. But institutions must also be targeted toward addressing AI. For example, the grave dangers posed by global climate change helped spur the creation of the Intergovernmental Panel on Climate Change (IPCC). Collaborators and I have put forward a proposal to create a similar body for AI, one that would track society-wide impacts of AI through networks of “sensors” and collective intelligence at the point of use, which can inform policymaking and investment in the space. Much as with climate, risks from AI could have global consequences, which will require cutting through competition with China to ensure coordination when it matters.

Finally, a democratic vision for AI must involve small-d democracy. Scharre describes the difference between decisionmaking in Europe and the United States versus China as follows: “Both have a common starting point. They are grounded in democratic processes that grant their approaches legitimacy. The same cannot be said of China or other authoritarian regimes in which the citizens do not get a vote” (p. 112).

But if we are truly to engage with AI as citizens, what are our civic duties, rights, and obligations? As it stands, there are few avenues for the public to meaningfully adjudicate the risks and rewards of incorporating AI into our institutions. Discussions in the AI space, from the open letter calling for a pause to calls to embed AI into the military, lack a key component: public legitimacy. We are building technologies that can ingest incredible amounts of information and hand down lightning-fast, black-boxed decisions. Unless we carve out room for human self-determination and freedom—the very things we aim to defend in a democracy—we may find ourselves following the gradient toward greater automation without greater autonomy.

Arms races can be inherently reactive, constraining innovation to the needs of competition. Advances in governance and arms races do not typically go hand in hand. Avoiding the label, or casting the current race as a binary contest between democracy and autocracy, does little to mitigate this fact. Scharre is right that the United States is well placed to defeat China on the battlegrounds he has laid out. His proposals for immigration reform, institutional experimentation, and accelerated innovation are timely and well-argued. But a democratic vision for a new age requires a reimagining, not just a defense, of the territory.


Copyright © 2023 National Endowment for Democracy and Johns Hopkins University Press