Not Just (Computer) Viruses: What AI Policy Can Learn from Pandemic Preparedness
U.S. policymakers are sprinting to bolster the federal government’s capabilities to detect and counter AI-related national security threats. As they grapple with this critical challenge, valuable lessons can be adapted from other domains where the nation has faced sudden, highly uncertain emergencies with severe implications for security and public safety. The U.S. government’s experience with pandemic preparedness stands out as a particularly instructive case study. Confronting the possibility of a catastrophic outbreak at any moment, authorities have honed strategies to rapidly identify emerging biological threats and mount an effective response to safeguard the homeland.
The experiences and approaches gleaned from pandemic preparedness could serve as a valuable blueprint for strategies to address AI-related national security emergencies. Both domains demand that the government devise robust mechanisms to rapidly detect and respond to time-critical threats shrouded in uncertainty. By scrutinizing pandemic preparedness protocols, AI policy experts can pinpoint sound practices that could be adapted effectively while discarding approaches that are ill-suited to the unique challenges posed by AI-driven security risks.
AI, National Security Threats, and the Executive Order
In March 2023, OpenAI released GPT-4, sparking a national and worldwide conversation about the governance of artificial intelligence (AI). Leading AI experts have warned about risks from misuse of the technology and risks from uncontrolled AI. Dario Amodei, CEO of Anthropic, told the Senate Judiciary Committee that AI systems could enable large-scale biological attacks within 2-3 years. Yoshua Bengio, a Turing Award recipient, has described how rogue AI systems may become powerful enough to escape human control. Paul Christiano, the newly appointed head of AI safety at the U.S. AI Safety Institute, has said he believes there is a 50% chance of an existential catastrophe from AI. Alondra Nelson, former director of the Office of Science and Technology Policy, has compared the risks from AI to existing catastrophic risks in domains like nuclear harm and climate change. AI experts have even signed a statement declaring that the risk of extinction from AI should be taken as seriously as the risks posed by pandemics and nuclear war. Signatories include CEOs of major AI companies, professors like Dawn Song and Anca Dragan, and policymakers like Ted Lieu (U.S. House of Representatives) and Audrey Tang (Taiwan’s Minister of Digital Affairs).
Policymakers’ interest in AI notably surged following the release of ChatGPT. In October 2023, the White House issued Executive Order 14110 (the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) to improve the federal government’s understanding of AI and its risks. The Executive Order (EO) tasks various agencies with duties relating to their areas of expertise. For example, the Department of Homeland Security was instructed to investigate AI’s potential to cause threats to critical infrastructure, the Department of Commerce was tasked with developing voluntary industry standards for the safe and responsible development of AI, and the Department of Energy was instructed to develop testbeds and model evaluations.
The EO represents an important first step toward improving the federal government’s preparedness for AI national security threats. However, the EO still leaves some critical gaps unfilled. One limitation is that the current approach does not address some of the novel risks related to the technology. When handling national security threats from other emerging technologies, the government typically focuses on ensuring that the technology does not fall into the hands of bad actors. With AI, in addition to risks from bad actors, some risks emerge from the technology itself. Many experts have warned that powerful AI systems may escape human control if developed before the proper safeguards are discovered. Such risks are exacerbated by competitive pressures that fuel a race toward dangerous AI systems. Under EO 14110, it remains unclear which agency (if any) is responsible for examining threats from autonomous agents, loss-of-control scenarios, and race dynamics.
Moreover, the current approach emphasizes model evaluations that allow the government to detect risks, but such evaluations must be accompanied by preparedness work that helps the government respond more thoughtfully to those risks. The EO instructs the Department of Commerce and the Department of Energy to develop and perform model evaluations: empirical tests that allow the government to better detect when AI systems have capabilities that pose serious national security threats. However, the EO leaves a critical question unaddressed: what should the government do once the model evaluations are triggered? To illustrate this point, suppose the Department of Energy discovers that a new AI system can meaningfully contribute to developing novel weapons of mass destruction, or the U.S. AI Safety Institute predicts that models will soon be capable of escaping human control. At this point, the federal government’s response would likely be reactive and rushed; it would need to prepare and execute an appropriate response very suddenly. The Executive Order’s focus on model evaluations and reporting requirements can help improve the federal government’s ability to detect these kinds of time-sensitive threats. Still, a gap remains in ensuring that the federal government has the infrastructure, expertise, and preparation required to implement adequate responses to such threats.
Applicable Practices from the PREVENT Pandemics Act and the OPPR
The COVID-19 pandemic illustrated the value of detecting emergencies in advance, developing a playbook of options ahead of time, responding rapidly, and ensuring a clear chain of command. In late 2022, Congress passed the PREVENT Pandemics Act, which established the Office of Pandemic Preparedness and Response Policy (OPPR) and revised the federal government’s approach to handling pandemics.
There are a few specific aspects of the approach taken in the PREVENT Pandemics Act that might be relevant for AI emergency preparedness:
- A holistic approach to threat assessment: OPPR is tasked with holistically ensuring that the federal government can “prepare for, and respond to, pandemic and other biological threats.” In AI development, similar language could ensure that certain kinds of national security threats not currently “covered” by other agencies are identified and appropriately managed.
- Coordination: OPPR supports “the assessment and clarification of roles and responsibilities related to such Federal activities.” Given the rapid nature of AI progress and the many agencies involved in AI national security preparedness, an office that helps clarify roles and responsibilities could help improve efficiency and reduce bureaucracy-induced delays.
- International cooperation: OPPR is tasked with “assessing and advising on international cooperation in preparing for, and responding to, such threats to advance the national security objectives of the United States.” Addressing AI-related national security threats may require ensuring that certain safeguards are implemented internationally and certain dangers are reported across countries, underscoring a need for international cooperation.
- Drills: OPPR is also responsible for “drills and operational exercises conducted pursuant to applicable provisions of law.” Such drills could help ensure that the United States can respond to AI risks that require an immediate or time-sensitive response. For example, consider a scenario in which a whistleblower identifies a new technique that allows a deployed AI model to develop a novel biological weapon, and the United States has a limited window to intervene before such knowledge becomes widely available. As another example, consider a scenario in which an AI system begins to automate AI research and development, leading to the sudden development of powerful AI systems that are not properly contained or controlled by current safeguards.
- Close communication with the President: OPPR sits within the Executive Office of the President and serves as the “principal advisor to the President on all matters related to pandemic preparedness and response policy.” This ensures that the President has access to relevant information and is alerted swiftly during a time-sensitive emergency. As mentioned earlier, swift responses could be critical for certain AI threats, so speedy communication with the President could also be an essential component of AI emergency preparedness.
Unique Challenges in AI Emergency Preparedness
While efforts to prepare for AI national security threats can draw from lessons learned in pandemic preparedness, it is worth acknowledging the differences between pandemics and AI emergencies. One key difference is that biological threats are better understood than AI-related national security threats. Scientists generally have a much deeper understanding of biological agents, whereas frontier AI systems are “black boxes” that are not well understood, even among AI scientists. Furthermore, many of the most concerning risks from AI involve systems that do not yet exist (though they may exist within a few years).
Moreover, whereas dangerous biological research is regulated and can only occur within verified facilities, the most powerful AI systems are currently developed by private technology companies that emerged from a “move fast and break things” culture. Finally, threats from biological agents are relatively homogeneous: although there are many different kinds of pandemic-related threats, all of them involve biological agents. AI poses a more heterogeneous array of national security threats (e.g., bioterrorists’ malicious use of the technology, loss of control to “rogue” AI systems, threats from foreign adversaries developing or deploying AI without proper safeguards). Given these differences, AI preparedness efforts will require unique strategies and approaches.
Nonetheless, there are also some core similarities, such as the need for holistic risk detection, coordination between agencies, advance development and testing of plans for the most concerning risks, and swift executive responses in an emergency. As the U.S. government decides how to detect and respond to AI national security threats, AI policy experts can draw inspiration from lessons learned in pandemic preparedness.
This is a guest contribution by Akash Wasil, an AI policy researcher and incoming Security Studies Program (SSP) student. Views expressed are the author’s own and do not represent the views of GSSR, Georgetown University, or any other entity. Image Credit: University of Ottawa