The Double-Edged Sword: Opportunities and Risks of AI in Biosecurity
Biosecurity has an AI problem. And AI has a biosecurity problem. According to some scientists, engineers, and AI specialists, the use of artificial intelligence in the life sciences is a double-edged sword. While AI holds great promise for biotechnology solutions in medicine, agriculture, and sustainability, the new capabilities it creates also carry a risk of misuse. Figuring out how to manage the dual-use nature of AI is a growing security challenge.
While AI’s current impact on biothreats is minimal, gaps in regulation are a cause for concern. Implementing legislation requiring—rather than just recommending—monitoring and red-teaming in AI development would help fill these gaps. Bolstering regulation of manufacturing supply chains, tailored explicitly to address the intersection of AI and biotechnology, would also help rebuild barriers to the physical development of bioweapons. Ultimately, the impact of AI on the evolving life sciences calls for a reassessment of the biological threat landscape and the governance needed to address it.
New Technology, Old Threats?
In March 2023, Vice President Kamala Harris spoke on the future of AI, acknowledging that it “has the potential to do profound good.” She also referenced the threat of AI-formulated bioweapons to the “existence of humanity.” The technology sector concurs—former Google chief executive officer Eric Schmidt called AI’s most pressing issue its potential use in biological warfare.
Biological weapons are microorganisms—viruses, bacteria, and fungi—or toxins produced by living organisms that are “released deliberately to cause disease and death in humans, animals, or plants.” Of course, biological warfare is not new. Several countries experimented with biological weapons programs during both World Wars. The Soviet Union stockpiled bioweapons throughout the Cold War. Anthrax attacks killed five people in the United States in 2001. Research suggests that AI in its current state does not meaningfully expand these existing capabilities. Nevertheless, the dual-use nature of AI raises fears that tools intended for biomedical research could be repurposed for harm, opening opportunities for biothreats that did not exist before.
Dual-Use AI and Biosecurity
Large language models (LLMs) and biological design tools (BDTs) are two classes of AI systems scrutinized for, among other things, their prospective use in the creation of bioweapons. LLMs, such as OpenAI’s GPT-4, are trained on massive amounts of text data, using a technique known as deep learning, to generate and process natural language. While LLMs are trained on natural language, BDTs are trained on biological data. BDTs can “design new proteins or other biological agents,” helping users better understand and manipulate biological systems.
These AI tools alone cannot create bioweapons, but they can simplify the process by making relevant information more accessible. This democratization of knowledge lowers barriers to entry into the field and exemplifies the double-edged nature of AI in biosecurity. LLMs, for instance, aid biological and chemical research and reduce the time and cost of drug discovery. But they could also benefit novices by consolidating online sources on biowarfare and distilling the results into easily digestible terms. BDTs significantly benefit the life sciences and can accelerate advances in areas such as vaccine development. However, they are also prone to misuse: the same technology that suggests candidate compounds for medical treatment can be used to design harmful substances like the nerve agent VX, one of the world’s most potent poisons. Ultimately, political leaders are concerned that state or non-state actors interested in pursuing unconventional weapons capabilities will take advantage of this dual-use technology.
Challenges in Governance and Regulation
AI is difficult to govern—it progresses rapidly and spans industries and borders. There is no comprehensive federal legislation regulating AI in the United States, and existing governance is often vague and high-level, relying on companies and institutions to self-regulate. The Biden administration issued a sweeping executive order in October 2023 outlining standards for AI developers; in biotechnology, these included recommendations to adhere to reporting requirements and safety testing. Since the executive order, many companies have implemented AI red-teaming—organized attacks on their own systems to test and improve security—and are funding research on how to make their systems safer.
More than ninety scientists and biologists working in the AI field have signed an agreement committing to precautionary measures against the misuse of their work. Leading AI company Anthropic worked with biosecurity experts throughout the development of its chatbot, Claude. OpenAI is also taking steps to prevent the misuse of its chatbots: in January 2024, the organization publicly released red-teaming research finding that its latest model, GPT-4, offers at most a “mild uplift in biological threat creation accuracy.”
However, some AI developers believe they are only partially responsible for mitigating biosecurity threats. David Baker, director of the Institute for Protein Design at the University of Washington, argues that the “appropriate place to regulate” lies in the physical barriers to bioweapons: the laboratories and equipment that enable processes like DNA synthesis. DNA synthesis is used in many applications, including vaccine development, but it can also provide the components for harmful pathogens. Biden’s executive order recommends that suppliers of DNA synthesis material screen purchases to detect misuse; however, nothing actually requires suppliers to comply with this recommendation.
Implementing legislation that enforces proper monitoring along supply chains would shore up biosecurity and fill gaps in AI regulation. Existing frameworks illustrate the difficulty: the EU AI Act, for example, governs AI technology according to its risk classification, but with some dual-use AI models, anticipating and managing risk is arduous. LLMs are difficult to predict—their capabilities evolve so quickly that new and dangerous uses are often detected only after a model is deployed. On the other hand, over-regulation risks stifling the benefits of AI biotechnology. Policy aimed at specific chokepoints in the physical production of bioweapons can therefore help address these gaps in AI governance. DNA synthesis, for instance, relies on materials that are low-cost and widely available; targeting such accessible industries can rebuild barriers to creating bioweapons.
Preparing for the Future
We are at an inflection point for AI and biosecurity amid a rapidly changing security environment. The United States’ meager biodefense capabilities leave many unconvinced of its ability to counter a large-scale attack, especially after the COVID-19 pandemic response cemented fears that a significant biological attack could lead to widespread chaos and human suffering. On the bright side, experts suggest we are not at that point yet: while the possibility of AI enabling a biological catastrophe exists, its current impact on biothreats is minimal.
Experts are right to temper fears of a large-scale bioweapon attack while still taking the threat that AI-driven technology might pose seriously. To address potential risks, more concrete legislation emphasizing red-teaming and compliance with standards and regulations would help clarify the convergence of biosecurity and AI. Bolstering regulations on the laboratories, equipment, and other resources needed to create and store harmful bioweapons, particularly those connected to AI capabilities, could also help prevent biothreats. Preserving AI’s beneficial uses in the life sciences will require a tricky balance. Managing the double-edged potential of AI in biotechnology will be a long, complicated pursuit with global ramifications, but one worth monitoring as AI progresses.
Views expressed are the author’s own and do not represent the views of GSSR, Georgetown University, or any other entity.