Pandemic Roulette: Risks and Rewards of Virological Gain-of-Function Research
It all starts with a handful of unexplained fevers. A lab technician in a high-security research facility begins feeling ill but dismisses it as a mild case of the flu. Unbeknownst to her, she has been exposed to an experimental virus, one engineered to be more infectious for the sake of scientific study. Days later, the illness spreads beyond the lab’s walls, first to family members, then to coworkers, then to strangers on a crowded subway. Within weeks, hospitals are overwhelmed, and governments scramble to contain an outbreak they do not yet understand. Millions could perish, and by the time scientists trace the outbreak back to the lab, it is too late.
This is not just a thought experiment with no basis in reality. The possibility of a pandemic sparked by a laboratory accident has been a concern for decades, with multiple documented leaks from high-security labs studying deadly pathogens. Gain-of-function research, which adds new properties to an organism or virus or enhances its existing ones, can lead scientists to create pathogens deadlier than any that previously existed in nature. Now, the Trump administration is moving to block federal funding for gain-of-function research, reviving a policy debate that has divided the scientific and policymaking communities. The core question remains: do the benefits of this research outweigh the risks, or are we playing with fire atop a mountain of dynamite?
A Future Pandemic Is One Lab Leak Away
Gain-of-function research intentionally manipulates pathogens to, for example, amplify their contagiousness or lethality, allowing researchers to gain insight into how these pathogens function and then create effective countermeasures like vaccines. These experiments often involve modifying pathogens in ways nature has not yet achieved, creating novel biological threats. For example, scientists might splice genes from one virus into another to create a hybrid strain, or repeatedly pass a pathogen through animals to force evolutionary jumps. That is what researchers did in 2012, serially infecting ferrets with slightly mutated H5N1 strains until they produced a version that spread through the air. In humans, H5N1 has killed roughly 50% of reported cases, meaning those researchers created a new way for an extremely lethal disease to spread rapidly. While intended to reveal vulnerabilities for vaccine development, these methods can generate pathogens far more dangerous than their natural counterparts.
The potential for a supercharged pathogen to escape a ‘secure’ lab is not mere speculation. Ever since humanity gained the ability to engineer deadlier viruses, such agents have escaped containment numerous times. The 1977 Russian flu was caused by a strain nearly identical to one that had circulated 20 years earlier. Some experts concluded that this was no coincidence and that the outbreak was probably caused by a leak from a laboratory conducting gain-of-function research on the H1N1 virus.
Smallpox was eradicated in one of the greatest triumphs of human talent, will, and cooperation to date; the last natural case occurred in 1977. The last person to die from smallpox, however, was infected in 1978. Janet Parker, a British medical photographer, did not contract the disease while working far from home in a country with limited public health capacity. She was infected in Birmingham, England, by a smallpox strain grown in a research laboratory on the floor below where she worked. It was at first assumed that the virus traveled through the building’s air ducts, but several experts, along with a later court case acquitting the institution of violating health and safety laws, have discredited this theory. The virus almost certainly originated in the lab; how it escaped remains a mystery. The Birmingham incident contains an important lesson: while some lab leaks are caused by insufficient security measures, over a long enough time horizon they are also simply inevitable. Perfect containment is an illusion.
Dozens of people were killed in 1979 when spores of Bacillus anthracis leaked from a Soviet bioweapons lab in Sverdlovsk and were inhaled by residents of the neighboring town. The lab, run by the Soviet agency Biopreparat, had extensive biosafety measures: a double-door airlock system, sealed suits and protective equipment, decontamination and sterilization protocols, and heavily restricted access, among other precautions. Despite all this, an outbreak could not be prevented. As in the smallpox case, this example illustrates the inherent limits of lab security when dealing with deadly pathogens. Both technical failure and human error are permanent, insoluble sources of peril.
These problems have not been eliminated in the 21st century, nor is the United States immune to them. In 2014, 75 CDC staff in Atlanta may have been exposed to anthrax; according to the CDC, “procedures used in two of the three labs may have aerosolized the spores.” Inhalation anthrax, unlike cutaneous (skin-based) anthrax, is nearly always fatal if untreated. Even in the modern United States, the world’s most dangerous pathogens are only one lab leak away from the outside world.
The origin of the COVID-19 pandemic is the most recent elephant in the room. Some experts argue that the virus is the product of gain-of-function research leaking out of a lab, noting that COVID-19’s pattern of emergence is unusual for a zoonotic origin. SARS-CoV-2 first appeared in one of the few cities in the world hosting an active lab conducting gain-of-function research on bat coronaviruses, yet a city with relatively few natural bat coronaviruses nearby. While it remains very much possible that SARS-CoV-2 had a zoonotic origin, the circumstances of the outbreak do point in a particular direction. If COVID-19 emerged from a lab leak, what occurred in Wuhan is entirely consistent with the expected observations: a bat coronavirus appearing in a city without a significant natural host population.
Critics of the lab-leak hypothesis argue that no direct evidence conclusively shows that SARS-CoV-2 originated in a lab. That is true; it is equally true that no definitive evidence establishes a zoonotic origin. The U.S. intelligence community is divided on the matter, with more agencies favoring the zoonotic hypothesis. Despite this division, the mere possibility that a lab leak caused one of the five deadliest pandemics in history should be sufficient to inform decision-making on gain-of-function pathogen research.
Policy and Prevention: Balancing Risk and Scientific Progress
The United States has long conducted gain-of-function research on deadly pathogens with pandemic potential. In 2014, after five biosecurity incidents occurred in U.S. labs that year, the Obama administration imposed a moratorium on federal funding for gain-of-function research on influenza, MERS, and SARS and introduced a policy aimed at strengthening institutional accountability for biosecurity. The moratorium was lifted in 2017, when the Department of Health and Human Services (HHS) established a multidisciplinary review process for any gain-of-function research that increases a pathogen’s transmissibility or virulence. Under this framework, proposed research must undergo agency-level scrutiny if it involves pathogens likely to cause uncontrollable spread and significant mortality. In May 2024, the White House consolidated these rules under a unified policy that replaced the 2014 guidelines, narrowing the scope of regulated experiments to exclude low-risk pathogens (for instance, common cold viruses) while requiring stricter review of studies that could plausibly enhance pandemic potential. Now, according to recent reporting, the Trump administration is considering a pause on funding for all gain-of-function research that makes pathogens more deadly or contagious. Florida, meanwhile, banned such research entirely in 2023.
Gain-of-function research can be helpful. Even the most dangerous studies, including the 2012 research that created airborne H5N1 in ferrets, have revealed influenza mutations that could signal when a flu strain is becoming a pandemic threat. The first COVID-19 vaccines were designed within 48 hours of the viral genome being published in January 2020, and gain-of-function research likely helped. Critical to this rapid design was the use of a prefusion-stabilized spike protein, a conformation first identified through gain-of-function studies on MERS-CoV, a virological cousin of SARS-CoV-2. Despite this apparent benefit in combating COVID-19, the wealth of preexisting coronavirus research makes the case an exception: progress was made quickly, and with very little risk, because the groundwork already existed. It should not be seen as a model for the purported benefits of gain-of-function research.
Independent of a possible ban or curtailment of gain-of-function research, previous outbreaks and pandemics demonstrate the urgent need for intensified efforts to eliminate the risks stemming from the most dangerous pathogens. Central to this endeavor are enhanced biosurveillance capabilities like wastewater monitoring, investments in personal protective equipment, and improved air filtration for all relevant facilities.
More stringent regulation of gain-of-function research is a common-sense response to this problem, seemingly preserving the research’s benefits to medicine and pandemic prevention while addressing its most significant risks through stricter controls and increased oversight. Regulation, however, has so far proven insufficient. The National Institutes of Health, the very body responsible for exercising expert judgment about which studies are too dangerous, approved the 2012 research mentioned above that created airborne H5N1 in ferrets. New regulations may solve this problem, but it is also possible that the government will remain hesitant to withhold funding or approval from potentially life-saving research, allowing scientists to keep taking enormous risks. Given the demonstrated limits of strict regulation, the inherent risks of gain-of-function work on deadly pathogens, and the strength of alternative means of pandemic prevention, a permanent ban on dangerous gain-of-function research appears reasonable.
The debate over gain-of-function research is unlikely to be resolved anytime soon. While its proponents highlight its potential for pandemic preparedness, history has shown that even the most secure labs are vulnerable to failure. Strengthening biosafety measures and improving oversight may reduce risks, but they cannot eliminate them entirely. Given the stakes, it may be time to reconsider whether the potential benefits justify the inherent danger, or whether it is time to stop playing viral roulette with the entire world at stake.
Views expressed are the author’s own and do not represent the views of GSSR, Georgetown University, or any other entity. Image Source: CIDRAP