Since before 1999, I have been reading, collecting, and analyzing tens of thousands of news articles from two newspapers, the Washington Post and the Washington Times. Each has printed my "letters to the editor" linking national security issues, political priorities, historical events, warnings, bipartisan presidential commissions, hundreds of C-SPAN programs, think tank articles, scientific reports, and personal and professional experiences within dozens of not-for-profit organizations, starting in 1981 at the Hunger Project in San Francisco. In 1988 I was hired by RESULTS as its first Media Director, so I moved my family to Washington, DC. After I was fired eight months into that job, the RESULTS board chairman hired me as Advocacy Director of the Alliance for Child Survival. Three years later I was volunteering with Friends of the United Nations Environment Programme, then the Christian Israel Public Action Campaign & Middle East Research Center, until being hired by the National Council for International Health (later renamed the Global Health Council). After being elected by my peers to chair the U.S. United Nations Council of Organizations and to serve as one of 14 members of the Action Board of the American Public Health Association, I spent my last four professional years inside the Beltway as Issues Director for the World Federalist Association. Its focus originated with Albert Einstein's advocacy, in hopes of avoiding another world war, genocide, or the use of WMD by replacing the existing global 'law of force' with a global 'rule of law'. I was fired from that job on the first anniversary of the attacks of September 11, 2001. I had pushed too hard internally for WFA to resist supporting the Global War Against Terrorism; terrorism is a tactic, and a tactic can never be defeated with war. It was a prime example of WFA abandoning its original mission.
I began my advocacy journey in 1977 as a high school biology teacher and wrestling coach, a job I held for 6.5 years after graduating from Colorado State University with a dual degree in Biology and Secondary Education. I was fired from my last teaching job for speaking my mind about the need to connect biology to global issues, many of which have now become obvious: pandemics, 'terrorist' attacks, unwinnable wars, failed nation states, declining democracies, unsustainable government debt, reactionary government policies, corruption linked to offshore accounts, the evolution of weaponry, and other issues linked to US national security threats. All connected, and at great cost in human lives and dollars. And most organizations have neither stuck to their fundamental mission nor shown a willingness to adapt to clearly predictable trends in technology, health, politics, environmental issues, or economics. Some went bankrupt. Others never achieved their stated goals. Goals that were needed, agreed to, achievable, and affordable, but missed because there was an unwillingness to move in the direction of prevention and addressing root causes. And thus we now face many new or accelerating threats to our nation and most of humanity.
This book has been burning in me for decades. Other books have offered many of the same warnings and predictions and documented their direct causes. Some have even championed wise investments in action plans, which usually lacked the national 'political will' needed to execute them. I'm not of the opinion that books make a difference. The Bible, the Torah, the Quran, and other idealistic spiritual guides have not been effective, and too often they cause more problems than they solve. Many other insightful and inspiring books have been written, yet the polarization of our species continues toward the cliff of oblivion. So, what difference would one more book make?
In the end, it won't matter what we know or believe, or what resources we have. It will only matter what we do. And unless we urgently do what has been preached and taught for thousands of years, our species doesn't deserve to survive. If most harmful trends continue as they are now, most of humanity will experience the great chaos that is coming. I don't believe our species will soon go extinct; many may find ways to survive. But the number of preventable deaths and the scale of human suffering will be greater than anything humanity has ever experienced. Some of the horrors to come will result from inevitable natural events (a supervolcano eruption, a solar electromagnetic pulse event) or human error (a biosecurity lab accident, a cyber coding error, or an accidental missile launch). And knowing these will happen, we still resist investing in resilience measures or faster recovery.
Note that during the Cold War (between 1958 and 1962) the US government built a secret underground bunker in West Virginia as a safe haven for legislators in the event of all-out nuclear war. And their first act there would be to "shred the Constitution." I'm guessing they would do this because the previously honored document didn't work as intended.
*******
The next blog post will identify multiple flaws in the Constitution that, if ignored, will result in the failure of our "American experiment" simply because we failed to adapt to the reality that we now face.
Below may be the best example of the greatest security threat persistently ignored by every organization I've presented it to. And I'm guessing this new report will be ignored as well.
AI and the Evolution of Biological National Security Risks: Capabilities, Thresholds, and Interventions
By Bill Drexel and Caleb Withers
August 13, 2024
Executive Summary:
Not long after COVID-19 gave the world a glimpse of the
catastrophic potential of biological events, experts began warning that rapid
advancements in artificial intelligence (AI) could augur a world of
bioterrorism, unprecedented superviruses, and novel targeted bioweapons. These
dire warnings have risen to the highest levels of industry and government, from
the CEOs of the world's leading AI labs raising alarms about new technical
capabilities for would-be bioterrorists, to Vice President Kamala Harris’s concern
that AI-enabled bioweapons “could endanger the very existence of humanity.”1 If
true, such developments would expose the United States to unprecedented
catastrophic threats well beyond COVID-19’s scope of destruction. But assessing
the degree to which these concerns are warranted—and what to do about
them—requires weighing a range of complex factors, including:
- The history and current state of American biosecurity
- The diverse ways in which AI could alter existing biosecurity risks
- Which emerging technical AI capabilities would impact these risks
- Where interventions today are needed
This report considers these factors to provide policymakers
with a broad understanding of the evolving intersection of AI and
biotechnology, along with actionable recommendations to curb the worst risks to
national security from biological threats.
The sources of catastrophic biological risks are varied.
Historically, policymakers have underappreciated the risks posed by the routine
activities of well-intentioned scientists, even as the number of high-risk
biosecurity labs and the frequency of dangerous incidents—perhaps including
COVID-19 itself—continue to grow. State actors have traditionally been a source
of considerable biosecurity risk, not least the Soviet Union’s shockingly large
bioweapons program. But the unwieldiness and imprecision of bioweapons have
meant that states remain unlikely to field large-scale biological attacks in
the near term, even though the U.S. State Department expresses concerns about
the potential bioweapons capabilities of North Korea, Iran, Russia, and China.
On the other hand, nonstate actors—including lone wolves, terrorists, and
apocalyptic groups—have an unnerving track record of attempting biological
attacks, but with limited success due to the intrinsic complexity of building
and wielding such delicate capabilities.
Today, fast-moving advancements in biotechnology—independent
of AI developments—are changing many of these risks. A combination of new gene
editing techniques, gene sequencing methods, and DNA synthesis tools is opening
a new world of possibilities in synthetic biology for greater precision in
genetic manipulation and, with it, a new world of risks from the development of
powerful bioweapons and biological accidents alike. Cloud labs, which conduct
experiments on others’ behalf, could enable nonstate actors by allowing them to
outsource some of the experimental expertise that has historically acted as a
barrier to dangerous uses. Though most cloud labs screen orders for malicious
activity, not all do, and the constellation of existing bioweapons norms, conventions,
and safeguards leaves open a range of pathways for bad actors to make
significant progress in acquiring viable bioweapons.
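To make the report's mention of cloud labs "screening orders" more concrete, here is a deliberately simplified, hypothetical Python sketch. Real screening frameworks rely on far more sophisticated homology search against curated databases of sequences of concern; this toy version only checks an order against an invented watchlist by substring matching, and every name and sequence in it is a placeholder, not a real sequence of concern.

```python
# Hypothetical, highly simplified model of synthesis-order screening.
# Assumption: real providers use homology-based tools against curated
# hazard databases; this toy version uses exact substring matching
# against an invented watchlist purely to illustrate the concept.

# Invented placeholder entries; NOT real sequences of concern.
WATCHLIST = {
    "fragment_A": "ATGGCTAGCTAGGATCC",
    "fragment_B": "GGCCTTAAGGCCTTAA",
}

def screen_order(order_id: str, sequence: str) -> list[str]:
    """Return the names of watchlist entries found in an ordered sequence."""
    sequence = sequence.upper()
    return [name for name, motif in WATCHLIST.items() if motif in sequence]

# Example: an order containing a watchlisted fragment gets flagged.
hits = screen_order("order-001", "cccATGGCTAGCTAGGATCCttt")
if hits:
    print(f"order-001 flagged for manual review: {hits}")
else:
    print("order-001 cleared")
```

The report's point follows directly from this sketch: screening is only as good as its coverage, so providers that skip it entirely, or watchlists that lag behind new synthesis pathways, leave exactly the gaps described above.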
But experts’ opinions on the overall state of U.S.
biosecurity range widely, especially with regard to fears of nonstate actors
fielding bioweapons. Those less concerned contend that even if viable paths to
building bioweapons exist, the practicalities of constructing, storing, and
disseminating them are far more complex than most realize, with numerous
potential points of failure that concerned parties either fail to recognize or
underemphasize. They also point to a lack of a major bio attacks in recent decades,
despite chronic warnings. A more pessimistic camp points to experiments that
have demonstrated the seeming ease of successfully constructing powerful
viruses using commercially available inputs, and seemingly diminishing barriers
to the knowledge and technical capabilities needed to create bioweapons. Less
controversial is the insufficiency of U.S. biodefenses to adequately address
large-scale biological threats, whether naturally occurring, accidental, or
deliberate. Despite COVID-19’s demonstration of the U.S. government’s inability
to contain the effects of a major outbreak, the nation has made limited
progress in mitigating the likelihood and potential harm of another, more
dangerous biological catastrophe.
New AI capabilities may reshape the risk landscape for
biothreats in several ways. AI is enabling new capabilities that might, in
theory, allow advanced actors to optimize bioweapons for more precise effects,
such as targeting specific genetic groups or geographies. Though such
capabilities remain speculative, if realized they would dramatically alter
states’ incentives to use bioweapons for strategic ends. Instead of risking
their own militaries’ or populations’ health with the unwieldy weapons, states
could sabotage other nations’ food security or incapacitate enemies with public
health crises from which they would be unlikely to rebound. Relatedly, the same
techniques could create superviruses optimized for transmissibility and
lethality, which may considerably expand the destructive potential of
bioweapons. Tempering these fears, however, are several technical challenges
that scientists would need to overcome—if they can be solved at all.
The most pressing concern for biological risks related to AI
stems from tools that may soon be able to accelerate the procurement of
biological agents by nonstate actors. Recent studies have suggested that
foundation models may soon be able to help accelerate bad actors’ ability to
acquire weaponizable biological agents, even if the degree to which these AI
tools can currently help them remains marginal.2 Of
particular concern are AI systems’ budding abilities to help troubleshoot where
experiments have gone wrong, speeding the design-build-test-learn feedback loop
that is essential to developing working biological agents. If made more
effective, emerging AI tools could provide a boon to would-be bioweapons
creators by more dynamically providing some of the knowledge needed to produce
and use bioweapons, though such actors would still face other significant
hurdles to bioweapons development that are often underappreciated.
AI could also impact biological risks in other ways.
Technical faults in AI tools could fail to constrain foundation models from
relaying hazardous biological information to potential bad actors, or
inadvertently encourage researchers to pursue promising medicinal agents with
unexpected negative side effects. Using AI to create more advanced automated
labs could expose these labs to many of the risks of automation that have
historically plagued other complex automated systems, and make it easier for
nonspecialists to concoct biological agents (depending upon the safety
mechanisms that automated labs institute). Finally, heavy investment in
companies and nations seeking to capitalize on AI’s potential for biotechnology
could be creating competition dynamics that prioritize speed over safety. These
risks are particularly acute in relation to China, where a variety of other
factors shaping the country’s biotech ecosystem also further escalate risks of
costly accidents.
Attempting to predict exactly how and when catastrophic
risks at the intersection of biotechnology and AI will develop in the years
ahead is a fool’s errand, given the inherent uncertainty about the scientific
progress of both disciplines. Instead, this report identifies four areas of
capabilities for experts and policymakers to monitor that will have the
greatest impact on catastrophic risks related to AI:
- Foundation models’ ability to effectively provide experimental instructions for advanced biological applications
- Cloud labs’ and lab automation’s progress in diminishing the demands of experimental expertise in biotechnology
- Dual-use progress in research on host genetic susceptibility to infectious diseases
- Dual-use progress in precision engineering of viral pathogens
Careful attention to these capabilities will help experts
and policymakers stay ahead of evolving risks in the years to come.
For now, the following measures should be taken to curb
emerging risks at the intersection of AI and biosecurity:
- Further strengthen screening mechanisms for cloud labs and other genetic synthesis providers
- Engage in regular, rigorous assessments of the biological capabilities of foundation models for the full bioweapons lifecycle
- Invest in technical safety mechanisms that can curb the threats of foundation models, especially enhanced guardrails for cloud-based access to AI tools, “unlearning” capabilities, and novel approaches to “information hazards” in model training
- Update government investment to further prioritize agility and flexibility in biodefense systems
- Long term, consider a licensing regime for a narrow set of biological design tools with potentially catastrophic capabilities, if such capabilities begin to materialize
Introduction
In 2020, COVID-19 brought the world to its knees, with
nearly 29 million estimated deaths, acute social and political disruptions, and
vast economic fallout.3 However,
the event’s impact could have been far worse if the virus had been more lethal,
more transmissible, or both. For decades, experts have warned that humanity is
entering an era of potential catastrophic pandemics that would make COVID-19
appear mild in comparison. History is well acquainted with such instances, not
least the 1918 Spanish Flu, the Black Death, and the Plague of Justinian—each
of which would have dwarfed COVID-19’s deaths if scaled to today’s populations.4
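The "scaled to today's populations" comparison is simple arithmetic, sketched below in Python. The death tolls and historical world populations are contested; the numbers used here are rough midpoints of commonly cited ranges, chosen only for illustration and not drawn from the report itself.

```python
# Back-of-the-envelope scaling of historical pandemic death tolls to
# today's world population. All figures are rough, contested estimates
# (assumption: midpoints of commonly cited ranges), used for illustration.

WORLD_POP_TODAY = 8.0e9  # approximate current world population

# name: (estimated deaths, approximate world population at the time)
pandemics = {
    "1918 Spanish Flu": (50e6, 1.8e9),
    "Black Death (14th century)": (100e6, 0.45e9),
    "Plague of Justinian (6th century)": (30e6, 0.2e9),  # very wide range
    "COVID-19 (excess deaths)": (29e6, 7.9e9),
}

for name, (deaths, pop_then) in pandemics.items():
    # Scale the per-capita death rate of the era to today's population.
    scaled = deaths / pop_then * WORLD_POP_TODAY
    print(f"{name}: ~{scaled / 1e6:,.0f} million deaths scaled to today")
```

Even with generous uncertainty on every input, each historical pandemic scales to hundreds of millions or more deaths, an order of magnitude or two beyond COVID-19's toll, which is the comparison the report is making.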
Equally concerning, many experts have sounded alarms of
possible deliberate bioattacks in the years ahead. There is some precedent: in
the weeks following 9/11, letters containing deadly anthrax spores were mailed
to U.S. lawmakers and media outlets, and the attack could have been
considerably worse had the perpetrator devised a more effective dispersion
mechanism for the anthrax. The episode could portend a future in which more
widely available biological capabilities mean malicious individuals and small groups
can devastate governments and societies through strategic biological attacks. Jeff
Alstott, former director for technology and national security at the National
Security Council, warned in September 2023 that the classified record contained
“fairly recent close-ish calls” of nonstate actors attempting to use biological
weapons with “strategic scale.”5
Accurately weighing just how credible such dire warnings are
can feel next to impossible, and requires clear judgment in the face of opaque
counterfactuals, alarmism, denialism, and horrific possibilities. But
regardless of their likelihood, the destructive potential of biological
catastrophes is undeniably enormous: history is littered with examples of
societies straining and even collapsing under the weight of diseases—from
ancient Athens’s ruinous contagion during the Peloponnesian War, to the bubonic
plague that crippled the Eastern Roman Empire in the 6th century, to the
cataclysmic salmonella outbreak in the Aztec empire in the 16th century.6 It
is essential that U.S. leaders soberly address the risks of biological
catastrophe—which many claim will change dramatically in the age of artificial
intelligence.
Government and industry leaders have expressed grave
concerns about the potential for AI to dramatically heighten the risks of
catastrophic events in general, and biological catastrophes in particular.7 In
a July 2023 congressional hearing, Dario Amodei, CEO of leading AI lab
Anthropic, stated that within two to three years, there was a “substantial
risk” that AI tools would “greatly widen the range of actors with the technical
capability to conduct a large-scale biological attack.”8 Former
United Kingdom (UK) Prime Minister Rishi Sunak similarly expressed urgent
concern that there may only be a “small window” of time before AI enables a
step change in bioterrorist capabilities.9 U.S.
Vice President Kamala Harris warned of the threat of “AI-formulated bio-weapons
that could endanger the lives of millions . . . [and] could endanger the very
existence of humanity.”10 These
are serious claims. If true, they represent a significant increase in
bioterrorism risks. But are they true?
This report aims to clearly assess AI’s impact on the risks
of biocatastrophe. It first considers the history and existing risk landscape
in American biosecurity independent of AI disruptions. Drawing on a sister
report, Catalyzing Crisis: A Primer on Artificial Intelligence,
Catastrophes, and National Security, this study then considers how AI
is impacting biorisks across four dimensions of AI safety: new capabilities,
technical challenges, integration into complex systems, and conditions of AI
development.11 Building
on this analysis, the report identifies areas of future capability development
that may substantially alter the risks of large-scale biological catastrophes
worthy of monitoring as the technology continues to evolve. Finally, the report
recommends actionable steps for policymakers to address current and near-term
risks of biocatastrophes.
While the theoretical potential for AI to expand the
likelihood and impact of biological catastrophes is very large, to date AI’s
impacts on biological risks have been marginal. There is no way to know for
certain if or when more severe risks will ultimately materialize, but careful
monitoring of several capabilities at the nexus of AI and biotechnology can
provide useful indications, including the effectiveness of experimental
instructions from foundation models, changing demands of tacit knowledge as lab
automation increases, and dual-use AI-powered research into host genetic
susceptibility to infectious diseases and precision pathogen engineering. Lest
they be caught off guard, policymakers should act now to shore up America’s
biodefenses for the age of AI by strengthening screening mechanisms for gene
synthesis providers, regularly assessing the bioweapons capabilities of
foundation models, investing in a range of technical AI safety mechanisms, and
preparing to institute licensing requirements for sophisticated biological
design tools if they begin to approach potentially catastrophic capabilities.