Apathy and hyperbole cloud the real risks of AI bioweapons
September 25, 2024

(image credit: TechCrunch, CC BY 2.0, via Wikimedia Commons)


“Can chatbots help you build a bioweapon?” a headline in Foreign Policy asked. “ChatGPT could make bioterrorism horrifyingly easy,” a Vox article warned. “A.I. may save us or construct viruses to kill us,” a New York Times opinion piece argued. A glance at many headlines about artificial intelligence (AI) and bioweapons leaves the impression of a technology that is putting sophisticated biological weapons within the reach of any malicious actor intent on causing harm with disease.


Like other scientific and technological developments before it, AI is dual use: It has the potential to deliver a range of positive outcomes, and it can also be used to support nefarious activity by malign actors. And, as with developments ranging from genetic engineering to gene synthesis technologies, AI in its current configurations is unlikely to produce the worst-case scenario suggested in these and other headlines: an increase in the use of biological weapons in the next few years.


Bioweapons use and bioterrorism have historically been extremely rare. This is not a reason to ignore AI or to be sanguine about the risks it poses, but managing those risks is rarely aided by hype.


AI-enabled bioweapons?

Much of the security discussion to date has focused on large language models (LLMs), which power AI chatbots such as ChatGPT, and the potential these tools and models have for enabling biological weapons. As one recent piece put it, AI and bioweapons are the latest security obsession. OpenAI, which developed ChatGPT, stress-tested the chatbot for biosecurity concerns and publicly released a “system card” in spring 2023 addressing the risks as the company saw them. The company claimed that “a key risk driver is GPT-4’s ability to generate publicly accessible but difficult-to-find information, shortening the time users spend on research and compiling this information in a way that is understandable to a non-expert user.” The stress test indicated that “information generated by the model is most likely to be useful for individuals and non-state actors who do not have access to formal scientific training.”


Read the full article here on the Bulletin of the Atomic Scientists
