Will the Next Deadly Plague Come From AI?
Mustafa Suleyman, co-founder of Google DeepMind and a pioneer in artificial intelligence, has raised the alarm about the potentially catastrophic consequences of applying AI to virology. Suleyman's concerns center on the AI-assisted creation of synthetic pathogens, which could accidentally or deliberately give rise to a transmissible deadly plague.
As AI technologies continue to evolve and become more accessible, the potential for misuse grows. The nightmare scenario Suleyman envisions involves individuals using AI to accidentally or intentionally engineer a deadly plague. Such AI-manipulated viruses could spread faster, be more lethal, and unleash devastation on an unprecedented scale.
“The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible,” Suleyman said on a recent episode of The Diary of a CEO podcast via The Byte. One of the core concerns expressed by the Google DeepMind co-founder is the unrestricted access to AI tools and biological materials.
With the democratization of technology, more people than ever can experiment with genetic engineering. This ease of access raises the risk of someone unleashing a virulent virus that could rival or surpass the deadliest pandemics in human history. To address the threat of an AI plague, Suleyman is advocating for a containment strategy similar to the one established by NATO for nuclear weapons.
This strategy would entail stringent limits on who can access the AI software, cloud systems, and biological materials necessary for synthetic virus experimentation. While such measures may seem restrictive, Suleyman argues they are essential to prevent an AI plague doomsday scenario. He is not alone in his concerns about AI's potential to cause global havoc.
The upcoming AI summit led by Senate majority leader Chuck Schumer is set to bring together luminaries from the tech industry, including OpenAI CEO Sam Altman, Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, and Tesla CEO Elon Musk, who will engage in discussions about the future of AI.
Additionally, an excerpt from an upcoming biography of Elon Musk details discussions about the perils of AI between the billionaire, former President Barack Obama, and Google co-founder Larry Page. According to Musk's recollection, however, neither Obama nor Page displayed a willingness to take concrete action in response to concerns like AI-engineered plagues.
Elon Musk has warned about other risks associated with artificial intelligence, including its potential to destroy civilization. He believes AI is more dangerous than mismanaged aircraft design or faulty car production because of the scale of catastrophe it could cause, and has warned that there is a "non-zero chance" that rogue advanced AI could wipe out humanity.
Like Suleyman, Musk has called for regulations to prevent AI from morphing into something with deadly consequences for humanity. He even joined more than 1,000 experts in advocating for a six-month pause on advanced AI development until proper safety measures and guardrails could be put in place.
Suleyman's warning serves as a call for humanity to tread cautiously in the realm of AI. While the technology has the potential to revolutionize countless aspects of our lives, it also carries the power to disrupt and destroy. Responsible AI development, emphasizing ethical considerations, safety protocols, and the containment of existential risks like an engineered plague, is essential.