AI As Dangerous As Climate Crisis?
Of all the cool ’80s sci-fi movies to have come true, why’d it have to be The Terminator? As the Guardian reports, a leading figure in artificial intelligence recently warned that AI poses as much risk to humanity as the ongoing climate crisis and should be treated accordingly.
Experts Believe We Need An Intergovernmental Panel To Oversee The AI Industry
Demis Hassabis, chief executive of Google’s AI unit, spoke recently ahead of an upcoming UK summit on AI safety about the risk of allowing computers to continue their rapid cognitive development unchecked. Hassabis suggested that a body similar to the Intergovernmental Panel on Climate Change—or IPCC for short—should be organized to provide oversight of the fast-growing AI industry. The executive stressed that time was of the essence when it comes to getting ahead of AI development.
Google’s Chief Executive Of The AI Unit Thinks The Issue Is As Concerning As Climate Change
“We must take the risks of AI as seriously as other major global challenges, like climate change,” Hassabis said. He then admonished the global powers that be for taking “too long to coordinate an effective response” to climate change and emphasized that the same dillydallying can’t be allowed to happen with AI and the risk it poses to humanity. Those risks, according to Hassabis, include bioweapons, a particularly relevant threat in the wake of the COVID-19 pandemic.
The message Hassabis delivered about AI wasn’t all doom and gloom. Hassabis also called AI “one of the most important and beneficial technologies ever invented.” He heads the Google unit behind AlphaFold, a revolutionary program that predicts protein structures, making him uniquely qualified to speak to the benefits of AI as well as the risks.
AI Needs To Be Researched And Reported On In A Safe Manner
Despite acknowledging the positive side of AI, Hassabis still thinks an equivalent to the IPCC is needed to watch over the new technology. “I think we have to start with something like the IPCC, where it’s a scientific and research agreement with reports, and then build up from there,” Hassabis told The Guardian. From there, he said, he would like to see something like CERN, the European Organization for Nuclear Research, established to research AI safety and risk management, and eventually an agency like the IAEA, the International Atomic Energy Agency, put together to actually regulate AI.
Should AI Technology Be Regulated By The Government?
Hassabis confessed that none of the regulatory analogies he used for AI risk management were “directly applicable” to the specific problem of AI but that “valuable lessons” could be taken from existing organizations. He isn’t alone in his fear of unchecked AI and the risk therein, either.
Recently, Eric Schmidt, former chief executive of Google, and Mustafa Suleyman, co-founder of DeepMind, also called for an IPCC-style panel on AI. Suleyman and Hassabis even signed an open letter warning about the threat of humankind’s extinction at the hands of AI. The letter, released in May, states that AI should be considered as big a risk to humanity’s future as pandemics and nuclear war.
Like The Climate Crisis, It May Already Be Too Late
Any efforts to curb AI now, however, might largely be viewed as too little, too late. Much like the climate crisis, the time to rein in AI was years ago, when it was still a manageable threat. Many tech industry insiders now believe that AGI, or “god-like” AI, is only a few years away from emerging.
On the other hand, there are also those who view doomsayers like Hassabis as overdramatic and feel the AI panic is overplayed. They see the risk of artificial intelligence running amok as a much smaller deal than others are making it out to be. Of course, there are also many people who feel the same way about climate change.