
AI Could ‘Kill Many Humans’ within Two Years

A stark warning has been issued from the heart of UK politics. Matt Clifford, tech adviser to UK Prime Minister Rishi Sunak and Chairman of the Advanced Research and Invention Agency (Aria), forecasts that Artificial Intelligence could become powerful enough to “kill many humans” within the next two years.

Speaking candidly during a TalkTV interview, Clifford illuminated the potential for AI to be used in creating cyber and biological weapons that could result in significant loss of life.

With AI’s development accelerating, he emphasised that global regulation of AI producers is critical to averting the emergence of “very powerful” systems that could challenge human control.

“The kind of existential risk that I think the letter writers were talking about is… about what happens once we effectively create a new species, an intelligence that is greater than humans,” Clifford stated, referencing a recent open letter signed by dozens of experts, advocating for the risks of AI technology to be given the same weight as pandemics or nuclear war.

AI Risks Now and in the Future

In exploring AI’s potential risks, Clifford distinguishes between near-term and long-term risks.

He believes that even the short-term threats are disconcerting, highlighting how AI could currently be used to “create new recipes for bio weapons or to launch large-scale cyber attacks”.

Clifford warns these scenarios could become a reality soon, predicting AI systems to grow “more and more capable at an ever-increasing rate”.

“If we try and create artificial intelligence that is more intelligent than humans, and we don’t know how to control it, then that’s going to create a potential for all sorts of risks now and in the future”, he opined.

A Global Approach to AI Regulation

Echoing concerns raised by Clifford, Emad Mostaque, founder of tech firm Stability AI, also cautioned that AI could evolve to be far more adept than humans and consequently dominate humanity.

This has been the rallying cry for calls for a global approach to regulation and a hold on AI development until its safety can be assured.

The Labour Party, for instance, proposes licensing AI development in a manner similar to medicines or nuclear power. Shadow Digital Secretary Lucy Powell describes this as the model to pursue, stipulating that developers should hold a license to build advanced AI models.

AI for Good

Despite the forewarning, Clifford also posits that AI could be a transformative force for good if harnessed correctly.

AI applications built on large language models, such as ChatGPT and Google Bard, have taken the internet by storm, helping students generate university-grade essays.

On a more critical note, AI has been instrumental in medical fields, with algorithms analysing medical images to assist doctors in diagnosing diseases more accurately and quickly.

“You can imagine AI curing diseases, making the economy more productive, helping us get to a carbon-neutral economy,” Clifford envisaged.

With Clifford’s message ringing in the ears of stakeholders, a balancing act lies ahead. AI’s potential is undeniable, yet so are the risks.

Rebecca Taylor

Rebecca is our AI news writer. A graduate of Leeds University with an International Journalism MA, she possesses a keen eye for the latest AI developments. Rebecca’s passion for AI, combined with her journalistic expertise, brings insightful news stories to our readers.
