
Could AI be a threat to humanity?

By Chris Menon

18th Feb 2019


Threats to the very existence of a dominant species on Earth are nothing new, as the extinction of the dinosaurs by an asteroid impact 65 million years ago testifies.

Fortunately, humanity has survived numerous risks and natural threats over millennia. What is new is that we, ourselves, now have the ability to destroy human civilisation.

Around the world, academics and entrepreneurs are trying to wake us up to self-inflicted potential disasters that could lead to human extinction or civilisational collapse. Chief among these hidden heroes are members of the Future of Humanity Institute (FHI) and the Centre for the Study of Existential Risk (CSER), who are dedicated to promoting investigation and a better understanding of these threats.

Both these research centres are in the United Kingdom: the CSER within the University of Cambridge and the FHI at the University of Oxford. The CSER, founded in 2012, has 12 full-time staff and numbers among its supporters Jaan Tallinn, the co-founder of Skype, and Elon Musk, founder and chief executive of both Tesla and SpaceX.


Pandemics

Professor Martin Rees, the Astronomer Royal, was a founding member of the CSER and his latest book, On the Future: Prospects for Humanity, discusses these threats. In an exclusive interview he told Reader’s Digest: “The kind of existential threat with the potential to wipe out every human is most unlikely. But I do worry about devastating setbacks to our civilisation that could cascade globally. In the short run I worry most about pandemics, bio-terror, or cyber attacks. I worry particularly about these because our society is fragile, and there could be a breakdown in our order of life if it were disrupted or hospitals were overwhelmed.”

"While a natural pandemic could kill hundreds of millions of people, an engineered one could kill many more, and even threaten a civilisational collapse"

Cyber sabotage—for example, the malicious software virus dubbed "Stuxnet" that crippled Iran’s nuclear facilities in 2010—if applied to critical infrastructure, such as an electricity grid, has the potential to bring an entire country to a standstill.

Even more worrying is the potential for physical harm from a man-made pandemic, argues Haydn Belfield, Academic Project Manager at the CSER. “While a natural pandemic could kill hundreds of millions of people,” he explains, “an engineered pandemic could kill many more and threaten civilisational collapse.”

Belfield continues, “It may soon be possible to engineer pathogens to be more infectious, more fatal, and to have a delayed onset—and so be far more dangerous. This is because new breakthroughs like the targeted genome-editing tool CRISPR-Cas9 are increasing our capabilities, while the cost of DNA sequencing [the method used to read DNA code] and synthesis [the process of replicating that code], and the hurdle of expertise, are rapidly decreasing.”


Belfield warns: “An engineered pandemic could escape from a lab, or it could be deliberately used as a weapon. During the 20th century several countries had state-run bio-weapons programmes, and we know of several non-state groups that have attempted to acquire bio-weapons in the past. Almost single-handedly, one researcher was recently able to recreate horsepox (a similar disease to smallpox, which killed 300 million people in the 20th century) from scratch in only six months. Capabilities that were once only in the hands of governments will soon be within reach of non-state actors.”

"Keeping ourselves safe in the future will involve a compromise between freedom, privacy and security"

Worryingly, Rees estimates: “Bio-terror is possible in the near future—within ten to 15 years.”

Still, not everyone agrees about the magnitude of the threat. Wendy Orent, a US-based science writer specialising in the evolution of infectious disease, has long believed such fears to be overblown. “I am extremely sceptical that any pandemic, natural or man-made, could kill hundreds of millions of people at this point,” she says.

In particular, she discounts the likelihood that it's possible to create new genetically-engineered pathogens to which humans have no genetic resistance, saying: “That is the most unlikely scenario of all. Organisms that cause pandemics such as the Black Death, the 1918 influenza or smallpox have a long evolutionary history. Unless a germ has been through the sieve of natural selection, it isn’t going to function as an organism able to infect cells. Genes you stick together will not have that capacity.”

In any case, how can we minimise the risks? Rees believes regulation can help but admits, “In the case of bio and cyber [terror] it will be impossible to enforce the regulations globally—they don’t require elaborate or conspicuous facilities.” Consequently, he admits, “keeping ourselves safe will involve a compromise between the three desirable realities of freedom, privacy and security.”

Rees holds out some hope that well-controlled artificial intelligence could help contain these threats, with the caveat that “a rogue AI could also be an existential threat in itself.”

 

Artificial Intelligence

The danger of rogue AI is something that has kept the bright minds at Oxford’s Future of Humanity Institute occupied since its inception in 2005. One of 23 full-time researchers, FHI Research Fellow Ben Garfinkel strives to help capture the benefits and mitigate the risks of artificial intelligence. He explains the danger of unregulated AI development thus: "In the relatively near term, there are concerns around autonomous weapons systems, job loss from automation, and new forms of cyber attacks. In the long run, as AI systems become more capable or even smarter than humans, it might also be hard to ensure that the methods they use to achieve their goals are aligned with our own moral values."


Some highly regarded figures, such as the late scientist Stephen Hawking and the entrepreneur Elon Musk, have expressed concern that a super-intelligent AI could escape our control and do things we won’t like or be able to stop.

Indeed, Musk recently warned that “AI is far more dangerous than nukes. So why do we have no regulatory oversight? This is insane."

The problem of keeping control of such AI is why experts are keen to insist that computers that can think for themselves should have an off switch. Yet by the time designers realise the necessity of this, it may no longer be possible.

 

Still, there are those who think these fears are overblown. Richard Socher, chief scientist at US software company Salesforce, has written, “We face major issues around bias and diversity that are much more human and much more immediate than Singularities and robot uprisings: training data with embedded biases, for example, and a lack of diversity both in the field and our datasets.”

"Artificial intelligence is far more dangerous thanks nukes—so why do we have no regulatory oversight? It's insane"

Even so, it's clear that we're moving inexorably towards dependence on intelligent machines. Whether it happens in 40 or 100 years, these are issues we will have to face sooner or later.

“The more of the control problem we solve in advance, the better the transition to the machine intelligence era will go,” advises FHI Director Nick Bostrom, who has written extensively about the dangers of AI.

In the case of all of these technologically driven existential threats, it must also be remembered that just because a disaster hasn’t yet happened doesn’t mean the risks of it occurring aren’t increasing. Indeed, Martin Rees warns that “the probabilities of catastrophes caused by misuse of technology, and of eco-catastrophes, are rising year by year”.

The worrying thing is that, given the enormity of the threat these technologies pose, relatively few resources are being deployed to investigate how to deal with these risks effectively.

As Martin Rees points out, “There's a huge effort aimed at analysing and trying to reduce familiar risks—carcinogens in food, plane crashes, low radiation doses and so on. But there can't be even 100 people worldwide whose main focus is on really extreme risks. Given the huge magnitude of the devastation at stake, even if these academics reduce the probability by one part in 1,000, they will have more than earned their keep.”

The hope must be that soon many more smart minds will unite to focus on how best to prevent humanity destroying itself.

 
