A Stark Warning by Experts about Malicious AI
Written by Karolis Liucveikis
Normally, when people are warned about the dangers of technology, they laugh it off as alarmist and go straight back to checking how many likes their latest post has received. It is easy to dismiss warnings as alarmist, especially when they entail the end of the world by way of a much-favored Hollywood apocalypse scenario. While scientific consensus, backed by mounting evidence, agrees that we are influencing and exacerbating climate change, many are still willing to stick their heads in the sand and whistle to themselves.
Cyber-security is a field where warnings are dished out daily, and they are generally ignored by the public at large. The commandment to keep software updated regularly is laughed off until the next outbreak of an easily preventable ransomware strain. This week a 100-page report was released, authored by over 20 experts in their respective fields, on the use of Artificial Intelligence (AI) for malicious purposes. While the report acknowledges the usefulness of AI in the programs that will come to define future computing, it presents a stark and all-too-rational warning about the malicious use of AI by authoritarian regimes and unscrupulous people. The report reads like the modern-day equivalent of a Pandora’s box scenario.
The Report in a Nutshell
The main aim of the report, titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation", was to examine what attacks leveraging AI could be seen in the future. These attacks are assumed to be carried out against victims who have not sufficiently prepared for the possibility of malicious AI use and thus lack adequate defenses against such a threat. The report details how the researchers looked at ways current attacks and malware could be expanded, stating that, “The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence, and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.”
Further, the report looked at what new attacks might emerge in the future and whether the typical character of threats would change. This was summarised in the report as follows:
“New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders…” and “We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems.”
Some Key InfoSec Areas Explored in the Report
The report, compiled by researchers from the University of Oxford, the Future of Humanity Institute, the Centre for the Study of Existential Risk, the Electronic Frontier Foundation, the Center for a New American Security, and OpenAI, a leading non-profit research company, looked to shine some light on how AI could be leveraged maliciously. One apparent avenue, among a host of scenarios deemed plausible within the next five to ten years, is exploiting the already rapid growth in the use of bots to interfere with news gathering and penetrate social media.
It appears that the very nature of AI is what would make it a key area for hackers to exploit. AI is inherently dual-use, meaning it can be applied effectively to both civilian and military ends; in turn, it can be used for good or harm depending on the driving force behind its development. For example, systems that examine software for vulnerabilities have both offensive and defensive applications, and the difference between the capabilities of an autonomous drone used to deliver packages and one used to deliver explosives need not be very great.

AI technology also promises to be incredibly efficient and scalable. It could complete jobs faster and at lower cost than a human, and it scales because it can complete tasks by either copying itself or accessing more computing power. For example, a typical facial recognition system is both efficient and scalable; once it is developed and trained, it can be applied to many different camera feeds for much less than the cost of hiring human analysts to do the equivalent work. Those two factors alone, if employed successfully by attackers, would make existing and future malware variants a frightening prospect, as the sketch below illustrates.
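To make that scalability point concrete, here is a minimal, hypothetical Python sketch; it is not taken from the report, and the feed URLs and the analyse_frame stand-in are illustrative assumptions. It shows the shape of the argument: a single trained model is fanned out across an arbitrary number of camera feeds, so the marginal cost of watching one more feed is compute, not another analyst.

```python
# Hypothetical sketch of the "efficient and scalable" property described above:
# one trained model, applied unchanged to any number of camera feeds.
# All names here (FEEDS, analyse_frame) are illustrative, not real APIs.
from concurrent.futures import ThreadPoolExecutor

FEEDS = [f"rtsp://camera-{i}.example.org/stream" for i in range(32)]

def analyse_frame(feed_url: str) -> dict:
    """Stand-in for running a trained facial recognition model on one feed.

    A real system would grab a frame and run inference here; this stub
    returns a placeholder result so the sketch stays self-contained.
    """
    return {"feed": feed_url, "matches": []}

def scan_all_feeds(feeds: list[str]) -> list[dict]:
    # Scaling up means adding workers (or machines), not hiring analysts:
    # the same trained model is simply copied across more compute.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(analyse_frame, feeds))

if __name__ == "__main__":
    results = scan_all_feeds(FEEDS)
    print(f"Analysed {len(results)} feeds with one model and zero analysts.")
```

The point of the sketch is the cost structure, not the recognition itself: once the model exists, each additional target adds only a line to the feed list, which is precisely the property the report warns attackers can exploit.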
AI Could Be a Political Threat
AI technology will certainly change the nature of communication between individuals, firms, and states, to the point where these interactions are increasingly mediated by automated systems that produce and present content. Given that we are currently experiencing a trend of fake news generation and interference in democratic processes such as elections, AI could make such existing trends more extreme and enable new kinds of political dynamics. Worryingly, the features of AI described earlier, such as its scalability, make it particularly well suited to undermining public discourse through the large-scale production of persuasive but false content. In the hands of a well-funded authoritarian regime, values like free speech could easily be drowned out by automatically generated fake news and false content.
The report is worrying for multiple reasons, far more than are listed here. That said, a major aim of the report is to assist in developing countermeasures to attacks that use AI maliciously. It is hoped that, by discussing the potential future uses of AI technology, the industry and humankind as a whole will not be caught with their pants down. So while the report does not tell the reader that AI will be the end of mankind, it presents the dangers posed as real, but ones that can be combatted. This led the researchers to conclude:
“Whether AI is, all things considered, helpful or harmful in the long run is largely a product of what humans choose to do, not the technology itself…”