A hijacked drone crashing into a target, an autonomous car turning on its passengers, a smear campaign built on a deepfake that destroys a reputation: all of these sci-fi scenarios could come true, and quickly. A report highlighted by the Financial Times sounds the alarm about the risks of the rapid, unregulated development of artificial intelligence.
Titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, this 100-page report was written by researchers from Oxford, Yale, Cambridge, OpenAI, and the Electronic Frontier Foundation. These voices from the sector paint a pessimistic picture: the explosion in AI performance over the last five years will make it easier to design new cyber-weapons by lowering their cost. This will enable not only the expansion of existing threats but also the introduction of entirely new types of attack.
The report is quite specific about the potential threats ahead: drones and driverless cars could be deployed not only individually but also in swarms, turning them into genuine weapons of war. Adding intelligence to everyday objects, in other words, magnifies the threat those objects can pose.
To these are added more purely digital attacks, such as the hacking of accounts and websites, which would become not only easier but, above all, better targeted thanks to the learning capabilities of AIs.
While the threats are easy to identify, the Financial Times regrets that the report offers few concrete leads for preventing AI from going astray. The researchers advocate broader consultation and forms of self-censorship, such as withholding the details of certain “sensitive” algorithms. But amid fierce competition over artificial intelligence, with the US and China pitted against each other, researchers tend to go out of their way to land the best employer, without necessarily considering all the applications of their research, including military uses and the diversions that may be made of it.
One thing is certain: with the arrival of autonomous cars, decisions will have to be made and a framework set around artificial intelligence, before it is too late.