A new report from the European SHERPA project, produced with F-Secure, illustrates the attack techniques used to pollute data and confuse the algorithms behind intelligent systems
by ARTURO DI CORINTO
Humans are attacking Artificial Intelligence systems, not the other way around, as science fiction films led us to expect. To what end? To manipulate search engine results, alter social media algorithms, distort website rankings and reputations, disrupt drone flight, or trick a self-driving car. And so, contrary to what we hoped, the malicious use of artificial intelligence tools has not stopped at sophisticated disinformation campaigns.
The alarm comes from a report by the European SHERPA project, dedicated to human rights and artificial intelligence, which highlights how individual criminals and organized hacker groups have found a new target: the 'smart' systems that suggest purchases on Amazon or Apple's App Store, rank restaurants on TripAdvisor, or predict the likelihood of criminal events and smart-city energy consumption.
We should have expected it. Smart information systems have reached a level of development at which they have an increasingly significant impact on individuals and society. Think of the enormous spread of virtual assistants like Amazon's Alexa, of Google's autocomplete functions, and of the AI algorithms used on Facebook and YouTube to 'personalize' our social experiences. Or of the software that predicts 'social risks' through a peculiar combination of artificial intelligence systems, based on machine learning and deep learning techniques, and of big data applied to the provision of services, to medicine, to insurance and finance.
It is not just about evasion techniques to fool the defenses of computer networks and systems: attackers can also pollute the data on which intelligent systems base their output, or carry out sophisticated electronic scams, even over the phone. Already today it is possible to use natural language generation models to create spear phishing messages (email scams through which unauthorized access to sensitive data is obtained) optimized for specific groups or individuals.
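To see why polluted training data is so dangerous, consider a toy sketch (all data and names here are invented for illustration, not taken from the report): a simple nearest-centroid classifier works correctly on clean data, but an attacker who injects a batch of mislabeled points can drag a class centroid toward the benign cluster and flip the system's decisions.

```python
# Toy illustration of data poisoning against a nearest-centroid classifier.
# All feature vectors and class names are invented for this sketch.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, good, bad):
    """Assign x to whichever class centroid is closer (squared distance)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "good" if d2(x, centroid(good)) <= d2(x, centroid(bad)) else "bad"

# Clean training data: "good" items cluster near (1, 1), "bad" near (9, 9).
good = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]]
bad  = [[9.0, 8.8], [8.7, 9.1], [9.2, 9.0]]

sample = [2.0, 2.0]
print(classify(sample, good, bad))          # "good" on clean data

# Poisoning: the attacker injects mislabeled points into the "bad" class,
# dragging its centroid toward the benign cluster.
poisoned_bad = bad + [[1.5, 1.5]] * 20
print(classify(sample, good, poisoned_bad)) # now misclassified as "bad"
```

The point of the sketch is that the model's code is never touched: corrupting the data it learns from is enough to change its behavior.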
The manipulation of algorithms
The primary focus of the study is thus on how bad actors can abuse artificial intelligence, machine learning and intelligent information systems. The researchers identified a variety of potentially harmful attacks on society, ranging from the manipulation of artificial intelligence algorithms in smart cities, where they are used to optimize transport, electricity consumption and the waste cycle, to cases where algorithmic governance can prioritize hidden interests, damage companies by manipulating data and information online, or benefit political parties by influencing political debate through fake news.
And while we are all a bit worried about the advent of the Singularity, the moment at which, according to scholars like Ray Kurzweil, artificial intelligence will match human intelligence, we fail to realize that the basic techniques that today merely emulate some human cognitive functions are already being used for harmful purposes every day, and are able to undermine the systems we rely on to defend against computer intrusions, to make medical diagnoses, to manage water supply levels, all the way to banks' anti-fraud systems.
Andy Patel, a researcher at F-Secure's Artificial Intelligence Center of Excellence, a partner in the SHERPA project, explains: "People mistakenly identify artificial intelligence with human intelligence, and I think that is why they associate the threat of AI with killer robots and out-of-control computers. But human attacks against AI are actually happening all the time."
Fake accounts and reputation
Sybil is one of these: the attack is based on creating fake accounts in order to manipulate the data that an AI uses to make decisions. A popular example is the manipulation of search engine rankings or reputation systems to promote, or hide, certain content, paid or otherwise. These attacks can also be used for the social engineering of individuals, in scenarios targeting public decision makers, captains of industry and senior government officials.
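A minimal sketch makes the mechanism concrete (the ratings and numbers below are invented, not drawn from the report): if a reputation score is just the unweighted mean of user ratings, an attacker who registers enough fake accounts can drag the score wherever they like.

```python
# Sketch of a Sybil attack on a naive rating system (all data invented).

def reputation(ratings):
    """Naive reputation score: unweighted mean of all ratings (1-5)."""
    return sum(ratings) / len(ratings)

genuine = [5, 4, 5, 4, 5, 4, 5]                # honest reviews
print(round(reputation(genuine), 1))           # 4.6

# Sybil accounts each submit the minimum rating to bury the listing.
sybils = [1] * 30
print(round(reputation(genuine + sybils), 1))  # 1.7
```

Real platforms weight ratings by account history and detect coordinated behavior precisely because this naive average is so easy to game.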
The report finds that artificial intelligence techniques are being used to produce extremely realistic written, audio and visual content. This is the case with deepfake videos, completely fabricated clips with which it is possible, starting from television footage, to manipulate a speaker's mouth and make them say things they would never dream of saying. What could this be used for? To manipulate evidence, create a diplomatic crisis, or deceive a competitor, for example.
The malicious uses of artificial intelligence are already among us, with no need to imagine a killer like the one in the movie Terminator. This is why the researchers suggest designing secure intelligent systems from the start.
This story was originally published in Italian by La Repubblica: https://www.repubblica.it/tecnologia/sicurezza/2019/07/18/news/perche_gli_umani_attaccano_i_sistemi_basati_sull_intelligenza_artificiale-231461578/