Arturo Di Corinto
What would you say if, looking at a Wikipedia page, you found that Benito Mussolini was a great head of state? What if Google’s autocomplete told you that gays are sick people? Yet this is exactly what happens when you access a social network: the content presented is not neutral, but depends on your previous choices, the pages you have visited, the likes you have given, your detected geographic position, the popularity of posts, and advertising. So why do we accept it on social media?
Social media have become tools, theaters and spaces of conflict between powers that fight for our attention and manipulate our perceptions: a political goal engineered into the very functioning of digital tools to turn us into consumers. Because, as Lawrence Lessig said, whatever lessons you want to draw from the January 6 uprising on Capitol Hill, the thousands of Americans who invaded the Capitol to “stop the steal” were among the millions of Americans who had been led to believe a lie: that Joe Biden had stolen the 2020 election.
The fault lies with the algorithms, of course, but computer algorithms, as Christopher Wylie explained well in his book Mindf*ck: Cambridge Analytica and the Plot to Break America, 2019 (1), are only and always “the political output” of a mathematical procedure. In the era of surveillance capitalism (2), as Shoshana Zuboff defined it (3), and as Lessig reminds us, every action we take online is collected for one purpose: to shape us and our behavior, the better to predict what we will do. These predictions are the oil of the digital age. They drive advertising, they drive the modern game of politics, they drive everything that benefits from our attention, which is everything in the digital economy. But, again, those predictions are not driven by rationality. Instead, they are driven by the exploitation of the psychological traits and weaknesses of us human beings. The work done by Donald Trump’s digital strategist Brad Parscale was not aimed at seeking solutions to the problems of immigration or terrorism; it was aimed at stoking anger, at isolating people in order to categorize them, at triggering mass reactions, at confirming our prejudices.
Social networks, all of them, were born this way: they serve commercial persuasion, political manipulation and state surveillance.
Brain Hacking, the hacking of our brains
Social media are a significant source of distraction. Even when you turn off notifications on your smartphone, the temptation to check whether someone has commented on, liked, or insulted us remains strong: a behavioral reflex based on the brain’s capacity to activate in the face of a reward, the discovery of something new, a gratification. When it comes to information, it is precisely the “news” that is rewarding, often regardless of its content.
The mechanism that leads us to consult our smartphones thousands of times a day is scientific and has been engineered into their software: Tristan Harris, a former Google design ethicist, calls it brain hacking.
Attention. Nuances, doubts, reasoning, excuses are not made for social media.
Have you ever noticed that on social networks, where everything is discussed, attention is activated above all by conflicts, the ones that generate discussion “threads” to which, out of pride, belonging, vanity, or sometimes out of respect for the interlocutor, we tend to respond, keeping them alive?
And we often end up arguing. Besides ruining our day and alienating the sympathy of old friends, in this way we support the business model of social platforms, based on the volume of traffic produced, indifferent to the quality and accuracy of posts and replies, indifferent to the quality of the human relationships lost or gained through this verbal bulimia. Only the numbers matter.
The golden rule of social production is that the attention generated around a post is inversely proportional to its complexity, quality and length. The “spectator with power of speech” reacts more readily to short, apodictic, cutting, immediately comprehensible texts.
The brevity of posts and conversations engineered into social networks — think of Twitter’s 280 characters — produces a strong polarization based on linguistic and conceptual shortcuts, dividing the world into good and bad. Nuances, doubts, reasoning, excuses are not made for social media. Taking sides is. This is why politicians use them so much.
Algorithms: they decide how content is presented
The artificial mechanism that fosters this verbal incontinence is the set of algorithms that decide how content is presented, the true industrial secret of Facebook, Twitter, YouTube and company. It goes something like this: the more times you have clicked on a certain type of content, the more easily it will appear in your news feed. An algorithm decides the content most relevant “for you”. This mechanism is also the basis for the spread of “fake news”, the kind with screaming headlines and content that is ungrammatical or reduced to memes. Memes, minimal conceptual units in the form of an image plus a slogan, “work” because they are digestible even by the functional illiterates that the linguist Tullio De Mauro calculated to be almost a fifth of the Italian population: people who lack the ability to understand a complex text, or even to summarize a news article, but who react emotionally to cognitive stimuli.
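The logic described above can be sketched in a few lines of code. This is only an illustration: real platform ranking algorithms are proprietary and vastly more complex, and the scoring rule, field names and weights here are invented for the example.

```python
# Toy sketch of engagement-driven feed ranking, as described in the text.
# NOT any platform's real algorithm: weights and fields are invented.

def rank_feed(posts, click_history):
    """Score posts higher when their topic matches what the user has
    clicked before, plus a small popularity bonus, then sort descending."""
    def score(post):
        affinity = click_history.get(post["topic"], 0)  # past clicks on this topic
        return affinity * 2 + post["likes"] * 0.01      # arbitrary toy weights
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "sports",   "likes": 500},
    {"id": 2, "topic": "politics", "likes": 100},
]
history = {"politics": 10}  # the user has clicked political content 10 times
feed = rank_feed(posts, history)
# The less popular political post outranks the popular sports post,
# because the user's own click history dominates the score.
```

The point of the sketch is the feedback loop: what you clicked yesterday decides what you see today, regardless of the quality or accuracy of the content.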
Dunning–Kruger effect: “The less you know, the more you think you know”
The algorithm, together with resurgent illiteracy and the claim that “one is worth one” (that everyone’s opinion counts the same), amplifies the Dunning–Kruger effect. It is a psychological effect named after the researchers who theorized it, winning an Ig Nobel, the prize jokingly awarded to the most improbable research, and it can be summarized as follows: “The less you know, the more you think you know”.
The effect explains the positive self-assessment of those who, despite being ignorant of a subject, consider themselves experts in it. Chemtrails, surveillance chips, vaccines that cause autism, plots and conspiracy theories are just the epiphenomenon of this effect among people who believe that the laws of physics, medicine or economics are decided by a show of hands, as in a scout meeting. Sorry to tell you, but the boiling point of water is not decided by a vote.
Exhibitionism between voyeurism and narcissism: seeing and seeing oneself, showing oneself and being seen
According to the analysis of the Italian psycholinguist Raffaele Simone, the stubborn and violent self-representation on the social stage is only the extreme effect of the voyeurism/narcissism dynamic, driven by dependence on feedback and notifications. It takes the form of four different behaviors: in voyeurism, the first effect sought is “seeing” (what others do and say), the second is “seeing oneself” (building and refining one’s social image); conversely, in narcissism the first effect is “showing oneself” (what we are, or would like to be, with respect to others), the second is “being seen” (feeling part of the community, receiving confirmation, stimuli and feedback that fix our image in relation to our values).
Gratuitousness, an explosive mixture of self-promotional drives and defective mechanisms
Free access to social platforms and messaging apps is an explosive mixture for all these drives and motivational factors: with little effort, our time and our attention — priceless but very precious and non-renewable goods — we set out to promote ourselves and build social consensus around us and our activities, be they playful, emotional, professional or political. At second glance, these mechanisms of self-communication — “me-communication”, in the theses of the sociologist Manuel Castells (4) — are flawed: we are never given back enough of what we give, unless we take up the work of the YouTuber or the influencer. And you, do you know anyone who has found a job by putting their profile on LinkedIn?
Politics, disinformation, manipulation, cyber espionage, and what remains of the Islamist propaganda
Social media have become the new battleground for disinformation campaigns and electoral political manipulation. This is not only the case of Cambridge Analytica, but also of the armies of trolls run by “Putin’s cook” through the Internet Research Agency in St. Petersburg, and of the Riyadh chatbots that attacked the late Jamal Khashoggi every day, a journalist killed because he was critical of the government of his country, Saudi Arabia. There are the honey traps of fake Chinese managers on LinkedIn who conduct cyber-espionage and try to recruit professionals for their secret services. There are the bogus videos and news spread by supporters of Jair Bolsonaro, the Brazilian presidential candidate, which clogged up WhatsApp groups, as reported by the Brazilian association of independent journalists Aos Fatos. And there is what remains of ISIS spreading encrypted messages on Instagram. And we could go on.
Autonomy. Social media as weapons of mass distraction and political influence
We have already written that Google admitted reading our emails and letting app developers read them; Edward Snowden explained how National Security Agency surveillance works; and the intelligence services have told us that agents of hostile countries are trolling our content online (the Syrian Electronic Army), while armies of Russian bots produce personalized political and commercial messages to lure us into networks spread by others. In short, social media have turned into weapons of mass distraction and political influence.
The TikTok case: improper use of artificial intelligence and the risk of cyberbullying
Starting February 9, 2021, TikTok asks users to indicate their date of birth again before continuing to use the app, removing children under 13. Since it is not certain that children will honestly declare their age, the platform announced that it could assess it with indirect systems, such as Artificial Intelligence (AI), subject to an agreement with the Privacy Authority of Ireland, where the company has its European headquarters. The remedy could be worse than the disease. Age verification may involve the collection and analysis of all data referable to a user, from the IP address with which they connect to the network to the sifting of data and behavior through psychometric and facial-biometric techniques, a sector in which the Chinese are at the forefront. ByteDance, the owner of TikTok, in fact deals not only with digital platforms but with services based on machine-learning algorithms, as in the case of its news aggregator Toutiao, built on users’ preferences and tastes. TikTok already uses artificial intelligence to analyze interests and tastes with the aim of personalizing content.
How do they do it? When you enter a social network, the content offered is divided by age group and year of birth. The age range is deduced from in-app behaviors such as likes, comments, post-scrolling frequency, connection times and frequency, and then from the network of friendships. With these methods TikTok is already able to identify 14–17-year-olds quite well, but not children under thirteen because, they say, they cannot collect their data. But that is exactly what all the other social networks do.
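The kind of inference described above can be illustrated with a deliberately naive sketch. Everything here is hypothetical: the signal names, thresholds and rules are invented for the example, and no platform’s real age-estimation model is public.

```python
# Toy illustration of inferring an age bracket from in-app behavioral
# signals, as the text describes. All features and thresholds are
# invented; real models are statistical and far more sophisticated.

def infer_age_group(signals):
    """Naive rule-based guess combining scroll speed, late-night usage
    share, and the average age of the user's network of friends."""
    score = 0
    if signals["avg_scroll_seconds"] < 3:    # very fast post-scrolling
        score += 1
    if signals["night_usage_share"] > 0.4:   # heavy late-night use
        score += 1
    if signals["friends_avg_age"] < 18:      # mostly underage contacts
        score += 2                           # friendship network weighs most
    return "under 18 (likely)" if score >= 2 else "adult (likely)"

guess = infer_age_group({
    "avg_scroll_seconds": 2.1,
    "night_usage_share": 0.5,
    "friends_avg_age": 15,
})
```

Even this crude sketch makes the privacy issue concrete: estimating a child’s age this way requires continuously collecting and analyzing exactly the behavioral data the verification is supposed to protect.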
Why is this not talked about? Radio, TV and newspapers do not have enough experts, funds and space to cover it, but perhaps it is also a reflection of a strong cognitive resistance to evaluating the negative aspects of the use of Web platforms, messaging apps and social media. This is why the Italian Privacy Authority obtained a commitment from TikTok to monitor minors’ access to the platform, because its users do not always have the maturity needed to assess the risks. But there is not only TikTok. Young people use video-game chats, and most of them use Discord and Snapchat, leaving Facebook, Twitter and LinkedIn to adults. And Facebook, despite its bad reputation after the Cambridge Analytica scandal, according to the Digital Forensic Research Lab of the Atlantic Council, continues to host spies, hackers and hoaxes — and to collect data on our online behavior, even through subsidiaries such as WhatsApp.
The Toxic Web
Thanks to the rhetoric of everyone communicating with everyone, the fear of being cut off from public discourse, and the desperate search for relationships, the first defenders of this toxic web are precisely the adults who aspire to a glimmer of notoriety, who justify their presence with business reasons and who, with their ignorance of the grammar of security, open the doors to hate speech, cyberbullies, stalkers and criminal hackers.
Instead of teaching the value of the freedom of expression that the Internet promotes, they overturn it into its opposite, silencing the kindest voices and leaving the most fragile users fed to haters and trolls.
Behind these behaviors there is a psychological resistance to holding digital platforms accountable for the algorithms that reward online conflict, the distribution of fake news and the profiling of behavior for commercial purposes, but above all there is a foolish underestimation of the value of our privacy.
For this reason, the decision of the Italian Communications Authority, Agcom, is to be welcomed with relief. The Authority has started mapping all the services currently offered on online platforms (5) with the aim of bringing out, alongside the individual and collective advantages, the risks and problems as well: from illicit behaviors that endanger small and medium-sized enterprises, to hate speech and, more generally, to violations of fundamental rights “capable of compromising the integrity of democratic processes, the decision-making autonomy of individuals, the strength of the social fabric, informed pluralism and the protection of minors” (6).
In authoritarian countries people die in order to express themselves; in democratic ones, freedom of expression is abused. In Italy the number of those guilty of cyberbullying, cyberstalking, hate speech and revenge porn increases every day. The law on cyberbullying, the interventions of the Privacy Authority and the communication campaigns do not seem to be enough.
Social networks continue to magnetize the anger and repressed aggression of those who feel cheated of something by someone.
The preferred targets remain politicians, professors, journalists, anyone who embodies authority or an alleged elite. But it is just smoke and mirrors to distract us from the true dynamics of power. Fueled by social immobility, the economic crisis, and the inability to understand global phenomena such as immigration, anger explodes, looking for targets at random. Aided by an unscientific attitude, a vast anti-caste sentiment, online rallies built on insults, and a profound ignorance of how the Web really works, many, too many, think that anything can be said on social networks. Thus, sheltered by an alleged anonymity, freedom of expression is confused with freedom of insult.
Often it is members of Parliament themselves who post comments that go over the top. Matteo Salvini recycled and exacerbated Renzi-style criticism of “owls” (doomsayers) and professors, while Luigi Di Maio and Alessandro Di Battista promoted journalists to “whores, hacks for hire, the lowest jackals”.
With such examples, we cannot expect the smartphone-equipped masses to refrain from expressing, with threats and nasty words, a discomfort that ought to be treated in health clinics.
Facebook, Twitter and Instagram are designed to make us react emotionally and, when there are few opportunities to discuss and reason, the impulse to argue on social media — while minding the shop, waiting for a customer to come in, between one work email and the next — brings out the worst in us.
Thinking they are influencing the debate on social media — which traditional media are at fault for using as a source of information — people reach extreme manifestations, even wishing for an interlocutor’s death. But no one is above the law, which, if it wishes, has all the tools to investigate and sanction those who, abusing the freedom of the Internet, are guilty of gross behavior that amounts to actual crimes. But that is not enough, and not only because of the slowness, costs and contradictions of our justice system. The solution is culture. In Italy — second in Europe for functional illiteracy, second-to-last in the ranking of graduates, with low readership of books and newspapers — another way has to be found.
Facebook sells hate
The Intercept, the online newspaper (7) founded by Glenn Greenwald, Laura Poitras and Jeremy Scahill, showed that Facebook was selling racist and anti-Semitic ad targeting of its users, right in the days of the Pittsburgh synagogue massacre. The perpetrator of the murder of 11 people was convinced of the existence of a plot to decimate the “white race”, known as “White Genocide”: a conspiracy theory according to which “blacks” conspire to drive whites off their lands.
Despite international efforts to debunk the false thesis of a “White Genocide”, Facebook was selling advertisers a “detailed target”: an interest group of 168,000 users who had expressed support for similar content. The company, contacted for comment, eliminated the targeting category, apologized, and said it should never have existed. How many pages and how many ads of this type still exist on Facebook?
We don’t like to admit it, but social networks, not just Facebook, are hate spreaders.
Their business model is based on the sale of users’ personal data and on the ability to direct their attention toward specific advertising targets. The more users they have, the more traffic they can generate and the greater their value to advertisers. The more users, the greater the profits, the greater the value of the shares, the greater the dividends for shareholders. To expand the audience of social network subscribers, the first goal is to make the platforms usable through simplified interfaces and reward systems. The very digital devices we use to access them are already set up for this, thanks to apps: dedicated software that disempowers us.
No, don’t be offended. Websites, apps, social networks and devices are engineered like the controls of a washing machine, to be used without understanding how they really work. Thanks to user-centered design, they must be usable by everyone and therefore leverage common human skills: perceptual coordination, language (poor) and memory (outsourced to Google). But social networks are places of interaction that act not only on those skills but on our behavioral “frames”, modifying them.
In summary: why do we fight so much on social media? Because the physical absence of the interlocutor eliminates the fear of physical reprisal. Why are opinions on social media so polarized? Because users can take advantage of anonymity and feel less responsible for what they publish. Why do they hate so much? Because we are assigned to categories of users similar to us, who see and read the same things, reinforcing conformity and groupthink.
Social platforms have no ideology other than that of the market. It does not matter who you are or how you think; what counts are the numbers you generate — likes, fans, followers — together with your spending power, inferred from the intersection of factors and information external to the platform as well.
For this reason, social media policies place as few constraints as possible on user behavior, often under the banner of an alleged freedom of expression. The effect is that people who would never admit in public to being anti-Semitic or racist toward blacks, gays or other “minorities” do so frequently on social media. If we add to this the social anger of a blocked society like ours, we understand the success of online hatred.