1. Introduction
The post-truth era has revitalized the democratic debate on several fronts: What is disinformation? What challenges and implications does it pose for democratic societies? How can media and digital literacy skills be strengthened to help citizens identify false or biased content? Such content increasingly seeks to manipulate the public life of countries and the relations between them, eroding democratic systems and creating deep social instability. Has the information society encountered its biggest problem since it emerged? Will the role of journalists now be to “validate information”, given that they can no longer act as gatekeepers? And what happens when traditional media make ample use of content produced by viewers, and social media become sources for journalists? These are questions that deserve careful thought.
In this work, we intend to highlight the debate around the disinformation issue at a time when the digital has become a form of social architecture and, for this very reason, seems to have "the ability to transform ways of being, including knowledge processes and decision making” (). This transformation has profoundly altered the perception of truth and knowledge, even forcing a rethinking of epistemic processes, as emphatically observed by .
The effort is to understand what this digital mutation triggers in the democratic model, whose latest version is characterized as a “democracy without filters”, in which narratives circulate unchecked. We are witnessing the corollary of a previous metamorphosis, in which the transfiguration of a “democracy of institutions” into a “democracy of audiences” had already taken place, according to . This allows us to attempt a formulation that we call post-mediated time, following the post-democracy proposal: is the democratic process delegitimized by the argumentative polarization built in a public space permeated by fallacious information and weak consensus?
In our view, it is therefore pertinent to engage in a reflection guided by a literature review that allows us to scrutinize the academic contributions examining the effects of this technologically driven disinformation process on democratic societies. Methodologically, we proceed in two ways: on the one hand, we present, throughout the text, secondary data that point to a democratic moment permeated by discursive polysemy, one that challenges the sense of veracity of circulating information.
On the other hand, in the tradition of metatheoretical studies, we review the vast conceptual arsenal that has accumulated over time, given the proliferation of texts produced about disinformation. This aspect is particularly important because the concepts produced quickly entered everyday language and have been widely reproduced but sparsely discussed.
Our main objective is, therefore, to carry out a critical review that problematizes the post-truth question, which has woven a vast lexical repertoire in need of scrutiny and clarification. It is precisely here that we begin our discussion.
2. Conceptualizing disinformation
We live in a communication context of large-scale circulation of messages and information (). The abundance of information, intensified by social media and technology, poses new challenges for democratic coexistence and for the liberal social pact (; ). In the flow of information being exchanged, it is increasingly difficult to scrutinize veracity. Journalism, which once served a primary function of filtering and decoding messages in transit, has proved impotent within this polyphonic framework (). Traditional media have lost their hegemonic role of information control in a daily life permeated by user-generated content (UGC), produced by users/spectators/consumers who, namely through social media, disseminate information at a frantic pace (). The result of this change is that we live under the spectre of disinformation, to such an extent that the European Union, recognizing the importance of this issue, created a Code of Practice to counteract the phenomenon. It starts by clarifying what disinformation means:
Information demonstrably false or misleading which, cumulatively:
(a) is created, presented and disseminated for economic advantage or to deliberately mislead the public;
(b) is likely to cause public harm, understood as threats to democratic political processes and policy-making processes, as well as public goods, such as protecting the health of EU citizens, the environment or security ().
This disinformation issue has become entangled with the recurrent expression fake news, and the two concepts have come to be used interchangeably, which deserves a conceptual remark on our part. They mean different things, because news cannot, by definition, be “false”, or it would not be news (), and because news means “information that can be verified in the name of the public interest” ().
However, the expression fake news has become widespread and cannot be ignored in the literature on disinformation. Even in France, where an attempt was made to adopt another designation, the effort was not very successful (). It should be noted that “fake news” existed long before the Internet () and is an intrinsic mechanism in the exercise of power, as detailed by :
No one has ever doubted that truth and politics are on rather bad terms with each other, and no one, as far as I know, has ever counted truthfulness among the political virtues. Lies have always been regarded as necessary and justifiable tools not only of the politician’s or the demagogue’s but also of the statesman’s trade.
Such a predisposition conditions the functioning of the information ecosystem which, vehemently assailed by the lies of journalists' privileged sources, the institutionalized elites, becomes a simulacrum of the truth (to adopt Baudrillard's concept). So-called fake news is an attempt to “simulate journalism and its search for the truth” with “completely different goals” (). There is yet another definition of “fake news” as “news that conveys or incorporates false, fabricated, or deliberately misleading information, or that is characterized as or accused of doing so” ().
Still, the attempt to define disinformation is not recent. The concept became popular after World War II and, according to , is associated with a strategy aimed at “the manipulation of public opinion for political purposes, with information being processed through the back door”. Interestingly, this author's entire approach was based on the psychological propaganda used by the military, and he considered disinformation a “weapon of war” (). This approach can be explained in light of the context of the time, which has not changed that much, since disinformation can still have a great impact today, above all on the economies of many countries, whether developed or emerging (). The same applies to diplomatic relations between states, where conflicts usually lead to large-scale crises.
Likewise, it is important to distinguish clearly between two other concepts that tend to be confused: misinformation and disinformation. The first refers to “incorrect or inaccurate information” and the second to “false information created to cause harm” (). However, the term misinformation is often considered more accurate, since it is not always possible to know the real intentions of those who produce or disseminate this type of content. It should also be noted that this attempt to clarify concepts has some weaknesses, since, in many cases, what circulates is a mixture of true and false elements.
At this stage of conceptual definition, another important distinction must be made, between "disinformation" and "propaganda". Propaganda seeks to persuade people of a certain idea or point of view, while disinformation always aims to cause harm (). To clarify, let us establish that propaganda is "communication designed to manipulate a specific population by changing their beliefs, attitudes or preferences in order to obtain behaviour in accordance with the propagandist's political objectives" ().
In the literature we find further definitions of disinformation: “[…] deliberate (often orchestrated) attempts to confuse or manipulate people through the distribution of dishonest information” (), or “deliberate false information, especially when produced by a government or its agent, provided to a foreign power or the media, with the intention of influencing the policies or opinions of those receiving the information” (). This is an ongoing debate that continues to grow.
3. The spread of disinformation: media, social media and deepfakes
To understand the phenomenon of disinformation, it is necessary to look at the broader picture and examine factors such as the market, the behaviour of consumers and/or information producers, the dispersion of attention, and changes in the media system. At a time when attention has become a scarce resource (), the selection of information, and the time available for it, become objects of study in the analysis of disinformation. In the process, one must not forget that disinformation and lying are clearly more “appealing”, in terms of clicks, than political information and “truths” ().
The exponential growth in the use of social media has also opened a vast field for disinformation. The United States offers a telling example. The Pew Research Center has monitored figures on the use of social networks: in 2005 (the first year for which data are available), 5% of Americans claimed to use at least one of these platforms. Since then, there has been continuous growth, and in 2020 the figure reached 72%. According to the same study, the most used platform among US users until 2017 was Facebook (68% in 2017), but from 2018 onwards it was YouTube, which in 2019 was used by 73% of Americans.
Fake news depends on sharing on social media to survive (). The abundance of information also creates unanticipated problems: some authors use the term infobesity (a mixture of information and obesity) to characterize people who consume such a large amount of information that “it ends up having a negative effect on their well-being and ability to concentrate” ().
An example of the use and dissemination of manipulated information can be found in the 2016 elections in the United States of America. A 2018 study by the Knight Foundation analysed more than 10 million tweets, posted by around 700,000 accounts, linked to 600 outlets spreading “fake news”. In the month before those elections alone, more than 6.6 million tweets linked to “fake news” and conspiracy theories were detected, originating from 454,832 accounts. Of the most active accounts identified in this study, 89% were still active in 2018.
Another aspect carries considerable weight in the study of disinformation. The media market has undergone an important change in its functioning and in the relationship between transmitters and receivers. Thanks, above all, to technological change and digitization, consumers have also become producers of information (), gaining a “preponderant role in defining the relevance of the content produced by them, but also by the media industry itself” (). The media themselves have altered their news production routines in order to incorporate viral topics and stories into their agenda-setting processes. The fact that a story “is viral” became, in itself, “news”, regardless of its veracity ().
The mass distribution of television (cable, satellite, and internet) and the associated proliferation of channels helped to create greater polarization in the distribution and consumption of media (). Naturally, the US market best illustrates this reality, owing to the particular characteristics of the sector's regulation and of the media culture itself. It must be borne in mind that, in that country, in the name of transparency, it is customary for media outlets to state their political position in their editorial stance.
A society as strongly “visual” as ours could not help but be very sensitive to disinformation through images, mainly because devices capable of capturing, editing, and sharing images have spread widely in recent years (). This more “visual” side of society has even led to the emergence of a new form of disinformation: “deepfakes”. What does this type of manipulation consist of?
From the outset, these involve images, which are traditionally considered incapable of “lying”. We all remember aphorisms that clearly express this trust in the image: “seeing is believing”, “a picture is worth a thousand words”. Is the image, then, indisputably true? It is not, because “the image is not a natural product, it is neither transparent nor true” (), because “there is no photographic truth” (), and because we must not “accept any image as accurate” (). Hence the definition of this highly sophisticated form of disinformation and manipulation: "Deepfakes can be defined as visual or audio content that has been manipulated using advanced software to change how a person, object or environment is presented" ().
Let us look at some data to gauge the dimension of the problem, starting with traditional media. According to Statista data from January 2019, 76% of Turks claimed to have encountered “fake news” on television, a high figure compared with the world average in the same study, which was then 51%. Of the countries included in the report, the least affected was Germany, still with a total of 34%, itself a considerable figure. The numbers for the press were not far off: in Turkey, 72% of respondents had found fake news in newspapers and magazines.
With regard to social media, we used the Reuters Institute Digital News Report 2020, which indicates that social networks are one of the main channels of disinformation: 40% of respondents considered social media the channel of greatest concern regarding “false and misleading” content. As an example, on a single occasion (May-June 2018), Twitter suspended about 70 million accounts on suspicion of being used to disseminate false or manipulated information.
The report The Global Disinformation Order 2019 Global Inventory of Organised Social Media Manipulation (2020) states that it was possible to identify manipulation campaigns through social media in 70 countries. According to the same report, in 2018 the number of countries where the same phenomenon had been detected was 48 and in the previous year (2017) only 28.
There is no better way to illustrate this type of manipulation than to recall examples such as the altered speeches of former President Obama or President Putin. Using various techniques, the voices and testimonies of the protagonists were replaced, completely subverting the meaning of the statements.
To what extent are deepfakes a threat? The answer can be found, for example, on the cover of The Times in February 2019: “Deepfake videos threaten the world order”. Or, quite clearly, in the warning words of the report “Information Manipulation: A Challenge for our Democracies”, prepared by a group of experts at the request of the French Government, which draws attention to the various dangers of this form of visual manipulation:
An even greater danger, much subtler than the creation of fake videos, arises with small changes to a part of the audio or video, such as recording a speech [...] the circulation of twenty different versions of the same speech can hide the true version in the midst of confusion ().
But there is no need to look for such sophisticated manipulations: a “simple” manipulation of a photograph can fulfil the same objective. There are, however, other approaches to disinformation. proposes a broader definition: “the historical process of altering history itself, culminating in a disruption or blocking of critical thinking”. For this author, disinformation marks a “systematic malfunction of liberal democracy” () and even constitutes, in the case of the United States of America, proof of the collapse of the two-party system.
Let us look again at some figures on deepfakes and the increase in their circulation. According to a 2020 report, the number of such contents in circulation increased 6820 times between 2019 and 2020. On the YouTube platform alone, deepfakes had received about 5.4 billion views. The same report also presents a very significant datum about the change in the objectives associated with this type of content: 81% of the videos detected were non-pornographic.
The dissemination of technology and the digitization of everyday life have contributed decisively to the creation and dissemination of disinformation. Without technological evolution, without a truly networked society, disinformation (which has always existed) would have far greater difficulty spreading and growing. Let us systematize, following , the ways in which technology has fed the “epidemic” of disinformation:
- By increasing the volume of information we are exposed to;
- By selecting and organizing (or allowing others to do so) the subset of information each of us is exposed to;
- By increasing the speed of distribution of information and disinformation;
- By making it easier to create fake news.
It is also important to consider the algorithms used by the main social media platforms to determine “trends” and amplify them. In the case of Facebook, human moderators were almost entirely replaced by algorithmic systems (). As a result of this “automation”, there have been several “dubious” choices. Perhaps the most paradigmatic examples are the dissemination of a conspiracy theory claiming that 9/11 was the result of “a controlled explosion” (), and a fake story about the firing of one of FOX News's most celebrated anchors, Megyn Kelly, accused of supporting Hillary Clinton. The latter was distributed with great prominence to Facebook's 1.79 billion users ().
Another aspect that cannot be ignored is the so-called “filter bubble” effect which, put simply, makes social media users live within a “bubble” of interests shared with their virtual friends (). It is a form of “isolation” that has users seeing themselves reflected in people “similar” to them, with little exposure to contradictory views.
4. Finally, the result of a disinformation process
In this text, we have tried to clarify the different concepts that shape the disinformation phenomenon, set in a landscape where conventional media interpenetrate with social media and the digital sphere. It is a media ecosystem through which disinformation phenomena cascade (in the sense coined by Robert Entman), expanding in several directions and raising questions of enormous socio-political () and epistemic complexity. After all, what is truth? What are facts? What is knowledge?
We note an increase in more critical interpretations of the disinformation phenomenon, as a counterpoint to the cyber-optimism marked by Manuel Castells's pioneering positions on the “network society”. That stance ended up guiding studies on the relationship between the internet and politics towards the network's potential for mobilization, with the corollary of giving visibility to progressive movements perceived as a democratizing influence. The truth is that right-wing movements have also been able, thanks to digital media, to short-circuit the traditional media ().
Populism gained ground. We recall Leslie Moonves, CEO of CBS, who said of Trump's candidacy: "It may not be good for America, but it is very good for CBS." Politics colonized by the media affects the way politics is processed, and a clear blurring of the boundaries between politics and entertainment cannot be ignored (). The growing popularity of humorous narratives addressing political issues is a symptom of a certain subordination of facts to pop narratives. This has critical effects on a wide range of less politicized audiences, who have difficulty in critically interpreting the process of humorous deconstruction ().
According to , disinformation poses other dangers to democracy: its main danger is not its direct effect on voters, but rather the feeling it conveys to politicians that voters are “inattentive” and “not vigilant” about the coherence of politicians' actions. For these authors, the well-informed public is just a “myth”, but a myth that makes “democracy work well” ().
In this line of reasoning, state that the main factor facilitating the spread of disinformation lies in people themselves and their low literacy. Among the factors that reinforce this situation, they identify:
- The difficulty of finding correct information;
- Our own knowledge of how to evaluate news;
- Our biases in filtering news;
- Our lack of commitment to seeking out correct news.
Literacy is an important answer to this problem; not only media literacy, the most obvious area, but a wide range of literacies. Only citizens equipped with literacy tools will be able to develop the necessary critical stance towards biased and manipulative content. “Experts” can play an important role in socializing this critical attitude of citizenship. By experts we mean people of recognized competence in the area in which we need information, or the validation of information. But how are “experts” to be recognized? The solution may lie in their "public credentials" or "public licences", which allow their qualities to be recognized and can take different forms: university degrees, positions held, professional experience, and publications ().
The way to fight disinformation can be quite complex to define and requires a systemic view. In this exploratory study we look, briefly, given the nature of the work, at some alternatives. First, the question of legislation and regulation. There is a European framework in which significant steps are being taken to reduce disinformation; in recent years, the European Union has been striving to frame this phenomenon legally:
- 2015 – Proposal for a Digital Single Market Strategy in Europe;
- 2016 – Communication on Online Platforms and the Digital Single Market; European Parliament Resolution on EU Strategic Communication to Counteract Propaganda Against It by Third Parties;
- 2017 – European Parliament Resolution on Online Platforms and the Digital Single Market; European Public Consultation on “Fake News” and Online Disinformation;
- 2018 – Creation of the High-Level Expert Group on Fake News and Online Disinformation;
- 2019 – Commission Recommendation on enhancing the European nature of the 2019 European Parliament elections and the effectiveness of the electoral process; Eurobarometer Report on False News and Disinformation Online;
- Commission Recommendation on Measures for Effectively Tackling Illegal Content Online; Report on The Digital Transformation of News Media and the Rise of Disinformation and Fake News; Commission Communication on Fighting Disinformation Online: A European Strategy; Multi-stakeholder Forum against Disinformation; Code of Practice against Disinformation; Action Plan against Disinformation.
The role of journalism, which has lost its regulatory role in public opinion, also deserves a close look, partly because it faces a complex situation: on the one hand, it suffers economic pressures that weaken news organizations; on the other, a new and more demanding role is required of it. Journalism is “a map that allows citizens to navigate society” (), but it is now a map of an ocean of disinformation “where journalists risk drowning in the middle of cacophony” ().
In short, disinformation weakens freedom of speech and democracies, as discussed throughout this text, since it alters people's trust in institutions and the media. It is therefore up to academic debate to continue studying the phenomenon, as a critical agent that exposes the erosion of democratic pillars and contributes to an informed discussion of disinformation. It is likewise crucial to look at the democratic question in its intimate relationship with the communication process, which we called at the beginning of this reflection a post-mediated process. After reviewing the literature, the concept seems to have heuristic interest, given that democracies can no longer be thought of as political regimes legitimized by the mere functioning of institutional formalities (free elections, sovereign bodies in office, political work in progress, media independent of the powers, and so on).
What matters is to assess the content of circulating information and the scrutiny applied to the reliability of the discourses produced, in an increasingly polysemic scenario stripped of mechanisms for examining the meaning of the abundant messages in circulation.
References
LEVESLEY, D. (2016). What I want to tell you about fake news after my job at Facebook. Available at: https://inews.co.uk/news/technology/facebook-trying-combat-fake-news-winding-back-time-biggest-decisions-year-37005
Notes
[1] The concept was introduced by Kurt Lewin in 1947 in “Channels of Group Life” and applied to journalism research in 1950 by D. M. White in the article entitled “The ‘Gatekeeper’: A Case Study in the Selection of News”.
[2] The author argues that there is evidence that supports a subversion of the principles of democratic functioning, which are not limited to fulfilling the formalistic issues of the democratic system. There is a notion of democratic practice guided by the sense of the “common good” that is at risk, as political decisions are increasingly held hostage by interest groups that have not been legitimized for this purpose.
[8] Available at https://knightfoundation.org/reports/disinformation-fake-news-and-influence-campaigns-on-twitter/
[9] Take the example of channels like FOX News, openly right-wing, or MSNBC, more left-leaning.
[10] Available at https://www.statista.com/statistics/1017760/fake-news-television-worldwide/ in the field “Share of adults who have witnessed fake news on television worldwide as of January 2019, by country”.
[11] Available at https://www.statista.com/statistics/1016534/fake-news-print-media-worldwide/ in the field “Share of adults who have witnessed fake news in print media worldwide as of January 2019, by country”.
[12] Available at https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2020-06/DNR_2020_FINAL.pdf in the field “PROPORTION THAT SAY THEY ARE MOST CONCERNED ABOUT FALSE OR MISLEADING INFORMATION FROM EACH OF THE FOLLOWING – ALL MARKETS”.
[14] Available at https://www.onesearch.id/Record/IOS14218.libas-0-18088.
[15] Although this is not the objective of this work, it should be noted that deepfake production techniques are usually divided into four: face replacement, face manipulation, face generation and voice/speech generation.
[16] Available at: https://www.diplomatie.gouv.fr/IMG/pdf/information_manipulation_rvb_cle838736.pdf
[17] “DEEPFAKES 2020: THE TIPPING POINT”. Available at https://thesentinel.ai/media/Deepfakes%202020:%20The%20Tipping%20Point%20Sentinel.pdf .
[18] Available at: https://inews.co.uk/news/technology/facebook-trying-combat-fake-news-winding-back-time-biggest-decisions-year-37005
[19] Social media algorithms tend to bring together people who share similar interests and opinions.
[22] Available at: https://eur-lex.europa.eu/legal-content/PT/TXT/?uri=LEGISSUM:3102_3
[25] Available at: https://eur-lex.europa.eu/legal-content/PT/TXT/?uri=CELEX:52017IP0272
[26] Available at: https://ec.europa.eu/digital-single-market/en/news/synopsis-report-public-consultation-fake-news-and-online-disinformation15
[27] Available at: https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation
[29] Available at: https://data.europa.eu/euodp/data/dataset/S2183_464_ENG
[30] Available at: https://ec.europa.eu/digital-single-market/en/news/commission-recommendation-measures-effectively-tackle-illegal-content-online
[31] Available at: https://ec.europa.eu/jrc/sites/jrcsh/files/jrc111529.pdf
[32] Available at: https://ec.europa.eu/transparency/regdoc/rep/1/2018/PT/COM-2018-236-F1-PT-MAIN-PART-1.PDF
[33] Available at: https://ec.europa.eu/digital-single-market/en/news/meeting-multistakeholder-forum-disinformation
[34] Available at: https://ec.europa.eu/digital-single-market/en/code-practice-disinformation