Internet Platforms As Normative Infrastructures for Disinformation

Carolina Aguerre

The internet and globalization have gone hand in hand over recent decades (DeNardis 2009), making this conjunction a feat of international cooperation. Increased connectedness for users, organizations and geographical regions has been enabled by communications networks and the sharing of data-driven goods, services and experiences. Lately, however, a less rosy narrative about this international infrastructure has emerged, and global ideas of cosmopolitanism have been largely abandoned, as the data flows across these networks have generated both positive and negative externalities at various levels.

The digitally networked environment is now riddled with metaphors of darkness. At the level of protocol infrastructure, there is the ‘dark web’, a space that is not indexed by standard search engines or reachable through conventional browsers, and where much of the content is illegal. At the content level, there are ‘dark patterns’, which can take many forms and are embedded in design functions that trap or trick users into unintended clicks and manipulate their choices. Content is also increasingly filtered on social networks through the practice of ‘shadow banning’, which suppresses or blocks certain content generated by a user in a way that is neither apparent nor transparent to that user.
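To make the opacity of shadow banning concrete, the following minimal sketch (in Python; all names and data are hypothetical, and it describes no actual platform’s implementation) shows how a feed filter can hide an author’s posts from everyone except the author, so that the affected user receives no signal that anything has changed.

```python
# Hypothetical illustration of 'shadow banning'. All names and data are
# invented; this describes no real platform's implementation.

def visible_posts(viewer: str, posts: list[dict], shadow_banned: set[str]) -> list[dict]:
    """Return the posts shown to a given viewer: authors on the shadow-ban
    list are filtered out of everyone's feed except their own."""
    return [
        post for post in posts
        if post["author"] not in shadow_banned or post["author"] == viewer
    ]

posts = [
    {"author": "alice", "text": "Hello world"},
    {"author": "bob", "text": "Contested claim"},
]
banned = {"bob"}

# bob still sees his own post, so nothing looks wrong from his side ...
print([p["text"] for p in visible_posts("bob", posts, banned)])
# ... but other users never encounter it.
print([p["text"] for p in visible_posts("carol", posts, banned)])
```

The point of the sketch is precisely the asymmetry of information: the banned user’s own view is unchanged, which is what makes the practice non-transparent.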

While being online has become essential for many activities, the ubiquitous use of digital technologies during the COVID-19 pandemic also exposed a more sombre side of these applications and of the power of data-driven platforms (Warwick 2021). Cyberbullying, cybercrime, cyberaddiction and dis/misinformation campaigns have been experienced by netizens in most countries and have been placed on the agendas of national governments, civil society and international organizations. It is no wonder, then, that internet users in democratic environments hold more nuanced opinions on the value of data-driven technologies (McDonnell, Verdin and O’Reilly 2022). How do contemporary data-driven platforms enable disinformation campaigns, and how is that cooperation materialized? This article explores these questions, focusing specifically on the phenomenon of dis/misinformation, as it is not only a pervasive contemporary problem but also one that has not been approached from the angle of cooperation.


Theorizing infrastructures and their normativity in digital data

Since disinformation takes place mostly within the spaces of digital platforms, the concept of boundary objects from the Science and Technology Studies (STS) literature (Star and Griesemer 1989; Star 2010) is a relevant one for approaching them. Boundary objects are defined by processes in which actors from different social worlds are called on to cooperate, manage and coordinate despite their diverging points of view. The notion has evolved in the literature to capture the work of coordination, alignment, alliance and translation among the different actors and the worlds they mobilize (Trompette and Vinck 2010). Yet it has not been developed to approach the kinds of informal cooperation, cooptation and loosely coupled practices that lead to harmful outcomes, such as those seen in dis/misinformation. As stated in the introductory article of this special issue, the ends of cooperation do matter. Understanding the ‘normative infrastructure’ of dark cooperation (Liste and Gadinger 2021) allows us to take a structural approach to the cooperation between the different actors, organizations and levels involved in global data flows and their processes concerning dis/misinformation (Baptista and Gradim 2020).

Technological affordances may be used to account for the diverse modes of cooperation that underlie these designs. These affordances, understood as what an individual or organization can accomplish with a particular technology and the possibilities for interaction it provides to different actors (Faraj and Azad 2012), are part of the mechanisms involved in dark cooperation in the digital data domain. To identify the types of cooperation involved in disinformation, it is vital to examine processes of social ordering, which may be grasped in the ways that technical and infrastructural configurations come to shape conduct, perceptions and knowledge (Flyverbom and Murray 2018). These platforms play a crucial role in how we come to understand the world, although there is no shared, universally applicable understanding of what their role is or should be. For example, while large platforms help to ‘connect people’ (as Facebook’s original mission stated), they may also be used to ‘divide societies’ (Gil de Zúñiga and Chen 2019). Finally, the ongoing debates about propaganda theory that have arisen at the turn of the twenty-first century (Tarín Sanz 2018) offer another angle from which the intentionality and spontaneity of propaganda can be assessed.


Disinformation and cooperation strategies

Disinformation and related terms (such as misinformation and fake news) are contested notions for which no single encompassing definition exists. Disinformation is understood as content that is false and deliberately created to harm a person, social group, organization or country; misinformation, by contrast, is also incorrect but is not created with the intention of causing harm. For both dis/misinformation, the United Nations Development Programme has identified similar trends resting on three pillars: they have a political motivation, they are driven by propaganda that relies largely on digital media platforms, and they are crisis-driven (by armed conflicts, COVID-19, etc.) (UNDP n.d.).

Disinformation campaigns can be categorized as dark practices of cooperation (i) because of their potential effects on societies, specific groups and institutions; (ii) due to the original intentions of their creators, be they humans or bots (as bots are ultimately designed by people); and, most relevantly for this work, (iii) because their normative infrastructure is based on internet platforms, which lack transparency and accountability. Misinformation carries the same problematic implications. The practices that enable the performativity of both disinformation campaigns and misinformation are based on three interdependent technical infrastructures: the internet, digital platforms and artificial intelligence (AI).

The internet’s polycentric arrangements have been extensively examined (Scholte 2017; Koinova et al. 2021; Aguerre 2022). These diffuse, multilevel and fluid arrangements, in which no single power or authority accounts for the rules and practices on the internet, enable a generativity of functions that has ultimately facilitated the creation of the web and the growth of digital platforms.

Digital platforms performing social media functions, which are the relevant infrastructures for this work, originated thanks to the internet and the world wide web. STS approaches recognize ‘a socio-technical dispositif as an assemblage of human and nonhuman actors, whose competencies and performances are distributed and whose existence is enabled by the workings of innovation’ (Musiani 2020: 87). Like so many media technologies and platforms throughout history, social media platforms were not originally designed for many of the uses to which they are put today. Facebook, which was originally designed to help college students identify attractive classmates, and Twitter, which was intended to facilitate the dissemination of ‘short bursts of inconsequential information’, have evolved and changed both their missions and practices. Napoli (2015) highlights the contrast between the origins and self-perceptions of social media platforms as ‘technology firms’ and the increasingly important role they play in the flow of news and information, raising the question of whether or how the normative dimensions of their governance frameworks reflect the realities of their function and significance.

‘To the extent that this (mis)perception resonates with different stakeholder groups, it creates a potentially problematic gulf between the role and function that these platforms are performing in the contemporary media ecosystem and the way in which they are perceived and governed’ (Napoli 2015: 752).


It is through social media platforms that dis/misinformation campaigns are conceived and strategized (Chadwick and Vaccari 2019), achieving media effects that could be equated to the role of twentieth-century propaganda and disinformation in mass communication media, but with three relevant caveats: (i) these platforms have media functions, yet they rely on the internet, a communications network that is a general purpose technology, and not on their own network infrastructures and public spectrum, as was the case with radio and television in the twentieth century; (ii) they do not produce content but rely on the content of their users, acting as information intermediaries; (iii) they rely on AI, another general purpose technology which has some degree of agency itself (these platforms use AI to profile, target and customize content) but very little accountability or transparency regarding content decisions.

Collaboration is strategically employed by actors to take advantage of data on platforms such as Google, Meta, Twitter and OpenAI’s ChatGPT for disinformation. Here, a distinction can be made between different types of cooperation. At the human–machine level, humans and AI interact to shape messages and to affect outcomes, from purchases, content sharing and recommendations to sensitive issues such as elections, public health and armed conflicts. AI systems are embedded in the design of social media and other digital platforms to power their recommendation systems. Ironically, AI is often expected to assist in the detection of false information, a task at which it still struggles, while at the same time being used to create fakes (e.g. deepfakes) or even content that is generated entirely artificially.
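Since these recommendation dynamics are central to how platforms ‘enable’ disinformation, a minimal sketch may help. The following toy example (in Python; all weights, names and data are hypothetical and describe no actual platform) shows how ranking by expected engagement, combined with an imperfect detector that only demotes what it manages to flag, systematically amplifies the sensational content that slips through.

```python
# Purely illustrative sketch of an engagement-driven ranking rule combined
# with an imperfect misinformation classifier. All names, weights and data
# are hypothetical; real recommender systems are far more complex and are
# not publicly documented.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_click: float   # model's estimated click probability
    p_share: float   # model's estimated reshare probability
    flagged: bool    # did the misinformation classifier catch it?

def engagement_score(post: Post) -> float:
    """Rank by expected engagement; demote (not remove) flagged content."""
    score = 0.6 * post.p_click + 0.4 * post.p_share
    return score * 0.1 if post.flagged else score

feed = [
    Post("Sober policy analysis", 0.02, 0.01, flagged=False),
    Post("Outrage-bait rumour", 0.15, 0.12, flagged=False),  # classifier missed it
    Post("Known false claim", 0.20, 0.18, flagged=True),
]

# The undetected rumour outranks everything: engagement optimization
# amplifies precisely the content the detector fails to catch.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.3f}  {post.text}")
```

The design choice the sketch isolates is that no actor needs to intend harm: the ranking rule simply rewards whatever engages, which is the ‘less conscious cooperation’ of platforms discussed below.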

Another level of cooperation operates across actors. For example, social media platforms such as Facebook, TikTok, YouTube and Twitter rely not only on the infrastructural capability of networking but even more on the development of a digital space that is attractive to advertisers and users alike. Both sides of this ‘two-sided market’ meet on the platform, but unlike on an e-commerce site, where the transaction is palpable, in the case of social media the transaction is much more ambiguous and even hidden. Formal and legal infrastructures for this kind of collaboration are in place, aimed primarily at platform owners and advertisers, which effectively allow users to pay with their own data under terms and conditions that are incomprehensible to the vast majority of netizens.

Informal cooperation among actors involved in disinformation is a deliberate cooptation of the legal and technical normativity embedded in the platform for the spreading of disinformation. There is also a less conscious cooperation by these platforms themselves, which ‘enable’ these types of activities through their underlying technical and design principles.

A final characterization of cooperation in disinformation is that its dark side may not always rest on an explicit and affirmative conception of cooperation: the organized actors and their networks are more loosely coupled than in traditional product or service campaigns, and their exploitation of the available digital platforms is much more difficult to trace.

References

Aguerre, Carolina (2022). ‘The Splinternet From a Polycentric Perspective’, Global Cooperation Research – A Quarterly Magazine, 4(2–3), Duisburg: Käte Hamburger Kolleg/Centre for Global Cooperation Research (KHK/GCR21).

Baptista, João Pedro and Gradim, Anabela (2020). ‘Understanding Fake News Consumption: A Review’, Social Sciences 9(10): 185.

Chadwick, Andrew and Vaccari, Cristian (2019). News Sharing on UK Social Media: Misinformation, Disinformation, and Correction. Loughborough: Online Civic Culture Centre, Loughborough University.

DeNardis, Laura (2009). Protocol Politics: The Globalization of Internet Governance, Cambridge, MA: MIT Press.

Faraj, Samer and Azad, Bijan (2012). ‘The Materiality of Technology: An Affordance Perspective’, in Paul M. Leonardi, Bonnie A. Nardi and Jannis Kallinikos (eds), Materiality and Organizing: Social Interaction in a Technological World, Oxford: Oxford University Press, 237–258.

Flyverbom, Mikkel and Murray, John (2018). ‘Datastructuring: Organizing and Curating Digital Traces Into Action’, Big Data & Society 5(2).

Gil de Zúñiga, Homero and Chen, Hsuan-Ting (2019). ‘Digital Media and Politics: Effects of the Great Information and Communication Divides’, Journal of Broadcasting & Electronic Media, 63(3): 365–373.

Koinova, Maria, Deloffre, Maryam Zarnegar, Gadinger, Frank, Şahin Mencütek, Zeynep, Scholte, Jan Aart and Steffek, Jens (2021). ‘It’s Ordered Chaos: What Really Makes Polycentrism Work’, International Studies Review 23(4): 1988–2018.

Liste, Philip and Gadinger, Frank (2021). The Dark Sides of Global Cooperation, Unpublished Document (version 19 December 2021).

McDonnell, Ann, Verdin, Rachel and O’Reilly, Jacqueline (2022). EU Citizens’ Attitudes to Digitalisation and the Use of Digital Public Services: Evidence From Eurobarometers and eGovernment Benchmark, EUROSHIP Working Paper No. 12, Oslo: Oslo Metropolitan University.

Musiani, Francesca (2020). ‘Science and Technology Studies Approaches to Internet Governance: Controversies and Infrastructures as Internet Politics’, in Laura DeNardis, Derrick L. Cogburn, Nanette S. Levinson and Francesca Musiani (eds), Researching Internet Governance: Methods, Frameworks, Futures, Cambridge, MA: MIT Press, 85–104.

Napoli, Philip M. (2015). ‘Social Media and the Public Interest: Governance of News Platforms in the Realm of Individual and Algorithmic Gatekeepers’, Telecommunications Policy 39(9): 751–760.

Scholte, Jan Aart (2017). ‘Polycentrism and Democracy in Internet Governance’, in Uta Kohl (ed.), The Net and the Nation State: Multidisciplinary Perspectives on Internet Governance, Cambridge: Cambridge University Press, 165–184.

Star, Susan Leigh (2010). ‘This is Not a Boundary Object: Reflections on the Origin of a Concept’, Science, Technology, & Human Values, 35(5): 601–617.

Star, Susan Leigh and Griesemer, James R. (1989). ‘Institutional Ecology, “Translations” and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39’, Social Studies of Science, 19(3): 387–420.

Tarín Sanz, Adrián (2018). ‘Communication, Ideology and Power: Notes on the Debate Between Intentional Propaganda Theory and Spontaneous Reproduction of Propaganda Theory’, Comunicación y Sociedad, 32: 191–209.

Trompette, Pascale and Vinck, Dominique (2010). ‘Back to the Notion of Boundary Object (2)’, Revue d’anthropologie des connaissances 4(1).

UNDP (n.d.). RISE ABOVE: Countering Misinformation and Disinformation in the Crisis Setting, United Nations Development Programme (UNDP), available at: https://www.undp.org/eurasia/dis/misinformation (accessed 4 March 2023).

Warwick, Claire (2021). ‘Negotiating the Digital Dystopia: The Role of Emotion, Atmosphere and Social Contact in Making Decisions about Information Use in Physical and Digital Contexts’, New Review of Academic Librarianship 27(3): 259–279.

About the Author

Carolina Aguerre is an Associate Professor at Universidad Católica del Uruguay and a visiting professor at Universidad de San Andrés (Argentina). She is a Senior Associate Fellow at the Centre for Global Cooperation Research (GCR21), University of Duisburg-Essen (Germany).

Contact: aguerre@udesa.edu.ar