RU581 Summer 2018

Countering "Counterknowledge":

A sociotechnical approach to disinformation amplification on social media

“The very concept of objective truth is fading out of the world. Lies will pass into history” ~ George Orwell


The Problem of "Fake News"

In 2016, Oxford Dictionaries named “post-truth” its Word of the Year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief” (Oxford English Dictionary, 2016, para. 1). It was a fitting choice: according to a 2016 Pew Research Center survey, almost two-thirds of U.S. residents believed that “fake news” was responsible for creating confusion about current events (Barthel, Mitchell, & Holcomb, 2016, para. 2). A 2018 Monmouth University poll found that 87% of Americans believe outside sources are actively planting false news stories on social media sites (Murray, 2018, para. 9). With two-thirds of Americans getting their news from social media (Shearer & Gottfried, 2017, para. 1), a discussion of disinformation amplification on social media is both timely and important. It is a complicated subject, involving complex algorithms, data collection, alternative news sources, bot networks, and the personal belief systems and biases of users.

It’s alarming that 14% of social media users in the U.S. have shared “fake news” knowing it was false (Barthel, Mitchell, & Holcomb, 2016, para. 4). To comprehend why this happens, it’s imperative to understand why people share online in the first place and how what they see in their news feeds is both generated and perpetuated by platforms that profit directly from disinformation. Seen through a sociotechnical lens, this recursive interaction between user behavior and platform design creates the ideal conditions for disinformation and the “weaponization” of news to flourish and spread. While media literacy education’s focus on critical analysis of news stories may help reduce disinformation amplification, we may be missing the forest for the trees if we expect media literacy education as it is currently practiced to be the salve that soothes the wounds inflicted by the social media information wars.

User Behavior

Far from being engaged in “information warfare,” many of us simply share to connect with others, revealing something about ourselves or presenting ideas that unite us with like-minded people. “[On social media]…we find comfort in declaring our tribal membership” (Vaidhyanathan, 2018, 50). Sometimes we share to gain or maintain affiliation, or to establish a sense of trust or authority within our spheres of influence. “By posting a story that solidifies membership in a group,” Vaidhyanathan asserts, “the act generates social value” (2018, 50). Trilling, Tolochko, and Burscher (2017) have studied what makes users find a news story shareworthy, citing geographic distance, cultural distance, negativity, positivity, conflict, human interest, and exclusiveness (54). Ciampaglia and Menczer (2018) explore the role of bias, including cognitive, societal, and popularity/homogeneity bias (paras. 4, 8, 17). Even when sharing for entertainment, we seek engagement in the form of likes and shares. “A ‘like’ button is not a simple indication of agreement or interest; it may be…a simple vote for people to pay attention to something, either directly…and also algorithmically” (Zhang, Wells, Wang, & Rohe, 2017, 6-7). Attention rules the information economy, and platform algorithms push more of whatever the user engages with into their news feeds, creating an “echo chamber” where disinformation can quickly spread (Vaidhyanathan, 2018, 89). This intentional design element, the algorithm, is the social media platform’s most lucrative feature, but it is also directly responsible for powering the disinformation machine.

Platform Design

Algorithms determine much of what fills our news feeds on social media. Platforms measure engagement through “likes” and shares, regardless of the nature of the content, and engagement is “rewarded” with more posts and stories the user is likely to engage with. Social media platforms call this “relevance,” and they use the data they collect about user behavior to send users more content from the people and outlets they have engaged with before (Vaidhyanathan, 2018, 89). The platform design itself facilitates sharing, and the low barrier to entry into the online environment, the ease with which people can create content, including doctored images, conspiracy theories, and “clickbait” headlines, means that the quality standards to which we once held news sources become less meaningful (Buckingham, n.d., para. 8). Facebook is perhaps most often criticized for its promotion of disinformation, not only because of its algorithms but also because it is designed to inundate users with headlines devoid of the context required for deliberate sense-making (Vaidhyanathan, 2018, 44), all the while profiting financially from user engagement regardless of content quality or veracity. According to Matthew Yglesias of Vox, “Facebook created a medium that is optimized for fakeness, not as an algorithmic quirk but due to the core conception of the platform” (2018, para. 8). Platforms such as Facebook collect our data and deliver it to their advertisers, who market audience-specific products and services to us (Vaidhyanathan, 2018, 86). This targeted advertising model played a significant role in the 2016 U.S. presidential election cycle, but it was far from the only social media platform tool hijacked by those who benefit from disinformation.
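
To make this “relevance” loop concrete, here is a minimal, purely illustrative Python sketch of an engagement-only feed ranker. The posts, weights, and scoring function are hypothetical and do not represent any platform’s actual algorithm; the point is simply that when a ranking score is built only from likes, shares, and prior interactions, nothing in the system ever asks whether a story is true.

```python
# Illustrative sketch of an engagement-only feed ranker (hypothetical weights,
# not any platform's real algorithm). Nothing below checks accuracy.
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    likes: int
    shares: int
    prior_interactions: int  # past engagement between this user and the author
    fact_checked: bool       # collected here, but never used in the score

def relevance_score(post: Post) -> float:
    # Engagement signals only: shares weigh most, then familiarity, then likes.
    return 3.0 * post.shares + 2.0 * post.prior_interactions + 1.0 * post.likes

feed = [
    Post("Doctored photo 'proves' shocking conspiracy", likes=900, shares=400,
         prior_interactions=25, fact_checked=False),
    Post("City council passes annual budget", likes=40, shares=5,
         prior_interactions=2, fact_checked=True),
]

# Sorting by predicted engagement surfaces the false but viral story first.
for post in sorted(feed, key=relevance_score, reverse=True):
    print(f"{relevance_score(post):7.1f}  {post.headline}")
```

Because veracity never enters the score, anything that reliably attracts likes and shares, fabricated or not, floats to the top of the feed.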

Case Study: The 2016 United States Presidential Election

The targeted advertising model mentioned above, including Facebook’s “Custom Audiences” feature, allowed Donald Trump’s campaign to use platform data to target advertisements to demographics the campaign felt would be favorable to Trump’s positions. These ads could be tested and retooled in real time on Facebook based on how users engaged with them (Vaidhyanathan, 2018, 171). The Trump campaign took full advantage of Facebook and Twitter both to promote Trump’s agenda and to sow distrust in the “mainstream media.” “Alternative,” anti-globalist media sources actively promoted false narratives on social media (often through bot networks) while gaining credibility from reporters who covered these stories for the mainstream media; if CNN was saying it wasn’t true, that became confirmation for the “true believers” that the major news outlets could not be trusted to tell the truth (Starbird, 2017, para. 18). “Extremist groups,” writes Phillips (2018), “along with other groups of media manipulators, are eager to use journalistic norms as a weapon against journalism” (12). Bot accounts on Twitter pushed these stories, attaching trending hashtags to them both to amplify their messages and to confuse people searching those hashtags for other purposes. The more these “stories” trended, the more the mainstream media covered them in order to refute them; the algorithms picked up on the high levels of engagement this coverage generated, and the stories gained still more visibility on social media (Prier, 2017, 60-61). The design of the social media platforms, from the algorithms that boost trending topics into greater visibility to the ease with which people and groups can promote and share propaganda with little oversight, allowed “bad actors” to co-opt their affordances to spread disinformation to a willing audience.
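
The amplification loop Prier describes, in which bot activity pushes a hashtag toward trending status, trending status draws coverage and engagement, and engagement boosts visibility still further, can be sketched as a simple feedback simulation. Every number below (bot counts, thresholds, multipliers) is invented purely for illustration and has no empirical basis; the sketch only shows how a modest automated push can compound once algorithmic amplification kicks in.

```python
# Toy simulation of the amplification loop described above: bot posts seed a
# hashtag; once it crosses a hypothetical "trending" threshold, exposure is
# multiplied and refutation coverage adds even more engagement.
# All numbers are invented for illustration.

def simulate_amplification(label: str, bot_posts: int, days: int = 5) -> None:
    organic_engagement = 100        # hypothetical baseline daily attention
    trending_threshold = 1_000
    visibility = bot_posts + organic_engagement

    print(label)
    for day in range(1, days + 1):
        if visibility > trending_threshold:
            visibility *= 1.8        # trending lists boost exposure
            visibility += 500        # media coverage (even refutation) adds attention
        else:
            visibility += organic_engagement
        print(f"  day {day}: visibility ~ {int(visibility):,}")

simulate_amplification("With a bot push:", bot_posts=2_000)
simulate_amplification("Without bots:", bot_posts=50)
```

Run with the bot push, the hashtag crosses the threshold immediately and compounds day after day; without it, the same story never escapes its baseline audience.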


Staunching the Flow

If we acknowledge that platform design bears significant responsibility for the amplification of disinformation, and that the way users interact with these platforms takes advantage of the affordances social media provides in ways the designers may not have intended, we must look at what can be done to stem the tide of “fake news.” Many ideas have been proposed, including trust ratings (which Facebook already uses), algorithms and filters to detect and remove disinformation, eliminating user anonymity, a reinforcement of high journalistic standards, greater use of subscription models for news outlets, governmental unbundling of companies that run multiple platforms (such as Facebook, which also owns WhatsApp and Instagram), and government regulation (Anderson & Rainie, 2017, 6). Germany passed legislation, effective in 2018, that holds social media companies responsible for failing to remove hate speech or disinformation from their platforms (Faiola & Kirchner, 2017, para. 3). France appears close to enacting a similar law (McAuley, 2018, para. 1).

But there are substantial problems with each of these “solutions,” such as algorithms that inadvertently flag factual information for removal and an online alternative media system that costs nothing to access, undercutting subscription models. One of the most significant risks is censorship at the whim of the platforms, which, in erring on the side of extreme caution, may be more likely to remove disagreeable or inflammatory stories or posts that do not actually violate any laws. “Whether unpopular, controversial, and contested speech has the right to exist on these platforms is left up to unelected corporate executives,” MacKinnon (2013) laments, “who are under no obligation to justify their decisions” (86). Laws such as the Network Enforcement Law in Germany threaten to interfere with freedom of the press as well (McAuley, 2018, para. 9) and lead to “an overreach that risks becoming de facto censorship” (Faiola & Kirchner, 2017, para. 4). The other face of the censorship dilemma is self-censorship: if users lose anonymity and fear reprisal for speaking out on controversial issues, they will be less likely to do so (MacKinnon, 2013, 89). Based on the literature presented, many of these “solutions” seem likely to be stopgap measures at best, effective only until users abandon the platforms (which is unlikely) or until those who seek to spread disinformation and hate speech adapt to the new system. So, what can be done that will have a lasting effect?
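
The first of those problems, over-removal by automated filters, is easy to see with even a tiny example. The Python sketch below uses a hypothetical keyword blocklist; the terms and headlines are invented for illustration and do not represent any platform’s actual moderation rules. A crude filter keyed to the surface features of a hoax also catches the reporting that debunks it.

```python
# Illustrative sketch of a naive keyword-based "disinformation" filter.
# The blocklist and headlines are hypothetical; the point is that the hoax
# and the fact-check refuting it trip the same rule.

FLAGGED_TERMS = ("rigged", "hoax", "crisis actor")  # hypothetical blocklist

def is_flagged(headline: str) -> bool:
    text = headline.lower()
    return any(term in text for term in FLAGGED_TERMS)

headlines = [
    "INSIDERS: the election was RIGGED",                # disinformation
    "Fact check: no evidence the election was rigged",  # journalism debunking it
    "Local museum reopens after renovation",            # unrelated news
]

for h in headlines:
    print(f"{'REMOVE' if is_flagged(h) else 'keep  '}  {h}")
```

Real moderation systems are far more sophisticated, but the underlying tension remains: tightening a filter to catch more disinformation also sweeps in more legitimate speech, which is precisely the censorship risk MacKinnon and others describe.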


Media Literacy Education: The Great Debate

One possible answer is media literacy, which is frequently proposed as a viable challenge to disinformation, and one that isn’t dependent on technological or governmental intervention. It includes: 1) data literacy (learning how to interpret and understand data and its uses); 2) digital literacy (using technology tools to assess, produce, and distribute information); and 3) information literacy (knowing how to find and evaluate needed information). It requires “active inquiry and critical thinking about the messages we receive and create” (National Association for Media Literacy Education, 2007, para. 3). Media literacy offers opportunities to view the media messages around us as socially constructed and therefore open to personal, societal, and cultural interpretation, with each of us extracting unique meanings as a result (Tisdell, Stuckey, & Thompson, 2007, 2). These are particularly important skills to develop at a time when social media has made propaganda easy to produce and disseminate and “hyperpartisan politics…has led to the weaponization of news by individuals, political groups, and foreign countries” (Rosenwald, 2017, 96). As users of social media learn to examine the biases behind media messages and make more informed choices about what they create and share, the thinking goes, disinformation will become less of a problem: algorithms will stop promoting these stories as people engage with them less, and they will wither on the vine.

But is this realistic? It’s debatable. Media literacy education is not a new concept; schools around the U.S. and abroad have been teaching it in various forms for decades (Rosenwald, 2017, 96). What is new is the increased polarization and tribalism in online discourse. “News” is no longer a term associated only with mainstream media outlets; alternative media sources attract large numbers of social media followers. It doesn’t help that public trust in the mainstream media has fallen to its lowest levels in years (Swift, 2016, para. 1). These are challenges that some critics of media literacy education, such as danah boyd and David Buckingham, argue cannot be effectively addressed without attempting to understand this larger context. boyd (2018) argues that teaching people to always question sources has led to an epidemic of doubt and distrust, with ideas of “truth” being socially constructed and shaped more by epistemological differences than by any particular source. “No matter what worldview or way of knowing someone holds dear,” she asserts, “they always believe they are engaging in critical thinking when developing a sense of what is right and wrong, true and false…but much of what they conclude may be more rooted in their way of knowing than any specific source of information” (boyd, 2018, para. 18). Buckingham also points out that “fake news is a symptom of much broader tendencies in the worlds of politics and media. People…may be inclined to believe it for quite complex reasons. And we can’t stop them believing it just by encouraging them to check the facts, or think rationally about the issues” (n.d., para. 23).

[Photos: danah boyd, Renee Hobbs, Faith Rogow]

In response to some of these criticisms, media literacy education proponents such as Renee Hobbs (2017) and Faith Rogow (2018) acknowledge that, while media literacy education has room for improvement, its foundation in critical thinking and evidence-based inquiry into media messages and sources is sound. Hobbs (2017) argues that critical analysis of the world around us harkens back to the Enlightenment (para. 7) and that questioning leads to more informed choices about media consumption. Moreover, media literacy does not define “right” or “wrong” sources (Hobbs, 2017, para. 8), a point that Rogow (2018) also addresses: “[Media literacy] consciously promotes strong critical thinking, meaning we interrogate the things that confirm our opinions as well as the things that challenge our views. We are after ‘rich readings,’ not ‘single truths’” (para. 12). While boyd argues that education is seen by many as “the enemy,” as “trying to assert authority over epistemology” (boyd, 2018, para. 29), Rogow asserts that “we shouldn’t change reason-based media literacy in a misguided attempt to reach people who would transform the United States from a democracy into a theocracy if we gave them the chance” (2018, para. 33). Neither Hobbs nor Rogow, however, disputes boyd’s observation that “we trust people who we invest our energies into understanding” (2018, para. 60). This creates a vexing challenge, especially since the algorithms of social media platforms create filter bubbles that never push us to be critical of what we already believe, and news feeds that present headlines competing for our attention. These factors are likely only to perpetuate the information wars on social media, causing people on both sides of an issue to become further entrenched in their version of “truth.”


The Way Out?

In order to develop an “exit strategy” for the information war we find ourselves engaged in, it’s fundamental that both scholars and social media users begin to understand the complexities of disinformation amplification from a sociotechnical standpoint. Any information and communications technology (ICT) such as social media, Sawyer and Jarrahi (2013) affirm, “is embedded into a social context which both adapts to, and helps to reshape, social worlds through the course of their design, deployment, and uses” (5). Vaidhyanathan seconds this when he states that “…we misunderstand the effects of technologies when we pretend that they exist outside of human bodies and human relations. We are embedded in the data network that would constitute the operating system of our lives…we shape these technologies as much as – if not more than – they shape us” (2018, 101). The designers of social media platforms could not have anticipated how people would use their technologies, nor could they have realized that the recursive nature of their algorithms and the affordances of their designs would open the “Pandora’s box” of disinformation that surrounds us. While societies wait for technological or regulatory “fixes” to resolve this dilemma, users already have a valuable tool within their grasp: people can be taught media literacy skills to critically evaluate the messages they see, read, and hear. But media literacy education must incorporate new perspectives through ongoing debate and discussion if it is to remain viable in an ever-evolving information landscape. Perhaps media literacy education will then be the “Hope” we have been waiting for.