By Kelly Born | Program Officer, Madison Initiative, William and Flora Hewlett Foundation
Democracy and the Internet may not be as compatible as many had hoped.
The “fake news” allegations of 2016 refocused attention on longer-standing concerns about echo chambers, filter bubbles, declining journalistic revenue models, and a range of other issues in the online information space.

But these may be the least of our challenges, and they are symptoms of a much deeper problem. Truly fictitious news is likely to be addressed without philanthropic intervention, because information platforms are financially incentivized to deal with it: few social media users wake up hungry to read and “share” completely made-up information with their networks.

The real information problem—which we at the Hewlett Foundation are actively exploring with potential philanthropic interventions as part of our democracy reform work with the Madison Initiative—is biased news, including misinformation, disinformation and propaganda. Research shows that many citizens are psychologically predisposed to want to read biased news that reaffirms their pre-existing beliefs and tribal identities. This creates obvious pernicious incentives for commercial technology platforms that want to keep people on their sites. So, compared with fake news, biased news will be harder to address.

The impact of biased news is manifold, in part because it is promulgated by actors with a variety of goals. Given the aims of the Madison Initiative (launched in 2013), our initial focus was on how biased information contributes to ideological polarization in the U.S. But the more we have learned, the more our thinking has evolved. Polarization and hyper-partisanship remain the key concerns at Madison (which we have documented in more detail here). However, some recent evidence calls into question the correlation between growing ideological polarization and social media, given that the most rapidly polarizing demographic, older Americans, is the least likely to use social media. So while some actors are motivated by ideological partisanship, some are profit-motivated, some seek to amplify prejudices via hate speech, and still others appear not particularly ideological but instead most interested in shifting the balance of power domestically (or, in Russia’s case, globally, toward a more illiberal democracy).

These interests of partisanship, politics, profit, prejudice and power are not new. Nor are problems of propaganda, misinformation and disinformation, which likely preceded the written word. But (at least) two things are pretty new. First, social media and search platforms distribute information in a fundamentally different way than print, radio, and television, which were curated and disseminated by media gatekeepers. Second, the extent of foreign government manipulation of these platforms’ quirks has no real precedent. It is these differences that have driven our focus on the role of the platforms.

On the first point, these platforms are different from prior media communications tools in several important ways:

• Democratization: As former DARPA Social Media and Strategic Communication lead Rand Waltzman has noted in a recent report, The Weaponization of Information, any individual or group can communicate to and influence large numbers of others online. This is a feature, not a bug, and has many positive aspects. But it has also removed the gatekeepers and enabled massive fragmentation and inundation in the media environment.
• Anonymity: At the same time that content creation has been democratized, it has been anonymized. The dominance of name-brand newsrooms is dwindling, to say nothing of reporters with known bylines or trusted news anchors. Now anyone can create and share content, and while platforms like Facebook require users to use their real names, the rules and structure of platforms like Twitter lend themselves much more easily to anonymous accounts. Of course, there are many important benefits to anonymity, particularly under more oppressive regimes. But if we care about extremism, hate speech, or political polarization, anonymity is well known to abet bad behavior.
• Gameability: Anonymity also facilitates the role of bots (“web robots,” essentially computer-programmed “if/then” statements used to perform highly repetitive operations, such as responding to keywords like “guns” by tweeting out related, often biased, articles; a minimal sketch of this logic appears after this list). One 2015 study found that bots generate around 50 percent of all web traffic. Further research from USC suggests that nearly 50 million Twitter users are, in fact, bots. Looking just at the U.S., another study found that between Sept. 16 and Oct. 21, 2016, bots produced about a fifth of all tweets related to the upcoming U.S. elections. And on Facebook, “as many as 137 million users could violate the platform’s real name policy (or be bots).” Reports last month that Facebook is taking action against “tens of thousands of fake accounts in France” illustrate the seriousness of the challenge: the ability to “game” online information systems allows more extreme or biased ideas to quickly rise to the foreground, amplifying biased perspectives and lending them the appearance of mainstream popularity.
• Speed and virality: Online, content can move and spread more quickly than it did in the days of print (often, though not necessarily, facilitated by bots that can tweet out a claim a second). A recent study found that, immediately before the French election, social media users were sharing one fake or non-professional news article for every two professionally produced articles. This is quite different from the spread of misinformation via print, radio, or television, where citizens previously needed to seek out content from MSNBC or Fox News (or the John Birch Society before that). Audiences for Fox News remain in the 2-3 million range, whereas online, apolitical social media users are easily caught in the crossfire of misinformation and exposed to extreme content without seeking it out, which pulls views toward the extremes in ways previously unimaginable.
• Micro-targeting: It’s well known that internet marketers, issue advocates, and political actors alike now collect unprecedented amounts of data about our online affiliations, interests, and behaviors. Claims abound that “by leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks,” groups like Cambridge Analytica are able to create personalized, adaptive, and ultimately addictive propaganda. While the actual effectiveness of such groups remains unclear, it’s not hard to imagine a world a few years from now in which the kinds of voter manipulation they claim to achieve are a reality. (A toy illustration of the A/B testing piece of this also appears after this list.)
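
To make the “if/then” description of bots above concrete, here is a minimal, purely hypothetical sketch of a keyword-triggered bot’s logic. The keyword list, URLs, and the post_reply callback are illustrative placeholders, not any platform’s real API; an actual bot would plug in a platform SDK and credentials.

```python
from typing import Optional

# Illustrative only: canned links keyed to trigger words (not real articles).
KEYWORD_RESPONSES = {
    "guns": "https://example.com/biased-gun-article",
    "immigration": "https://example.com/biased-immigration-article",
}

def choose_response(post_text: str) -> Optional[str]:
    """Return a canned reply if the post mentions a tracked keyword."""
    text = post_text.lower()
    for keyword, link in KEYWORD_RESPONSES.items():
        if keyword in text:
            return f"Worth reading on {keyword}: {link}"
    return None

def run_bot(stream_of_posts, post_reply):
    """Watch a stream of posts and reply whenever a keyword appears."""
    for post in stream_of_posts:
        reply = choose_response(post["text"])
        if reply is not None:
            post_reply(in_reply_to=post["id"], text=reply)

if __name__ == "__main__":
    # Stubbed-in data, just to show the control flow.
    fake_stream = [{"id": 1, "text": "New bill on guns announced today"}]
    run_bot(fake_stream, post_reply=lambda **kw: print("would post:", kw))
```

The point of the sketch is how little intelligence is required: a handful of keyword rules, run at scale across many anonymous accounts, is enough to flood a conversation.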
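
And to illustrate the A/B testing mentioned in the micro-targeting bullet, the toy sketch below shows the core loop in the abstract: deterministically assign each user to a message variant, record engagement, and compare rates. All names and data are hypothetical and not tied to any real ad platform or to Cambridge Analytica’s actual methods.

```python
import hashlib
from collections import defaultdict

# Hypothetical message variants being tested against each other.
VARIANTS = ["calm_policy_message", "fear_based_message"]

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into a variant by hashing their ID."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]

def click_rates(engagements):
    """engagements: iterable of (user_id, clicked) pairs -> click rate per variant."""
    clicks, shown = defaultdict(int), defaultdict(int)
    for user_id, clicked in engagements:
        variant = assign_variant(user_id)
        shown[variant] += 1
        clicks[variant] += int(clicked)
    return {variant: clicks[variant] / shown[variant] for variant in shown}

# Toy usage: the variant with the higher click rate would then be shown more widely.
print(click_rates([("u1", True), ("u2", False), ("u3", True), ("u4", True)]))
```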

Collectively, many of these aspects help to undermine users’ belief in the existence of facts in a way that erodes civic trust and the health of our democracy. Of course, many of these attributes of online platforms (speed, scale, information democratization) are features, not bugs. The elevation of more popular “clickbait” content, which tends to be more extreme, is also an intentional (if lamentable) part of the design, and one that needs to be addressed. And, as Stanford Professor Nate Persily has observed, these platforms have an incredible degree of sovereignty: no one is regulating them; they are self-regulating at best.

These platform vulnerabilities can be exploited by actors with any number of interests. But foreign regimes seem to be a bit ahead of the game. Evidence suggests that China’s millions of paid commenters fabricate and post about 448 million social media comments on behalf of the Chinese government each year. Similarly, in February, Russia announced the formation of a new branch of the military, “information warfare troops,” whose aim is believed to be to “erode faith in institutions, sow falsehoods, undermine critical thinking, encourage conspiracy theories, promote social discord, and use the freedom of information against liberal democracies.”

While these challenges hit home with the 2016 U.S. elections, European governments (particularly those in the East but increasingly those in the West) have longer-standing experience with Russian involvement, and have already begun to respond. The EU recently established the East StratCom Task Force, a sort of “counter-propaganda department for the E.U.,” which monitors Russian disinformation efforts through its Disinformation Review. NATO has established a Center of Excellence in the Latvian capital of Riga to help “bundle the knowledge of its member states in the propaganda war.” The Czech Republic set up a “specialized analytical and communications unit” within the Ministry of the Interior, and Italy and Germany are considering a variety of related interventions.

We at the Madison Initiative have funded a small range of international work to learn from these recent and upcoming elections and translate findings back to the U.S. in ways that may simultaneously address threats from both foreign propagandists and domestic purveyors of misinformation, disinformation and propaganda. Our current grants include work with the ARENA project at the London School of Economics, which is exploring the root causes of misinformation and experimenting with potential interventions, and a grant to Stiftung Neue Verantwortung in Germany for a project to assess the fake news phenomenon in advance of the German elections this September. At the same time, we are following work by New York University’s Social Media and Political Participation lab, Oxford’s 4-year-old project on Computational Propaganda, and domestic efforts such as USC’s Machine Intelligence and Data Science (MINDs) Group and Microsoft Research’s Project Ratio, among others.

Complicating all of this research is the ongoing lack of transparency and data availability from the platforms, which often leaves academics hamstrung when trying to understand the extent of the problem, much less explore potential solutions. If nothing else changes in the near term, we hope it will be that.

Learn more about the Hewlett Foundation and its Madison Initiative at hewlett.org.

How is your foundation using media to communicate important issues? Let us know. Email MIF Communications Director Nina Sachdev at nina@mediafunders.org.