How central are social and search platforms in spreading propaganda and misinformation—and what can be done to improve the quality of online information? To explore these questions, Kelly Born of the William and Flora Hewlett Foundation hosted the second in a series of webinars about how funders can help fight “fake news.” (Read the recap of the first webinar here.)
Born, who is working with Media Impact Funders to organize this webinar series, has been taking a deep look at whether and how social media exacerbates the spread of inaccurate and inflammatory information—in turn, increasing polarization and weakening democracy. In collaboration with nonprofit strategy consultant Nell Edgington of Social Velocity, Born interviewed 45 leaders from academia, digital media platforms, think tanks, and civil society organizations to surface philanthropic opportunities for mitigating the current “information problem.” They published their findings in early November.
While disinformation and propaganda are long-standing problems, Born identified several distinct elements of the current environment. Among other factors, these include the democratization of information creation, the social nature of information sharing, the shift of news production from brands to individuals, the role anonymity plays in allowing for both bots and foreign intervention, and the increasing level of content personalization and microtargeting.
As a result, the analysis concludes, the social platforms are “the most critical point for intervention.” This conclusion was underscored by the recent Congressional hearings on the role of Facebook, Google and Twitter in spreading Russian propaganda.
However, much is still not known about the dynamics of social sharing, the impact of disinformation on the democratic process, or people’s psychological responses to biased or sensational information. Born and Edgington observe that there is a real need “to create a comprehensive research agenda and data infrastructure that allows access to and interconnection among the data and those with the ability to gain insight from it.”
To shed light on these issues, researchers working in this space were invited to share their recent findings and research challenges. (Watch the second webinar in full, below, and register here for our third webinar on Dec. 14.)
The researchers included:
Josh Tucker, the co-director of NYU’s Social Media and Political Participation Lab (SMaPP), spoke about the shift from understanding the internet as a tool for liberation to seeing it as a threat to democracy. While political and media theorists had previously framed social media as giving voice to perspectives excluded from the mainstream in autocratic societies, he explained, it has now become clear that those same tools can amplify voices previously seen as “out of bounds” in democracies. And as authoritarian regimes learn to counter organizing across social platforms with censorship, or with responses that incite fear or distract audiences, those same methods can be adopted by disruptive actors in democracies. As a result, he suggested, rather than conceptualizing the web as a tool for either liberation or repression, it should be seen as a more neutral arena for negotiating democracy, for good or ill.

One of the biggest questions researchers are examining right now, he said, is whether or not the web is truly an “echo chamber.” He showed data visualizations of how information was shared via retweets during the 2012 election, noting that clear patterns of polarization emerged. This clustering stands in contrast to online sharing of information around less politicized topics, such as the Oscars. Sometimes online discussions—such as the one surrounding the Sandy Hook Elementary School shooting in Newtown, CT—include more sharing of information across echo chambers initially, but become more polarized as the topic shifts from public sympathy toward policy solutions, as in the case of gun control laws.

Researchers at the SMaPP Lab have also been examining whether online hate speech has increased over the last few years, and which Facebook users are most likely to share misinformation. “There’s lots to learn,” said Tucker.
Jonathan Albright, the research director at the Tow Center for Digital Journalism, noted that researchers are experimenting with many different methods for understanding the behaviors of users and content producers on the web and on social platforms. In tracing the origins and sharing of misinformation, he found that YouTube is a popular platform for those seeking to spread falsehoods. Wikipedia was also under attack from sources sharing unreliable information. As participatory sites that rely on users to contribute content, both platforms were “colonized” by partisans. He also noted that bots and fake accounts such as the Twitter account @TEN_GOP have had success in moving content and storylines into mainstream outlets, thereby considerably amplifying their reach. This in turn helps to shape opinion and news agendas. Such accounts are also working to connect with people on a personal basis, and then deepen those interactions. Read more about Albright’s analysis of this “micro-targeting” strategy here.
Nate Persily, a professor of law at Stanford University, spoke about the ways in which the internet is testing our assumptions about whether the “marketplace of ideas” is the best method for arriving at the truth. He spoke about a variety of potential platform remedies for false or malicious online information, including:
- Deletion of offending material;
- Disclosure of the source;
- Demotion of information to decrease visibility;
- Delay in publishing to slow down “velocity” of viral communications until they can be verified;
- Dilution of or distraction from bad content with more credible content;
- Deterrence through raising the costs of bad speech, such as taking away ad revenue or retaliating against foreign actors.
Bad information is an old story, Persily noted; what’s new is the velocity, virality and anonymity. Filter bubbles and echo chambers online are also new—or at least newly visible. Information monopoly and political control online are also posing new challenges.
Three big questions, Persily said, are what effect disinformation is actually having on people’s attitudes and behavior, whether online echo chambers are more influential than offline ones, and which interventions make a difference. There is no shortage of opportunity for experiments—but the need to fund them is great.
In the Q&A, the researchers spoke about the difficulty of finding funding to examine these emergent dynamics, and the efficacy of proposed solutions. They also spoke to challenges in accessing the large datasets owned by these private companies. Twitter is currently overrepresented in research because it makes user data available, said Albright, but there’s much still to understand about how users and content producers are interacting with Facebook and Google, as well as smaller players like Snapchat, Instagram and Pinterest. Individual researchers are in a weak position to demand data access from the platforms, but funders and others might band together to encourage more transparency.
They also discussed the ethics of studying and experimenting on social media users. “You’re already being experimented on all the time” by the platforms themselves, observed Persily. He noted that at least universities have institutional review boards. Tucker suggested that researchers are providing a public service by showing how the platforms they use every day are benefiting from and allowing others to manipulate user data and social content. “We’re all being tested every time we look at a post,” said Albright.
Interested to discover other ways that funders are supporting responses to misinformation? Join us for the third webinar in the series, which will examine ways to improve audience consumption of information, including fact-checking and media literacy.