New fuel was added to the moral panic around social media last week when the New York Times published an article purporting to show that YouTube was responsible for a rightwing takeover of an entire country, Brazil. The new theme being pushed is that social media algorithms produce “online radicalization.” The reporters who produced the article exploit the fears generated by recent mass shootings, which have been attributed to the fact that the mass murderers could be found in online groups. Based on this shaky premise (who doesn’t go online?), the reporters asked themselves, “Could the consequences of online radicalization go beyond a few extremists and … radicalize large swaths of an entire society?”

In essence, the NYT reporters are pushing an audacious (and very simplistic) theory of media effects. YouTube is held responsible for Bolsonaro’s election. Its recommendation algorithms are said to be skewing politics to the right. They claim that their conclusions are supported by “research.” And indeed, some researchers are eagerly linking themselves to these highly publicized claims. But is there any substance behind them?

Where’s the research?

We looked into it. The first thing we discovered was that the NYT’s conclusions about algorithms and social media are presented as products of “research” when very little – and certainly none of the material relating to Brazil – is published in peer-reviewed scientific journals. What we have is a lot of blog posts, opinion pieces, newspaper articles, interviews and speaking engagements with experts, and spoken presentations. This old media critique of new social media platforms is on tenuous scientific ground.

Insofar as casual forms of research exist and can be examined, the published products are interesting, but do not actually prove what the critics claim they prove. What one finds are two types of studies. One type looks at the linkages between YouTube videos and the recommendations one receives after one watches them. This research merely produces a social network graph showing how different groups cluster in the social media space. And what can be concluded from this? A Berkman-Klein Center researcher writes:

“YouTube’s algorithms are not creating something that is not already there. These channels exist, they interact, their users overlap to a certain degree. YouTube’s algorithm, however, connects them visibly via recommendations.”

So all we have here, really, is a claim that YouTube efficiently connects content. This data suggests that algorithmic recommendations would bring together ANY political, ideological or identity grouping, left, right or center. It proves nothing about the social or psychological effects of the platform or its impact on political outcomes.
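
To see what this first type of study actually produces, here is a minimal sketch, using the networkx library and a handful of hypothetical channels and recommendation links (none drawn from any real dataset), of how such a co-recommendation graph is built and clustered:

```python
# Sketch of the first type of study: build a graph from observed
# "A recommends B" pairs and look for clusters. The edge list below is
# hypothetical; a real study would collect thousands of such pairs.
import networkx as nx
from networkx.algorithms import community

observed_recommendations = [
    ("right_channel_1", "right_channel_2"),
    ("right_channel_2", "right_channel_1"),
    ("left_channel_1", "left_channel_2"),
    ("left_channel_2", "left_channel_1"),
    ("right_channel_1", "mainstream_news"),
    ("left_channel_1", "mainstream_news"),
]

G = nx.DiGraph()
G.add_edges_from(observed_recommendations)

# Community detection on the undirected projection -- this is what yields the
# familiar picture of ideological "clusters" connected by recommendations.
clusters = community.greedy_modularity_communities(G.to_undirected())
for i, members in enumerate(clusters):
    print(f"cluster {i}: {sorted(members)}")
```

Nothing in this computation measures audience size or persuasion; it only makes visible clusters that already exist in the uploaded content.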

The other type of study tries to track the results of recommendation algorithms by generating queries and recording the results. This yields some potentially useful data, but nothing serious has been done with it. The Guardian reporters produced a casual and theoretically uninformed content analysis of this data which, as we show below, illustrates why we need to move beyond amateur hour in this discussion.
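
For concreteness, here is a minimal sketch of what this second type of study involves, assuming a hypothetical get_recommendations() helper that stands in for whatever collection method (scraping, an instrumented browser, or an API) a real project would use; the point is simply that queries are issued and the returned recommendations are logged for later analysis:

```python
# Sketch of the second type of study: issue queries and record what comes back.
# get_recommendations() is a hypothetical placeholder; canned values keep the
# example self-contained and runnable.
import csv
from datetime import datetime, timezone

def get_recommendations(query: str) -> list[str]:
    # Placeholder for real data collection (scraping, instrumented browser, API).
    canned = {
        "Trump": ["video_a", "video_b", "video_c"],
        "Clinton": ["video_d", "video_e"],
    }
    return canned.get(query, [])

def record_run(queries: list[str], out_path: str = "recommendations.csv") -> None:
    """Log every recommendation returned for each query, with rank and timestamp."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_utc", "query", "rank", "recommended_video"])
        for query in queries:
            for rank, video in enumerate(get_recommendations(query), start=1):
                writer.writerow([datetime.now(timezone.utc).isoformat(), query, rank, video])

record_run(["Trump", "Clinton"])
```

Data collected this way can show what the algorithm served up for particular queries at particular times; by itself it says nothing about who watched the recommendations or what effect they had, which is why the quality of the content analysis built on top of it matters so much.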

Neither type of study examines how many people actually watch YouTube compared to other media sources, nor how influential it is in the user’s overall information consumption.

When examined carefully and critically, these investigations do hint at two conclusions: 1) The right has made more active use of YouTube – that is, uploaded more material and paid more attention to it – than their opponents. And by “the right” we mean the illiberal right; the nationalist, paranoid, culturally conservative, anti-immigration, anti-trade, anti-globalization right of Trump, Bolsonaro, etc. 2) Those who are searching for or predisposed toward finding extreme or conspiratorial content of any kind – right or left or crazy – will be able to find it.

Old Media, Old Theory

The simplistic Times stories do not make a helpful contribution to the policy challenges of social media. By implying that if people are exposed to false or extremist content, they will inevitably be “radicalized” by it, they are reviving the old “silver bullet” media effects theory, which equates exposure and effect and invites censorship. The argument suggests that platforms’ ability to locate and aggregate communities of people is inherently dangerous. But this is in many ways an attack on free expression and free association. Although these writers often invoke “democracy” and the “threat to democracy” posed by social media, we see a profoundly undemocratic attitude underlying that critique. The argument lends itself to making the platforms into arbiters of what is true and what is false, what is worthy of public discussion and what is not. They want the platforms’ algorithms and policies to be modified to push certain ideologies and movements they do not approve of outside the bounds of acceptable discussion and exposure. They want more accounts to be eliminated and more people to be de-platformed. Underlying this demand is a palpable fear that the masses of people are irrational and easily misled, and that normal forms of countering false information or educating people will no longer work in today’s environment.

Media as competitive tools

We take a different view of social media. It is based on an understanding of media forms and technologies as situated in an environment of political and economic competition. We accept the classical liberal idea of a marketplace of ideas which should be allowed to operate freely. But we recognize that it is not just the ideas that compete; there is also intense competition around the techniques, channels and methods used to promote contending ideas. There is some recognition of this amongst the more sophisticated critics of social media. Danah Boyd, in a speech before the Knight Foundation that bears watching, stated that whenever a new medium gains power, there are people who learn how to exploit the unique capabilities of that medium to advance their views. She analyzes the way user perceptions, algorithms, content and comments interact to influence the information environment that social media users find themselves in. Boyd draws on cybersecurity concepts to argue that actors “leverage vulnerabilities in the information landscape to achieve certain goals.” For example, they take advantage of certain search terms or linkages to gain exposure for their ideas. Boyd also shows some appreciation of the power dynamics associated with platforms or governments trying to manipulate recommendation algorithms:

“We talk about what it means to be exposed to the other side. But researchers struggle with this because often people self-segregate regardless of what the algorithms recommend. They push and they double down on a world that’s just like them. So when is it appropriate for a recommendation engine to provide new information? Is that actually helping to inform people, or is that proselytizing? Is that in itself an act of power that needs to be contested?”

Of course, the answer to the last question is yes.

Case study in differential impact

The 2016 election provides a good example of the media-as-competitive-tool approach. According to The Guardian’s “content analysis” of the Guillaume Chaslot data, someone searching for either “Clinton” or “Trump” on YouTube in 2016 would have received a set of video recommendations distributed as follows:

  • 33% neutral or unrelated to the election
  • 55% beneficial to Trump
  • 16% beneficial to Clinton.

The Guardian reporters jump to the conclusion that “YouTube’s algorithm distorts truth.”[1] But a more careful look at the data suggests that this imbalance is rooted in something else. When the reporters “searched the entire database to identify videos of full campaign speeches by Trump and Clinton, their spouses and other political figures” they got these quantitative results:

Number of full campaign speech videos found, by speaker:

  1. Donald Trump (382 videos)
  2. Barack Obama (42 videos)
  3. Mike Pence (18 videos)
  4. Hillary Clinton (18 videos)
  5. Melania Trump (12 videos)
  6. Michelle Obama (10 videos)
  7. Joe Biden (42 videos)

The significance of this lopsided tally seems to be lost on The Guardian reporters. Yet it explains a lot about the imbalances found in the recommendations and a lot about the source of the left’s social media critique. It shows that Trump supporters were far more likely to upload content to YouTube than Hillary supporters. Trump speech uploads exceeded Hillary uploads at a rate of more than 20 to 1! If Trump supporters are 20 times more likely to upload material to YouTube, is it surprising that about half of the recommended videos on the topic were anti-Hillary or pro-Trump? If anything, the algorithmic recommendations reduced the overall imbalance in material.
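
The back-of-the-envelope arithmetic behind that last point uses only the figures quoted above (382 Trump speech videos versus 18 Clinton speech videos, and the 55% versus 16% recommendation split); the comparison is rough, since speech counts and recommendation shares are different slices of the data, but the direction is clear:

```python
# Ratios computed from the figures reported above (Guardian / Chaslot data).
trump_speech_videos = 382
clinton_speech_videos = 18
upload_ratio = trump_speech_videos / clinton_speech_videos  # ~21 to 1

pro_trump_share = 0.55    # share of recommendations beneficial to Trump
pro_clinton_share = 0.16  # share of recommendations beneficial to Clinton
recommendation_ratio = pro_trump_share / pro_clinton_share  # ~3.4 to 1

print(f"imbalance in uploaded speeches:  {upload_ratio:.1f} : 1")
print(f"imbalance in recommended videos: {recommendation_ratio:.1f} : 1")
# The recommended set is far less skewed than the uploaded supply, which is
# the sense in which the recommendations reduced the imbalance in material.
```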

From historical evidence we know that media has a differential influence on politics and society when one group from among a contesting set of groups embraces and actively utilizes a powerful new media form before the other groups do. The effect comes not from the medium per se, but from differences in its utilization. The early adopting group obtains a (temporary) competitive advantage over its rivals. (As an example, think of Protestants embracing the printing press in their contest with Catholics.) It seems evident that the illiberal right has embraced and utilized YouTube in a way that other political tendencies have not. (Just as Obama’s early embrace of certain forms of online organization gave him an advantage in 2008.)

The contestation over social media and algorithms is becoming subordinate to the culture wars between left and right. The irony is that both the left and the right feel threatened and on the defensive. Old media liberal bastions such as the NY Times and UK Guardian are frightened by the ability of social media to give voice to populist messages and non-mainstream views, and wish to blame downturns in their political fortunes on manipulation of the public via social media. Trump and rightwing forces see both the traditional media and the new social media as biased against them, more prone to suppressing their content and de-platforming their spokespersons. Both want to regulate social media.

Endnote:

[1] Strictly speaking, an algorithmically generated recommendation cannot “distort truth.” A list of recommended videos does not say “climate change is real” (or a myth); it does not make a logical proposition that has a truth value. When The Guardian claims that YouTube’s algorithm distorts truth, what they are really saying is that “YouTube’s algorithm does not exclusively recommend videos that contain messages we regard as true.” The only proposition a recommendation can be said to approximate is the claim that “you [the viewer] are likely to be interested in these videos.” That implicit proposition may well be true or untrue for different users. If it is untrue, however, the user doesn’t watch the recommended videos. If it is true, the user does. So YouTube is “punished” mildly when it is wrong, and benefits when it is right.

2 thoughts on “More Debate about Social Media’s Impact on Society”

  1. Curiously dismissing eyewitness/victim testimony; ignoring YouTube’s claim that they have data to contest the results and to suggest an alternative research methodology, which they then refuse to share. Why? And yet… it’s the academics’ fault. Yeah, tell that to Prof. Diniz. “When the university where Ms. Diniz taught received a warning that a gunman would shoot her and her students, and the police said they could no longer guarantee her safety, she left Brazil.”
    “The YouTube system of recommending the next video and the next video,” she said, had created “an ecosystem of hate.”
    “I heard here that she’s an enemy of Brazil. I hear in the next one that feminists are changing family values. And the next one I hear that they receive money from abroad,” she said. “That loop is what leads someone to say ‘I will do what has to be done.’”
    “We need the companies to face their role,” Ms. Diniz said. “Ethically, they are responsible.”
    Lee again: of course further research is necessary, and of course media effects are complex and move in various directions, reflecting wider societal forces. And does that make YouTube saying, oops, yeah, we see we have been algorithmically fueling anti-Zika vax hysteria threatening public health, our bad, OK? But hoo-ray, YouTube says they will fix -that- so all is good. Of course content regulation is tricky and generally to be discouraged; platform regulation on the other hand… ;)

  2. The quantities uploaded are vastly different. But how much of the uploading was automated by bots, rather than uploaded by human supporters? That is something that has been studied.
