Last week, a 3-day conference was held at Harvard University's Belfer Center focused on fortifying election security and digital democracy. IGP contributors Karim Farhat and Karl Grindal competed in an Information Operations hackathon, winning first place in the policy category. The panel of judges comprised former Secretary of Defense and Belfer Center Director Ash Carter and former Pentagon "cyber czar," current Defending Digital Democracy Project (D3P) Director, and Belfer Center Co-Director Eric Rosenbach.
The motivation for this project was to find a solution that maintains the US's commitment to an open Internet while protecting elections from foreign interference. The 2016 Russian information operation campaign had three distinct tactics: sockpuppets, strategic leaks, and political advertising. While these strategies represent the present-day challenge, our hackathon proposal sought to prepare for a future where bot-based communications amplify existing capabilities.
What follows is an overview of these recommendations. While Facebook has preemptively committed to adopting many of these recommendations, what is legally required of them and other online platforms is another question entirely. This proposal addresses shortcomings in the Honest Ads Act and the Draft FEC Strategic Plan, FY 2018-2022.
Justification
Why these proposals? It's important first to diagnose the problem. Information operations use distinct tactics, and by tailoring responses to those specific actions we aim to address the most significant dimensions of the campaign.
These proposals are designed so that if replicated by any other state, democratic or authoritarian, the response would be consistent. Were the United States to adopt this proposal, it would set a global norm for campaign transparency, as these policies could easily be adopted by other countries. Countries with overly burdensome election regulations may lack the market power to shape novel disclosure requirements. These provisions would inhibit information operations while protecting innovative platforms and free-expression rights.
Election advertising
Campaigns are expensive. In the past, much of this money went to television broadcasting. Yet estimates suggest almost $1 billion in campaign spending went toward online ads in 2016, and this number will certainly grow. Although broadcasters are already regulated when it comes to election advertising, social media platforms are not. Campaign ad spending consequently provides a unique point of regulatory leverage.
We propose that federal regulations on political advertising apply only to communication platforms that accept Federal Election Commission (FEC) regulated advertisements from political campaigns and Political Action Committees (PACs). Existing broadcast regulations provide ready-made definitions and precedent. Startups and companies that prioritize anonymity over verified identity can choose to be exempt, but will lose out on political advertising.
There are different rules for broadcasters, cable, and satellite, but the political file kept on hand for purchased ads is the same. Our proposed provision would apply only to direct political electioneering, i.e., excluding opinion and issue-based campaigns. Political ads are those that directly mention a candidate, legislation, or a particular party. For instance, an ad stating "I love polar bears" is fine, but "vote Clinton because she likes polar bears" qualifies as direct political electioneering.
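To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of screening rule a platform might apply. The candidate and party lists are invented for illustration; a real compliance system would need authoritative rosters of candidates, parties, and legislation, plus human review, rather than simple keyword matching.

```python
# Hypothetical, illustrative lists; not authoritative data.
CANDIDATES = {"clinton", "trump"}
PARTIES = {"democratic party", "republican party"}

def is_direct_electioneering(ad_text: str) -> bool:
    """Rough heuristic: an ad is 'direct electioneering' if it names a
    candidate or party. Issue ads ("I love polar bears") are excluded.
    A full rule would also match specific legislation."""
    text = ad_text.lower()
    mentions_candidate = any(name in text for name in CANDIDATES)
    mentions_party = any(party in text for party in PARTIES)
    return mentions_candidate or mentions_party

print(is_direct_electioneering("I love polar bears"))                          # False
print(is_direct_electioneering("Vote Clinton because she likes polar bears"))  # True
```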
These broadcaster regulations should be extended to online electioneering. They include a transparency mandate (disclosing who is buying ads), domestic-origin requirements, and a non-discriminatory pricing policy. The last is especially relevant given Facebook's ad auction environment: Wired reported that Facebook's ad auction algorithm effectively sets lower prices for divisive and inflammatory content because of its higher click-through rate. In contrast, broadcasters must offer the same price to all political ads.
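To illustrate why the pricing rule matters, below is a toy model of an engagement-weighted auction. This is not Facebook's actual algorithm, and the numbers are invented; it only shows the general dynamic in which rank is computed as bid times predicted click-through rate, so an inflammatory ad with higher engagement can outrank a neutral ad while bidding far less.

```python
# Simplified, illustrative model of an engagement-weighted ad auction.
# Rank = monetary bid * predicted click-through rate (CTR).

def auction_rank(bid_usd: float, predicted_ctr: float) -> float:
    return bid_usd * predicted_ctr

neutral_ad  = auction_rank(bid_usd=5.00, predicted_ctr=0.01)  # rank = 0.05
divisive_ad = auction_rank(bid_usd=2.00, predicted_ctr=0.03)  # rank = 0.06

# The divisive ad outranks the neutral one while bidding 60% less;
# this is the dynamic a non-discriminatory pricing rule would target.
print(divisive_ad > neutral_ad)  # True
```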
The above disclosure regulations would include sockpuppets and bots as explained in the subsequent sections.
Sockpuppets
We define sockpuppets as a network of fraudulent online identities intended to spread the same message. These fake identities tend to create 'echo chamber' effects. Anonymity is not inherently wrong and has legitimate uses: it is frequently employed for parody accounts and by dissidents in oppressive regimes. From the perspective of election campaign law, however, sockpuppets can be construed as a fraudulent form of in-kind contribution that violates election law and should therefore be monitored by the FEC.
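As a rough illustration of what such monitoring might look for, the sketch below flags one simple coordination signal: many distinct accounts posting identical text. The data and threshold here are invented; real detection would also weigh posting times, account age, and paraphrased variants of the same message.

```python
from collections import defaultdict

# Invented sample data for illustration only.
posts = [
    ("acct_a", "Candidate X betrayed us all"),
    ("acct_b", "Candidate X betrayed us all"),
    ("acct_c", "Candidate X betrayed us all"),
    ("acct_d", "Lovely weather today"),
]

# Group accounts by the exact message they posted.
accounts_by_message = defaultdict(set)
for account, text in posts:
    accounts_by_message[text].add(account)

# Flag messages pushed by 3+ distinct accounts as a possible network.
suspected_networks = {
    msg: accts for msg, accts in accounts_by_message.items() if len(accts) >= 3
}
print(suspected_networks)
```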
An army of sockpuppets masquerading as genuine political discourse may be more effective in swaying elections than a third-party ad buy. An organization that promotes a candidate by secretly paying sockpuppets to lambast the opposing candidate would, in effect, be in violation of federal law. In Citizens United v. FEC, the Supreme Court endorsed disclosure, stating that it "is a less restrictive alternative to more comprehensive regulations of speech."
Consequently, this provision would segment the market between platforms that sell anonymity and those that sell credibility to their users. It may also expand the already established norm of verified identity, which has obvious consequences for anonymous speech. Today's leading social media platforms, such as Facebook, already require real identity, which would make this kind of regulation less burdensome. Clearly, the monitoring of sockpuppets might make some platforms less welcoming to dissidents. However, an opt-in mechanism is better for anonymous speech than, say, a blanket provision applying to all public communication. With a few short lines of contractual language, platforms that value anonymity, such as Reddit and Imgur, could clarify that campaigns and PACs are not eligible to post advertisements. Since platforms have an incentive-based choice about opting in, users will also have the choice to use the service that best suits their preferences.
Bot identifier
In the future, sockpuppets will be nothing compared to bots. A human operator can only manage so many sockpuppet accounts, but the process of posting disinformation, including fake videos (enabled by tools built on Google's open-source machine-learning framework TensorFlow), will become increasingly automated and large-scale. We propose that a bot identifier be developed to provide transparency: human users should be able to distinguish between communicating with a human and communicating with a machine. This identifier scheme could be created in collaboration with industry and applied only to FEC-regulated platforms. A bot identifier tag could find a broader audience as a norm, especially since bots are increasingly able to pass the Turing test.
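As a sketch of what such a scheme could look like, the snippet below attaches a hypothetical machine-readable disclosure tag to a post. The field names are our own invention, not an existing standard; the point is only that automated status, the responsible operator, and an auditable registration ID could travel with every message.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical disclosure tag an FEC-regulated platform could attach to
# every post; the schema below is invented for illustration.
@dataclass
class BotDisclosure:
    is_automated: bool   # message generated or scheduled by software
    operator: str        # registered entity responsible for the bot
    registration_id: str # platform-issued identifier for audit

post_metadata = {
    "text": "Polling places close at 8pm!",
    "disclosure": asdict(BotDisclosure(
        is_automated=True,
        operator="Example PAC",
        registration_id="bot-000123",
    )),
}
print(json.dumps(post_metadata, indent=2))
```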
Strategic leaks
The principal challenge faced by the DNC throughout the 2016 election was not Russian sockpuppets, but rather establishment media reporting on documents shared through Wikileaks and DCLeaks. By using anonymizing third parties, the Russian espionage campaign strategically released selective but factual information about DNC activities to exploit America's constitutional prohibition on prior restraint of the media. Established in the 1931 Supreme Court case Near v. Minnesota, this protection of press freedoms supports the Fourth Estate. The news media is right to investigate alleged abuses, but in doing so it also sets the agenda for national discourse.
With respect to strategic leaks, a culture change is needed within the American media.
At the core of the problem is the economic incentive journalists have to publish breaking news first. So how do we reconcile disclosure norms with such a compelling force in the opposite direction? There is no proper government role here; the response should lie in self-regulation.
Ethical restraint by the media is needed when it comes to nation-state leaks. Organizations like the International Federation of Journalists, the National Writers Guild, and the Accrediting Council on Education in Journalism and Mass Communications ought to develop trade-specific codes of ethics to guide journalists in the use of leaks from foreign powers or other potential purveyors of disinformation. As national trust in the mainstream media has dropped in recent years, context about the substance and origins of leaks is important to maintain objectivity and retain public trust.
Concluding Remarks
Having identified a plan for strategic leaks, political ads, sockpuppets, and bots, we believe these provisions would inhibit information operations while protecting innovative platforms and First Amendment rights. We would like to thank Ishan Mehta, who made invaluable additions to this proposal.