As social media gains heightened influence over our lives, it becomes increasingly important that the news we receive online be accurate, trustworthy, and dependable. However, episodes such as “Pizzagate” and claims of Russian interference in the US presidential election have led more Americans than ever to question the accuracy of the daily news updates they receive through social media. Recent concerns over data privacy and national security have highlighted that “fake news” has begun to infiltrate our society in ways previously inconceivable.
In March 2018, a bombshell dropped when whistleblower Christopher Wylie publicly alleged that political data analytics firm Cambridge Analytica had collected private information from an estimated 87 million Facebook accounts and used this information, without Facebook’s authorization, to support President Trump’s 2016 election campaign. Facebook announced it had suspended the data analytics firm after discovering the firm had violated Facebook’s platform policies. The revelation was referred to as Facebook’s “worst crisis yet,” and Zuckerberg spent days testifying before Congress about what he knew of the privacy breach.
Cambridge Analytica obtained Facebook users’ information in 2014, when the firm was headed by Stephen K. Bannon, Trump’s former presidential advisor. The firm used the information to profile select US voters and target them with political advertising on social media. Unfortunately for Mark Zuckerberg and Facebook, these revelations came on the heels of allegations that Russian meddling in the 2016 election included the widespread distribution of fake political news on the social media platform. Robert Mueller, the Justice Department’s special counsel, is investigating Cambridge Analytica as part of his inquiry into Russian interference in the election. Fake news, Russian election interference, and an unauthorized data breach all sound like the juicy plot of a modern Bond film, but in reality it is the stuff of Zuckerberg’s, and many others’, legal nightmares.
So, what exactly is “fake news”?
The Brookings Institution defines fake news as content “generated by outlets that masquerade as actual media sites but promulgate false or misleading accounts designed to deceive the public.” It proclaims fake news is “especially problematic” in democratic systems of government. Others define fake news as “false information—purposefully deployed” that spreads quickly and persuades effectively. Many agree that if skillfully deployed, fake news may pose a “unique threat” to informed democracies, compounded by the fact that social media allows anyone to post content on a multitude of online platforms with maximum ease and minimal supervision.
How pervasive is this “fake news” problem?
In 2016 alone, one media analysis found the 20 largest fake news stories, the majority of which centered around the presidential election, generated over 8.7 million shares, reactions, and comments on social media (compared to 7.4 million interactions with the top 20 stories from legitimate news sites). Facebook has stated it suspects that political content from the Russian Internet Research Agency, the Russian organization involved in Mueller’s latest round of indictments, reached more than 125 million US citizens in the months preceding the election. In its own review of the US election, Twitter revealed more than 50,000 automated accounts (known as “bots”) on its site were linked to Russia. Russian interference has also been linked to the 2016 Brexit vote in the United Kingdom as well as French President Emmanuel Macron’s election last year. Clint Watts, a fellow at the Foreign Policy Research Institute and senior fellow at the Center for Cyber and Homeland Security at George Washington University, warned of the coordinated efforts Russian media propaganda employs to attempt to create tension and dissent in the U.S. and other Western democracies. “The goal,” said Watts, “is to erode trust in mainstream media, public figures, government institutions--everything that holds the unity of the Republic together.” 
What legal tools are available to address the spread of false political information?
In comparison to the United States, European jurisdictions have more leeway in regulating fake news or other undesirable content online. Since the 2016 US presidential election, the Brexit vote, and fears Russia backed efforts to influence both events, the European Commission has commenced the process of creating official policy guidelines on fake news which would be applicable throughout the entirety of the EU. The Commission defines fake news as “intentional disinformation spread via online social platforms, broadcast news media or traditional print.” It plans to investigate the challenges online platforms create for democracies and implement an EU-wide strategy to protect its citizens from the spread of false information.
The EU has also taken a more proactive stance than the US government in encouraging social media companies to self-regulate the content posted on their sites, including getting Facebook, Twitter, and Google to agree to new codes of conduct in 2016 that target hate speech and other undesirable content.
The Commission makes a distinction between the spread of false information, which may already be illegal under existing EU or domestic laws, and fake news, which may fall outside the scope of such laws but may still be subject to regulation. In stark contrast to the US, hate speech is illegal and strictly regulated throughout the EU.
Within the EU, Germany has been a pioneer in the movement to restrict the spread of false information and hate speech within its borders. Last October, a law went into effect under which Facebook, Twitter, and other social media companies can be fined up to €50 million ($57 million USD) if they fail to remove illegal content from their sites within 24 hours of being notified. The ban applies to “illegal, racist, or slanderous comments.” The law is Germany’s attempt to combat hate speech and fake news, both of which have increased with the arrival of more than a million refugees over the last several years.
In France, President Emmanuel Macron plans to introduce a law combating fake news by the end of the year. Specifics are not yet available, but the legislation would grant judges emergency powers to remove or restrict content that appears to be fake in the sensitive period leading up to an election. The law would also require more transparency for sponsored content and allow France’s national media watchdog to combat attempts by foreign-financed media organizations to destabilize an election.
Ireland is working on a law under which using bots to influence political debate could carry a sentence of five years in jail or up to €10,000 in fines. Actively promoting fake news on social media would become a punishable offense. The legislation would also regulate online political advertising and require ads to contain a transparency notice stating their aim and target audience. With this legislation, Ireland aims to address the rise of fake online accounts and “anti-democratic online campaigns.”
The UK does not yet have legislation in the works to combat fake news. However, a parliamentary committee has commenced an inquiry into the online distribution of fake news and false information. The committee has also threatened Facebook and Twitter with sanctions if the sites continue to reject requests to release information about possible Russian interference in the Brexit vote. The UK may be pushed to the forefront of the online content regulation debate as investigations continue into Cambridge Analytica, the Facebook data privacy breach, and concerns the firm may also have been involved in the Brexit vote to leave the EU.
What role can the US government play in restricting fake news?
While the EU works to combat the spread of fake news, the American government is far more limited in the actions it may take to restrict online speech. In the US, the First Amendment makes it unconstitutional for the government to restrict most speech, even false speech. Political speech in particular receives the highest level of First Amendment protection. In the landmark free speech case New York Times v. Sullivan, the Supreme Court held that political speech garners the strongest First Amendment protections, including speech that is potentially false. Only very narrow categories of speech are unprotected by the First Amendment (and are therefore open to government regulation); these include obscenity, fighting words, and defamation.
Because the US government is strictly limited in the types of speech it may restrict, the private sector must assume the vast majority of the responsibility for regulating online content. This self-regulation is governed by Section 230 of the Communications Decency Act, under which social media sites such as Facebook and YouTube are immunized from liability arising out of content posted by their users. Section 230 also provides immunity to sites that remove objectionable content, such as fake news or hate speech, making self-regulation an alternative to government regulation.
Congress enacted this legislation in 1996 to prevent interactive computer service providers (such as America Online, Inc. [AOL], a 20th-century precursor to Facebook and Twitter) from being inundated with lawsuits over content posted on their sites by third-party users, and the “chilled online speech” that could result. Congress endeavored to protect the Internet as a space in which political discourse was encouraged and where burdensome legislation would not restrict the Internet’s burgeoning growth.
The result is that the Internet has become a veritable Wild West of content management in which almost anything, at least somewhere, goes: In the US, at least, each private company polices its own content as it pleases, without the restrictions of government regulation or the threat of defamation lawsuits looming. In harmony with First Amendment jurisprudence, such as the marketplace of ideas, social media sites seem to take the stance that more content is better and that restrictions on speech should be kept to a bare minimum. Twitter has declared itself to be the “free speech wing of the free speech party;” Facebook says it is “in the business of letting people share stuff they are interested in;” and Reddit promotes itself as a “free speech site with very few exceptions.” This business philosophy, though mostly in line with American free speech values, is ill suited to combating the proliferation of fake news and strategic misinformation that has plagued social media platforms and American society over the past several years.
The rise of fake news has made it much harder for us to trust online news sources. Although 93 percent of Americans say they receive news online, only 32 percent of Americans in 2016 said they had a “great deal or fair amount” of trust in the news (down from 53 percent in 1997). Further, there is a definitive partisan divide: only 14 percent of Republicans in a Gallup survey said the media report the news accurately, compared to 62 percent of Democrats. This stark erosion of trust in the media is problematic because, as the “fourth estate” in a democracy, the traditional media perform a variety of important functions. The media have been used for centuries to inform people about current events, present issues of public interest, serve as an intermediary between the government and citizens, and act as a watchdog to hold government accountable for its actions. The media are indispensable to a transparent and effective political system. A breakdown in public trust of the media can be seen as a breakdown in democracy itself.
So, what can we in the US do?
Any action by the American government to regulate false and misleading online content must strike a delicate balance between allowing freedom of expression and preserving the integrity of public discourse. Many press and free speech advocates argue the government has no place in regulating the fake news debate at all; others argue it should be left to the companies to regulate themselves. Still others call for a proactive approach in which the government works to create a legal framework through which social media websites are encouraged to monitor and restrict false political information posted to their sites. Whatever avenue is chosen, one thing is clear: Facebook, fake news, and the First Amendment seem here to stay.
[*] Ashley Smith-Roberts is an Associate Editor for the Denver Law Review, and a 2018 J.D. Candidate at the University of Denver Sturm College of Law.
 “Pizzagate” refers to the incident in which Edgar Welch drove from North Carolina to the Comet Ping Pong pizzeria in Washington, D.C. with an assault rifle under the false belief that Hillary and Bill Clinton were running a “pedophile sex ring” from the restaurant. This false news story had spread through social media sites such as Twitter, 4chan, and Reddit. Lee K. Royster, Fake News: Political Solutions to the Online Epidemic, 96 N.C. L. Rev. 270, 270 (2017).
 Sarah Emerson, Mark Zuckerberg: ‘It Was My Mistake’ Facebook Compromised Data of 87 Million Users, Motherboard (Apr. 4, 2018, 3:44 PM), https://motherboard.vice.com/en_us/article/7xdw99/mark-zuckerberg-it-was-my-mistake-facebook-compromised-data-of-87-million-users.
 Matthew Rosenberg, Nicholas Confessore & Carole Cadwalladr, How Trump Consultants Exploited the Facebook Data of Millions, The New York Times (Mar. 17, 2018), https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html; David Ingram & Peter Henderson, Trump Consultants Harvested Data from 50 Million Facebook Users: Reports, Reuters (Mar. 16, 2018, 8:31 PM), https://ca.reuters.com/article/technologyNews/idCAKCN1GT02Y-OCATC?utm_source=34553&utm_medium=partner (stating the Trump campaign hired data firm Cambridge Analytica in June 2016 for more than $6.2 million, according to Federal Election Commission records).
 Reuters, Facebook Suspends SCL, Cambridge Analytica for Violating Policies (Mar. 16, 2018, 10:21 PM), https://www.cnbc.com/2018/03/16/reuters-america-facebook-suspends-scl-cambridge-analytica-for-violating-policies.htm (“Facebook said [a researcher it had worked with, Aleksandr Kogan] gained access to the information ‘in a legitimate way’ but ‘he did not subsequently abide by our rules,’ saying that by passing information to a third party, including SCL/Cambridge Analytica and Wylie of Eunoia, ‘he violated our platform policies.’”).
 Sheera Frenkel & Kevin Roose, Zuckerberg, Facing Facebook’s Worst Crisis Yet, Pledges Better Privacy, The New York Times (Mar. 21, 2018), https://www.nytimes.com/2018/03/21/technology/facebook-zuckerberg-data-privacy.html?.
 Zach Wichter, 2 Days, 10 Hours, 600 Questions: What Happened When Mark Zuckerberg Went to Washington, The New York Times (Apr. 12, 2018), https://www.nytimes.com/2018/04/12/technology/mark-zuckerberg-testimony.html.
 EU vs Disinfo, Figure of the Week: 50 Million (Mar. 20, 2018), https://euvsdisinfo.eu/figure-of-the-week-50-million/.
 Rosenberg et al., supra note 3.
 Id.; see Carole Cadwalladr, ‘I Made Steve Bannon’s Psychological War Tool’: Meet the Data War Whistleblower, The Guardian (Mar. 18, 2018, 5:44 AM), https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump (highlighting the extent of Cambridge Analytica’s political reach, stating “Russia, Facebook, Trump, Mercer, Bannon, Brexit. Every one of these threads runs through Cambridge Analytica.”).
 Rosenberg et al., supra note 3.
 Darrell M. West, How to Combat Fake News and Disinformation, The Brooking Institution (2017), https://www.brookings.edu/research/how-to-combat-fake-news-and-disinformation.
 Nabiha Syed, Real Talk about Fake News: Towards a Better Theory for Platform Governance, 127 Yale L.J. F. 337, 337 (2017-2018); West, supra note 11.
 West, supra note 11.
 Adrian Chen, What Mueller’s Indictment Reveals about Russia’s Internet Research Agency, The New Yorker (Feb. 16, 2018), https://www.newyorker.com/news/news-desk/what-muellers-indictment-reveals-about-russias-internet-research-agency.
 The Computational Propaganda Project, Polarization, Partisanship and Junk News Consumption over Social Media in the US, at 1 (2018), http://comprop.oii.ox.ac.uk/research/polarization-partisanship-and-junk-news/.
 Jan van der Made, Russian Outlets Sparked Macron’s Fake News Law Plan, Analysts, RFI (Jan. 4, 2018), http://en.rfi.fr/europe/20180104-france-fake-news-law-macron-russia-angry-deny-sputnik-rt.
 See Jill Dougherty, The Reality Behind Russia’s Fake News, CNN (Dec. 2, 2016, 9:25 AM), https://www.cnn.com/2016/12/02/politics/russia-fake-news-reality/index.html; Andrew Weisburd, Clint Watts & Jim Berger, Trolling for Trump: How Russia is Trying to Destroy Our Democracy, War on the Rocks (Nov. 6, 2016), https://warontherocks.com/2016/11/trolling-for-trump-how-russia-is-trying-to-destroy-our-democracy/ (“Globally, the implications of Russia’s social media active measures are dire. Social media has played a key role in controversial decisions and in politics and including those of France, Estonia, and Ukraine. In heated political contests such as Brexit and the U.S. presidential election, Russian social media active measures could tip the balance of an electoral outcome by influencing a small fraction of a voting public.”).
 Dougherty, supra note 19.
 European Commission, Policy: Fake News (2017), https://ec.europa.eu/digital-single-market/en/fake-news.
 Anya Schiffrin, How Europe Fights Fake News, Columbia Journalism Review, Oct. 26, 2017, https://www.cjr.org/watchdog/europe-fights-fake-news-facebook-twitter-google.php.
 European Commission, supra note 21.
 Schiffrin, supra note 24 (“Even when they do take action, the European approach differs greatly from that of the US, where speech, even what Europeans would define as hate speech, is protected by the First Amendment.”); Snyder v. Phelps, 562 U.S. 443, 460–61 (2011) (“Speech is powerful. It can stir people to action, move them to tears of both joy and sorrow, and—as it did here—inflict great pain... As a Nation we have chosen a different course—to protect even hurtful speech on public issues to ensure that we do not stifle public debate.”); Pen America, Faking News: Fraudulent News and the Fight for Truth 24 (“The court has repeatedly affirmed the First Amendment’s protection of hateful and offensive speech, even by extremist groups such as the American Nazi Party and the Ku Klux Klan.”).
 Schiffrin, supra note 24.
 European Commission, supra note 21.
 West, supra note 11.
 Seven days would be allowed for companies to decide whether to block more ambiguous content. Schiffrin, supra note 24.
 West, supra note 11.
 Schiffrin, supra note 24.
 James McAuley, France Weighs a Law to Rein in “Fake News,” Raising Fears for Freedom of Speech, The Washington Post, Jan. 10, 2018, https://www.washingtonpost.com/world/europe/france-weighs-a-law-to-rein-in-fake-news-raising-fears-for-freedom-of-speech/2018/01/10/78256962-f558-11e7-9af7-a50bc3300042_story.html?utm_term=.75f861b53e13.
 van der Made, supra note 18.
 McAuley, supra note 33. Experts state Macron aims to target Russian media operatives with this law.
 The Irish News, New Laws Propose Five Years in Prison for Spreading Fake News (Dec. 5, 2017, 1:00 AM), http://www.irishnews.com/news/2017/12/05/news/new-laws-propose-five-years-in-prison-for-spreading-fake-news-1202749/.
 The Week, Should the UK Adopt European-Style Fake News Laws? (Jan. 4, 2018), http://www.theweek.co.uk/90730/should-uk-adopt-european-style-fake-news-law.
 See id.
 Farrah Mukaddam, UK Government Seeks to Tackle the “Fake News” Problem, Norton Rose Fulbright (Mar. 2017), http://www.nortonrosefulbright.com/knowledge/publications/147055/uk-government-seeks-to-tackle-the-fake-news-problem (The inquiry is being conducted by the Culture, Media and Sport Committee).
 The Week, supra note 39.
 See The Straits Times, Facebook and Cambridge Analytica Sued in Data Storm (Mar. 22, 2018, 3:17 AM), http://www.straitstimes.com/world/united-states/facebook-and-british-political-consultancy-sued-in-data-storme; Luke Lythgoe, What Role, if Any, Did Cambridge Analytica Play In Brexit?, InFact (Mar. 20, 2018), https://infacts.org/role-cambridge-analytica-play-brexit/; see Cadwalladr, supra note 9.
 Pen America, supra note 26 (“The extent to which the shield of First Amendment protection extends to deliberate false speech is not fully settled. However, while the Supreme Court has repeatedly placed a low value on knowingly false statements of fact, it has declined to endorse the argument that even knowingly false statements stand outside of First Amendment protection.”).
 Buckley v. Valeo, 424 U.S. 1, 14 (1976) (“The First Amendment affords the broadest protection to such political expression in order ‘to assure [the] unfettered interchange of ideas for the bringing about of political and social changes desired by the people.’” (quoting Roth v. United States, 354 U.S. 476, 484 (1957)); McIntyre v. Ohio Elections Comm'n, 514 U.S. 334, 346 (1995) (“[Political speech] occupies the core of the protection afforded by the First Amendment.”).
 376 U.S. 254 (1964).
 Id. at 280. Under New York Times Co. v. Sullivan, a false statement about a public figure can give rise to liability only if it was made with actual malice, meaning with knowledge that it was false or with reckless disregard of whether it was false or not. The “actual malice” standard is generally a high bar to meet in defamation lawsuits.
 Pen America, supra note 26 (“The Supreme Court has carved out several narrowly defined exceptions to First Amendment protection, including fighting words, true threats, obscenity, and defamation. The Supreme Court has also permitted restrictions on speech judged likely to incite imminent violence.”).
 See Syed, supra note 13, at 339 (“First Amendment theory casts a long shadow, which even private communications platforms—like Facebook, Twitter, and YouTube—cannot escape . . . these three private platforms should be understood as self-regulating private entities, governing speech through content moderation policies.”).
 The Act states “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Communications Decency Act of 1996, 47 U.S.C. § 230 (2015).
 Joel Timmer, Fighting Falsity: Fake News, Facebook, and the First Amendment, 35 Cardozo Arts & Ent. L.J. 669, 687 (2017).
 Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997) (speaking of the legislative significance of Section 230: “The purpose of this statutory immunity is not difficult to discern. Congress recognized the threat that tort-based lawsuits pose to freedom of speech in the new and burgeoning Internet medium. The imposition of tort liability on service providers for the communications of others represented, for Congress, simply another form of intrusive government regulation of speech. Section 230 was enacted, in part, to maintain the robust nature of Internet communication and, accordingly, to keep government interference in the medium to a minimum.”).
 Timmer, supra note 51, at 688.
 Id. at 687–88.
 Syed, supra note 13, at 339 n. 6 (“Since they do not implicate government action, private communications platforms like Facebook, Twitter, Reddit, and YouTube are not as clearly bound by First Amendment doctrine as their predecessors . . . . To the contrary, these platforms enjoy broad immunity from liability based on the user-generated messages, photographs, and videos that populate their pages.”).
 The marketplace of ideas is one of the cornerstones of First Amendment jurisprudence. It is the “theory that in free and public discourse, all ideas should be available to the community—including false ones—because a restriction on speech of any kind might incidentally restrict the truth. It holds that without government interference, truth and falsity will compete in the marketplace, and truth will emerge victorious.” Annie C. Hundley, Fake News and the First Amendment: How False Political Speech Kills the Marketplace of Ideas, 92 Tul. L. Rev. 497, 502 (2017).
 Syed, supra note 13, at 342–43.
 Id. at 338 (stating that First Amendment free-speech theories, when put into practice by private social media companies, leave “both users and social media platforms ill-equipped to deal with rapidly evolving problems like fake news”).
 West, supra note 11.
 Matthias Kaspers, The Press under Pressure–Strengthening the Fourth Estate, Georgetown University, Transatlantic Policy Symposium, at 1 (2017).
Pen America, supra note 26, at 25 (2017) (quoting Justice Kennedy: “Society has the right and civic duty to engage in open, dynamic, rational discourse. These ends are not well served when the government seeks to orchestrate public discussion through content-based mandates.” United States v. Alvarez, 567 U.S. 709, 728 (2012)); Hundley, supra note 56, at 518 (“The general consensus regarding the First Amendment and fake news is that the status quo should hold—it should not be regulated.”); West, supra note 11 (“Governments should avoid censoring content and making online platforms liable for misinformation. This could curb free expression, making people hesitant to share their political opinions for fear it could be censored as fake news.”).
See Royster, supra note 1, at 294 (“By implementing a notice and takedown requirement similar to that in [Section 230 of the Digital Millennium Copyright Act of 1998], websites will have the opportunity to remove defamatory posts without fear of liability, and individuals will be able to receive financial restitution from those websites in the event that they do not remove fake news.”); West, supra note 11 ("Technology firms should invest in technology to find fake news and identify it for users through algorithms and crowdsourcing.”).
 Kaspers, supra note 62.