Identifying and Countering Fake News

Focus Areas: Free speech
Discourse Type: Scholarship

Forthcoming in the Hastings Law Journal

Fake news presents a complex regulatory challenge in the increasingly democratized and intermediated online information ecosystem. Inaccurate information is readily created by actors with varying goals, rapidly distributed by platforms motivated more by financial incentives than by journalistic norms or the public interest, and eagerly consumed by users who wish to reinforce existing beliefs. Yet even as awareness of the problem has grown since the 2016 U.S. presidential election, the meaning of the term “fake news” has become increasingly disputed and diffuse. This Article first addresses that definitional challenge, offering a taxonomy that classifies species of fake news along two variables: their creators’ motivation and their intent to deceive. In particular, it differentiates four key categories of fake news: satire, hoax, propaganda, and trolling. This analytical framework can lend greater rigor to debates over the issue.

Next, the Article identifies key structural problems that make each type of fake news difficult to address, albeit for different reasons. These include the ease with which authors can produce user-generated content online and the financial stakes that platforms have in highlighting and disseminating that material. Authors often have a mixture of motives in creating content, making it less likely that a single solution will be effective. Consumers of fake news have limited incentives to invest in challenging or verifying its content, particularly when the material reinforces their existing beliefs and perspectives. Finally, fake news rarely appears alone: it is frequently mingled with more accurate stories, such that it becomes harder to categorically reject a source as irredeemably flawed.

Then, the Article classifies existing and proposed interventions based upon the four regulatory modalities catalogued by Larry Lessig: law, architecture (code), social norms, and markets. It assesses the potential and shortcomings of extant solutions.

Finally, and perhaps most important, the Article offers a set of model interventions, classified under the four regulatory modalities, that can reduce the harmful effects of fake news while protecting interests such as free expression, open debate, and cultural creativity. It closes by assessing these proposed interventions against data from the 2020 election cycle.

Author(s): Mark Verstraete, Derek Bambauer, and Jane Bambauer
