Section 230 Isn’t an Aberration, It’s a Distillation of Common Law Trends

Is platform liability protection a gross exception to American publishing standards? Opponents of a rule called Section 230, which protects online intermediaries from expensive lawsuits over certain kinds of user-submitted content, say so. They argue that the rule shields websites from legal risks in ways that traditional publishers and broadcasters are not shielded.

To restore sanity and fairness to publishing norms, online liability protections must be overhauled, critics maintain. Antagonism to Section 230 has moved beyond rhetoric, with Sen. Josh Hawley (R-MO) proposing a bill that would remove platform liability shields from “non-neutral” services. Websites would need to earn their liability protection by curating content in a government-approved manner.

New research by Mercatus scholars Brent Skorup and Jennifer Huddleston points out that one of the critics’ core assumptions is factually incorrect. Section 230 was not an aberration that bucked prevailing US legal trends. Rather, a body of common law providing Section 230-like protections had already developed naturally in the courts. Statutory platform protections merely codified judicial developments. Furthermore, “neutrality” was never a statutory condition for liability protections. The authors conclude that critics should not try to use publisher liability rules to address what are really separate concerns, such as antitrust.

What Section 230 Is (and What It Is Not)

Section 230 has been called “the twenty-six words that built the internet” for good reason. Without platform liability protections, most web ventures probably would have been too expensive to get off the ground. If websites were held strictly liable for the vast ocean of user-submitted content that keeps them afloat, far fewer would exist today.

The law reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In a few words, Congress made it clear that third-party platforms were not legally responsible for the actions of their users. It did this for the explicit purpose of “promot[ing] the continued development of the Internet” and “encourag[ing] the development of technologies which maximize user control over what information is received.”

Except for a few caveats for things like materials involving children or intellectual property, platforms were given the green light to innovate on user-driven content models without fear. This system gives internet users the freedom to develop unexpected and beneficial online arrangements while outlining special “notice and takedown” processes for the aforementioned exceptions.

Note what the law does not do. Section 230 does not require that platforms, whether mere distributors or editorial publishers, remain neutral, nor does it prevent them from developing content filtering policies for themselves. To the contrary, one of the law’s stated goals is to “remove disincentives for the development and utilization of blocking and filtering technologies.” Thus, a core argument of Section 230 critics—that platforms must not engage in filtering to receive liability protections—is invalid.

The text also directly removes civil liability for platforms which take action to “restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Indeed, Section 230 was passed in part to encourage such filtering to best address the variety of consumer preferences.

Trends in Common Law Fulfilled in Section 230

Skorup and Huddleston discuss how decades of developments in publishing liability law trended toward the protections that Section 230 ultimately codified in statute.

For much of the 20th century, the courts chipped away at the previous strict liability regime in favor of one based on fault by publishers and distributors. That is, rather than requiring information disseminators to preemptively scrutinize all information before allowing it to pass through their channels, courts would instead determine the extent to which disseminators actively participated in the promotion of false or slanderous materials. The development of mass media made this necessary.

The genesis of these legal developments can be found in the growth of the wire news service industry in the early 20th century. Economic and technological developments increased journalists’ capacities to gather global news as a burgeoning middle class of educated readers demanded more specialized coverage. Wire news services grew to fill this need, providing local papers with niche stories they otherwise could not cover.

A dispute arose when a Florida newspaper ran a wire service story that was found to contain defamatory content. The Florida Supreme Court ruled in Layne v. Tribune Co. that the local paper was not strictly liable because it was impractical to expect newspapers to vet and verify the content of wire dispatches. (After all, if newspapers were privy to this information, they wouldn’t need the wire service in the first place.)

This came to be known as the “wire service defense,” and it created a precedent that a distributor is only liable if it “acted in a negligent, reckless, or careless manner” in reproducing a story. If, for example, a distributor actively contributed to a harmful story, it could be found at fault. But the mere act of allowing a story to run, even if a distributor has discretion, is not enough to create liability.

Other courts built on this reasoning, extending it to cases involving radio broadcasts, non-wire news publications, and television. In each case, judges based their decisions on the extreme burden that preemptive vetting requirements would impose on disseminators, along with the First Amendment issues that strict liability could introduce.

This reasoning was straightforward to apply to the new world of online intermediaries that cropped up in the 1990s. Rather than news content provided by wire services, online platforms broadcast user-submitted reviews, commentary, art, and ephemera. The principle is the same: it would be unreasonable to expect platforms to preemptively screen content before distributing it, and requiring them to do so could violate their First Amendment rights.

It seemed the law was tending towards this conclusion. One case in 1991, Cubby v. CompuServe, involved a plaintiff suing CompuServe for defamatory content posted by another user on the company’s forums. Hewing to the currents in liability law, the court ruled that CompuServe was protected from liability as a distributor.

However, in 1995, another legal decision threw this new understanding into disarray. The case Stratton Oakmont v. Prodigy was very similar to Cubby: the plaintiff was defamed by a user on a Prodigy forum and argued that the service was liable. The court came to the opposite conclusion on the grounds that Prodigy exercised more editorial discretion than CompuServe and therefore could be held liable as a publisher.

Would the internet have developed into what it is today if each website had to employ lawyers to screen and greenlight content? It is doubtful. Forward-thinking lawmakers understood the dilemma and moved to resolve it with Section 230. It removed any legal doubt that both distributors and publishers were free to broadcast and curate as they saw fit. Other than some special carve-outs for child and intellectual property protection, the internet was made open for content.

Don’t Abuse Speech Liability for Antitrust (Or Other) Purposes

It is easy to sympathize with critics’ concerns. Although right-leaning groups lead the charge today, many have worried that large online platforms may abuse their positions in ways that hurt the public interest. It was once common for left-leaning groups to scrutinize practices that would benefit a platform’s related businesses or support a political candidate who might grant the company favors.

The belief in platform bias is widely shared. One poll finds that 59 percent of Americans agree that websites are “politically biased” and “[suppress] views they disagree with.” The sentiment crosses party lines: 68 percent of Republicans, 53 percent of Democrats, and 61 percent of independents agree it is a problem. A plurality of all groups support some kind of government intervention to enforce platform neutrality.

One does not need to deny that platform bias can be a problem, however, to understand that gutting Section 230 is not the right course of action. Although Facebook and Twitter are the objects of public ire today, such a law would affect the entire internet ecosystem—down to the smallest mom-and-pop shop with a web presence—and threaten future growth.

Plus, it would inevitably politicize internet activity, as websites would need to submit to the whims of whatever political party is in power. It would also create opportunities for sabotage. Perhaps a competitor floods a website with objectionable or one-sided content in order to sic regulators on it. Rather than achieving online objectivity, removing liability protections would make online content moderation an inherently political affair.

Skorup and Huddleston argue that such concerns are better addressed through other means. For example, if the core complaint is that a platform dominates online conversation, and censored users have no other recourse, the problem is properly addressed through antitrust law. Perhaps even this complaint is erroneous. Or maybe not. But the point is that these issues are not at heart a question of publisher or distributor liability.

A reading of the history of common law developments in broadcast liability makes it clear that Section 230 is a logical and beneficial legal conclusion. We should not allow the trials of the day to chip away at this foundation of our internet success story. 

Photo credit: Win McNamee/Getty Images