Tech Policy, Unintended Consequences & the Failure of Good Intentions

It is quickly becoming one of the iron laws of technology policy that, by attempting to address one problem (like privacy, security, safety, or competition), policymakers often open up a different problem on another front. Regulating to protect online safety, for example, might give rise to privacy concerns, or vice versa. Or new regulations meant to address online privacy might create barriers to entry, thus hurting online competition.

In a sense, this is simply a restatement of the law of unintended consequences. But it seems to be occurring with greater regularity in technology policy today, and it serves as another good reminder of why humility is essential when considering new regulations for fast-moving sectors.

Consider a few examples.

Privacy vs security & competition 

Many US states and the federal government are considering data privacy regulations in the vein of the European Union’s far-reaching General Data Protection Regulation (GDPR). But as early experience with the GDPR and various state efforts can attest, regulations aimed at boosting consumer privacy often butt up against security and competition concerns.

Consider how the GDPR can be abused to undermine user security, and ultimately (and ironically) privacy itself. At this year’s Black Hat computer security conference, one researcher explained how the GDPR’s “right of access” provision, which requires companies to give users a copy of their personal data, can be exploited by malicious actors to steal personally identifiable information. A sufficiently convincing hacker can use “social engineering” to pose as the target and coax companies into divulging the information. Without the GDPR’s mandated data-access infrastructure, such an attack would be much harder.

Nor are malicious actors even necessary for the GDPR to undermine security. In 2018, a customer requested their Alexa voice recordings from Amazon. The company sent the data to the wrong person in an apparent case of human error. If mighty Amazon cannot rise to the challenge of error-free GDPR compliance, what hope do smaller outfits have?

Perhaps the biggest story about the GDPR, however, has been its malign effect on competition. After all, the law earned its nickname, the “Google Data Protection Regulation,” for a reason. Titans like Google and Facebook have dominated the European ad tech market since the advent of the GDPR because they can shoulder compliance risks in a way that smaller vendors cannot. More ad money has flowed into Google’s coffers as a result.

But the GDPR applies to far more than just ad tech. Ventures as varied as publishing and virtual tabletop dice rollers have been forced to shutter their digital doors rather than risk the wrath of European data authorities. 

Similar stories emanate from the US. Illinois’ biometric privacy law, which governs the use of technologies like facial recognition and fingerprint scanning, led Google to block its Arts & Culture app’s selfie-matching feature, which paired user-submitted photos with classical works of art, in the state. If Google can’t hack it in the Land of Lincoln, how could a potential Google-slayer be expected to do so?

These are just the stories we hear about. A prematurely thwarted venture is unlikely to have a platform for voicing its compliance problems. What is clear is that the data privacy laws enacted so far have had predictable negative impacts on security and competition, and that ill-defined “privacy fundamentalism” too often drives ill-fitting policies.

Safety vs. free speech & competition

Content moderation at scale is extremely challenging, especially when it comes to “hate speech” and extremist viewpoints. On the one hand, free speech activists argue that onerous private content moderation policies can limit debate and punish certain viewpoints, particularly if a platform is the default public forum for expression. On the other hand, social justice activists contend that lax private standards can fuel the proliferation of conspiracy theories, radicalization, and violent rhetoric.

Recently, President Trump and some conservative lawmakers have been clamoring for greater regulatory control over social media platforms in the name of “fairness” and countering supposed anti-conservative bias. Sens. Josh Hawley (R-MO) and Ted Cruz (R-TX), for example, have introduced a bill that would require platforms to submit their content moderation policies to regular regulatory audits. If a platform were deemed insufficiently “politically neutral,” it would lose its liability protections under Section 230 of the Communications Decency Act.

This is reminiscent of the “fairness doctrine,” a long-standing Federal Communications Commission (FCC) policy that was a thinly veiled attempt to influence the political content of broadcast programming. Conservatives rightly opposed such government involvement in content decisions in decades past, but with this new effort against technology platforms, many of them are now repeating those mistakes.

The history of the actual fairness doctrine serves as a cautionary tale here. Today the fairness doctrine is mostly remembered as an anti-conservative effort because of the attention paid to right-leaning talk radio. Former Kennedy administration official Bill Ruder admitted that the administration’s “massive strategy was to use the [fairness doctrine] to challenge and harass right-wing broadcasters, and hope that the challenges would be so costly to them that they would be inhibited and decide it was too costly to continue.”

But as accounts from broadcast-era veterans point out, the fairness doctrine was wielded against both “conservatives” and “liberals,” depending on who was in power and what their objectives were. When the Nixon administration took office, it used the rule to muzzle broadcasters who criticized the White House. The FCC also applied the doctrine against The Kingsmen’s song “Louie Louie” for its suspiciously unintelligible lyrics.

The tension between policies to promote “safety” and government-protected free speech rights can be quite literal as well. Consider efforts to ban so-called “3-D printed guns.” Defense Distributed and other activists do not 3-D print and sell guns. Rather, they publish schematics online for others to print their own arms. As with the encryption technologies we will discuss below, such code is probably First Amendment-protected speech, although the schematics may be considered “dual-use” (meaning they have both civilian and military applications). An outright ban on 3-D printed gun blueprints very clearly conflicts with the right to free speech in the US and could threaten innovation in other open source, peer-to-peer 3-D printing applications.

Safety vs. privacy & security

Efforts to promote “safety” can all too often backfire at the expense of privacy and security.

Perhaps the most dramatic and high-stakes illustration of this principle was the years-long legal drama that pitted law enforcement authorities against computer scientists in the so-called “Crypto Wars.” Although cryptographic technologies that conceal data for privacy or security have been around since the days of ancient Egypt—our own Founding Fathers are known to have communicated using ciphers—in the 20th century, they had mostly been limited to military and academic institutions.

The advent of public-key cryptography made these security techniques accessible to the general public for the first time. This was great news for information security: communications and devices could be hardened against attacks, and people were given more privacy options. But law enforcement feared that criminals would use cryptography to cover their tracks. Thus, in the name of safety, law enforcement first tried to ban cryptography as a dual-use technology through munitions export controls. When that failed on First Amendment grounds, policymakers attempted to legislate “backdoors” into encryption protocols that would allow government access.
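To make the underlying idea concrete, here is a minimal sketch of public-key encryption using Python’s widely used cryptography package. The RSA key size, padding choices, and sample message are illustrative assumptions, not anything specific to the systems at issue in the Crypto Wars. Anyone holding the public key can lock up a message, but only the holder of the private key can unlock it.

```python
# Minimal sketch of public-key encryption with the Python "cryptography"
# package (pip install cryptography). Key size, padding, and message are
# illustrative choices only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The recipient generates a key pair and keeps the private key secret;
# the public key can be shared with anyone.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"meet me at noon"

# Anyone with the public key can encrypt...
ciphertext = public_key.encrypt(
    message,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

# ...but only the private-key holder can decrypt.
recovered = private_key.decrypt(
    ciphertext,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)
assert recovered == message
```

The security of the scheme rests entirely on the private key staying secret, which is exactly what any mandated third-party access would complicate.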

It is easy to see how outright bans or backdoors for encryption technologies could hurt privacy and security. Obviously, prohibiting the civilian use of a privacy and security technology limits privacy and security. But granting the government access to encryption standards would, ironically, ultimately undermine safety as well. After all, if a government can get into an encryption standard, so might a malicious hacker. Although the “Crypto Wars” seemed settled in the 1990s, the same debates have been cropping up again as more and more devices ship with encryption enabled by default.

We can also think about mandated reporting requirements intended to promote public safety. Consider the “know your customer” rules imposed on financial institutions. To prevent ills like money laundering and financial fraud, banks and exchanges must keep detailed customer information on file. Yet this ostensibly “pro-safety” rule generates its own security and privacy risks. Banks must responsibly store and protect this valuable customer data, lest their customers’ information be hacked and their identities stolen. That has sadly proven too tall an order far too often, and third-party-managed personally identifiable information is exposed to outside parties all the time.

A similar problem arises with efforts to promote child safety online. Consider the debate over MySpace’s age verification efforts in the mid-2000s. Child safety advocates grew concerned over the risks facing children on new social media platforms. Young children, unaware of the dangers that could lurk online, might unwittingly strike up friendships with predators posing as other children. So a movement grew to require these new platforms to verify age and identity with a government-issued identification card.

There were obvious technical problems. For starters, children young enough to fall under the age verification limit were unlikely to have government-issued photo identification. But beyond these administrative issues, there was the question of privacy and security. Could MySpace adequately protect the reams of sensitive data from outside breach? Might children be put in greater danger should those records, which would likely include children’s addresses, fall into the wrong hands? And should the government and social media platforms really be in the business of parenting to begin with? Might this create a “moral hazard” that leaves parents thinking online spaces are safer than they actually are?

Tying it all together

In each of these instances, it probably seemed as though there was no downside to the newly proposed regulations. With time, however, the dynamic effects associated with those policies became evident, often producing the opposite of what was intended or creating other problems that supporters did not originally envision.

The nineteenth-century French economic philosopher Frédéric Bastiat famously explained the importance of considering the many unforeseen, second-order effects of economic change and policy. Many pundits and policy analysts pay attention only to the first-order effects (what Bastiat called “the seen”) and ignore the subsequent and often “unseen” effects. Those unseen effects can have profound real-world consequences in the form of less technological innovation, diminished growth, fewer job opportunities, higher prices, reduced choice, and other costs.

Even when defenders of failed interventions are forced to admit that their well-intentioned plans did not work out as hoped, their response is typically of the we-can-do-better variety. The result is usually just more regulation, as one intervention begets another and another. As the Austrian economist Ludwig von Mises taught us 70 years ago in his masterwork, Human Action:

“All varieties of interference with the market phenomena not only fail to achieve the ends aimed at by their authors and supporters, but bring about a state of affairs which—from the point of view of their authors’ and advocates’ valuations—is less desirable than the previous state of affairs which they were designed to alter. If one wants to correct their manifest unsuitableness and preposterousness by supplementing the first acts of intervention with more and more of such acts, one must go farther and farther...”

The lesson is clear: paternalistic public policies may sound sensible on the surface, but as Milton Friedman taught us long ago, “One of the great mistakes is to judge policies and programs by their intentions rather than their results. We all know a famous road that is paved with good intentions.”