The Pessimists Archive is an entertaining Twitter account with a corresponding podcast that documents past examples of societal and government “moral panics” or “technopanics” associated with old technologies and forms of culture.
It is easy to hear these examples today and chuckle about how silly they were. We should remember, however, that each of these panics played out over many years (sometimes decades). At the time, these fears didn’t seem silly at all, and in fact some very smart people helped stoke such anxieties. And the backlashes had real consequences. Creative minds were silenced and important innovations were thwarted or delayed. Some people were persecuted and others were formally prosecuted.
It’s worth asking: are we living through any technopanics today that might be featured on a future episode of the Pessimists Archive?
One candidate might be facial recognition technology, which currently faces heated opposition from critics, many of whom want the technology banned outright. Just this week, the city of San Francisco banned local government agencies from using facial recognition technologies.
Policies restraining government use of facial recognition are absolutely needed, but will regulation go too far and end up restricting beneficial uses of the technology? Is a facial recognition technopanic underway? This is where the Pessimists Archive is useful in spotting the trends—and also reminding us why we should be careful about extreme calls for completely bottling up new technologies.
Critics of facial recognition technology, like Luke Stark, who is affiliated with Harvard University’s Berkman Klein Center, suggest it is time we recognize “facial recognition as plutonium-like in its hazardous effects.” Woodrow Hartzog and Evan Selinger say that facial recognition “is the most uniquely dangerous surveillance mechanism ever invented” and is “the perfect tool for oppression.” Accordingly, they have called for a complete ban on the technology.
The brewing panic about facial recognition shares some similarities with past technopanics. The escalating use of “threat inflation” and false equivalence is a sure-fire sign that a technopanic is afoot.
“Threat inflation” has been defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.” False equivalence refers to comparisons made between two situations that actually have very little in common. Critics often use threat inflation and false equivalence to construct what rhetoricians refer to as an argumentum in terrorem, or an “appeal to fear.” Those making fear appeals argue that “something must be done” immediately or else catastrophe is around the corner.
Although fear appeals typically rest upon logical fallacies, poor risk analysis, or outright myths, they often lead to hasty legislative proposals to control or ban whatever content or technology is currently under fire from critics.
We have also seen technopanics over predecessor technologies to facial recognition—like instant photography, sensors, CCTV, and RFID—come and go. The rise of cameras in the late 1800s was massively socially disruptive, so much so that Samuel D. Warren and Louis D. Brandeis penned a famous 1890 Harvard Law Review article decrying the spread of the technology and calling for controls. Later, a famous 1966 Life magazine cover and feature article warned of the dangers of electronic transistors and snooping, suggesting that “bugging is so shockingly widespread and so increasingly insidious that no one can be certain any longer that his home is his castle—free of intrusion.” In the early 2000s, RFID technology was likened to the Biblical threat of the “mark of the beast.” In each case, the threat proved wildly overstated, of course.
Some of today’s critics also seem to imagine America is on its way to becoming a Chinese-style surveillance state. The Chinese government has adopted a national “Social Credit” and tracking system based partially on facial recognition technology and used it to repress disfavored groups. That is horrifying. But the United States is not China; we have faced the challenge of government misuse of new tools before and used constitutional protections, targeted restrictions, and cultural norms to push back against oppressive uses of technology.
But such concerns have now moved past mere philosophical debates into policy actions that reach well beyond government uses. Illinois has already passed a “Biometric Information Privacy Act” that limits potential uses of facial recognition technology even for fairly benign applications. Strictly interpreted, the law prohibits services such as Google’s Arts and Culture match (which matches your face up with famous works of art) and could even prevent face tagging on social media or photo-sharing sites.
A variety of other private applications already use facial recognition technologies today and might become illegal if the technology is banned. Most are security-related, but others are more mundane, like photo recognition tech that makes it easier to sort through digital photographs or videos.
For example, C-SPAN Archives uses it to identify participants in older events and congressional footage. The White House Historical Association’s mobile phone app includes a Presidential Look-Alike feature “that allows users to take a selfie to find out which president or first lady they most resemble based on portraits of presidents and first ladies in the White House collection.” Some virtual reality or augmented reality technologies depend upon facial recognition to work properly. Facial recognition has also already helped find missing children and could help find or identify people during disasters.
An outright ban on facial recognition technology would, therefore, require that some existing technologies be taken off the market. We would lose these and other future benefits.
Toward a More Balanced Approach
As is often the case, tech critics are going too far and focusing only on the most negative possible outcomes associated with facial recognition tech. They take a kernel of truth—that a new technology could pose risks if used improperly—and extrapolate from it hyper-dystopian predictions ripped from the plots of sci-fi books and shows.
If facial recognition is really “plutonium-like,” then we should be wearing hazmat suits when using our faces to unlock our smartphones or to check in through security at some airports! Of course, facial recognition is nothing like plutonium, a substance so deadly that we must prohibit nearly any use of or contact with it to avoid catastrophic physical harm.
The fundamental problem with technopanics and the critics who propagate them is that they do not imagine we can learn how to cope with new technologies and control their worst uses. If the history of humanity proves anything, it is that we are a resilient, highly adaptive species. We roll with the punches and muddle through in the face of new challenges. Eventually, past panics subsided as society adapted its attitudes and laws to accommodate new technological capabilities.
The critics are correct that a real danger exists that facial recognition could make it easier for governments to surveil our movements and profile us in ways that are repressive and unjust. But we have faced this same problem many times before. We can and will learn how to govern facial recognition technologies the same way we have with the many other tools that have both dangerous and beneficial uses.
Some of the technologies society panicked over in the past did create new risks worth worrying about. In some cases, they demanded new social norms and even targeted laws to deal with highly problematic uses or users. The camera has had countless beneficial uses and become an important part of the human experience by capturing life’s memories and important developments. But bad actors also forced us to come up with “Peeping Tom” statutes and anti-paparazzi laws to restrict intrusive uses of photographic technology.
It was, and remains, particularly important to constrain governmental use of technologies that could be used to surveil citizens and undermine their liberties. The same will be true for facial recognition tech, and we can accomplish this without derailing the positive applications. Again, we have faced this same threat many times before when looking to constrain governmental use of weapons, computers, and various tracking and surveillance technologies.
A more nuanced approach to governmental use of the technology will eventually emerge with time. If law enforcement officials can use facial recognition to scan thousands of photos of felons or missing persons more rapidly than they could with their own eyes, it is hard to understand why that should be forbidden completely. That is very different from using facial recognition to create a real-time surveillance system tracking everyone’s movements. Accordingly, as the Cato Institute’s Matthew Feeney suggests, instead of a flat ban, we should prohibit real-time facial recognition tracking by governments while allowing its use as an ex post investigative tool.
Law enforcement requests for privately held facial records should also require a subpoena and a high standard of proof of need. Law enforcement officials should likewise be required to be more transparent about their facial recognition policies, to safeguard any biometric identifiers they collect, and to delete data after a set period of time.
An added problem with the current debate about facial recognition technology is that it puts the cart before the horse. We first need to consider what is worth policing at all and come to grips with the problem of over-criminalization. We simply have far too many laws in America today.
Private sector facial recognition tools and applications do not pose the same level of risk and need not be restricted as aggressively. Companies will, however, need to be more transparent about their uses of biometric identifiers and take steps to ensure that consumers and the general public understand when facial recognition technology is active. Private actors will make mistakes, but they have incentives to correct flawed or “biased” outcomes to avoid civil rights actions, lawsuits, and other legal penalties, or just to avoid the bad PR that accompanies such errors.
Some governments have required that organizations using CCTV on their premises post signage to notify the public. Such transparency and labeling requirements are more sensible than outright bans, and the same approach could be applied to facial recognition. Best practices can also be devised and refined over time to govern other uses in workplaces, retail establishments, and large public events.
Calls for complete bans on facial recognition technology could be a negotiating tactic by critics. By starting with such an extreme position, they might hope to move the resulting political conversation closer to their perspective as new rules are formulated.
If that is their strategy, there is a chance the gambit could work. It is more likely, however, that such critics and their proposals will instead be remembered as part of “the great facial recognition panic of 2019” when this all becomes a future installment on the Pessimists Archive podcast. Better to come up with a rational framework—and more tempered rhetoric—for how to talk about and govern this technology going forward.