Aug 29, 2018

Precautionary Principle Creep

Andrea O'Sullivan, Feature Writer, and Michael Kotrous, Program Manager

Should scientists tweak their review process to promote a precautionary regulatory goal? A recent move by an influential computer science consortium attempts to do just that, and we should be wary.

A group within the Association for Computing Machinery (ACM) has proposed that research be accepted for publication in its academic journals only if the authors have considered the potential negative impacts of any breakthroughs in artificial intelligence, machine learning, blockchain, or related technologies. Under the recommended guidelines, peer reviewers for ACM journals would be expected to require that researchers discuss remedies for any foreseen, “reasonable” negative outcomes.

The lead author of the ACM proposal summarizes their recommendations as “incentivizing computer scientists to confront the negative impacts of their research by using the ‘publish or perish’ power of the peer review process.” If enacted, the change would have a dramatic effect on computer science research and its many subfields—the ACM publishes dozens of academic journals that rank among the best in computer science.

The controversial proposal has a long way to go before becoming official policy. According to Axios, it has already encountered strong opposition. Nevertheless, the proposal is troubling, for two reasons.

First, it could expose computer science research to the same bias that makes most contemporary coverage of innovation strongly pessimistic. Requiring researchers to focus on the negative effects or misuse of technology suggests that innovation is worthwhile only if no one is made worse off. That standard builds in a strong bias for the status quo, which carries a high opportunity cost: the “seen” near-term costs of innovation may be avoided, but the “unseen” benefits of unrealized innovations and economic growth in the medium and long term are lost as well. Because even slight increases in growth rates compound over time, those unrealized benefits are very large.

Second, using the gatekeeping role of peer reviewers to advance a socially minded mission will skew academic research toward the moral and political commitments of those gatekeepers. Researchers will be asked to envision the world as journal editors and peer reviewers believe it ought to be, a task that lies far beyond the expertise of many technologists.

(Pre)caution: All Contingencies Must Be Covered

The proposal states that researchers must pay attention to potential negative impacts even when the positive impacts are shown to outweigh them. Asking researchers to seriously consider the reasonably foreseeable negative effects of their work sounds, of course, reasonable, but the requirement can be taken to problematic extremes. Requiring authors to enumerate any potential negative impact of their technologies invites the kind of hypothetical thinking that leads to calls for preemptive bans or regulations. The lead author of the proposal makes clear that he would like to see researchers include “suggestions for improved government regulations.”

In determining what counts as a reasonable negative impact that merits discussion, the proposal suggests that “one initial big tent threshold might be the following: if there has been significant media coverage of a negative impact that is relevant to a paper or proposal, that impact should be addressed by the authors of that paper or proposal.” This tips the scales in favor of concluding that technologies have a net negative impact: positive impacts beyond the first-order “here’s what this thing does” are difficult to anticipate, while second- and third-order negative impacts are easy to imagine and are well covered by the media.

Techno-pessimism is common in media coverage and non-fiction writing on technological advances, including big data and social media. Such a negative outlook on innovation is due in no small part to the pervasiveness of dystopias in fictional depictions of the future.

Meanwhile, commercial applications of certain technologies take years, even decades, to develop, and predicting which specific applications will emerge and thrive is difficult. Yet the ACM proposal dismisses the discussion of expected positive impacts that appears near the end of most papers as “hand-waving” that views the work through “rose-colored glasses.”

The proposal offers as an example of looking through rose-colored glasses “computer scientists who seek to automate yet another component of a common job description point[ing] to the merits of eliminating so-called ‘time consuming’ or demanding tasks.” Yet labor-saving automation has been associated with developments we now consider important advances in fundamental human rights: the automation of household chores set the table for rapid growth in the number of women pursuing higher education and fulfilling careers, and advances in agricultural equipment in the early 20th century allowed more children to complete their primary education. These outcomes were not necessarily anticipated by their creators, yet their potential downsides were, and are, part of the public consciousness. A biased review process might have suggested that these important technologies be regulated away.

Negative Impacts: According to Whom?

A significant cohort of researchers has long supported making their fields more conscious of the possible implications of their work. The authors of the proposal, for instance, describe as an “embarrassing intellectual lapse” the field’s failure to anticipate and offer remedies for potential mass unemployment via automation, threats to democratic discourse posed by convincing faked recordings, regulatory arbitrage by gig economy platforms, declines in consumer privacy, and even the thousands of deaths caused by distracted driving.

What this list of examples makes clear is that negative impacts are not simply the second- and third-order effects of a particular technology, but also a matter of one’s social, moral, and political commitments.

Our colleague Adam Thierer is writing a new book on regulatory arbitrage and evasive entrepreneurialism as powerful tools against outdated regulations. The deployment of ride-sharing, for instance, led to the dissolution of rules that protected the taxicab industry with strict supply caps at consumers’ expense. Regulatory arbitrage has also allowed new technologies like drones to be tested and deployed in China, where they have shown incredible promise in reducing the “last-mile” delivery costs that have excluded the rural poor from delivery services.

Indeed, listing technological civil disobedience, or “regulatory arbitrage,” among negative impacts is itself a controversial claim that requires its own cost-benefit analysis! The mitigating policies and technological changes suggested by the proposal’s authors are similarly problematic; enacting privacy rules like the GDPR, for instance, carries significant downsides and trade-offs of its own. In essence, the authors of this proposal suggest replacing alleged techno-optimist hand-waving with enforced techno-pessimism.

Photo credit: FOCKE STRANGMANN/EPA-EFE/Shutterstock
