October 27, 2017

Informational Injury in FTC Privacy and Data Security Cases

Rule Details

Informational Injury Workshop P17-5413
Notice of Workshop and Opportunity for Comment
Agency: Federal Trade Commission
Proposed: September 29, 2017
Comment period closes: October 27, 2017
Submitted: October 27, 2017

The Technology Policy Program of the Mercatus Center at George Mason University is dedicated to advancing knowledge about the effects of regulation on society. As part of its mission, the program conducts independent analyses to assess agency rulemakings and proposals from the perspective of consumers and the public. Therefore, this reply comment does not represent the views of any particular affected party but is designed to assist the agency as it explores these issues.

We appreciate the opportunity to submit reply comments regarding the Federal Trade Commission’s (FTC) Workshop on Informational Injury. In her comments to the Federal Communications Bar Association on September 19, Chairwoman Maureen Ohlhausen defined the three goals of the workshop: (1) to “better identify the qualitatively different types of injury to consumers and businesses from privacy and data security incidents,” (2) to “better explore frameworks for how we might approach quantitatively measuring such injuries and estimate the risk of their occurrence,” and (3) to “better understand how consumers and businesses weigh these injuries and risks when evaluating the tradeoffs to sharing, collecting, storing, and using information.”

The workshop raises important and timely questions about the FTC’s role in investigating cases relating to data breach and privacy incidents that fall within the commission’s statutory unfairness and deception authorities. Our comments will focus on developing a framework that appropriately addresses the ongoing challenges with data security without imposing on society an ineffective, all-encompassing theory of “harm” that may undermine the freedom to innovate in data use.

We begin with a discussion of data and security issues at the FTC. We then outline our vision for the future of FTC oversight of data breach cases, drawing heavily from the iterative process of common law. We then discuss why rigid theories of harm are inappropriate for meeting data security challenges. Finally, we provide a roadmap for how the commission can move closer to the ideal.

A Brief History of the FTC and Cybersecurity

The United States currently lacks a dedicated regulator for data and security issues, allowing a number of agencies to become involved in cybersecurity issues relating to incidents in their primary jurisdictions. For example, the Securities and Exchange Commission issues guidance for financial institutions to safeguard their data, while the Food and Drug Administration has investigated device manufacturers for selling insecure medical devices. The FTC, however, is more involved than any other federal agency in data security oversight and adjudication. 

This was a development more of necessity than of design. As the internet revolution took hold in the 1990s and companies began grappling with new questions of data collection and storage, there was no regulatory framework to guide industry and establish legal certainty. The FTC, with its relatively broad Section 5 authority to protect consumers from deceptive or unfair acts or practices, was well poised to fill the void.

The commission initially promoted self-regulation as its primary policy for data and security issues, supplemented by the promotion of “fair information practice principles” as standards to guide industry. However, the FTC quickly pivoted to more active measures in an attempt to promote internet security and thereby ensure the internet’s future functioning. Specifically, the FTC first began pursuing potential privacy violations—where websites did not provide the level of privacy promised in their stated privacy policies—under a theory of “deception.” Later, the FTC became involved in matters relating to data breaches and security, applying its authority to investigate “unfairness” as a basis for such cases. This approach has become controversial within academic and policy circles, and it has spawned two notable legal battles.

Despite lacking a specific congressional charge to oversee data and privacy issues, the FTC has persevered as the primary watchdog for consumer cybersecurity challenges. Notably, as we discuss in more detail later, the FTC has largely eschewed substantive rulemaking, favoring instead a quasi–common law method carried out mainly through consent orders and administrative adjudication. Furthermore, the FTC lacks clear guidelines for private actors who wish both to maintain good security and to remain compliant with FTC best practices—a situation that the commission admirably wishes to rectify with this very workshop.

While we applaud the FTC for its commitment to flexibility and its distaste for onerous, top-down regulation, we believe that the FTC should strive to get closer to a true common law approach rather than attempt to develop rigid, all-encompassing theories of harm that might keep lawyers busy but bring us no closer to better security and privacy. We outline a model path for the FTC to pursue in the following section.

The Common Law Ideal

Concerns about existing tort law’s ability to handle perceived intrusions into privacy are not new in the digital age. In fact, an 1890 Harvard Law Review article established the jurisprudence for privacy torts. Its authors, one of whom, Louis D. Brandeis, would later become a famed associate justice of the Supreme Court, argued that the rising power of newspapers and new technologies such as photography threatened individual privacy.

Former FTC Chairwoman Edith Ramirez spoke positively of a common law approach to unfairness claims, stating that it is “well-suited to find the right balance [between flexibility and certainty].” The same is true of the common law’s ability to handle security-specific issues through existing privacy torts. Since legal scholar William L. Prosser posited four common law privacy torts in 1960, most states have adopted and codified this typology through precedent or statute.

Since relatively early in the digital era, these torts have evolved to accommodate reasonable expectations of privacy in cyberspace. The simultaneous adaptability and consistency of the common law gives it a clear advantage over statutory solutions. 

In the case of the informational harms proposed, courts either have handled or could handle these issues under existing tort law. For example, concerns about “dataveillance”—the monitoring of online activity—or other potentially deceitful injuries or subversions of consumer choice could be addressed through the tort of intrusion upon seclusion. Intrusion need not be physical in nature, so common law courts can consider whether online disclosures or other monitoring, such as spyware, can be challenged under existing law. Because the common law does not require a physical presence, the existing privacy torts can be extended without the need for an additional enforcement regime.

Some have expressed concern that the anonymity and distance from the victim involved in using the internet or other technology to commit intentional torts produce physical or financial harms that current privacy torts do not reach. However, torts such as libel and intentional infliction of emotional distress have never required physical proximity to the victim as an element. Cyberspace may change the forum in which such acts are conducted, but it does not change the required elements. Moreover, the common law has evolved to account for situations in which the alleged perpetrator remains anonymous behind an internet platform. Yelp has been forced to disclose the identities of anonymous reviewers when their reviews were found to be libelous, and individuals have been held liable for defamation or libel for fraudulent negative reviews.

Courts are in a better position than regulators to determine when there is a legal duty in handling data and when that duty has been breached. Regulation is inflexible and preemptively shuts down potential avenues of innovation. In contrast, the courts are more flexible: they rule on specific contested practices without curtailing other experiments.

Currently, there is no established legal duty to handle most data or privacy in a certain way; however, a breach of terms of service or other data security claims could be handled under existing tort or contract law without additional regulatory intervention. The courts have been able to adapt existing common law torts of privacy to new media and technology in the past and should be able to adapt to current digital technology. Moreover, for the most vulnerable data, other statutory provisions already exist to establish a duty when handling the information. For example, the Health Insurance Portability and Accountability Act covers the duty surrounding medical information and data, the Children’s Online Privacy Protection Act creates certain duties regarding data collected on children, and the Consumer Financial Protection Bureau’s “unfair, deceptive, or abusive practices” standard can be employed against financial services companies for advertising false data management practices. 

A Bad Alternative: A Theory of Everything

Chairwoman Ohlhausen has made it clear that the FTC is not seeking to “deduce a definition of injury from first principles.” Rather, she calls upon the community to consider (1) whether the FTC’s current case-by-case approach toward privacy- and security-related “informational injury” is representative, (2) whether any element may require government intervention, and (3) how the list of injuries corresponds with the FTC’s statutory deception and unfairness authorities.

We applaud the FTC for eschewing the temptation to develop a ground-up “theory of everything” to drive privacy and security oversight. Too often, members of the academy, the policy-making community, and the general public default to promoting jury-rigged, one-size-fits-all approaches toward concerns about public health and safety. More thoughtful scholars, meanwhile, have attempted to sketch out an actionable rubric for informational harms and adequate remedies, to little avail or consensus. We anticipate that the prominence of newsworthy data security incidents, particularly the recent compromise of Equifax’s expansive personal finance datasets, will fuel feedback to the FTC urging just this kind of approach. We suggest that the FTC stay the course in rejecting such calls for several reasons.

First, broadly defining informational harms could impose serious and unnecessary damage to the information economy. Europe has chosen to institute such a broad definition, and the result has been to diminish competition and innovation in the EU information technology field. Avoiding this approach will ensure that the United States remains a leader in information technology innovation.

Additionally, an expansive view of informational harms may conflict with First Amendment–protected speech. Scholars such as Eugene Volokh have pointed out that when the government determines an information privacy standard that extends into the private sector and prevents the sharing of information, it is inevitably silencing speakers. This is not to say that restrictions on speech for privacy reasons are never allowed, but as with all limitations on free speech, such restrictions must be narrowly tailored. The commission should draw on the current heightened standards for other speech-induced harms, such as defamation and libel, when considering restrictions on information sharing to ensure they do not risk unnecessarily limiting speech. 

In practical terms, it is virtually impossible to develop and enforce an overarching theory of harm appropriate for the internet age. Opinions about what constitutes harm and appropriate redress vary almost as widely as the population online, and different people have different risk thresholds. In general, US regulators have eschewed this kind of approach, preferring instead to outline hard limits on certain behaviors—say, regarding child safety online—rather than attempting this Sisyphean task.

Furthermore, such attempts are simply unlikely to single-handedly improve security and privacy outcomes. Security is a fast-paced and dynamic space, and static frameworks will be ill suited to adapt to the evolving nature of developing threats. Similarly, opinions on what constitutes an adequate level of privacy are almost as varied as the personalities of the people who hold them, and these opinions evolve over time. Smart policies require a degree of flexibility to best address both security and privacy.

How, then, can the FTC improve its privacy and security enforcement in a manner that addresses consumer needs without foisting an onerous and ineffective standard on private parties? The answer is by moving FTC enforcement closer to the ideal of common law evolution.

Bridging Theory and Practice: How the FTC Can Improve

Chairwoman Ohlhausen notably characterized the FTC’s current approach to privacy and security issues as common law–like. She described how the agency’s “case-by-case enforcement . . . integrates feedback on earlier cases from advocates, the marketplace and, importantly, the courts. This ongoing process preserves companies’ freedom to innovate with data use. And it can adapt to new technologies and new causes of injury.” The chairwoman’s statements echo those of former Commissioner Julie Brill, who stated that the FTC’s actions had created a “common law of privacy” in the United States.

Unfortunately, the FTC’s approach to privacy and security issues only superficially resembles a true common law path. Rather than developing a real body of law through traditional litigation in the courts, the FTC has built up a mountain of loosely related consent orders that private parties must sift through to determine whether their businesses comply with FTC standards. Notably, this system operates in the absence of a defined rulemaking process: it includes no notice and comment and provides no clear guidelines.

Recent case law has shown the difficulty in applying an unclear standard of unfair or deceptive practices for both regulated entities and the courts. In the recent LabMD case, for example, where the FTC attempted to bring action against a Georgia-based health laboratory despite a lack of notice or guidance, Judge William S. Duffey Jr. criticized the agency’s approach to using consent orders to create regulation or duties without public awareness, stating that the FTC “ought to give [regulated parties] some guidance as to what you do and do not expect, what is or is not required. You are a regulatory agency. I suspect you can do that.” 

Others have expressed similar frustration. In the LabMD case, the FTC attempted to launch a legal theory that had never been considered in court against the defendant, prompting FTC Chief Administrative Law Judge Michael Chappell to ask, “Where is the fairness in that, Counselor? If you’re a company, you’re a corporation, where is the fairness in a standard of what the law is being issued or published after the case is brought?” 

In FTC v. D-Link, the FTC claimed that firmware flaws that left a router susceptible to hacking constituted an unfair and deceptive trade practice because they placed consumers’ personal information and networks at risk. A federal court for the Northern District of California found that, while the FTC’s deception claims concerning D-Link’s statements about its security were sufficient to allow that portion of the case to continue, there was insufficient evidence to proceed under California’s unfair trade practices law. The court also questioned the sufficiency of the unfair trade practices claim under federal law, dismissing the FTC’s unfairness claims against D-Link for lack of an adequate injury because the FTC did not “allege any actual consumer injury.” This shows that at least some courts will not allow the FTC to pursue a claim when there is no evidence that harm or injury has actually occurred.

FTC v. Wyndham Worldwide illustrates how the FTC’s current system not only fails to produce a common law of its own, but also, through its lack of clarity, complicates and confuses the existing common law. The FTC alleged that the Wyndham hotel chain’s inadequate cybersecurity for consumer information, including credit card data and addresses, was an unfair practice after the chain was hacked, potentially exposing that information. The law is so unclear about what constitutes an unfair practice in addressing data breaches that the FTC asked the district judge to take the unusual step of certifying the question to the Third Circuit on interlocutory appeal. The Third Circuit affirmed the FTC’s ability to use its Section 5 authority to enforce data security in the context of that litigation, but it questioned the lack of guidance provided to both the public and regulated parties. This struggle shows that the existing difficulties also prevent courts and the common law from developing their own definitions while the FTC’s standard remains notably vague.

Such concerns are not confined to the use of unfairness but also extend to the use of deception. Perhaps no case study illustrates this more clearly than Nomi Technologies. Nomi collected shopping data and offered customers the option to opt out of both physical-store and website data collection. The physical-store opt-out, however, did not work: data was still collected even after a consumer had opted out. Nomi served as a third-party contractor for the retailers in the collection of data and therefore, as Commissioner Ohlhausen stated in her dissent, had no obligation to provide consumers an opt-out. By offering an option, however, the company was found to be deceptive, despite having no duty to provide such an option and despite the lack of evidence of harm to any consumer. As some commenters at the time pointed out, the FTC’s ruling made it better for an app developer to provide no privacy policy at all than to provide one that may later prove flawed. Not only does this ruling fail to provide clear standards for what constitutes a deceptive practice in data privacy, it also punishes a company in the absence of consumer harm.

Rather than building on existing precedent to establish a series of understandable, stable norms, these orders and actions do little to clarify what the FTC considers an unfair or deceptive practice and fail to provide adequate guidance to regulated parties. Common law provides a precedent that regulated parties and individuals can build upon. The current system fails to adequately provide this guidance. The courts and current tort law may be better equipped to develop a system of common law to establish what duties are required.

Returning such issues properly to the courts as opposed to using administrative consent orders would not leave individuals without remedy and could provide better information to all involved. Class actions or individual lawsuits typically accompany or precede regulatory action. Courts have ruled that actual harm caused by the theft of personal information from a known data breach need not be proved; the heightened threat of identity theft from a “fairly traceable” data theft and the cost necessary to protect oneself from such risks following information exposure are sufficient to allow a case to proceed. Courts are also able to provide injunctive relief to plaintiffs when necessary to stop further harm from occurring. While there is always a risk that common law could evolve in a less than ideal way, the risk of more consequential and restrictive regulations is far more likely to have a negative impact on both consumers and regulated industries.

Any regulation in this area should have a high bar of providing guidance that does not impact the continued development of new technology. It should also retain the right of both consumers and regulated entities to go to court and trust the common law instead of an administrative process.

Conclusion

In calling this workshop to examine the FTC’s history and future of enforcement actions relating to privacy and security issues, the agency demonstrates that it recognizes feedback from industry and commentators and wishes to constructively improve upon its record. We applaud the FTC for recognizing this opportunity to improve, and we have outlined a framework that can maintain both consumer redress and regulatory flexibility.

We believe that the FTC and industry have the same goal: to protect consumers from informational harm without imposing a brittle bureaucratic structure that does little to promote actual security. To that end, we encourage the FTC to eschew any calls to develop rigid, all-encompassing theories of “informational injury” to guide future actions. Rather, the FTC should strive to develop a true body of common law precedent.