Mitigating Privacy Risks While Enabling Emerging Technologies

We appreciate the opportunity to provide comments on the preliminary draft NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management (hereafter the Privacy Framework or the Framework) of the National Institute of Standards and Technology (NIST). The Mercatus Center at George Mason University is dedicated to bridging the gap between academic ideas and real-world problems and to advancing knowledge about the effects of regulation on society. This comment does not represent the views of any particular party or special interest group but is designed to assist NIST in creating a policy environment that will facilitate increased innovation, competition, and access to technology to the benefit of the public.

We applaud NIST’s efforts to empower enterprises to mitigate risk while also recognizing the potential impact of standards on emerging technologies. The request for comment recognizes that the digital space is a “complex ecosystem” with multiple stakeholders. The Framework identifies stakeholders within the data processing ecosystem, including manufacturers, government service providers, individuals, commercial service providers, developers, businesses, and suppliers. The complexity of this ecosystem requires that the Privacy Framework function as one part of a wider set of solutions and resources developed by stakeholders to mitigate risk. Toward this end, this comment addresses two of NIST’s requests for review:

  1. Whether the preliminary draft adequately defines the relationship between privacy and cybersecurity risk.
  2. Whether the draft enables organizations to adapt to privacy risks arising from emerging technologies such as the internet of things (IoT) and artificial intelligence (AI).

In the sections that follow, we first suggest how the Framework can empower and educate individuals and civil society to mitigate privacy risk. We then propose resources worth including in the Framework’s proposed resource repository and recommend using the terminology of resilience in specifying the Framework’s goals. We highlight the importance of clarifications for the Framework’s delineation between privacy and cybersecurity risk, and we caution that guidance on AI and the IoT should focus on existing incidents of harm rather than hypothetical risks. Finally, we recommend that the Framework include guidance on risks associated with government access to data.

The Framework’s Role in Empowering Individuals and Civil Society to Mitigate Privacy Risk

The proposed framework properly identifies enterprise risk management as one piece of a larger set of security and privacy efforts. By focusing primarily on the role of enterprises in risk management, however, NIST’s approach risks understating the role that individuals and civil society play in protecting data. A majority of data breaches and incidents result from human error and are thus unintentional or inadvertent. Processes involving staff or users therefore deserve further scrutiny throughout the five Privacy Framework functions; specifically, the Framework should place greater emphasis on privacy awareness and training, communication of privacy policies internally and externally, and access control for data.

Education empowers individuals to adopt good cybersecurity practices and to take appropriate steps following a data breach. In this regard, NIST should take seriously the Privacy Framework’s role as an educational document for organizations. Aggregating resources and clarifying organizations’ responsibilities will help them avoid noncompliance with existing or forthcoming legislation. In addition to NIST’s own resources, the following resources are worth highlighting for the proposed repository of privacy resources:

  1. The Federal Trade Commission (FTC), in partnership with other federal agencies, has created OnGuardOnline, a website that offers privacy tips for individuals and businesses.
  2. The Future of Privacy Forum hosts a central repository of privacy resources regarding best practices for organizations.
  3. The Council of Better Business Bureaus has a set of data privacy guidelines for small businesses.
  4. TeachPrivacy is an educational platform for improving security and privacy awareness among employees.
  5. The Electronic Frontier Foundation offers tools to help individuals protect themselves online.
  6. The International Association of Privacy Professionals collects resources on organizational privacy policies, crafting a privacy notice, and existing privacy regulations.

Furthermore, developments in liability standards and the role of tort law in privacy cases are worth understanding when considering the overall regulatory environment in which data privacy decisions are made. NIST’s Privacy Framework is one piece of an existing web of privacy resources and solutions that help organizations self-regulate and improve baseline privacy through suggested best practices. Given that civil society—including professional organizations, trade associations, research centers, and advocacy groups—supplies privacy resources, NIST should include civil society as a distinct party in the data processing ecosystem in section 3.5 of the Framework. The Framework recognizes that “deriving benefits from data while simultaneously managing risks to individuals’ privacy is not well-suited to one-size-fits-all solutions.” The proposed repository of privacy resources ensures that multiple solutions are available to enterprises.

The Framework’s Role in Fostering a Resilient Data Processing Ecosystem

NIST’s Privacy Framework notes correctly that data actions can have unintended consequences for user privacy. Requirements and recommendations intended to mitigate risk can also result in unintended consequences. For example, requirements can foster a false sense of security against privacy risk because organizations feel that they have “checked all the boxes.” This belief can compromise an organization’s preparedness to deal with emerging or evolving privacy risk.

Regulatory frameworks can favor the most restrictive privacy preferences rather than supporting a wide range of individual preferences and potential improvements to security and privacy that may not yet be anticipated. In this regard, we commend NIST for viewing risk mitigation as a task that is ongoing and evolving. In short, privacy practices must be continuously reevaluated, and resilience to risk must be achieved over and over again. The framework acknowledges the importance of constant vigilance, organizational learning, and risk reassessment in the data processing ecosystem. NIST advances its framework with the following vision:

"The five Functions, defined below, are not intended to form a serial path or lead to a static desired end state. Rather, the [five] Functions should be performed concurrently and continuously to form or enhance an operational culture that addresses the dynamic nature of privacy risk."

While the Framework accurately acknowledges the governance challenge in a dynamic and rapidly evolving ecosystem, it occasionally undermines this stated vision. For example, the Framework aims to help organizations in “future-proofing products and services . . . in a changing technological and policy environment.” Future-proofing is an unachievable goal in a dynamic ecosystem because the future is unknowable and constantly changing. What’s more, attempting to future-proof using only existing technologies might actually prevent the emergence of new, innovative solutions that would better improve security and privacy. Instead, it is better to aim to foster resilient products and services and to encourage innovative solutions that can adapt to new threats. Resilience is a process of building the capacity to adapt to emerging threats. There is a tension between mitigating risk in its entirety and achieving resilience to breaches and security threats: exposure to risk is both inevitable and critical for allowing individuals inside and outside organizations to learn how to manage and adapt to privacy risk. Rather than focus on “future-proofing,” we encourage NIST to specify resilience to privacy risk as an end goal in the executive summary and introduction of the Privacy Framework.

Clarifying Privacy and Cybersecurity Risk

The Privacy Framework would benefit from further clarification of the distinction between cybersecurity and privacy risk management. Defining the boundaries between privacy and cybersecurity risk is challenging because of the subjective nature of privacy concerns. Understanding risk requires identifying and defining privacy harm, as well as differentiating that harm from cybersecurity harm. Section 1.2.1 of the Privacy Framework clarifies this distinction well for cybersecurity harm, but its definition of privacy risk remains overly broad because it does not clearly distinguish the risks, or lack thereof, associated with different types of personal information.

NIST’s categorization of privacy risk could use further clarification. For example, personally identifiable information is data that can be used to distinguish one person from another, such as Social Security numbers or biometric identifiers, and its exposure poses greater risks to users than the exposure of other types of data. An organization or bad actor knowing one’s credit card information or address is riskier than one knowing one’s ice cream flavor preference. Being specific about categories of data allows organizations to identify the riskiest data actions for scrutiny under the Framework. Without such distinctions, treating all information associated with an individual as carrying the same level of privacy risk can both deter future innovation and prevent already beneficial uses, resulting in an overly expansive impact that fails to actually address privacy concerns.

When setting standards for privacy and security, the focus should remain on preventing and mitigating clearly definable harms. This approach provides both consumers and providers with certainty regarding forbidden behaviors while still allowing innovative approaches to flourish. A harm-based approach that clearly defines the risks and the actions that lead to such harms will best minimize the effect on innovation by presuming new uses are allowable unless expressly forbidden. The Framework must also take into account the strong likelihood that citizens, as in the past, will adjust their privacy expectations in response to ongoing marketplace and technological change. Not everyone shares the same sensitivities or values, so defining privacy harm will continue to be a challenge. While the Framework takes positive initial steps in defining harms associated with cybersecurity risks, we suggest that further clarification is necessary, particularly with regard to existing privacy-associated harms and categories of data.

Enabling Emerging Technologies through an Adaptive Approach to Cybersecurity and Privacy Risks

We commend NIST for considering how the Framework will address privacy risks associated with emerging technologies such as the IoT and AI. It is important that the Framework not be so prescriptive and precautionary as to prevent the innovations that could come from such technologies. In the face of rapid technological change, soft law can facilitate a governance approach that evolves with and enables innovation better than traditional policy tools. Soft law includes rules that are not strictly binding, such as best practices and voluntary frameworks (e.g., the Privacy Framework and the Cybersecurity Framework).

When determining what resources and practices are necessary regarding privacy and security for new technologies, the Framework should focus on responding to known risks. This approach will minimize unintended consequences and maximize the potential benefits of innovation while providing appropriate redress for those harmed and levying penalties for actors who break or evade the law. Future recommendations for adapting to risks associated with the IoT and AI should focus on proven incidents of privacy risk and harm rather than hypothetical worst-case scenarios associated with these emerging technologies.

Artificial Intelligence

AI is increasingly used both to extract data and to defend against data extraction. The dual nature of this emerging technology complicates any comprehensive approach to mitigating cybersecurity or privacy risk in the data processing ecosystem. What’s more, certain AI applications, such as machine learning, rely on large datasets for training. Many of the concerns about such datasets can be addressed by existing tools regarding breach, security, and consent. Education, social pressure, societal norms, voluntary self-regulation, and targeted ex post enforcement through FTC action already work to constrain bad use cases. Beyond agency action, the courts and the common law are often able to address various issues through product liability or other appropriate claims on an ex post basis. These ex post, adaptive tools are better able to address concerns about potential misuse and abuse of data than are regulatory requirements or prescriptive frameworks.

As our Mercatus colleague Adam Thierer has written,

"In particular, policymakers should prioritize developing an appropriate understanding of the varied sector of artificial intelligence technologies from the outset and developing an appreciation for limitations of our ability to forecast either future AI technological trends or crises that may ultimately fail to materialize."

The Framework should focus on those AI applications that have been linked to specific harms rather than on applications merely perceived to carry high privacy risks. For example, AI-driven predictive policing and criminal sentencing software raise known civil-liberties concerns; in these cases, additional accountability mechanisms may be appropriate. Basing policy on evidence, rather than on fear of worst-case uses of AI, can strike the right balance between mitigating privacy risk and promoting innovation.

The Internet of Things

The IoT is a network of connected devices that send and receive data, ranging from smartphones and computers to autonomous vehicles and mesh networks. As the number and variety of connected devices have grown, so have privacy and cybersecurity concerns. Yet such technologies offer several benefits to consumers, including helping disabled and aging populations achieve greater independence. Rather than focus exclusively on potential data security and privacy problems, the Framework should also acknowledge the benefits of IoT devices. When addressing harm, guidance should be narrowly tailored to the exact harm and the specific technology involved rather than taking a broad approach that could have unintended consequences for both current and future connected devices and data usage.

The governance challenge facing the IoT data ecosystem is unique because of the ecosystem’s dynamism, complexity, and decentralized and distributed nature. A narrow prescription for IoT privacy practices could discourage flexibility and undermine the ecosystem’s ability to manage risk. IoT devices are also increasingly intertwined with AI capabilities. For example, smart routers use a combination of AI and cloud services to recognize, monitor, and identify threats from malware and botnets to household IoT devices. As with AI, it is important that policy suggestions related to the IoT be responsive to known challenges rather than anticipatory, and that they focus on known harms and risks rather than apply to a general-purpose technology.

While it is useful for NIST to think about the impact of AI and the IoT, many of the concerns are tied to the same underlying cybersecurity and privacy risks associated with existing internet technologies. NIST’s guidance should also recognize that these emerging technologies have the potential to prevent certain risks through an improved ability to identify security weaknesses as well as opportunities to decentralize information or improve authentication for access.

Distinguishing Government Requests and Use of Data from Private Industry Data Usage

Currently, the Privacy Framework focuses only on data actions associated with enterprise data collection and usage. The Framework should be careful to distinguish government use of data from its use by private industry or civil society.

Law enforcement requests for access and government-mandated access to user data complicate the task of mitigating privacy risk. Government requests for user data have increased steadily as more activity has moved online. Privacy risks associated with foreign or domestic government access to user data should be addressed in each of the five Privacy Framework functions, starting with “Identify-P.” Documents covering requirements under Section 702 of the FISA Amendments Act of 2008 and Mutual Legal Assistance Treaty requirements for cross-border data flows should be included in the proposed resource repository.

Notably, guardrails can be established that limit potential abuse of technology by the government while still allowing beneficial uses by both the public and private sectors. For example, placing harms-based restrictions on specific government uses, rather than on a technology more generally, can help reduce the potential for abuse.

With the growth of data generated by connected devices, from smartphones to scooters, government access to this data, both through its own collection and from private industry, is increasingly likely to be part of the policy conversation around emerging technologies. Such data can be useful to government entities for providing services such as public transportation, but it also carries a potential for abuse that could limit individual freedom. For example, the government’s ability to identify and actively track its citizens can produce false positives that lead to wrongful detainments and arrests. It is important for the Framework to acknowledge these risks by providing access to appropriate policy resources, as well as to distinguish government data actions from consumer and private actions.

Conclusion

We agree with NIST’s vision to provide a common language for stakeholders and improve privacy through enterprise risk management. The role of government in addressing cybersecurity and privacy risk is to foster a policy environment such that a wide set of solutions can evolve. By cultivating a resource repository, promoting the adoption of guidelines and best practices, and encouraging enterprises to dynamically respond to harm, the NIST Privacy Framework promises to do just that.

We encourage NIST to emphasize resilience to privacy risk as an aim of the Framework, to limit guidance and restrictions on data related to AI and the IoT to known, existing privacy risks, and to consider potential civil-liberty and privacy risks related to government access to data. Above all, we encourage a framework rooted in harm rather than a more amorphous standard that could deter future innovation and the development of better tools that allow individuals and enterprises to manage their privacy and security risks.

Additional details

Agency: National Institute of Standards and Technology

Comment Period Opens: September 9, 2019

Comment Period Closes: October 24, 2019

Comment Submitted: October 24, 2019

Docket No. 2019-19315
