Phasing Out Certificate-of-Need Laws: A Menu of Options

February 25, 2020

Research Shows that CON Laws Do Not Achieve Their Objectives

Certificate-of-need (CON) laws in healthcare are currently found in 36 states and the District of Columbia. These laws require those aspiring to offer certain medical services, acquire certain devices, or open or expand particular medical facilities to first obtain authorization from a regulatory authority. Four decades of research show that CON laws are associated with limited access, diminished quality, and higher costs of care. The most promising CON reform, therefore, is complete repeal, a strategy that has been successfully pursued by 15 states comprising nearly 40 percent of the US population. Complete repeal, however, is politically difficult, given the outsized influence of incumbent providers, who have an interest in maintaining the current system. In this policy brief we therefore offer a menu of alternative reforms that can limit the anticompetitive effects of CON laws and illuminate a path toward more comprehensive reform in the future.

Unlike other forms of regulation, the immediate goal of CON is not to assess the provider’s qualifications. Instead, regulators attempt to determine whether the service is needed by the community, an assessment that in most other markets is made by entrepreneurs with an eye toward expected profitability.

The federal government once encouraged states to adopt these procedures with the 1974 passage of the National Health Planning and Resources Development Act. It withheld federal funding from states that failed to adopt CON laws. The goals of the legislation, often echoed by CON advocates today, were (1) to ensure an adequate supply of healthcare services, (2) to enhance access to care for rural populations, (3) to encourage higher-quality care, (4) to encourage more charity care for impoverished and underserved communities, (5) to encourage the use of lower-cost healthcare alternatives such as ambulatory care, and (6) to lower the cost of care. Early research suggested that CON laws were failing to achieve these ends, and by the mid-1980s, Congress eliminated the mandate.

Several states immediately repealed their CON laws, and over the years, others have followed. Today, nearly 40 percent of the US population lives in states without a CON law in healthcare. Over the past several decades economists and other researchers have compared outcomes in these states with those in CON states to assess the effects of CON on access, quality, and cost. Their examinations use regression analyses with controls that account for potentially confounding factors such as demographics and local economic conditions. This research shows that CON laws have failed to achieve their goals.
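
To illustrate the general shape of these analyses, the sketch below estimates a simple state-level regression of one access measure on a CON-law indicator with demographic and economic controls. It is a minimal, hypothetical illustration of the approach; the data file and variable names are our own and do not reproduce any particular study.

  # Minimal sketch of the kind of regression used in the CON literature.
  # The data file and variable names are hypothetical illustrations.
  import pandas as pd
  import statsmodels.formula.api as smf

  df = pd.read_csv("state_year_panel.csv")  # hypothetical state-year panel

  # Outcome: hospital beds per 1,000 residents. Treatment: CON-law indicator.
  # Controls stand in for demographics and local economic conditions.
  model = smf.ols(
      "beds_per_1000 ~ has_con_law + median_income + pct_over_65"
      " + pct_rural + unemployment_rate + C(year)",
      data=df,
  ).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

  # A negative coefficient on has_con_law would indicate fewer beds per
  # capita in CON states, holding the controls constant.
  print(model.summary())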

CON Laws Reduce Access

Consider access to care. A CON law explicitly restricts the supply of services, so economic theory suggests that it is unlikely to expand access. Indeed, this is what the data show. Controlling for other factors, relative to patients in non-CON states, patients in CON states have access to fewer hospitals per capita, fewer hospital beds per capita, fewer dialysis clinics, fewer ambulatory surgical centers, fewer medical imaging services, and fewer hospice care facilities.

Nor do CON laws seem to expand access to care for certain vulnerable populations. There is no greater provision of charity care in CON states than in non-CON states. Not only are there fewer hospitals and fewer ambulatory surgical centers in CON states, there are also fewer rural hospitals and fewer rural ambulatory surgical centers. This helps explain why patients in CON states must drive farther to obtain care and are more likely to seek care in a different state. There is also greater racial disparity in the provision of certain services in CON states than in non-CON states. Finally, hospitals in CON states are less adaptable to change.

CON Laws Lower Quality

These restrictions on access do not seem to have resulted in higher-quality care. In fact, research suggests that patients in CON states have higher mortality rates following heart attacks, heart failure, and pneumonia. And patients in states with four or more CON laws have higher readmission rates following heart attacks and heart failure, more postsurgery complications, and lower patient satisfaction levels.

CON Laws Increase Cost

Finally, both costs per procedure and spending per patient (or resident) are higher in CON states than in non-CON states.

Reform Option 1: Full Repeal

The experience of the past half century suggests that full CON repeal would increase access to lower-cost, higher-quality healthcare. Despite these social benefits, however, full repeal is politically difficult. This is because the anticompetitive benefits of CON laws redound to a small and highly organized group of incumbent providers, while the costs fall on a large, diffuse, and politically unorganized group of patients, taxpayers, and would-be providers. This problem was first laid out by economist Mancur Olson. He noted that when public policy imposes costs on a large group of citizens, individual members of that group have little incentive to organize against the policy. For one thing, for any one member of the group, the costs of political engagement in such a fight are typically greater than the benefits that he or she stands to gain from policy change. For another, members of large and diffuse groups have a strong incentive to free ride on the political engagement of others in their group, and this incentive makes all members of such groups less likely to become engaged at all. To compound the problem, those who bear the costs of CON laws are often unaware that these rules even exist. In contrast, those who benefit from CON laws—generally large, existing hospitals—are few in number, well acquainted with these rules, and typically able to organize to defeat any reforms.

In light of this political reality, the most successful reforms are likely to be those that allow policymakers to cast conspicuous votes for the general interest while giving them some cover as they remove special interest privileges. In the rest of this policy brief, we outline suggestions to enable policymakers to ease their state’s CON requirements.

Reform Option 2: Partial Repeal

CON laws cover a wide assortment of technologies and procedures: everything from new hospitals and hospital beds to air ambulances and radiation therapy. With 30 separate CON requirements, Vermont requires a CON for more services and technologies than any other state. At the other end of the spectrum is Ohio, which requires a CON only for nursing home beds. Research suggests that the negative effects of CON requirements on hospital quality may be cumulative: in states with four or more CONs, postsurgery complications and readmission rates following heart attacks and heart failure are higher, while patient satisfaction levels are lower.

One potential path to reform, recently pursued by West Virginia and Florida, is to eliminate CON requirements for certain services or technologies. In 2017, West Virginia legislators eliminated the need for a CON for telehealth, remedial care, ambulatory health facilities, and imaging services. And in 2019, Florida legislators eliminated the need for a CON for new hospitals; specialty hospitals converting to general hospitals; children’s, women’s, specialty medical, rehabilitation, psychiatric, and substance abuse hospitals; and intensive residential treatment facilities for children.

If policymakers wish to eliminate certain varieties of CONs, there are a number of promising options.

Eliminate CONs That Harm Vulnerable Populations

Good candidates for repeal are CONs that restrict access to services utilized by particularly vulnerable populations, such as CONs for drug and alcohol abuse treatment centers (found in 24 states), CONs for psychiatric care facilities (found in 28 states), or CONs for intermediate-care facilities for those with intellectual disabilities (found in 28 states).

Eliminate CONs for Procedures That Are Unlikely to Be Overprescribed

Another option would be to eliminate CONs for procedures that are unlikely to be overprescribed. In these cases, the rationale for a CON is especially weak. Options here include the elimination of CONs for neonatal intensive care units (found in 22 states), CONs for burn care units (found in 14 states), and CONs for hospice care facilities (found in 18 states).

Eliminate CONs for Low-Cost Modes of Care

Another option is to eliminate CONs that restrict access to lower-cost modes of care. These reforms make sense, given that one of the initial goals of CON regulations was to encourage the use of lower-cost, ambulatory care. Options here include the elimination of CONs for ambulatory surgical centers (found in 28 states) or CONs for home healthcare facilities (found in 19 states).

Eliminate CONs for Small Investments

Many states have a capital investment threshold that triggers the requirement of a CON. The lower the threshold, the more minor the investments that require a CON. A low threshold discourages new providers from entering a market and makes it difficult for existing providers to modify their services in response to changes in demand or technology. One simple reform that can significantly ease the CON burden is to raise these thresholds.

Reform Option 3: Phased Repeal

There are several options to gradually eliminate CON laws.

A Time-Bound Phaseout

In the 1980s, 19 states repealed or scaled back their CON laws. Two states, Arkansas and Colorado, had adopted legislation committing to repeal their regulations in the event that the federal mandate was eliminated. Eight other states (California, Idaho, Indiana, Kansas, Mississippi, Montana, Wisconsin, and Wyoming) had adopted sunset clauses that ensured that the regulations would be eliminated after a certain period of time; given the elimination of the federal mandate, these sunsets were allowed to take place.

In 1992, Pennsylvania’s statutes were modified to contain a sunset clause under which the CON system would automatically terminate after four years. As the sunset date drew near, special interests and the governor supported an extension of the CON law. The legislature, however, resisted, and the state’s CON regulations were allowed to sunset in 1996. In their place, however, the state began enforcing licensing provisions that focused on whether a proposed project met certain quality requirements. More recently, New Hampshire passed a law in 2012 that repealed its CON law, effective 2015. There, too, special interests mounted an unsuccessful defense of the program (though they did manage to delay the repeal by one year).

Other states could replicate these examples with legislation sunsetting individual CON mandates or entire CON programs.

A Temporary Elimination

As an alternative to a time-bound phaseout, states could pursue the reverse strategy: they might eliminate one or more CON laws for a set period of time as a way of testing what full repeal would be like. Such a trial period would give lawmakers the opportunity to review the effects of repeal on healthcare access, quality, and costs. However, it is not clear that providers would be willing to undertake investments in such an uncertain regulatory environment, which could skew the results of the experiment. Moreover, this method risks allowing CON requirements to return before the legislature has had a chance to act.

Gradual Increases in the Approval Rate

Another option would be to require CON boards to approve an increasing share of applications over successive years. For example, in Florida the approval rate from 2014 to 2016 was about 45 percent. A state like Florida might establish a four-year schedule under which it would approve 55 percent of applications in year one, 65 percent in year two, 85 percent in year three, and 100 percent in year four. This would allow for a gradual transition away from CON restrictions.

Reform Option 4: Repeal Contingent on the Actions of Others

In another variation, states might attempt to replicate the Arkansas and Colorado path to reform by making CON repeal contingent on the actions of policymakers elsewhere. For example, a state might pass legislation that would automatically eliminate its CON program in the event that certain neighboring states did away with their own programs. This would allow policymakers to institute a reform that benefits the general interest while limiting the ability of special interests to counter it.

This approach may be viable given that both patients and providers are influenced by the policies of neighboring states. As we have already noted, compared with patients in non-CON states, patients in CON states are more likely to seek out-of-state care. And as telemedicine continues to advance, it will get easier for them to do so. Providers, too, may be tempted to explore alternative investments in neighboring states if doing so allows them to avoid a long and costly regulatory process.

In short, state policymakers may be compelled to liberalize their CON laws as those around them liberalize, and a reform that is contingent on the actions of neighboring states may be easier to accomplish than an outright repeal.

Reform Option 5: Administrative Relief

CON application processes are expensive and time consuming. Here, we suggest ways to alleviate those burdens.

Fee Reduction

Application fees vary widely from state to state. Connecticut prices applications at just $300, but in many states an application can cost thousands of dollars. In Virginia, the application fee can reach $20,000. In North Carolina, the base application fee is $5,000 plus 0.3 percent of the capital costs of the project if those costs are greater than $1,000,000, for a total fee of up to $50,000.

There seems to be no economic rationale for these discrepancies. However, if boards have discretion over the fee schedule and are funded by application fees, they have a perverse incentive to maximize those fees. Lower application fees would ease one hurdle in the way of healthcare access.
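
To make the North Carolina formula concrete, the short sketch below computes the fee as described above: a $5,000 base plus 0.3 percent of capital costs when those costs exceed $1,000,000, capped at $50,000 in total. The function name and example project costs are our own illustration.

  def nc_con_application_fee(capital_cost: float) -> float:
      """Sketch of the North Carolina fee formula described above."""
      fee = 5_000.0
      if capital_cost > 1_000_000:
          fee += 0.003 * capital_cost  # 0.3 percent of capital costs
      return min(fee, 50_000.0)  # total fee capped at $50,000

  # A $10 million project: $5,000 + 0.3% of $10,000,000 = $35,000.
  print(nc_con_application_fee(10_000_000))  # 35000.0
  # At $15 million and above, the $50,000 cap binds.
  print(nc_con_application_fee(20_000_000))  # 50000.0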

Simplified Application and Reporting Requirements

High fees are not the only obstacle. Applications are costly in other ways. Many are long and require complex calculations and forecasts. Providers can spend years and hundreds of thousands of dollars in attorney and consultant fees. By way of example, Illinois’s application template is 78 pages long. One Virginia radiology center spent five years and $175,000 applying for a CON. CON boards should consider simplifying the reporting requirements to make the process more manageable for applicants.

Reform Option 6: Modification of Criteria

States use different criteria to evaluate whether a service is “needed” in a certain community.

Eliminate the Nonduplication Criterion

In several states, applicants are required to demonstrate that the service, facility, or technology they wish to offer will not duplicate an already-existing service. If regulators determine that the applicant is likely to duplicate an existing service, the CON is denied and the incumbent provider is effectively guaranteed a monopoly.

Not all services and facilities can be compared in an apples-to-apples fashion. For example, physician Mark Baumel wanted to open a virtual colonoscopy clinic in northern Virginia. It would have used CT scanners to conduct noninvasive alternatives to traditional colonoscopies. His request was denied on the grounds that CT scanners were already being used by another provider in northern Virginia, even though that provider was not performing virtual colonoscopies.

Beyond this practical consideration, there is no reason to prevent the duplication of a service. When more providers offer similar care, each has an incentive to compete on price and quality so as to attract and retain customers. As in other markets, healthcare quality tends to be higher and prices tend to be lower when there is more competition. This explains why the Federal Trade Commission and the Department of Justice, under both Democratic and Republican leadership, have long maintained that CON laws are anticompetitive. For these reasons, states should consider eliminating the nonduplication criterion.

Eliminate the Utilization Criterion

In certain states, CON boards assess the need for a new hospital by measuring the utilization of existing ones. For example, they will count the number of occupied beds or how often pieces of equipment are used. If existing facilities have low utilization rates, the board concludes that providers have overbuilt facilities and that any additional services of the same type would be wasteful.

This approach is flawed in a few regards. First, practically speaking, current utilization may not reflect needed utilization. For example, states that are prone to natural disasters may need the capacity to accommodate many more patients than average utilization levels suggest. Second, incumbent hospitals are aware of the utilization criterion, which gives them an incentive to overinvest in equipment and to underutilize the equipment they have so as to ensure that rivals will be denied CONs. Ironically, this encourages the very overbuilding the regulation was intended to prevent. Third, patients often know what services providers do and do not offer, especially in the case of specialty services. For example, patients are likely to know whether a certain hospital has a neonatal intensive care unit, and they are unlikely to show up and ask for neonatal intensive care if the hospital doesn’t have a unit. In this case, the utilization rate fails to capture the fact that some patients do indeed need the service.

In sum, the utilization criterion creates perverse incentives and risks understating the true need for new services. State policymakers should consider eliminating it.

Narrow the Geographic Scope of Analysis

As we have noted, CON regulators assess need based on the current level of care offered by existing providers. A provider on the far side of a state is not able to offer convenient care, however, so one simple reform would be to narrow the geographic scope of the analysis to ensure that need is being assessed on a local basis.

Increased Transparency

The pathway to reform can be illuminated by transparency measures, especially those that shed light on the fact that CON laws afford special interests anticompetitive benefits while imposing costs on patients and would-be competitors. We suggest seven practical steps that states can take to discourage anticompetitive practices and make more ambitious reforms more likely.

Disclose Approval Rates

As we have noted, Florida approved about 45 percent of CON requests from 2014 to 2016. We know this only because, prior to testifying in the Florida House of Representatives, we requested this information from the deputy secretary of the Florida Agency for Health Care Administration Division of Health Quality Assurance. This figure is not available on the CON board’s website, however, and publishing it would be an easy way to make the process more transparent.

Disclose Applications Opposed by Incumbents

The CON process is anticompetitive because it gives incumbent providers a direct opportunity to oppose the entrance of would-be competitors. To mitigate the risk of special interests unduly influencing the decision of the board, states could require CON boards to disclose the percentage of applications that are opposed by incumbent hospitals and providers. Ideally, boards would release all data from past applications and pair it with approval rates, as that would enable analysts to see if incumbent opposition makes approval less likely.

Disclose Applicants’ and Incumbents’ Donations to Political Action Committees

In order to inform the public of the interests at play and to ensure objectivity throughout the review process, states could require both applicants and those opposing applications to disclose their donations to political action committees (PACs). It is common for CON board members to be affiliated with political parties, which may lead to subjective assessments and politically motivated decisions. Moreover, recent research suggests that PAC contributions can affect the likelihood of CON approval. Mandating the disclosure of donations would serve to mitigate this problem.

Disclose CON Board Members’ Financial Ties

Along similar lines, states could require that the identities and financial interests of all members of the CON board be disclosed to the public. (Most states do already require this.) Boards are often composed of public officials and healthcare insiders, both of whom have certain interests in the handling of applications. In its decision in North Carolina State Board of Dental Examiners v. FTC, the US Supreme Court ruled that when boards are dominated by members of the professions they oversee and when elected officials fail to exercise adequate control over these boards, states may be liable for antitrust violations. More broadly, the presence of industry insiders on CON boards may allow the CON system to function as a cartel. By making the composition of the board transparent, states would shed light onto members’ potential motivations.

Ensure That Boards Are Not Dominated by Industry Insiders

Boards should not be dominated by the members of the professions or businesses they oversee. States should ensure that a clear and controlling majority of CON board members do not have financial ties to the existing healthcare industry. Ideally, boards would be composed of disinterested professionals who are acquainted with the economics of CON regulation and are interested in improving the health and safety of the public, not in protecting incumbent providers from competition.

Disclose Applicants’ Compliance Costs

We noted above that application costs can reach tens of thousands of dollars. In many cases, the total cost of applying for a CON is much higher, as illustrated by the case of a Virginia doctor who wished to purchase a second MRI scanner for his practice group and spent $175,000 preparing the lengthy application. A large share of the costs went to attorney and consultant fees. Even in states where application fees are low, applicants can be deterred by these sorts of steep compliance costs. And if they are not deterred, they will end up expending valuable resources on the application when those resources could instead have been invested in patient services. To make this burden apparent to potential applicants and the public, states could require that applicants report their own compliance costs, such as the number of full-time employee hours used in preparing the application, the labor cost, and the opportunity cost of the time spent preparing it.

Duty to Follow Up after Application Denial

To make the consequences of denials apparent, states would be well advised to follow up with providers whose applications have been denied and ask how their inability to offer the requested service has affected their overall performance. A recent incident in southwestern Virginia illustrates the point. In 2010, a local hospital applied for a certificate to create a neonatal intensive care unit (NICU). Despite overwhelming support from the local community, the board was swayed by an incumbent provider’s claim that the new service was superfluous. Two years after the denial, a pregnant woman came to the hospital in premature labor. Lacking a NICU, the hospital requested transport to the nearest facility with such a unit, but the baby died before transport could arrive.

Conclusion

A large body of academic research suggests that CON laws limit access, degrade quality, and increase cost. Given this evidence, state policymakers who wish to increase patient access to high-quality, lower-cost care would be well advised to eliminate their entire CON programs.

Building in Accountability for Algorithmic Bias

Monday, February 17, 2020

“Algorithms are only as good as the data that gets packed into them,” said Democratic presidential hopeful Elizabeth Warren. “And if a lot of discriminatory data gets packed in, if that’s how the world works, and the algorithm is doing nothing but sucking out information about how the world works, then the discrimination is perpetuated.”

Warren’s critique of algorithmic bias reflects a growing concern surrounding our interaction with algorithms every day.

Algorithms leverage big datasets to make or influence decisions ranging from movie recommendations to creditworthiness. Before algorithms, humans made these decisions in advertising, shopping, criminal sentencing, and hiring. Legislative concerns center on bias: the capacity of algorithms to perpetuate gender bias and racial and minority stereotypes. Nevertheless, current approaches to regulating artificial intelligence (AI) and algorithms are misguided.

The European Union enacted stringent data protection rules requiring companies to explain publicly how their algorithms make decisions. Similarly, the US Congress has introduced the Algorithmic Accountability Act, which would regulate how companies build their algorithms. These actions reflect the two most common approaches to addressing algorithmic bias: transparency and disclosure. In effect, such regulations require companies to publicly disclose the source code of their algorithms and explain how they make decisions. Unfortunately, this strategy would fail to mitigate AI bias because it regulates the business model and inner workings of algorithms rather than holding companies accountable for outcomes.

Research shows that machines can treat similarly situated people and objects differently. Algorithms risk reproducing or even amplifying human biases in certain cases. For example, automated hiring systems make decisions at a faster pace and larger scale than their human counterparts, making any bias more pronounced.

However, research has also shown that AI can be a helpful tool for improving social outcomes and gender equality. For example, Disney uses AI to help identify and correct human biases by analyzing the output of its algorithms. Its machine learning tool allows the company to compare the number of male and female characters in its movie scripts, as well as other factors such as the number of speaking lines for characters based on their gender, race, or disability.
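
A rough sketch of this kind of audit appears below: it tallies speaking lines by character gender in an annotated script. The data format and field names are hypothetical illustrations of the approach, not a description of Disney's actual tool.

  # Sketch of a script audit that tallies speaking lines by gender.
  # The records and field names are hypothetical illustrations.
  from collections import Counter

  script_lines = [
      {"character": "Ava", "gender": "female", "line": "We leave at dawn."},
      {"character": "Ben", "gender": "male", "line": "I'll ready the ship."},
      {"character": "Ava", "gender": "female", "line": "Good."},
  ]

  lines_by_gender = Counter(entry["gender"] for entry in script_lines)
  total = sum(lines_by_gender.values())
  for gender, count in lines_by_gender.items():
      print(f"{gender}: {count} lines ({count / total:.0%})")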

AI and algorithms have the potential to increase social and economic progress. Therefore, policymakers should avoid broad regulatory requirements and instead focus on guidelines and policies that address harms in specific contexts. For example, algorithms making hiring decisions should be treated differently than algorithms that produce book recommendations.

Promoting algorithmic accountability is one targeted way to mitigate problems with bias. Best practices should include a review process to ensure the algorithm is performing its intended job.

Furthermore, laws that apply to human decisions must also apply to algorithmic decisions. Employers must comply with anti-discrimination laws in hiring; the same principle should apply to the algorithms they use.
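
As one concrete example of outcome-based accountability, the sketch below compares a hiring algorithm's selection rates across applicant groups. The four-fifths threshold used here is a common screening rule of thumb in US employment-discrimination analysis; the groups and figures are hypothetical.

  # Sketch of an outcome audit: compare selection rates across groups
  # and flag large gaps. The figures below are hypothetical.
  def selection_rate(selected: int, applicants: int) -> float:
      return selected / applicants

  rates = {
      "group_a": selection_rate(50, 200),  # 25 percent selected
      "group_b": selection_rate(30, 200),  # 15 percent selected
  }

  impact_ratio = min(rates.values()) / max(rates.values())
  print(f"impact ratio: {impact_ratio:.2f}")  # 0.60
  if impact_ratio < 0.8:  # four-fifths rule of thumb
      print("flag for review: selection rates diverge across groups")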

In contrast, requiring organizations to explain how their algorithms work would prevent companies from using entire categories of algorithms. For example, machine learning algorithms construct their own decision-making systems from databases of characteristics without exposing the reasoning behind their decisions. Focusing on accountability for outcomes instead leaves operators free to choose the best methods of ensuring that their algorithms do not perpetuate biases, which in turn improves the public’s confidence in their systems.

Transparency and explanations have other positive uses. For example, there is a strong public interest in requiring transparency in the criminal justice system. The government, unlike a private company, has constitutional obligations to be transparent. Thus, transparency requirements for the criminal justice system through risk assessments can help prevent abuses of civil rights.

The Trump administration recently released a new policy framework for artificial intelligence. It offers guidance for emerging technologies that both supports new innovations and addresses concerns about disruptive technological change. This is a positive step toward finding sensible and flexible solutions to the AI governance challenge. Concerns about algorithmic bias are legitimate, but the debate should center on a nuanced, targeted approach to regulation and avoid treating algorithmic disclosure as a cure. A regulatory approach centered on transparency requirements could do more harm than good. Instead, an approach that emphasizes accountability ensures that organizations use AI and algorithms responsibly to further economic growth and social equality.

Regulators Wonder if Cancer Patients ‘Need’ New Treatments

Wednesday, October 30, 2019
Authors: 
Matthew D. Mitchell

Matthew D. Mitchell and Anna Parsons write on regulatory hurdles faced in introducing a new cancer drug, despite the drug already having been approved by the FDA.

Read it at the Wall Street Journal.

Mitigating Privacy Risks While Enabling Emerging Technologies

October 24, 2019

We appreciate the opportunity to provide comments on the preliminary draft NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management (hereafter the Privacy Framework or the Framework) of the National Institute of Standards and Technology (NIST). The Mercatus Center at George Mason University is dedicated to bridging the gap between academic ideas and real-world problems and to advancing knowledge about the effects of regulation on society. This comment does not represent the views of any particular party or special interest group but is designed to assist NIST in creating a policy environment that will facilitate increased innovation, competition, and access to technology to the benefit of the public.

We applaud NIST’s efforts to empower enterprises to mitigate risk while also recognizing the potential impact of standards on emerging technologies. The request for comment recognizes that the digital space is a “complex ecosystem” with multiple stakeholders. The Framework identifies stakeholders within the data processing ecosystem including manufacturers, government service providers, individuals, commercial service providers, developers, businesses, and suppliers. The complexity of this ecosystem requires that the Privacy Framework function as one part of a wider set of solutions and resources developed by stakeholders to mitigate risk. Toward this end, this comment addresses two of NIST’s requests for review:

  1. Whether the preliminary draft adequately defines the relationship between privacy and cybersecurity risk.
  2. Whether the draft enables organizations to adapt to privacy risks arising from emerging technologies such as the internet of things (IoT) and artificial intelligence (AI).

In the sections that follow, we first suggest how the Framework can empower and educate individuals and civil society to mitigate privacy risk. We then propose resources worth including in the Framework’s proposed resource repository and recommend using the terminology of resilience in specifying the Framework’s goals. We highlight the importance of clarifications for the Framework’s delineation between privacy and cybersecurity risk, and we caution that guidance on AI and the IoT should focus on existing incidents of harm rather than hypothetical risks. Finally, we recommend that the Framework include guidance on risks associated with government access to data.

The Framework’s Role in Empowering Individuals and Civil Society to Mitigate Privacy Risk

The proposed framework properly identifies enterprise risk management as one piece of a larger set of security and privacy efforts. By focusing primarily on the role of enterprises in risk management, however, NIST’s approach risks diminishing the role that individuals and civil society play in protecting data. A majority of data breaches and incidents result from human error and are thus unintentional or inadvertent. Processes involving staff or users therefore deserve further scrutiny throughout the five privacy framework functions; specifically, the framework should further emphasize privacy awareness and training, communication of privacy policies internally and externally, and access controls for data.

Education empowers individuals to adopt good cybersecurity practices and engage in appropriate steps following a data breach. In this regard, NIST should take seriously the Privacy Framework’s role as an educational document for organizations. Aggregating resources and clarifying the responsibilities of organizations will better help these organizations avoid noncompliance with existing or forthcoming legislation. In addition to NIST’s own resources, the following resources for the proposed repository of privacy resources are worth highlighting:

  1. The Federal Trade Commission (FTC), in partnership with other federal agencies, has created OnGuardOnline, a website that offers privacy tips for individuals and businesses.
  2. The Future of Privacy Forum hosts a central repository of privacy resources regarding best practices for organizations.
  3. The Council of Better Business Bureaus has a set of data privacy guidelines for small businesses.
  4. TeachPrivacy is an educational platform for improving security and privacy awareness among employees.
  5. The Electronic Frontier Foundation offers tools to help individuals protect themselves online.
  6. The International Association of Privacy Professionals collects resources on organizational privacy policies, crafting a privacy notice, and existing privacy regulations.

Furthermore, developments in liability standards and the role of tort law in privacy cases are worth understanding when considering the overall regulatory environment in which data privacy decisions are made. NIST’s privacy framework is one piece of an existing web of privacy resources and solutions that help organizations self-regulate and improve baseline privacy through suggested best practices. Given that civil society—including professional organizations, trade associations, research centers, and advocacy groups—supplies privacy resources, NIST should include civil society as a distinct party in the data processing ecosystem in section 3.5 of the Framework. The Framework recognizes that “deriving benefits from data while simultaneously managing risks to individuals’ privacy is not well-suited to one-size-fits-all solutions.” The proposed repository of privacy resources ensures that multiple solutions are available to enterprises.

The Framework’s Role in Fostering a Resilient Data Processing Ecosystem

NIST’s Privacy Framework notes correctly that data actions can have unintended consequences for user privacy. Requirements and recommendations intended to mitigate risk can also result in unintended consequences. For example, requirements can foster a false sense of security against privacy risk because organizations feel that they have “checked all the boxes.” This belief can compromise an organization’s preparedness to deal with emerging or evolving privacy risk.

Regulatory frameworks can favor the most restrictive privacy preferences rather than supporting a wide range of individual preferences and potential improvements to security and privacy that may not yet be anticipated. In this regard, we commend NIST for viewing risk mitigation as a task that is ongoing and evolving. In short, privacy practices must be continuously reevaluated, and resilience to risk must be achieved over and over again. The framework acknowledges the importance of constant vigilance, organizational learning, and risk reassessment in the data processing ecosystem. NIST advances its framework with the following vision:

"The five Functions, defined below, are not intended to form a serial path or lead to a static desired end state. Rather, the [five] Functions should be performed concurrently and continuously to form or enhance an operational culture that addresses the dynamic nature of privacy risk."

While the Framework accurately acknowledges the governance challenge in a dynamic and rapidly evolving ecosystem, it occasionally undermines this stated vision. For example, the Framework aims to help organizations in “future-proofing products and services . . . in a changing technological and policy environment.” Future-proofing is an unachievable goal for a dynamic ecosystem because the future is unknowable and constantly changing. What’s more, looking to future-proof using only existing technologies might actually prevent the emergence of new, innovative solutions that would better improve security and privacy. Instead, it is better to aim at fostering resilient products and services and encourage innovative solutions that can adapt to new threats. Resilience is a process of building the capacity to adapt to emerging threats. There is a tension between mitigating risk in its entirety and achieving resilience to breaches and security threats. Exposure to risk is both inevitable and critical for allowing individuals inside and outside organizations to learn how to manage and adapt to privacy risk. Rather than focus on “future-proofing,” we encourage NIST to specify resilience to privacy risk as an end goal in the executive summary and introduction of the Privacy Framework.

Clarifying Privacy and Cybersecurity Risk

The Privacy Framework would benefit from further clarification of the distinction between cybersecurity and privacy risk management. Defining the boundaries between privacy and cybersecurity risk is challenging because of the subjective nature of privacy concerns. Understanding risk requires identifying and defining privacy harm, as well as differentiating that harm from cybersecurity harm. Section 1.2.1 of the Privacy Framework clarifies this distinction well for cybersecurity harm, but its definition of privacy risk remains overly broad because it does not clearly distinguish the risks, or lack of risks, associated with different types of personal information.

NIST’s categorization of privacy risk could use further clarification. For example, personally identifiable information is defined as data that can be used to distinguish one person from another, such as Social Security numbers or biometric identifiers, and its exposure poses greater risks to users than the exposure of other types of data. An organization or bad actor that knows one’s credit card information or address poses a greater risk than one that knows one’s ice cream flavor preference. It is important to be specific about categories of data so that organizations can identify the riskiest data actions for scrutiny under the Framework. Without such distinctions, treating all information associated with an individual as posing the same level of privacy risk can deter future innovation, prevent already beneficial uses, and produce an overly expansive regime that fails to address actual privacy concerns.
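
To make the point concrete, the sketch below tiers data categories by the risk their exposure poses, so that the riskiest data actions receive the most scrutiny. The categories and tier assignments are our own illustration, not NIST's.

  # Hypothetical illustration of tiering data categories by exposure risk.
  RISK_TIERS = {
      "high": {"social_security_number", "biometric_id", "credit_card_number"},
      "medium": {"home_address", "precise_location"},
      "low": {"ice_cream_preference", "favorite_color"},
  }

  def risk_tier(field: str) -> str:
      for tier, fields in RISK_TIERS.items():
          if field in fields:
              return tier
      return "unclassified"  # default to manual review, not "no risk"

  print(risk_tier("credit_card_number"))    # high
  print(risk_tier("ice_cream_preference"))  # low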

When setting standards for privacy and security, the focus should remain on preventing and mitigating clearly definable harms. This approach provides both consumers and providers with certainty regarding forbidden behaviors while still allowing innovative approaches to continue to flourish. A harm-based approach that clearly defines the risks and actions that lead to such harms will best minimize the effect on innovation by presuming new uses allowable unless previously forbidden. The Framework must also take into account the strong likelihood that citizens, as in the past, will adjust their privacy expectations in response to ongoing marketplace and technological change. Not everyone shares the same sensitivities or values, and therefore defining privacy harm will continue to be a challenge. While the Framework takes positive initial steps in defining harms associated with cybersecurity risks, we suggest that further clarification is necessary, particularly with regard to existing privacy-associated harms and categories of data.

Enabling Emerging Technologies through an Adaptive Approach to Cybersecurity and Privacy Risks

We commend NIST for considering how the Framework will address privacy risk associated with emerging technologies such as the IoT and AI. It is important that the Framework not be so overly prescriptive and precautionary as to prevent innovations that could come from such technologies. When faced with the rapid changes associated with technological advancement, the use of soft law can facilitate a governance approach that is able to evolve with and enable innovation better than traditional policy tools. Soft law includes rules that are not strictly binding, such as best practices and voluntary frameworks (e.g., the Privacy Framework and the Cybersecurity Framework).

When determining what resources and practices are necessary regarding privacy and security for new technologies, the Framework should focus on responding to known risks. This approach will minimize unintended consequences and maximize the potential benefits of innovation while providing appropriate redress for those harmed and levying penalties for actors who break or evade the law. Future recommendations for adapting to risks associated with the IoT and AI should focus on proven incidents of privacy risk and harm rather than hypothetical worst-case scenarios associated with these emerging technologies.

Artificial Intelligence

AI is increasingly used both to extract data and to defend against data extraction. The dual nature of this emerging technology complicates a comprehensive approach to mitigating cybersecurity or privacy risk in the data processing ecosystem. What’s more, certain AI applications such as machine learning rely on large datasets for training. Many of the concerns about such datasets can be addressed by existing tools regarding breach, security, and consent. Education, social pressure, societal norms, voluntary self-regulation, and targeted ex post enforcement through common law or FTC action already work to constrain bad use cases. Beyond agency action, the courts and common law are also often able to address various issues through product liability or other appropriate claims on an ex post basis. These ex post adaptive tools are better able to address concerns about potential misuse and abuse of data actions than are regulatory requirements or prescriptive frameworks.

As our Mercatus colleague Adam Thierer has written,

"In particular, policymakers should prioritize developing an appropriate understanding of the varied sector of artificial intelligence technologies from the outset and developing an appreciation for limitations of our ability to forecast either future AI technological trends or crises that may ultimately fail to materialize."

The framework should focus on AI applications that have been linked to specific harms rather than on applications merely perceived to pose high privacy risks. For example, AI-driven predictive policing and criminal sentencing software raise known civil-liberties concerns, and in such cases more accountability mechanisms may be appropriate. Basing policy on evidence, rather than on fear of worst-case uses of AI, can strike the right balance between mitigating privacy risk and promoting innovation.

The Internet of Things

The IoT is a network of connected devices that send and receive data. It includes devices from smartphones and computers to autonomous vehicles and mesh networks. As the number and variety of connected devices have grown, so have privacy and cybersecurity concerns. Yet such technologies offer several benefits for consumers, including helping disabled and aging populations achieve greater independence. Rather than focus exclusively on the potential problems of data security and privacy, the Framework should also acknowledge the benefits of IoT devices. When addressing harm, guidance should be narrowly tailored to the exact harm and the specific technology involved rather than taking a broad approach that could have unintended consequences for both current and future connected devices and data usage.

The governance challenge facing the IoT data ecosystem is unique because of the ecosystem’s dynamism, complexity, and decentralized and distributed nature. A narrow prescription for IoT privacy practices could discourage flexibility and undermine the ecosystem’s ability to manage risk. IoT devices are increasingly intertwined with AI capabilities. For example, smart routers use a combination of AI and cloud services to recognize, monitor, and identify threats from malware and botnets to household IoT devices. As with AI, it is important that policy suggestions related to the IoT be responsive to known challenges rather than anticipatory, and that they focus on known harms and risks rather than applying to a general-purpose technology.

While it is useful for NIST to think about the impact of AI and the IoT, many of the concerns are tied to the same underlying cybersecurity and privacy risks associated with existing internet technologies. NIST’s guidance should also recognize that these emerging technologies have the potential to prevent certain risks through an improved ability to identify security weaknesses as well as opportunities to decentralize information or improve authentication for access.

Distinguishing Government Requests and Use of Data from Private Industry Data Usage

Currently, the Privacy Framework focuses only on data actions associated with enterprise data collection and usage. The Framework should be careful to distinguish government use of data from its use by private industry or civil society.

Law enforcement requests for access and government-mandated access to user data complicate the task of mitigating privacy risk. Government requests for user data have increased steadily as more activity has moved online. Privacy risks associated with foreign or domestic government access to user data should be included in each of the five Privacy Framework functions, starting with “Identify-P.” Documents referring to requirements under Section 702 of the FISA Amendments Act of 2008 and Mutual Legal Assistance Treaty requirements for cross-border data flows should be included in the proposed resource repository.

Notably, guardrails can be established that limit potential abuse of technology by the government but still allow beneficial uses by both the public and private sectors. For example, placing harms-based restrictions on specific government uses rather than on a technology more generally can help reduce the potential abuses.

With the growth of data collected and transmitted by various connected devices, from smartphones to scooters, government access to such data, whether through the government’s own collection or through private industry, is increasingly likely to be part of the policy conversation around emerging technologies. Such data can be useful to government entities for providing services such as public transportation, but it also has the potential for abuse that could limit individual freedom. For example, a government’s ability to identify and actively track its citizens can produce false positives that lead to wrongful detainments and arrests. It is important for the Framework to acknowledge these risks by pointing to appropriate policy resources and to distinguish government data actions from consumer and private actions.

Conclusion

We agree with NIST’s vision to provide a common language for stakeholders and improve privacy through enterprise risk management. The role of government in addressing cybersecurity and privacy risk is to foster a policy environment such that a wide set of solutions can evolve. By cultivating a resource repository, promoting the adoption of guidelines and best practices, and encouraging enterprises to dynamically respond to harm, the NIST Privacy Framework promises to do just that.

We encourage NIST to emphasize resilience to privacy risk as an aim of the Framework, to limit guidance and restrictions on data related to AI and the IoT to existing, known privacy risks, and to consider potential civil-liberty and privacy risks related to government access to data. Above all, we encourage a framework that is rooted in harm rather than in a more amorphous standard that could deter future innovation and the development of better tools for individuals and enterprises to manage their privacy and security risks.