Regulation under Uncertainty

September 21, 2016

The catastrophic consequences of events such as the Hiroshima and Nagasaki bombings have made the harmful health effects of exposure to high doses of radiation abundantly clear. But what about exposure to low doses of radiation—for example, radiation from X-rays and CT scans? For decades, government regulators have used a risk assessment model known as the linear no-threshold (LNT) model to inform their rulemaking. This model presumes that low-dose exposure to chemicals and radiation is always harmful. In fact, according to the model, there is no threshold to toxicity—exposure to even a single molecule results in proportional and irreversible harm. This implies that setting regulatory standards at ever-lower exposure levels will always appear desirable, especially when cost considerations are ignored.
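
To make the model's logic concrete, the following minimal sketch (in Python; the slope factor is an invented illustration, not a value from the study) shows how LNT turns any exposure standard into an estimated excess risk, and why a tighter standard therefore always appears beneficial when costs are ignored:

    # Minimal LNT sketch; the slope factor is hypothetical.
    SLOPE = 0.05  # assumed excess lifetime cancer risk per unit dose

    def lnt_excess_risk(dose):
        """LNT: risk is proportional to dose, with no safe threshold."""
        return SLOPE * dose

    for standard in [1.0, 0.5, 0.25, 0.125]:
        print(f"standard {standard}: estimated risk {lnt_excess_risk(standard)}")
    # Estimated risk falls in lockstep with the standard, so lowering the
    # standard further never stops looking desirable under this model.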

Relying on updated research and incorporating lessons from economic theory, a new study for the Mercatus Center at George Mason University explains why using the LNT model as the default risk assessment model is inadequate at best and harmful to public health at worst. Regulatory agencies should consider alternative risk assessment models when investigating the risks and benefits of low-level exposure to chemicals and radiation.

BACKGROUND

The National Academy of Sciences endorsed the use of the LNT model for radiation 60 years ago. In the decades that followed, the model came to be adopted as the default risk assessment model by regulatory agencies in the United States and throughout the world. Although LNT was initially applied to radiation, it came to be used for hundreds of chemicals and pollutants as well. Today, the LNT model underlies much of the healthcare and environmental regulation that exists in the United States as agencies drive exposure to lower and lower levels.

METHODOLOGY

Several decades of risk research provide a basis for evaluating the LNT model’s validity. Evidence comes from the areas of DNA repair, preconditioning, and adaptive responses in biology:

  • LNT validation. Identifying the small responses and infrequent events that result from low-dose exposure to stressors requires thousands of subjects. Therefore, the LNT model typically relies on studies in which a population was exposed to high doses of a toxic substance; its low-dose predictions are extrapolations that have never been directly validated.
  • Hormesis. Evidence for another model, known as hormesis, shows that while high doses of a substance such as radiation or a carcinogen may be harmful, low doses of the same substance can actually be beneficial, even decreasing the risk that a subject will develop cancer.
  • DNA repair, preconditioning, and adaptive responses in biology. When the LNT model was adopted for radiation, many scientists believed that a single change to DNA could cause cancer and irreversible damage. We now know that DNA mutation only happens when a large number of molecules are affected. Furthermore, several types of cells can repair mutated DNA. Organisms have even been found to adapt to low doses of environmental “stressors” like pollutants and chemicals. For instance, low doses of X-rays have been shown to initiate an anti-inflammatory response and treat pneumonia, and low-dose radiation has been found to induce a protective effect against kidney damage in diabetic patients.

KEY FINDINGS

  • The LNT model should not be the default model used for characterizing public health risks. Following the recommendations of the National Research Council, regulators should select the model that is best supported by the scientific evidence given the type of risk evaluated. In cases where the evidence is inconclusive, multiple models, including a threshold or a hormetic model, should be harmonized through a model uncertainty framework and incorporated into decision-making (a minimal sketch of this idea follows this list).
  • The LNT model causes regulators to continually overestimate risk (although by varying degrees), which upsets the careful balancing required when risk managers consider countervailing risks. Though the LNT model is supposed to be a “better safe than sorry” approach, policy decisions based on overestimating one risk could lead to worse public health outcomes owing to risk-risk and health-health tradeoffs. Overestimation of risk could also encourage regulators to set tolerance levels below any optimal hormetic level of exposure with respect to public health.
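
One way such a model uncertainty framework could operate is simple model averaging. The sketch below (in Python) is purely illustrative: the per-model risk estimates and the evidentiary weights are invented for exposition, and the study does not prescribe these particular numbers.

    # Hypothetical low-dose excess-risk estimates from three candidate models.
    estimates = {"lnt": 2e-4, "threshold": 0.0, "hormetic": -5e-5}

    # Weights expressing the relative evidentiary support for each model;
    # in practice these would come from a systematic review of the evidence.
    weights = {"lnt": 0.40, "threshold": 0.35, "hormetic": 0.25}

    combined = sum(weights[m] * estimates[m] for m in estimates)
    print(f"Combined risk estimate: {combined:.2e}")  # 6.75e-05 here

    # A decision based on the combined estimate reflects model uncertainty
    # rather than defaulting to the most pessimistic model.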

CONCLUSION

Advances in the science of risk, along with insights from tradeoff analysis in economics, demonstrate that the linear no-threshold model should be reevaluated as the default model for cancer risk assessment. Rather than intentionally overestimating risk—which may lead to unintended consequences and worse public health outcomes—regulators should let the weight of the scientific evidence determine which model informs policy. When science cannot definitively adjudicate between models, formal model uncertainty analysis becomes indispensable to adequate risk-based regulatory decision-making.
 

The Risks of Ignorance in Chemical and Radiation Regulation

Wednesday, September 21, 2016
Author: James Broughel

The Nuclear Regulatory Commission (NRC) sought comments last June on whether it should switch its default “dose-response model” for ionizing radiation from a linear no-threshold model to a hormesis model. This highly technical debate may sound like it has nothing to do with the average American, but the NRC’s decision on the matter could set the stage for a dramatic shift in the way health and environmental standards are set in the United States, with implications for everyone.

Regulators use dose-response models to explain how human health responds to exposure to environmental stressors like chemicals or radiation. These models are typically used to fill gaps where data is limited or non-existent. For example, analysts might have evidence about health effects in rodents that were exposed to very high doses of a chemical, but if they want to know what happens to humans at much lower exposure levels, there might not be much available information, for both practical and ethical reasons. 

The linear no threshold (LNT) model has a tendency to overestimate risk because it assumes there’s no safe dose — or “threshold” — for an environmental stressor. (We discuss the LNT model in our new Mercatus Center research, “Regulating Under Uncertainty: Use of the Linear No Threshold Model in Chemical and Radiation Exposure.”) The response (cancer, in most cases) is assumed to be proportional to the dose at any level, even when exposure is just a single molecule. LNT is popular with regulators in part because of its conservative nature. When setting standards, the logic goes, better to be safe than sorry. That is, it’s better to assume that there is no threshold and be wrong than to assume a safe dose exists when one does not.
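
To make the gap-filling role of the model choice concrete, here is a minimal sketch (in Python; the functional forms and parameter values are hypothetical illustrations, not the curves regulators actually fit) of how the three dose-response shapes discussed here diverge at low doses:

    import math

    def lnt(dose, slope=0.05):
        # Linear no threshold: proportional harm at any dose, however small.
        return slope * dose

    def threshold(dose, t=1.0, slope=0.05):
        # No effect below the threshold t, linear harm above it.
        return max(0.0, slope * (dose - t))

    def hormetic(dose, t=1.0, benefit=0.01, slope=0.05):
        # J-shaped: a modest benefit (negative excess risk) below the
        # threshold, harm above it.
        if dose <= t:
            return -benefit * math.sin(math.pi * dose / t)
        return slope * (dose - t)

    # At a low dose the three models give qualitatively different answers:
    d = 0.5
    print(lnt(d), threshold(d), hormetic(d))  # 0.025, 0.0, -0.01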

But does the use of the LNT model really produce the “conservative” results its proponents claim? There are very good reasons to doubt it.

The first is that there are no absolute choices; there are only tradeoffs. Regulations that address risk induce behavioral responses among the regulated. These responses carry risks of their own. For example, if a chemical is banned by a regulator, companies usually substitute another chemical in place of the banned one. Both the banned chemical and the substitute carry risks, but if risks are exaggerated by an unknown amount, then we remain ignorant of the safer option. And because LNT detects — by design — low-dose health risks in any substance where there is evidence of toxicity at high doses, businesses are led to use newer, not-yet-assessed chemicals.

The economic costs of complying with regulations also produce “risk tradeoffs.” Since compliance costs are ultimately passed on to individuals, lost income from regulations means less money to spend addressing risks privately. When their incomes fall, people forgo buying things such as home security systems, gym memberships, healthier food, new smoke detectors, or safer vehicles. And when regulators inflate publicly addressed risks but leave private risks unanalyzed, it becomes impossible to weigh the pros and cons of public versus private risk mitigation.
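
One way to see the force of this point is a back-of-the-envelope health-health calculation. Every figure in the sketch below (in Python) is hypothetical; the post cites no specific numbers, and empirical estimates of the income loss associated with one induced statistical death vary widely.

    # Hypothetical health-health tradeoff for a single rule.
    compliance_cost = 2_000_000_000        # assumed annual compliance cost ($)
    cost_per_induced_death = 100_000_000   # assumed income loss per statistical death ($)

    lives_saved_by_rule = 15               # agency's direct estimate (invented)
    lives_lost_to_lost_income = compliance_cost / cost_per_induced_death  # 20.0

    net_lives = lives_saved_by_rule - lives_lost_to_lost_income
    print(net_lives)  # -5.0: the rule makes the public less safe on net
    # If induced losses exceed direct savings, the "safe" rule backfires.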

But the most compelling reason to doubt that LNT is a “conservative” standard is simply that it’s likely to be wrong in so many cases. The assumption that “any exposure” causes harm is contradicted not only by common sense, but by a growing body of research. In the decades since LNT was first adopted by regulatory agencies, more and more evidence supporting a threshold — or even a “hormetic” — model of dose response has been found.

Hormesis occurs when low doses of exposure actually cause beneficial health outcomes, and, coincidentally, the scientific evidence for hormesis appears strongest in the area where the LNT was first adopted before its use spread to other areas: radiation. For example, low doses of radiation exposure have been shown to have protective effects against kidney damage in diabetic patients, and low doses of X-rays have been associated with an anti-inflammatory response used to treat pneumonia. There is now evidence of hormesis in hundreds of experiments, but the LNT rules out — by assumption — the possibility of these kinds of beneficial health responses.

Unfortunately, the way regulators typically respond to these problems is simply by ignoring them. Hence a better moniker for the use of the LNT model might be “Ignorance Is Bliss.” So long as regulators ignore the inconvenient truths posed by the possibilities of hormesis and risk tradeoffs, they can continue going to work every day maintaining the belief they are protecting public health. But the uncertainty in their risk assessments is so great that, in fact, regulators often have no idea whether they’re improving public health or doing just the opposite.

A reconsideration of the LNT is long overdue. At the very least, risk analysts should characterize uncertainty using multiple dose-response models — including a threshold model or a hormetic model — when no model has the overwhelming support of the scientific evidence. And analyzing risk tradeoffs should be a routine part of rulemaking.

The NRC should be commended for acknowledging the doubts about the LNT. When the time comes for the agency’s decision, let’s hope they choose knowledge over ignorance.

The Integration of LNT and Hormesis for Cancer Risk Assessment Optimizes Public Health Protection

March, 2016

This paper proposes a new cancer risk assessment strategy and methodology that optimizes population-based responses by yielding the lowest disease/tumor incidence across the entire dose continuum. The authors argue that the optimization can be achieved by integrating two seemingly conflicting models, i.e., the linear no-threshold (LNT) and hormetic dose–response models. The integration would yield the optimized response at a risk of 10⁻⁴ with the LNT model. The integrative functionality of the LNT and hormetic dose–response models provides an improved estimation of tumor incidence through model uncertainty analysis and major reductions in cancer incidence via hormetic model estimates. This novel approach to cancer risk assessment offers significant improvements over current risk assessment approaches by revealing a regulatory sweet spot that maximizes public health benefits while incorporating practical approaches for model validation.
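
As a hypothetical illustration of the arithmetic involved (the slope factor below is invented; the paper derives its estimates from animal cancer studies), the dose associated with a 10⁻⁴ risk under a linear model can be backed out directly:

    # Under LNT, excess risk = slope * dose, so the dose carrying a
    # 1-in-10,000 risk follows immediately. The slope is hypothetical.
    TARGET_RISK = 1e-4
    SLOPE = 0.05                     # assumed excess risk per unit dose
    sweet_spot_dose = TARGET_RISK / SLOPE
    print(sweet_spot_dose)           # 0.002 dose units

    # The paper's claim: regulating to this dose caps estimated cancer risk
    # under LNT while landing near the dose at which the hormetic model
    # predicts the largest reduction in cancer incidence.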

Find the full article in Health Physics.

Asserting Presidential Preferences in a Regulatory Review Bureaucracy

February, 2016

US presidents face many challenges in executing their duties as CEOs of a mammoth, sprawling bureaucracy known as the nation’s executive branch. Included among the many offices and bureaus in 2014 were 78 regulatory agencies with more than 276,000 employees, who in recent years have turned out some 80,000 Federal Register pages of rules and rule modifications annually. A successful president, e.g., one who can be reelected or help to pave the way for the party in the next election, must find ways to steer bureau activities in his preferred direction while delivering on regulatory promises made in the process of being elected. White House review of proposed regulations provides an opportunity for presidents to affect regulatory outcomes in ways that reward politically important interest groups. Our review of all empirical work on White House review, as well as our own institutional and statistical findings, lends strong support to the notion that the review process provides opportunities to make presidential preferences operational.

Warning Fatigue

Thursday, January 21, 2016

With the World Health Organization’s decision not long ago to classify bacon “alongside known carcinogens including asbestos, tobacco, arsenic and alcohol,” regulators have drawn a line in the sandwich. With warning labels proposed for BLTs, now is an appropriate time to take stock of our system for determining cancer risk, in terms of both when and how the public should be informed and protected.

When everything has a warning label, nothing has a warning label. It is too time-consuming to verify each warning individually, yet indiscriminate faith would rule out everything from coffee and apple pie to the “Happiest Place on Earth.” At this point the public could be forgiven for accusing regulators of crying wolf. But this blame would be misplaced. The regulators’ own science does in fact tell them that these risks are both real and cause for alarm.

That’s because baked into the cake of regulatory cancer risk assessment is the assumption that any dose of a carcinogen does proportional, cumulative harm. This is the Linear No-Threshold assumption, famously referred to as the “any exposure” theory, and it is one of the main tools in the risk-assessor’s toolbox. Its use means that regulators must assume that exposure to even one molecule of a carcinogen puts an individual at an irreversible, accumulating risk. The upshot is that no matter how trivial the exposure, cancer risk assessments justify — almost mandate — a public warning.

And there’s the rub. We are exposed to trivial amounts of all sorts of chemicals every day, most of which are naturally occurring. But because scientists’ ability to measure exposure in this low-dose region is extremely limited, regulators must make an assumption about the body’s response. If they assume a “threshold” response, they’ll tell us that these exposures pose no danger. If they assume a “hormetic” (beneficial) response, they’ll actually encourage a low dose of exposure. But, true to its name, the “any exposure” assumption fills this gap in our knowledge with the notion that any dose is unsafe.

When you have a hammer, everything looks like a nail — and Linear No-Threshold is quite a hammer. It drives regulators to expand their bureaus as they chase after an endless series of risks. And what’s “safe” for a regulator is to declare what’s unsafe for consumers.

As former FDA commissioner Alexander Schmidt put it, “Rarely, if ever, has Congress held a hearing to look into the failure of FDA to approve a new entity; but it has held hundreds of hearings alleging that the FDA has done something wrong by approving a drug.” This lesson applies to all regulatory agencies and weighs heavily on the minds of bureau chiefs; thus the abundance of warnings and the resulting public confusion.

For a health and safety bureaucracy there is no profit — only risk — in letting products onto the market. Producers subject to market and judicial forces are incentivized to satisfy customers and to be proactive about consumer safety. They jealously guard their brands’ reputations, and they fear liability.

Yet policymakers argue that producers will chronically under-advertise the risks of their products. They often cite this “market failure” to justify signs, labels and loud public proclamations financed out of the public purse. While on the surface this may sound reasonable, without the dual market constraints of profit and liability a regulatory bureaucracy risks advertising too much risk.

The problem is institutional in nature. Instead of correcting “market failure,” the bureau may end up producing “government failure,” trading the fear of underproduction for the reality of overproduction. And overproduction always comes with a cost. Overstating danger increases danger. The social cost is not just the agency’s misdirected resources. It also lies in the crowding out, diluting or even discrediting of the agency’s own message. Bureaucratic solutions should come with a warning label.

Linear No-Threshold Model and Standards for Protection Against Radiation

October 22, 2015

In response to the three petitions by Carol S. Marcus, Mark L. Miller, and Mohan Doss, dated February 9, February 13, and February 24, 2015, respectively, the Nuclear Regulatory Commission (NRC or the Commission) has announced that it is considering reassessing its choice of dose-response model, the Linear No-Threshold (LNT) model, for exposure to ionizing radiation. More precisely, the petitioners have proposed that the Commission amend 10 CFR Part 20, Standards for Protection against Radiation, to reflect the latest scientific understanding and evidence in support of low-dose radiation hormesis as a potentially more plausible default.

The petitioners argue that (1) the LNT assumption has never been validated and still lacks scientific support; (2) there is vast scientific evidence, grounded in biology, genetics, clinical experiments, and ecological and epidemiological studies, in support of the existence of a low-dose radiation threshold and, even more so, of low-dose radiation hormesis; and (3) the LNT assumption is holding back public health by limiting the potential therapeutic application of low-dose ionizing radiation in the treatment of diseases, especially cancer.

In light of these claims, two of the petitioners have made the following recommendations: “(1) Worker doses should remain at present levels, with allowance of up to 100 mSv (10 rem) effective dose per year if the doses are chronic. (2) ALARA [as low as reasonably achievable] should be removed entirely from the regulations. . . . (3) Public doses [exposure] should be raised to worker doses.” One petitioner also requests that the regulation be changed to “(4) end differential doses for pregnant women, embryos and fetuses, and children under 18 years of age.” 

This comment extends the petitioners’ argument in favor of reexamining the default hypothesis (LNT) and giving consideration to low-dose hormesis, for the following reasons:

1) Failure to review the LNT hypothesis may jeopardize the NRC’s mission to protect public health and safety. Research on hormesis suggests that low doses of ionizing radiation may be protective of public health. If true, regulating exposure to ionizing radiation according to the ALARA principle may harm public health by driving exposure beneath the optimal hormetic dose.

2) The National Research Council’s guidelines for choosing adequate defaults indicate that the choice of low-dose default model is due for a reevaluation. The NRC should conduct a systematic review of evidence, as recommended by the Council’s guidance, to determine the comparative weight of hormesis and LNT.

a. If the systematic review reveals hormesis to be “clearly superior” to LNT, then the NRC should abandon LNT and adopt hormesis.

b. If the systematic review reveals hormesis to be “comparably plausible” to LNT, then, in light of both models, the NRC should conduct a quantitative model uncertainty analysis, present alternative risk assessments, and update its standards of protection accordingly. 

c. If the Commission decides to maintain adherence to LNT after, or without, conducting the systematic review of evidence, then the Commission should demonstrate why the body of evidence in favor of hormesis is inadequate for consideration under the NRC’s Information Quality Act (IQA) guidelines. Further, the Commission should demonstrate how the studies that support its low-dose LNT assumption conform to those guidelines.


Be Risk-Savvy: Consumers Need to Ask Questions about the Real Risks of Products

Monday, September 21, 2015

You are not safe. You are eating, drinking and breathing chemicals. Step outside and you're bombarded with radiation from both the sun and the Earth. Stay in your home and your spouse is irradiating you while you sleep. The media sensationalizes each new study indicating that product X or activity Y is unsafe, but if anything, they are understating the danger. The ink in your newspaper is unsafe, the wireless signal to your laptop is unsafe, and the stress of hearing about all of this is unsafe.

So, given these facts of life, is this a useful way to frame the problem?

A new wave of concern focuses on the safety of e-cigarettes. In one article covering the latest findings, Michael Green, executive director of the Center for Environmental Health, is quoted telling us that "[a]nyone who thinks that vaping [inhaling nicotine-infused liquid through a vaporizer] is harmless needs to know that our testing unequivocally shows that it's not safe to vape." And he is right.


Regulators Need to Do Risk Assessment Right

Monday, August 3, 2015

Risk science is the study of the effect of exposure to certain substances and activities on public health and safety. The effects produced may be beneficial or harmful, depending on the substance or activity under study and the dose to which one is exposed. As is commonly known, too little water is as harmful as too much water, while a moderate amount is very beneficial. In addition, a deadly substance that only exists on Saturn is of no concern to humans because our exposure to it is zero.

Risk science is, of course, more complex than these two simple examples imply. It must take into account many factors, consider their interactions, and try to uncover the true effect hidden under many layers of complexity. Just like any science, risk science is a systematized study. As long as the best process is carefully and transparently followed, we can trust that objective knowledge will eventually prevail. The danger, as in any science, arises when the study is done ad hoc and behind closed doors. The scientific method cannot be jeopardized for the sake of superficially gratifying regulatory policies made contrary to evidence or under unacceptable levels of uncertainty.


On Objective Risk

July 30, 2015

Over the last several decades, criticism of regulatory agency estimates of risk has come from a variety of institutions, including the National Research Council and the president’s Office of Management and Budget. As with other matters related to regulation, Congress has delegated the task of estimating risks to federal agencies, with the expectation that they will apply the necessary level of expertise. Debate over agency risk assessments has often focused almost exclusively on the accuracy of the assessment, with little attention to whether the assessment followed objective scientific processes.

In a new paper published by the Mercatus Center at George Mason University, economist Dima Yazji Shamoun and toxicologist Edward J. Calabrese show that shifting the debate to process objectivity would allow better evaluation of risk assessments. In doing so, the paper draws from the government’s own guidance for best practices for performing risk assessments. This paper provides a crucial first step toward enabling those who monitor regulatory agencies to hold them to those practices.

To read the paper in its entirety and learn more about its authors, see “On Objective Risk.”

BACKGROUND

Health and safety questions concern every American household: How much seafood should I consume given the possibility of ingesting methyl mercury? Is air pollution causing my kids’ asthma attacks or my mother’s cardiovascular disease? What chemicals are responsible for increasing my cancer risk? Government regulatory agencies are charged with determining these risks and acting on them where appropriate, but the challenge is always in the details:

  • Environmental Protection Agency rules alone are responsible for 63%–82% of monetized benefits reported by all regulatory agencies. Indeed, the vast majority of the federal government’s regulation of the economy depends on a process that determines how risky an activity or a chemical is and how much of that risk will be reduced by some regulatory action.
  • Too much focus on outcomes can introduce bias, which can increase costs and misallocate resources. Americans depend on federal agencies to strike the right balance and regulate risks to health and safety in a way that neither under- nor overregulates risk. When an agency’s performance is evaluated based on the accuracy of its risk evaluations, bias toward certain outcomes may be introduced, which in turn may cause agencies to overregulate risks relative to the costs Americans pay—or, worse, may cause other risks to increase, making Americans less safe.
  • Process objectivity is a scientific means to ensure consistency across risk assessments. Rather than debating the true risk involved with any particular case, those concerned about regulatory risk assessments should focus on whether the government followed an objective, standard process based on the scientific method. This paper provides the framework for evaluating all risk assessments performed by government agencies.

KEY PRINCIPLES

Risk estimates and benefit estimates of health and safety regulations derive their objectivity from the process that brings them about. Consistent adherence to a process meant to produce objectivity yields objective results. Risk estimates derived from an ad hoc application of the objective process are biased and may be responsible for vast resource misallocation. A routine application of the objective process outlined in the paper will reduce error and achieve consistency across assessments.

The paper proposes a novel methodology for testing the objectivity of the risk/benefit estimates of federal health and safety regulations:

  • In order for the process to be objective, two factors are necessary: (1) adherence to a body of principles, applied consistently and in their entirety; and (2) an independent reassessment (by a third party outside the regulatory agencies), according to the body of principles, of the risk and benefit estimates of major health and safety regulations.
  • There are four main categories of objective risk assessment: analysis, robustness, openness and transparency, and review. By following an objective process along the lines introduced in the paper—based on well-established principles already supposed to be in use by the federal government—risk assessments can be independently verified by a third party, such as a university-based research center.

CONCLUSION

Taxpayers spend billions of dollars on regulatory agencies that promise to protect their health and safety by reducing their risk from exposure to myriad alleged hazards. Such regulatory decisions are based on highly technical and scientific documents, which leave both Congress and the majority of the public in the dark. Using the framework provided in the paper, all people interested in health and safety regulation can systematically review risk assessments and begin the process of holding agencies accountable.

Optimizing Human Health Through Linear Dose–Response Models

July 1, 2015

This paper proposes that generic cancer risk assessments be based on the integration of the Linear Non-Threshold (LNT) and hormetic dose–responses, since optimal hormetic beneficial responses are estimated to occur at the dose associated with a 10⁻⁴ risk level based on the use of an LNT model as applied to animal cancer studies. The adoption of the 10⁻⁴ risk estimate provides a theoretical and practical integration of two competing risk assessment models whose predictions cannot be validated in human population studies or with standard chronic animal bioassay data. This model integration reveals both substantial protection of the population from cancer effects (i.e., the functional utility of the LNT model) and the possibility of significant reductions in cancer incidence should the hormetic dose–response model predictions be correct. The dose yielding the 10⁻⁴ cancer risk therefore yields the optimized, toxicologically based “regulatory sweet spot.”

