How “Expert” Are the Expert Agencies?

Since the early 1980s, presidents have required federal regulatory agencies to conduct regulatory impact analysis for significant regulations. Congress has delegated tremendous discretion and rulemaking authority to regulatory agencies, with the expectation that the expertise agencies have in their given domains will make them better judges of complex policy decisions. Yet many regulatory agencies are not asking very basic questions before writing and implementing rules, including what problem a policy is supposed to solve in the marketplace and what outcome policymakers hope to achieve. What’s more, when regulators do ask these questions, the answers are often cursory, unaccompanied by a causal theory, and not backed by empirical evidence. If the public is to have faith in the decisions of policymakers, regulators need to show that their decisions rest on expert analysis and judgment.

The Mercatus Center’s Regulatory Report Card project has evaluated the quality and use of agency economic analysis since 2008. Using a scale of 0 to 5 points, with 0 representing no relevant content and 5 representing fairly comprehensive analysis, the Report Cards show the extent to which regulatory analysis meets basic requirements laid out in executive orders and Office of Management and Budget guidance. (The Report Card’s methodology is described in greater detail on the project’s website.)

For example, the chart below shows question 5 from the Report Card, which assesses how well the agency has identified the benefits or other desired outcomes a proposed rule is intended to achieve and how well it has demonstrated that the regulation is likely to achieve them. The chart shows average scores for each agency that proposed at least four regulations from 2008 through 2012. It also shows the average score for the 108 economically significant prescriptive regulations[1] proposed during that period.
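
The aggregation behind a chart like this is straightforward to reproduce. As a minimal sketch, the Python snippet below, using made-up agencies and scores rather than the actual Report Card data, applies the chart’s inclusion rule (agencies with at least four proposed regulations) and computes per-agency and overall averages.

```python
import pandas as pd

# Hypothetical scores: one row per proposed regulation, scored 0-5 on
# question 5 (how well the agency identified the rule's intended outcomes).
scores = pd.DataFrame({
    "agency": ["OSHA", "OSHA", "OSHA", "OSHA",
               "EPA", "EPA", "EPA", "EPA", "DOT"],
    "q5": [4, 3, 5, 4, 4, 5, 3, 4, 2],
})

# Inclusion rule from the chart: keep only agencies with at least four
# proposed regulations in the 2008-2012 window.
n_rules = scores.groupby("agency")["q5"].transform("size")
eligible = scores[n_rules >= 4]

print(eligible.groupby("agency")["q5"].mean())  # per-agency averages
print("All regulations:", scores["q5"].mean())  # overall average
```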

Many agencies score fairly well when asked simply to name the outcome they hope the regulation will achieve. The average score for this criterion on all prescriptive regulations between 2008 and 2012 was approximately 4.2. However, when asked to describe a causal theory explaining how the regulation will accomplish the outcomes, the average score fell to 3.2. The average score for backing up the theory with empirical evidence was even lower: just 2.6 points.[2]
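
The endnote on statistical significance says these averages differ reliably. Because every regulation receives a score on all three sub-criteria, one standard way to check such a claim is a paired test. The sketch below, using SciPy, runs a paired t-test on hypothetical scores; the Report Card’s actual data and testing method may differ.

```python
from scipy import stats

# Hypothetical 0-5 scores for the same eight regulations on each
# sub-criterion: naming an outcome, giving a causal theory, and
# supplying empirical evidence.
outcome  = [5, 4, 4, 5, 3, 4, 5, 4]
theory   = [4, 3, 3, 5, 2, 3, 4, 3]
evidence = [3, 2, 3, 3, 2, 3, 3, 2]

# Paired tests, since each regulation is scored on all three criteria.
print(stats.ttest_rel(outcome, theory))    # outcome vs. theory
print(stats.ttest_rel(theory, evidence))   # theory vs. evidence
```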

For example, a 2009 proposed rule from the Occupational Safety and Health Administration (OSHA) set communication standards for classifying and labeling chemicals. When identifying the goal of the regulation, the analysis listed the illnesses and injuries OSHA expected the rule to reduce.

But when it came to supporting these claims with evidence, OSHA simply assumed that worker behavior would become more prudent, without offering any testable way to determine whether that would actually happen in practice. OSHA even estimated that the new regulation would reduce workplace illnesses, injuries, and fatalities by one percent relative to estimates of the effects of existing rules, yet it provided no empirical support for this assumption.

Based on question 6 from the Report Card, the chart below shows that agencies fare even worse at identifying, demonstrating the existence of, and tracing the cause of the problem a regulation is trying to solve.

Agencies score lower on these problem-diagnosis questions despite the fact that the first principle listed in the executive order governing regulatory analysis is that an agency should identify the problem it seeks to solve. Question 6a asks whether the analysis simply names and describes a systemic problem; the average score was 2.7 points. The average fell to 2.2 points on question 6b, which asks whether the analysis presents a coherent theory explaining why a significant, systemic problem exists. The average score for empirical evidence was lower still, at 1.9 points.[3]

Over the last century, Congress has placed enormous trust in federal regulatory agencies and delegated them broad authority on the presumption that these agencies employ experts who are more knowledgeable than Congress or the public about complex issues involving science and economics. It is discouraging to find that these experts often fail to adequately answer basic questions before making decisions.

If Congress wants agencies to make better decisions, it could explicitly mandate by statute that agencies identify both the problem a regulation is designed to address and the outcome it is meant to achieve. Ideally, agencies should also be required to seek public comment on their analysis of the problem before they decide what solution to propose. Only when agencies act like the experts we expect them to be can the public trust them to create regulations that advance the public good.

[1] “Economically significant” refers to regulations having an annual impact of $100 million or more on the US economy. Prescriptive regulations are rules that impose mandates of some kind on the American public, as opposed to “budget” regulations that implement government programs.

[2] These numbers are statistically different from one another, meaning there is credible evidence that agencies do significantly worse at providing a theory than at naming an outcome, and significantly worse at providing empirical evidence than at providing a theory. We see the same general pattern when we look at individual agencies, though in many cases the differences are not statistically significant because of the small sample sizes.

[3] Scores on questions 6b, 6c, and 6d differ significantly from scores on question 6a, related to problem identification. Individual agencies often exhibit similar patterns.