The Quality and Use of Regulatory Analysis in 2008

This paper assesses the quality and use of regulatory analysis for economically significant regulations produced by federal agencies in 2008. A shorter and updated version of this paper was published in the journal Risk Analysis.

Abstract

This paper assesses the quality and use of regulatory analysis for economically significant regulations produced by federal agencies in 2008. A nine-member research team used a six-point (0-5) scale to evaluate regulatory analyses according to criteria drawn from Executive Order 12866 on Regulatory Planning and Review, Office of Management and Budget Circular A-4 on Regulatory Analysis, and scholarly research. Principal findings include: (1) The average quality of regulatory analysis, though not high, is somewhat better than previous regulatory scorecards have shown; (2) Quality varies widely; (3) The biggest strengths in the analyses are accessibility and clarity; the biggest weakness is retrospective analysis; (4) Budget or "transfer" regulations receive much lower-quality analysis than other regulations; (5) A minority of the regulations contain evidence that the agency used the analysis in significant decisions; (6) Quality of analysis is positively correlated with the apparent use of the analysis in regulatory decisions; (7) The analyses contain many examples of "best practices," and (8) Greater diffusion of best practices could significantly improve the overall quality of regulatory analysis.

A revised version of this article was published in Risk Analysis, a peer-reviewed journal of the Society for Risk Analysis. Individuals with subscription access to the journal can read an electronic version of the article at the Risk Analysis website.

Introduction

Since 1974, all presidents have issued executive orders requiring regulatory agencies to analyze the anticipated results and economic effects of proposed regulations. The requirements have become more comprehensive over time. President Carter's Executive Order 12044 on Regulatory Analysis required each agency to state succinctly the problem it is trying to solve, describe alternatives the agency considered, provide an economic analysis of the alternatives, and explain why one alternative was chosen over the others. President Reagan's Executive Order 12291 directed that an agency should not adopt a regulation unless total benefits to society exceeded total costs, and Reagan centralized regulatory review in the Office of Information and Regulatory Affairs (OIRA) within the Office of Management and Budget (OMB). President Clinton's Executive Order 12866 softened the benefit-cost requirement, stating that agencies should regulate only after determining that the benefits justify the costs. The Clinton executive order has largely guided regulatory analysis to this day. (Brito and Ellig 2009, 27-30) President Obama's OMB, in its 2009 Report to Congress on the Costs and Benefits of Federal Regulation, reiterated that "regulatory analysis should be seen and used as a central part of open government" (OMB 2009, 35).

To properly analyze regulations, agencies must first articulate the systemic problems they are trying to solve. If agencies cannot articulate the problems they seek to solve, it is difficult to imagine that they will devise effective solutions. Identifying the problem is only the first step. Executive Order 12866 requires that agencies look at many possible ways to solve the problem and choose wisely, after making a reasoned decision that the benefits of the regulation justify the costs. In short, the executive orders on regulatory analysis have tried to make government develop solutions that make society better off overall, rather than just cutting political deals that only help favored groups. We think this is a laudable goal. We seek to advance it by evaluating the quality of regulatory analysis produced by federal agencies and the extent to which they use that analysis to make decisions.

Several strands of scholarly literature assess the quality of regulatory analysis produced by federal agencies.1 OIRA actions in both the Bush and Obama administrations have generated renewed interest in such evaluations. In its 2008 report on the costs and benefits of federal regulations, OIRA discussed the concept of a "scorecard" that would assess how well agency regulatory analysis complies with OIRA guidance. The report suggests questions that could be included in a scorecard and discusses peer reviewers' responses, concludes that the scorecard idea has merit, and encourages further research on the idea (OIRA 2008, 19-24). In February 2009, OIRA sought public comment on possible revisions to Executive Order 12866, which governs regulatory review (OIRA 2009). There will surely be intense debate about whether the Obama administration's forthcoming changes improve or diminish the quality and use of regulatory analysis by federal agencies.

The last seven presidents have each maintained but also sought to fine-tune regulatory analysis requirements, presumably because the existing requirements have never quite produced the intended results. That may be because agencies did not fully follow the requirements or because the requirements would not have worked even if they had been followed. Despite the numerous attempts by various administrations to refine regulatory analysis, there have been very few systematic analyses to determine whether these refinements do what they were intended to do. A detailed qualitative protocol for evaluating regulatory analysis before and after any revision of the executive order could help assess the revision's effects on the quality and use of regulatory analysis.

This paper applies a 12-point qualitative framework to evaluate regulatory analyses of "economically significant" rules that were reviewed by OIRA in 2008 and proposed in the Federal Register.2 The evaluation criteria are drawn from Executive Order 12866, OMB Circular A-4, and pre-existing scholarship on regulatory scorecards.3

Our approach differs from previous evaluations of regulatory analysis in several ways:

1. It is the first project that evaluates the regulatory analyses accompanying all economically significant regulations proposed in a given year, including budget or "transfer" regulations that define how the federal government will spend money on programs. The most extensive previous evaluations focus on health, safety, and environmental regulations (Hahn and Dudley 2007, Hahn et al. 2000, Hahn and Litan 2005, Hahn, Lutter, and Viscusi 2000). Surveying the evidence, Hahn and Tetlock (2008, 82) conclude, "[T]here is no strong support for the view that economic analysis has had a significant general impact. Furthermore, the quality of regulatory analysis for a significant fraction of regulations does not meet widely accepted guidelines." We assess whether these conclusions still apply when the sample is widened to include economic, civil rights, and transfer regulations.

2. We assess analysis associated with proposed regulations, rather than final regulations. We seek to gauge the quality of analysis at the earliest possible point, where it arguably has the best chance of affecting decisions while the proposed regulation is being written.4

3. Our approach includes an assessment of whether the agency actually uses regulatory analysis to guide decisions. We search for evidence that the analysis has had an effect on decisions by examining the Regulatory Impact Analysis (RIA) document and the preamble to the proposed rule. We also evaluate whether the agency makes a commitment to conducting retrospective analysis to assess the actual outcomes of the rule in the future. Searching these sources may understate the influence of economic analysis, since economists may influence behind-the-scenes decisions that never see the light of day (Williams 2008, 6-7). Nevertheless, it should provide some useful evidence.

4. We opt for a qualitative evaluation of how well the analysis was performed, rather than an objective "yes/no" checklist of analytical issues and approaches covered. We believe this approach may offer a more accurate evaluation of the quality of the analysis, and the evaluation protocol helps keep subjectivity within tolerable bounds. The qualitative evaluation also makes it easier to pinpoint specific best practices.

Our evaluation for 2008 yields numerous insights into the quality and use of regulatory analysis:

Overall quality is generally low. The average regulatory analysis received 27 out of a possible 60 points on our scoring system, or 45 percent. This is somewhat better than previous regulatory scorecards have suggested.

Quality varies widely. The best analysis in 2008 was for the Department of Transportation's Corporate Average Fuel Economy regulation, which earned 43 points (72 percent). Next came the Environmental Protection Agency's National Ambient Air Quality Standards for Lead (42 points) and Housing and Urban Development's proposed revisions to the Real Estate Settlement Procedures Act (41 points). The three worst analyses came from the Social Security Administration, Department of Veterans' Affairs, and Department of Defense, earning 7, 10, and 12 points respectively.

Strongest criteria are Accessibility and Clarity. Of our 12 evaluation criteria, these two earned the highest average scores.

Retrospective analysis is biggest weakness. Two of our evaluation criteria ask whether the RIA or the Federal Register notice demonstrates that the agency has measures, goals, and data that could be used for retrospective analysis of the regulation's actual effects. Average scores on these criteria are the lowest of all, suggesting that few agencies make provisions for retrospective analysis.

Transfer regulations have worse analysis. Budget or "transfer" regulations, which determine how the federal government will spend money, receive much lower scores. On average, transfer regulations received only 17 out of 60 possible points, compared to an average of 32 points for non-transfer regulations.

A minority of regulations use analysis extensively. About 20 percent of the proposed regulations indicate that the regulatory analysis influenced some significant decision. More than 40 percent of the regulations lack evidence that the agency used the analysis at all.

Better analysis gets used. Scores on our criteria measuring the quality of the analysis are positively correlated with scores on our criteria measuring use of the analysis.

Spreading best practices could generate big improvement. Many regulatory analyses have examples of "best practices" that other agencies could emulate. But a given best practice is rarely found in a large number of analyses. For 11 of our 12 scoring criteria, only a handful of analyses earned 5 points, the highest possible score. The average scores on individual criteria are usually below 3 and often below 2. This suggests that substantial improvement could occur if best practices were more widespread.

1. Evaluation Protocol

1.1 What Was Evaluated?

To evaluate the quality and use of regulatory analysis, the research team read the preamble to each proposed rule and the accompanying RIA. In some cases, agencies produced additional analysis in technical support documents, which we also considered. For some rules, agencies are also required to prepare a Regulatory Flexibility Analysis that assesses the effects on small entities and examines alternatives that might reduce the burden on them. We included the Regulatory Flexibility Analysis to the extent that it had content relevant to our evaluation criteria.

This approach is broader than just reading the document or section of the Federal Register notice explicitly labeled "Regulatory Impact Analysis." The broader approach is necessary for two reasons.

First, different agencies organize the content differently in different rules. Sometimes the RIA is a separate document that is only referenced or summarized in the Federal Register preamble. Sometimes it appears as a distinct section of the preamble, and sometimes parts of the analysis are scattered through other sections of the preamble that discuss justifications for the regulation. Analysis may also appear in the preamble without being referenced in the RIA at all. Reading all of this material made the evaluation task much more difficult, but it allows us to give the agency credit for doing good analysis of outcomes, costs, systemic problems, and alternatives regardless of where the analysis appears.

Second, we assess the agency's commitment to using regulatory analysis to make regulatory decisions and assess their effects in the future. If this commitment is documented anywhere, it is usually found in the preamble to the rule, where the agency discusses the rationale and justification for issuing the regulation. This is especially true for measures involving retrospective analysis of the regulation's actual effects; these almost always appear in the preamble to the rule, rather than the RIA (when they are discussed at all).

1.2 Scoring System

We evaluate regulatory analysis based on 12 criteria, grouped into three categories:

1. Openness: How easily can a reasonably intelligent, interested citizen find the analysis, understand it, and verify the underlying assumptions and data?

2. Analysis: How well does the analysis define and measure the outcomes or benefits the regulation seeks to accomplish, define the systemic problem the regulation seeks to solve, identify and assess alternatives, and evaluate costs and benefits?

3. Use: How much did the analysis affect decisions in the proposed rule, and what provisions did the agency make for tracking the rule's effectiveness in the future?

Figure 1 lists the 12 criteria. The appendix provides additional detail on the kinds of questions considered under each criterion.

Figure 1

For each criterion, the evaluators assigned a score ranging from 0 (no useful content) to 5 (comprehensive analysis with potential best practices). Thus, each analysis has the opportunity to earn between 0 and 60 points. In general, the research team used the guidelines in table 1 for scoring. Because the Analysis criteria involve so many discrete aspects of regulatory analysis, we developed a series of sub-questions for each of the four Analysis criteria and awarded a 0-5 score for each sub-question. These scores were then averaged to calculate the score for the individual criterion.
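To make the scoring arithmetic concrete, the short sketch below computes a total score under this system. It is purely illustrative: the sub-question counts and all scores shown are hypothetical stand-ins, not figures from the study.

    # Illustrative sketch of the 0-5 scoring arithmetic (hypothetical scores).
    def criterion_score(sub_scores):
        """Average the 0-5 scores on a criterion's sub-questions."""
        return sum(sub_scores) / len(sub_scores)

    # Openness (criteria 1-4) and Use (criteria 9-12) each get one 0-5 score;
    # each Analysis criterion (5-8) averages a series of sub-question scores.
    openness = [4, 3, 3, 5]
    analysis = [
        criterion_score([3, 2, 4, 3, 2]),              # 5: Outcomes
        criterion_score([2, 1, 2, 2]),                 # 6: Systemic Problem
        criterion_score([3, 3, 2]),                    # 7: Alternatives
        criterion_score([2, 3, 1, 2, 2, 3, 2, 1, 2]),  # 8: Benefit-Cost
    ]
    use = [3, 2, 1, 1]

    total = sum(openness) + sum(analysis) + sum(use)   # out of 60 points
    print(f"Total: {total:.1f} / 60 ({total / 60:.0%})")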

Table 1

The qualitative nature of the evaluation may be the most controversial aspect, especially since it seems to contrast with the objective and quantitative orientation of much benefit-cost research. A benefit of our qualitative approach is that it provides a richer and potentially more accurate evaluation of the actual quality of the analysis. As OIRA (2008, 19) notes, "Objective metrics can measure whether an agency performed a particular type of analysis, but may not indicate how well the agency performed this analysis." For example, rather than just asking whether the analysis considered alternatives or counting the number of alternatives considered, we can give an analysis a higher score if it considered a wider range of alternatives. Instead of just asking whether the agency named a market failure, we also assess whether the agency provides plausible evidence that the market failure exists, awarding a higher score based on how convincing the evidence is. Another benefit of the qualitative approach is that it encourages agencies to find the best way to do analysis that can inform decisions, instead of treating regulatory analysis as a "check the box" compliance exercise.

The main drawbacks of qualitative evaluation are that the results can be more subjective, less transparent, and harder to replicate. Several aspects of our research design seek to keep these costs within tolerable limits. We designed the evaluation process to achieve a common, intersubjective understanding of which practices deserve which kind of score, and evaluators took notes justifying each score.5 The entire research team underwent extensive training, evaluated several of the same proposed regulations and accompanying RIAs, compared scores, and discussed major differences until we achieved a consensus on scoring standards. For questions that were particularly difficult to evaluate, we developed written descriptions of the practices that would qualify for various scores in most cases, realizing that these descriptions could never be comprehensive. Each analysis was scored by one of the authors of this paper and another team member, with discussion to achieve consensus when scores differed significantly. Each author also reviewed the other's scores and notes and then discussed and resolved differences to ensure that all documents were evaluated consistently on all questions. A set of composite notes justifying every score on every proposed rule is available on the Web at www.mercatus.org/reportcard.

As any professor who has graded student papers knows, "subjective" is not the same thing as "arbitrary." Other researchers using our scoring questions might evaluate these analyses more harshly or leniently. However, we doubt the rankings would change substantially if other analysts used our scoring criteria to evaluate all of the regulations.

1.3 Examples

To illustrate how the evaluation protocol works, tables 2-4 reproduce scores and scoring notes for three different regulatory analyses on criterion 5, Outcomes. An outcome is the ultimate result of a program or activity that benefits society or some sub-group. "Outcomes are not what the program itself did but the consequences of what the program did." (Hatry 1999, 15)

We intentionally employ the broader term "outcomes" rather than "benefits," because some regulations seek to achieve goals that do not necessarily meet the economist's definition of a social benefit. For example, OMB Circular A-4 (2003, 5) notes, "Congress established some regulatory programs to redistribute resources to select groups." If a regulation's primary goal is redistribution, we assess whether the agency's analysis articulated that goal, established measures, articulated a theory explaining how the regulation accomplishes the goal, presented evidence that the theory is right, and adequately addressed uncertainty about the likelihood and size of the outcomes. This does not mean that we advocate counting redistribution or equity as a social benefit when comparing costs with benefits; that is a value judgment that depends on one's political philosophy and ethics. We ask merely whether the regulatory analysis articulates, measures, and justifies the regulation's goal, regardless of whether the goal increases net social benefits.

Table 2 shows the justifications for the high score awarded to the EPA's National Ambient Air Quality Standards for Lead proposed in 2008. The EPA's analysis identifies several outcomes that directly affect or are linked to citizens' quality of life: improved health, lead levels in blood (an intermediate indicator that research has linked to health outcomes), and IQ development in children. It measures the regulation's results via an intermediate outcome: airborne lead concentrations. It presents a clear theory of how reduced lead emissions into the air will affect lead levels in blood, and it provides empirical evidence that reduced emissions are likely to reduce lead levels in blood. The analysis acknowledges that there is some uncertainty about the effects of the regulation and presents a range of estimates.

Table 2

Table 3

Table 3 provides an example of an analysis that scored poorly on the Outcomes criterion: a regulation from the Department of Veterans Affairs establishing post-9/11 GI Bill benefits for veterans. The analysis received partial credit on the first Outcomes question because it at least implied that the regulation would help improve veterans' quality of life by providing educational opportunities. But the explicit justification offered for the regulation is simply that Congress passed a law telling the department to provide the benefits. The analysis does not explain how positive outcomes for veterans would be measured, offer a theory of how the benefits program could cause the outcomes, offer evidence that the regulation will improve educational outcomes, or assess any uncertainties about the size of the outcomes. This approach is typical of RIAs that score low on the Outcomes criteria; they merely cite a legal rationale for the regulation and say little about the social benefits Congress expected the regulation to produce.

Table 4

Finally, table 4 provides an example of a regulation that received a middling score on the Outcomes criterion: an OSHA regulation intended to improve safety around cranes and derricks at construction sites. The analysis identified workplace safety outcomes that clearly affect human welfare and explained how to measure them, earning a 5 on each of these questions. However, the analysis does not provide much of a documented theory or evidence that the regulations would reduce fatalities and accidents; the reader is assured that "OSHA analysis" proves this is so. An explicit theory, rather than just an assertion, and documentation of evidence supporting the theory would have earned this analysis a higher score on these two questions. Although the RIA acknowledges uncertainty about benefits, it provides little analysis showing how the uncertainties would affect estimates of injuries and fatalities. This RIA illustrates a pattern common among analyses that received mid-level scores: they can name and sometimes measure outcomes that affect human welfare, but they do not do a thorough job of demonstrating why the proposed regulation can be expected to produce the outcomes.

1.4 Caveats

Three significant caveats accompany our findings. First, we evaluate the quality of regulatory analysis and its use in decisions, but we do not evaluate whether the proposed rule is economically efficient, fair, or otherwise good public policy. This paper is a commentary on the quality of the analysis, not the advisability of the regulation itself.

Second, a high score or ranking does not indicate that an agency's analysis is perfect or that we agree with the results. We evaluated whether the RIA and preamble to the proposed rule make a reasonable effort at covering the major elements of regulatory analysis. Specialists with years of experience on particular regulatory topics might find that we have been overly generous. For example, the one economically significant EPA air pollution regulation proposed in 2008 scores fairly high for its analysis of uncertainty regarding the size of benefits, but Fraas (2010) documents significant shortcomings in the EPA's uncertainty analysis of benefits of air quality regulations.

We did not seek to replicate the results ourselves or produce our own analysis. Nor did we attempt to verify the underlying data and studies. Two of our criteria assess whether a reader could check the underlying data and the literature relied upon for theories and assumptions, and we award a higher score if the literature cited appears to be peer-reviewed. Virtually all of these RIAs are likely to be considered "influential" under the Data Quality Act: "The term 'influential scientific information' means scientific information the agency reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions." (OMB 2004, 11) As such, agencies should have subjected both data and models to peer review. But we did not ourselves verify whether the data or models were current, appropriate, or accurate. Therefore, a high-scoring analysis may still have flaws and inaccuracies due to poor underlying data or theories that turn out to be wrong. While an analysis that does poorly in our review is most certainly poor, one that does well may still have severe shortcomings. Authors of previous regulatory scorecards have also noted this drawback (Hahn and Dudley 2007, 196).

Third, we give each criterion the same weight. We recognize, of course, that some criteria, such as whether the agency has identified a systemic problem or whether it has analyzed a broad array of alternatives, are more important than whether the RIA is clearly written for an average reader. Complete scores and scoring notes for each agency are posted online, so that different readers can alter the weightings or examine only the scores on criteria they believe are important.
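Because the complete scores are posted, a reader who prefers different weights can recompute totals directly. The following is a minimal sketch of such a reweighting; the weights and scores in it are hypothetical.

    # Hypothetical reweighting of the 12 criterion scores (0-5 each).
    weights = [0.5, 0.5, 0.5, 0.5,   # Openness: criteria 1-4
               1.5, 2.0, 2.0, 1.5,   # Analysis: criteria 5-8
               1.0, 1.0, 0.5, 0.5]   # Use: criteria 9-12
    scores = [4, 3, 3, 5, 2.8, 1.8, 2.7, 2.0, 3, 2, 1, 1]

    weighted = sum(w * s for w, s in zip(weights, scores))
    maximum = 5 * sum(weights)   # best possible score under these weights
    print(f"Weighted score: {weighted:.1f} / {maximum:.1f}")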

2. Scores and Rankings

2.1 Summary Statistics

Both the average and median score were 27 out of 60 possible points, or 45 percent. The best analysis received 43 points (72 percent), and the worst received only seven points (12 percent). Figure 2 shows the distribution of scores. If these were student papers, the high grade would be a C and the average and median would be an F.

Figure 3 shows average scores in the three categories of openness, analysis, and use. In general, the documents score higher on openness than on the other two categories. This is largely because it is relatively easy to make documents conveniently available to the public even if the analysis is low quality.

Figure 2

Figure 3

2.2 Best and Worst Analyses

Table 5 lists all 45 regulations by score in descending order, along with their Regulatory Identification Numbers and the name of the parent department that issued them. The best initial analysis in 2008 was for the Department of Transportation's Corporate Average Fuel Economy regulation, followed by the Environmental Protection Agency's National Ambient Air Quality Standards for Lead and Housing and Urban Development's proposed revisions to the Real Estate Settlement Procedures Act (RESPA). The three worst analyses come from the Social Security Administration, Department of Veterans' Affairs, and Department of Defense, earning 7, 10, and 12 points respectively.

The 15 regulations printed in red are budget or "transfer" regulations. These regulations outline how the federal government will spend money, set fees, or administer spending programs. Most of these regulations score poorly.

This finding is consistent with OMB's (2009, 19) observation that although transfer regulations generate social costs via mandates, prohibitions, and price distortions, agencies do not usually estimate the social benefits and costs of transfer regulations. That may be because most transfer regulations primarily codify what Congress has already decided in the enabling legislation. In fact, narrow discretion may explain relatively poor quality for other kinds of regulations as well. Belcore and Ellig (2008, 38-41), for example, found this to be true for homeland security regulations.

We should note, however, that regulatory analysis is a tool to help the executive branch make decisions. One of those decisions may be to go back to Congress if an agency identifies a better alternative than what is currently written into law. One could argue that discovering better alternatives in such an instance could be the most valuable contribution of regulatory analysis. OMB Circular A-4 (2003, 17) notes, "If legal constraints prevent the selection of a regulatory action that best satisfies the philosophy and principles of Executive Order 12866, you should identify these constraints and estimate their opportunity cost. Such information may be useful to Congress under the Regulatory Right-to-Know Act." In fact, Congress requires OMB to report on the social costs and benefits of transfer regulations. (OMB 2010, 19) Therefore, while narrow delegation might explain why agencies sometimes produce poor regulatory analysis, narrow delegation does not justify poor analysis.

Table 5

2.3 Average Scores by Regulation Type

Calculating average scores by type of regulation reveals a big discontinuity, as table 6 shows. Average scores for most types of regulations range between 30 and 35 points. Transfer regulations, however, have a substantially lower average score—just 17.1 points. Transfer regulations score lower on all three categories of criteria, but the biggest difference is in the Analysis category, where transfer regulations score only about one-third as many points as other types of regulations.

Table 6

2.4 Agency Average Scores

Table 7 lists average scores for each agency that issued economically significant regulations in 2008. HUD's one regulation earned it the highest agency average. EPA placed second, and Homeland Security placed third. Scores decline relatively smoothly as one moves down the list, except for the 7.7-point gap that separates HHS, ranked 13th, from State, ranked 14th.

Most of the agencies in the top half of the list produced more than one economically significant regulation in 2008. All of the agencies in the bottom half produced just one, except for HHS (11 regulations) and Education (two regulations). Whether this pattern reflects economies of scale or is a mere coincidence remains to be seen.

Table 7

2.5 Average Scores by Criterion

Average scores on individual criteria reveal where regulatory analysis in practice is generally strongest and weakest. The criterion with the highest average score in table 8 is criterion 1, Accessibility. This is not surprising, since making documents accessible to the public via the Internet is relatively easy to do regardless of the quality of the analysis itself.

The two lowest-scoring criteria are both related to retrospective analysis: establishing measures and goals to track the regulation's effects in the future (criterion 11) and gathering data for such assessment (criterion 12). Section 5 of Executive Order 12866 requires agencies to periodically review significant regulations to determine whether they should be modified or eliminated. An expansive reading of this section would take it to mean that agencies should evaluate the costs and benefits of regulations after they have been adopted, regulated entities have complied, and secondary effects have worked their way through the economy.

Apparently few agencies have interpreted the language this way. A recent Government Accountability Office report on retrospective regulatory analysis noted, "Our limited review of agency summaries and reports on completed retrospective reviews revealed that agencies' reviews more often attempted to assess the effectiveness of their implementation of the regulation rather than the effectiveness of the regulation in achieving its goal." (GAO 2007, 20) OMB's annual estimates of the costs and benefits of federal regulation rely heavily on agencies' ex ante cost and benefit estimates, instead of retrospective (ex post) analysis of regulations' actual effects. The most recent report declared, "[W]e recommend that serious consideration be given to finding ways to employ retrospective analysis more regularly, in order to ensure that rules are appropriate, and to expand, reduce, or repeal them in accordance with what has been learned." (OMB 2010, 43)

The other low-scoring criterion is identification of the market failure or other systemic problem the regulation is supposed to solve. This low score is puzzling, because section 1 of Executive Order 12866 leads off by stating that each regulation must identify the problem it seeks to address and assess the significance of that problem. The analyses that score low on this criterion either simply assert a reason for the regulation or mention no explicit rationale at all beyond implementing a law. Such weaknesses are disturbing. It is hard to have confidence that a regulation really will solve a problem, or that the agency has selected the best option for solving a problem, if the agency cannot articulate the problem and cite convincing evidence that the problem exists.

One might imagine that the low average score on the systemic problem criterion is driven by transfer regulations, which often implement very specific legislative mandates, and hence agencies might not feel compelled to explain what problem the regulation seeks to solve. However, we can identify more than a few examples of non-transfer regulations that scored a 1 or 2 on identification of the systemic problem. These include Treasury's risk-based capital rules for banks, Interior's abandoned mine land program and oil shale management rules, DOT's maximum operating pressure for gas transmission pipelines, and Federal Acquisition's employment eligibility verification rules. Another example is DOT's congestion management rule for John F. Kennedy and Newark Airports, which received a 2. A similar rule for LaGuardia airport received a 3 on the same criterion. The main difference was that the RIA for the Kennedy-Newark rule mostly presented the results of the analysis without showing the reader how DOT reached them, whereas the LaGuardia RIA did a better job of walking the reader through that reasoning.

Table 8

Given the lower average scores of transfer regulations, it is no surprise that average scores on individual criteria are generally higher when transfer regulations are excluded. The only exception is the first criterion, Accessibility, because most of the transfer regulations come from HHS and are available via the HHS Web site. The Clarity criterion shows the largest gain when transfer regulations are excluded, indicating that the RIAs for transfer regulations are more difficult than average to understand. Again, the HHS regulations largely account for this. HHS often weaves regulatory analysis into the preamble of the regulations instead of having an extensive "Regulatory Impact Analysis" section, and these preambles are highly technical.

3. Use of Regulatory Analysis

Different scholars offer different hypotheses about whether economic analysis actually has much influence on regulatory decisions. Hahn and Tetlock (2008) conclude that few RIAs have much effect. Williams (2008, 6-7), on the other hand, suggests that regulatory analysis can affect decisions behind the scenes, even if the agency does not explicitly say so in its Federal Register notice. Our scoring on the Use criteria offers another perspective on this question.

3.1 Does Regulatory Analysis Get Used?

The first two Use criteria ask whether there is evidence that the analysis affected decisions about the proposed regulation. To score these criteria, we examine both the RIA and the entire preamble to the rule to see how extensively the agency used information about the systemic problem, or benefits or costs of alternatives, to make decisions. This method is far from perfect. It will not identify any "behind the scenes" influence of the analysis that is not documented in the Federal Register notice. We may also overestimate the effects of analysis in situations where the agency reached its decisions first, then crafted the analysis to support those decisions and cited it as justification.

Figure 4 shows that criterion 9, Use of Analysis, is actually the criterion with the third-highest average score. An agency can earn points on this criterion even if statute prohibits it from considering some factors, such as costs or net benefits. For example, when setting National Ambient Air Quality Standards (NAAQS), "According to the Clean Air Act, EPA must use health-based criteria in setting the NAAQS and cannot consider estimates of compliance cost." (EPA 2008) But since health is one of the key benefits of air quality standards, the EPA received two points on criterion 9 for using the health analysis to inform its decision.

Criterion 10, Net Benefits, receives a lower average score when transfer regulations are included (2.20 points) but a much higher score when they are excluded (2.93 points). One might argue that net benefits are irrelevant when a regulation "merely" transfers money, but surely most federal expenditures are supposed to achieve some type of public benefit that is often measurable. To achieve a good score on this criterion, the agency does not have to select the alternative that maximizes net benefits. Rather, the agency must demonstrate that it is cognizant of net benefits and weighed them against other factors when making its decision. If the RIA calculated net benefits of multiple alternatives but the preamble to the proposed rule clearly states the justification for choosing an alternative that did not maximize net benefits, the agency can still score well on this criterion. We score the Net Benefits criterion this way in order to avoid imposing the value judgment that agencies "ought" to choose the alternative that maximizes net benefits. Instead, we evaluate whether decision makers took notice of net benefits and then determined what weight net benefits should have in the decision.

Figures 4 and 5 show that the scores on these two criteria have a somewhat bimodal distribution. About ten regulations show fairly strong evidence that the analysis affected some major decision, earning a score of 4 or 5. More than 20 regulations show little or no use of the analysis. The remaining regulations show some use of the analysis, but not substantial use. We infer from this that regulatory analysis sometimes has a significant effect on the regulation (as evidenced by language in the preamble), but more often has a marginal effect or no effect.

Figure 4

Figure 5

The lowest scores in the Use category are on the two retrospective analysis criteria, listed at the bottom of table 8. Only four regulations earned a 3 or better on criterion 11, Measures and Goals, and only ten regulations earned a 3 or better on criterion 12, Retrospective Data. These scores show that few economically significant regulations include any substantial plans for retrospective analysis of either costs or benefits. Seventeen years after the Government Performance and Results Act required agencies to develop goals and measures for their major programs, this is disappointing news indeed. Since economically significant regulations are the ones with the largest impact, surely most of them are related to an agency's fundamental mission and strategic goals.

3.2 Does Better Analysis Get Used?

Since we evaluated both the quality and the use of regulatory analysis, we can test to see whether there is any correlation between the two. Does better analysis get used?

Figure 6

Figure 7

Table 9

Table 10

Figures 6 and 7 plot the scores from the Use criteria against the total scores on the quality criteria (criteria 1-8). The "fitted values" line shows the predicted value of Use based on an ordinary least squares regression of the Use scores on the quality scores. Better analysis is correlated with greater use of analysis. The relationship is less pronounced with transfer regulations excluded. Most of the transfer regulations scored relatively low on both quality and use.
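For readers who want to reproduce this kind of fitted line, the following minimal sketch shows the underlying computation. The score pairs here are hypothetical placeholders; the actual scores are posted at www.mercatus.org/reportcard.

    import numpy as np

    # Hypothetical (quality, use) score pairs for illustration only.
    quality = np.array([14, 22, 30, 25, 35, 18, 28, 33, 12, 26])  # criteria 1-8
    use = np.array([4, 7, 11, 8, 14, 5, 9, 13, 3, 8])             # criteria 9-12

    # Ordinary least squares fit: use = a + b * quality.
    b, a = np.polyfit(quality, use, 1)   # slope, intercept
    fitted = a + b * quality             # the "fitted values" line
    r = np.corrcoef(quality, use)[0, 1]  # bivariate correlation
    print(f"slope={b:.2f}, intercept={a:.2f}, correlation={r:.2f}")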

The statistical results in tables 9 and 10 investigate the quality-use relationship in greater depth. Table 9 shows regression results using all 45 regulations; table 10 uses only the non-transfer regulations. Both tables reveal that there is a tighter and more significant correlation between use (criteria 9-12) and the analysis score (criteria 5-8) than between use and the quality score (criteria 1-8). In other words, good analysis is more likely to be correlated with use even if it is more difficult to find, less thoroughly documented, or harder to read.

When all 45 regulations are considered, table 9 shows that there is a positive and statistically significant correlation between the quality of the analysis and every sub-component of the use score. When the sample is confined to non-transfer regulations, however, the relationship is somewhat less extensive, as table 10 shows. Taken together, the use criteria are still highly correlated with quality of the analysis. Quality of analysis is also statistically significantly correlated with the sum of criteria 9 and 10 (Some Use of Analysis and Net Benefits) and with the sum of criteria 11 and 12 (Measures and Goals and Retrospective Data). But when the regressions are run using individual criteria, the quality of analysis is only marginally significant for criteria 10-12. For non-transfer regulations, it appears that the principal source of correlation between quality and use is criterion 9, which measures whether there is any evidence that the RIA affected decisions in the regulation. For non-transfer regulations, good analysis might not be correlated with consideration of net benefits, nor is it associated with retrospective analysis.

Nevertheless, there is some evidence that better analysis is correlated with use of that analysis. This correlation could mean any number of things. Perhaps improving the quality of analysis improves the odds that decision makers will find it useful. Or perhaps causation runs the other way: When decision makers are willing to use regulatory analysis to make decisions, better regulatory analysis gets produced. Or perhaps the correlation is driven by statutory requirements that agencies either must or must not consider various aspects of regulatory analysis when making decisions. Even if most agencies treat RIAs as a mere compliance exercise, it is interesting to note that agencies that are better at complying with regulatory analysis requirements are also more likely to claim that their analysis influenced their decisions. Clearly, the relationship between quality of analysis and use of analysis is an area ripe for further research.

4. Best Practices

4.1 Identifying Best Practices

Our qualitative evaluation method allows us to identify which analyses have done a particularly good job according to the various criteria. We use the term "best practices" in a relative sense: the practices identified by this study are the best we have found in this set of 45 proposed regulations for 2008. Comprehensively identifying all of the best (and worst) practices in 45 analyses would take more space than is available here. In this section, we highlight some examples of best practices from analyses that earned scores of 4 or 5 to illustrate how our evaluation methodology readily identifies them.

Criterion 1: Accessibility

Twelve regulations earned a score of 5 on this criterion. To earn this score, the Federal Register notice and the RIA (if it is a separate document) had to be easily available on regulations.gov via a keyword search or a search using the RIN. In addition, the Federal Register notice, RIA, and any supporting materials had to be easily available on the agency's Web site. "Easily available" means they can be located quickly and unambiguously using the Web page's search function, or via an intuitive path of clicks from the agency's home page.

Criterion 2: Data Documentation

One regulation, EPA's National Ambient Air Quality Standards for Lead, received a 5 on this criterion. The RIA for this rule has working hyperlinks that take readers directly to the data used in the analysis.

Criterion 3: Model Documentation

Three regulations received a 5 on this criterion. They are DOT's CAFE regulation, Labor's Family and Medical Leave Act regulation, and HHS's regulation revising HIPAA Code Sets. In DOT's CAFE RIA, full citations are given for all studies referenced, many are linked, and the model developed by the Volpe Center that is used to estimate many of the regulation's effects is available via DOT's web page. In Labor's RIA, all aspects of models and assumptions are based on cited literature or analyses. It is obvious to the reader that cited works are recent publications. Most publications are linked. HHS's analysis is primarily based on studies by the RAND Corporation and Robert E. Nolan Company, which appear to be credible and carefully done. Links to both are provided. HHS gives credible reasons for relying less heavily on a report estimating costs commissioned by health insurers.

Criterion 4: Clarity

Three regulations received a 5 on this criterion. The RIA for Homeland Security's Large Aircraft Security Program is clear and understandable. While some technical jargon is used, all of it is explained, with aids such as a detailed abbreviations sheet. Although not cited as well as they could be, the charts and graphs on total annualized costs are also very understandable. Especially helpful in illuminating the proposed rule's changes for the layperson is a chart titled "Proposed Changes to the Existing Regulatory Framework." Labor's RIA for Refuge Alternatives for Underground Coal Mines is likewise understandable, with minimal technical jargon and well-explained charts. Interior's Oil Shale Management RIA is actually an interesting read; the language is direct, simple, and easy to understand.

Criterion 5: Outcomes

Two regulations, both from EPA, earned a score of 5 on this criterion: National Ambient Air Quality Standards for Lead and Effluent Limitations for Construction and Development. The Lead RIA's approach to Outcomes is discussed in section 1.3 above. In the construction and development RIA, EPA enumerates and measures several outcomes that affect human welfare: easier navigation, easier water storage, easier water treatment due to less sediment, and improved water quality. The claim that reducing runoff reduces sediment in water is supported empirically. The RIA also includes numerous discussions of uncertainties and sensitivity analyses.

Criterion 6: Systemic Problem

HUD's analysis of proposed changes in the Real Estate Settlement Procedures Act achieved a score of 5 on this criterion; no others did. The RIA posits that the complexity of mortgage transactions and lack of knowledge by some borrowers allows mortgage providers to offer the less-informed borrowers less-favorable terms. It cites several consulting and government studies which find that consumers with less education, no counseling, or more complex shopping strategies tend to pay more for loans and settlement services, and they are not fully compensated for these higher charges with better interest rates on mortgage loans. Most of these studies rely on representative samples of mortgage loans or consumers, seeking to draw general conclusions rather than just recounting anecdotes about problems faced by specific customers. About the only weak point of this analysis of the systemic problem is that HUD did not analyze uncertainties about the existence or size of the systemic problem.

Criterion 7: Alternatives

No regulation received a score of 5 for its analysis of alternatives. DOE's analysis of Energy Conservation Standards for General Service Fluorescent Lamps and Incandescent Reflector Lamps probably came closest. The analysis considered a broad list of nine alternative ways to encourage energy conservation by purchasers of these lamps, including non-regulatory alternatives such as tax credits, voluntary conservation programs, and bulk purchases by government. A table presents the net benefits of each alternative. The baseline is well-defined, though it is presented in a different document than the RIA.

Criterion 8: Benefit-Cost Analysis

For benefit-cost analysis, the highest score was 4. This occurred because the criterion involves nine sub-questions, and no analysis earned 5s on enough of them to average 5 for the criterion. However, at least one analysis earned a score of 5 on each benefit-cost sub-question. This indicates that at least some analyses contain quite good benefit-cost analysis, but none excels in every dimension.

A few benefit-cost topics with notable best practices include:

Cost analysis: A cluster of our questions asks whether the regulatory analysis identifies incremental costs of all alternatives considered, shows how these costs would affect the prices of goods and services, and analyzes how businesses and consumers would change their behavior in response to the price changes or other aspects of costs. The EPA's RIA on Effluent Limitations for Construction and Development provides several good examples of how to do these things. The RIA identifies compliance costs and assesses how these would affect firms in the construction and development industry. It calculates how the costs would be passed through to buyers of single-family homes—even estimating how the compliance costs would affect down payments and mortgage payments. It estimates effects on the economy-wide demand for goods and services, calculating the economic "deadweight loss" due to the reduction in output. And it does these things for all three alternatives the EPA considered.

Uncertainty: One of the best examples of uncertainty analysis occurs in the RIA that assesses a pair of Justice Department regulations that revise standards for access to public, commercial, and state-owned facilities under the Americans with Disabilities Act. The RIA presents extensive analysis of uncertainties in input values, including sensitivity analysis and Monte Carlo analysis. (A Monte Carlo analysis treats multiple input variables as probability distributions rather than single values, generating a range of possible results as numerous input variables change at the same time; see the illustrative sketch following this list.) Results are displayed in graph format to indicate the sensitivities for many of the adjustments that would result from the regulation.

Net benefits: This question asks simply whether the analysis identifies the approach that maximizes net benefits. One good example occurs in DOT's CAFE RIA, which includes tables listing the net benefits and costs of seven alternative standards.

Incidence of benefits or costs: HHS's analysis for its proposed Electronic Transaction Standards identifies multiple categories of benefits and costs for a comprehensive list of stakeholders that includes hospitals, physicians, dentists, pharmacies, health plans, and state and federal governments. The Justice Department's analysis of Electronic Prescriptions for Controlled Substances identifies the different parties who receive the quantified benefits. Doctors and pharmacies, for example, benefit from reductions in phone calls to check paper prescriptions, and patients benefit from reductions in waiting time. Similarly, the RIA identifies which parties will bear which types of costs. Homeland Security's analysis of the proposed Biometric Exit System includes calculations of cost and labor savings that will accrue to several government agencies as well as a discussion of increased economic activity and national security benefits, which accrue to the nation as a whole.
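The sketch below illustrates the generic Monte Carlo technique described under Uncertainty above. The input distributions and the net-benefit formula are hypothetical, chosen only to show how uncertainty in several inputs propagates to a range of results rather than a single point estimate.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n = 100_000  # number of Monte Carlo draws

    # Hypothetical uncertain inputs, modeled as distributions, not point values.
    facilities = rng.normal(50_000, 5_000, size=n)            # facilities affected
    cost_each = rng.triangular(2_000, 5_000, 12_000, size=n)  # cost per facility
    benefit_each = rng.lognormal(np.log(9_000), 0.4, size=n)  # benefit per facility

    net_benefits = facilities * (benefit_each - cost_each)

    # Report a range of outcomes (5th, 50th, 95th percentiles) rather
    # than a single point estimate.
    lo, med, hi = np.percentile(net_benefits, [5, 50, 95])
    print(f"Net benefits: {lo:,.0f} / {med:,.0f} / {hi:,.0f}")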

Criterion 9: Use of Analysis

Two regulations received a 5 on this criterion. DOT's proposed CAFE regulations show significant use of the regulatory analysis. The preamble to the proposed rule frequently refers to the RIA, and the analysis informs many major decisions, such as the stringency of the standards and the decision to set standards based on vehicle attributes rather than a fleetwide average. The preamble to DOJ's ADA rule pertaining to state and local governments expressed a desire to ensure that all categories of mandated renovations have benefits that exceed costs, and it solicited further comment on benefit-cost issues.

Criterion 10: Net Benefits

This criterion asks whether the agency was cognizant of net benefits when it made decisions. Even if the agency declined to choose an alternative that maximized net benefits, it could still achieve a high score by explaining what other factors led it to eschew maximizing net benefits. Two regulations received a 5 on this criterion. In its proposed CAFE regulation, DOT explicitly explained that it sought to set the standards at the level that maximized net benefits.

In its Energy Efficiency Standards for Fluorescent Lamps, DOE provides a good example of making decisions that take net benefits into account but do not necessarily maximize net benefits. The analysis calculated the net present value of consumer costs or savings for a variety of standards and non-regulatory alternatives, as well as costs to industry, so the department was clearly aware of net benefit issues. The department repeatedly solicited additional data that would allow it to better evaluate costs and the existence or size of market failures in order to better understand the net benefits of alternative standards (DOE 2009, 17018-20). The law, however, requires the department to set a standard that "offers the maximum improvement in efficacy that is technologically feasible and economically justified, and will result in significant conservation of energy" (DOE 2009, 16923). DOE selected the alternative that it believes meets that legal standard.

Criteria 11 and 12: Measures and Goals, Retrospective Data

Only one regulation received a 5 on either of these criteria: Homeland Security's Biometric Exit rule. The RIA explains how the benefits projected from the proposed regulation relate to the department's strategic goals, and it proposes performance measures. An appendix to the RIA explains the difference between ex ante benefit-cost analysis and performance tracking after implementation. The appendix also outlines more specific outcome performance measures based on the benefits projected for the rule. The Federal Register notice does not explicitly commit to evaluating the regulation in this way, but the RIA gives the impression that it will be done. The approach is sufficiently innovative that it deserves a high score.

4.2 Potential Impact of Best Practices

Many regulatory analyses could be better executed. Widespread adoption of existing best practices would lead to substantial improvement.

Table 11 demonstrates both of these points by comparing the average score on each criterion with the highest score any analysis achieved on that criterion. On the Accessibility criterion, 12 analyses earned the highest score of 5. On 10 of the remaining criteria, no more than three analyses earned a 5. No analysis earned a score of 5 for criterion 8, Benefit-Cost Analysis, but at least one earned a 5 on each sub-question under Benefit-Cost Analysis. Here again, more widespread adoption of existing best practices could substantially improve the quality of most regulatory analyses.

Table 11

5. Conclusions

Regulatory analysis is supposed to inform regulatory decisions, not merely justify them after the fact or simply fulfill a requirement to clear a rule through OIRA. Since proposed regulations usually reflect a great deal of up-front work and are supposed to represent the agency's preferred approach to problem-solving, we evaluated the quality of regulatory analyses accompanying proposed regulations. This allows us to assess whether the analysis conducted closest to the time when initial decisions are made is comprehensive and reliable enough to inform those decisions. In addition, we evaluated whether the agency has demonstrated a commitment to using regulatory analysis to inform its decisions, now and in the future. This allows us to assess whether the quality of regulatory analysis is correlated with its use.

Our findings on quality are generally consistent with prior literature but provide slightly more cause for optimism. Hahn and Tetlock (2008, 74), for example, note that regulatory analyses of a large sample of environmental regulations covered an average of approximately 30 out of 76 items on Hahn's scorecard, or about 40 percent. Our results, using a qualitative scoring system, are somewhat better. On average, analyses of environmental regulations earned 31.8 out of 60 possible points, or 53 percent. The average for all regulations we assessed was 27 out of 60 possible points, or 45 percent. Excluding transfer regulations, the average was 32, or 54 percent. These figures suggest either that the quality of regulatory analysis has improved somewhat, or that our scoring method identifies some strengths that Hahn's approach does not.

Qualitative scoring allows us to distinguish between better and worse implementation of economic analysis on particular criteria. The scores clearly indicate that every aspect of regulatory analysis is done reasonably well by someone in some agency on some regulation, but no single analysis comes close to doing everything well. In one sense, this is disappointing: 32 years have elapsed since President Carter's executive order on Regulatory Analysis, and 30 years since President Reagan's executive order outlined the first comprehensive requirement for benefit-cost analysis. Substantial improvements in regulatory analysis could occur across the board if federal agencies could mobilize and spread know-how that already exists.

Our results also suggest that regulatory analysis is perhaps more widely used than previous research has shown. A reading of the RIAs and the agencies' justifications for their rules shows that the analysis affected at least one major aspect of the regulatory decision in about 10 rules, or 22 percent of the economically significant regulations proposed in 2008. Moreover, use of analysis is positively correlated with quality of analysis, though which way the causation runs remains to be seen.

APPENDIX

Major Factors Considered When Evaluating Each Criterion

Note: Regardless of how they are worded, all questions involve qualitative analysis of how well the RIA addresses the issue, rather than "yes/no" answers.

Openness

1. How easily were the RIA, the proposed rule, and any supplementary materials found online?

How easily can the proposed rule and RIA be found on the agency's Web site?
How easily can the proposed rule and RIA be found on Regulations.gov?
Can the proposed rule and RIA be found without contacting the agency for assistance?

2. How verifiable are the data used in the analysis?

Is there evidence that the RIA used data?
Does the RIA provide sufficient information for the reader to verify the data? How much of the data are sourced?
Does the RIA provide direct access to the data via links, URLs, or provision of data in appendices?
If data are confidential, how well does the RIA assure the reader that the data are valid?

3. How verifiable are the models and assumptions used in the analysis?

Are models and assumptions stated clearly?
How well does the RIA justify any models or assumptions used?
How easily can the reader verify the accuracy of models and assumptions?
Does the RIA provide citations to sources that justify the models or assumptions?
Does the RIA demonstrate that its models and assumptions are widely accepted by relevant experts?
How reliable are the sources?
Are the sources peer-reviewed?

4. Was the Regulatory Impact Analysis comprehensible to an informed layperson?

How well can a non-specialist reader understand the results or conclusions?
How well can a non-specialist reader understand how the RIA reached the results?
How well can a specialist reader understand how the RIA reached the results?
Is the RIA written in "plain English"? (Light on technical jargon and acronyms, well-organized, grammatically correct, direct language used.)

Analysis

5. How well does the analysis identify the desired outcomes and demonstrate that the regulation will achieve them?

How well does the RIA clearly identify ultimate outcomes that affect citizens' quality of life?
How well does the RIA identify how these outcomes are to be measured?
Does the RIA provide a coherent and testable theory showing how the regulation will produce the desired outcomes?
Does the analysis present credible empirical support for the theory?
Does the analysis adequately assess uncertainty about the outcomes?

6. How well does the analysis identify and demonstrate the existence of a market failure or other systemic problem the regulation is supposed to solve?

Does the analysis identify a market failure or other systemic problem?
Does the analysis outline a coherent and testable theory that explains why the problem (associated with the outcome above) is systemic rather than anecdotal?
Does the analysis present credible empirical support for the theory?
Does the analysis adequately assess uncertainty about the existence and size of the problem?

7. How well does the analysis assess the effectiveness of alternative approaches?

Does the analysis enumerate other alternatives to address the problem?
Is the range of alternatives considered narrow or broad?
Does the analysis evaluate how alternative approaches would affect the amount of the outcome achieved?
Does the analysis adequately address the baseline—what the state of the world is likely to be in the absence of further federal action?

8. How well does the analysis assess costs and benefits?

Does the analysis identify and quantify incremental costs of all alternatives considered?
Does the analysis identify all expenditures likely to arise as a result of the regulation?
Does the analysis identify how the regulation would likely affect the prices of goods and services?
Does the analysis examine costs that stem from changes in human behavior as consumers and producers respond to the regulation?
Does the analysis adequately address uncertainty about costs?
Does the analysis identify the approach that maximizes net benefits?
Does the analysis identify the cost-effectiveness of each alternative considered?
Does the analysis identify all parties who would bear costs and assess the incidence of costs?
Does the analysis identify all parties who would receive benefits and assess the incidence of benefits?

Use

9. Does the proposed rule or the RIA present evidence that the agency used the Regulatory Impact Analysis?

Does the proposed rule or the RIA assert that the RIA's results affected any decisions?
How many aspects of the proposed rule did the RIA affect?
How significant are the decisions the RIA affected?

10. Did the agency maximize net benefits or explain why it chose another option?

Did the RIA calculate net benefits of one or more options so that they could be compared?
Did the RIA calculate net benefits of all options considered?
Did the agency either choose the option that maximized net benefits or explain why it chose another option?
How broad a range of alternatives did the agency consider?

11. Does the proposed rule establish measures and goals that can be used to track the regulation's results in the future?

Does the RIA contain analysis or results that could be used to establish goals and measures to assess the results of the regulation in the future?
In the RIA or the proposed rule, does the agency commit to performing some type of retrospective analysis of the regulation's effects?
Does the agency explicitly articulate goals for the major outcomes the rule is supposed to affect?
Does the agency establish measures for major outcomes the rule is supposed to affect?
Does the agency set targets for measures of major outcomes the rule is supposed to affect?

12. Did the agency indicate what data it will use to assess the regulation's performance in the future and establish provisions for doing so?

Does the RIA or proposed rule demonstrate that the agency has access to data that could be used to assess some aspects of the regulation's performance in the future?
Would comparing actual outcomes to outcomes predicted in the RIA generate a reasonably complete understanding of the regulation's effects?
Does the agency suggest it will evaluate future effects of the regulation using data it has access to or commits to gathering?
Does the agency explicitly enumerate data it will use to evaluate major outcomes the regulation is supposed to accomplish in the future?
Does the RIA demonstrate that the agency understands how to control for other factors that may affect outcomes in the future?

ENDNOTES

1. Direct assessments of the contents of RIAs include Hahn and Dudley (2007), Hahn et al. (2000), Hahn and Litan (2005), Hahn, Lutter, and Viscusi (2000), and Belcore and Ellig (2009). Assessments that compare ex ante benefits and costs in RIAs with actual ex post estimates include Harrington et al. (2000), OMB (2005), and Harrington (2006).

2. "Economically significant" regulations are defined as regulations that have an economic impact exceeding $100 million or that "adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local or tribal governments or communities."(EO 12866 Sec. 3(f)(1)) Economically significant regulations require an extensive Regulatory Impact Analysis (RIA) that assesses the need, effectiveness, benefits, costs, and alternatives for the proposed regulation. (EO 12866 Sec. 6(a)(3)(C))

3. The qualitative evaluation method is based on the Mercatus Center's Performance Report Scorecard, a 10-year project that assessed the quality of federal agencies' annual performance reports required under the Government Performance and Results Act of 1993. For the most recent results, see McTigue et al. (2009).

4. We acknowledge that the decision makers may already have their minds made up before any economic analysis is done (Williams 2008, 18–19), but remain optimistic that better analysis may sometimes lead to better decisions.

5. The term "intersubjective" refers to subjective interpretations that different individuals can share because they have commonly understood meanings. Social scientists most commonly use the term to denote economic agents' ability to understand the interpretations and meanings of other economic agents (Schutz 1953, 7-8) or the social scientist's ability to understand the interpretations and meanings of the economic agents who are the subject of study (Schutz 1953, 34; Lavoie 1990, 172-77). We think it applies equally well here, when colleagues share similar subjective understandings of what constitutes better and worse analyses.

REFERENCES

Belcore, Jamie, and Jerry Ellig. 2009. "Homeland Security and Regulatory Analysis: Are We Safe Yet?" Rutgers Law Journal (Fall).

Brito, Jerry, and Jerry Ellig. 2009. "Toward a More Perfect Union: Regulatory Analysis and Performance Management," Florida State University Business Review 8:1 (Spring/Summer).

Environmental Protection Agency. 2008. "Proposed Lead NAAQS, Regulatory Impact Analysis" (June).

Executive Order 12866 (Oct. 4, 1993).

Fraas, Arthur G. 2010. "Uncertainty in EPA's Analysis of Air Pollution Rules: A Status Report," Resources for the Future Discussion Paper 10-04 (February).

Government Accountability Office. 2007. Reexamining Regulations: Opportunities Exist to Improve Effectiveness and Transparency of Retrospective Reviews. Report No. GAO-07-791.

Greene, William H. 2003. Econometric Analysis, Fifth Edition. Upper Saddle River, NJ: Prentice-Hall, Inc.

Hahn, Robert W., Jason Burnett, Yee-Ho I. Chan, Elizabeth Mader, and Petrea Moyle. 2000. "Assessing Regulatory Impact Analyses: The Failure of Agencies to Comply with Executive Order 12,866." Harvard Journal of Law and Public Policy, 23(3): 859–71.

Hahn, Robert W., and Patrick Dudley. 2007. "How Well Does the Government Do Cost–Benefit Analysis?" Review of Environmental Economics and Policy, 1(2): 192–211.

Hahn, Robert W., and Robert Litan. 2005. "Counting Regulatory Benefits and Costs: Lessons for the U.S. and Europe," Journal of International Economic Law 8(2), 473-508.

Hahn, Robert W., Randall W. Lutter, and W. Kip Viscusi. 2000. Do Federal Regulations Reduce Mortality? Washington, DC: AEI-Brookings Joint Center for Regulatory Studies.

Hahn, Robert W., and Paul C. Tetlock. 2008. "Has Economic Analysis Improved Regulatory Decisions?" Journal of Economic Perspectives 22(1) (Winter): 67-84.

Harrington, Winston. 2006. "Grading Estimates of the Benefits and Costs of Federal Regulation: A Review of Reviews." Discussion Paper 06-39. Washington, DC: Resources for the Future.

Harrington, Winston, Richard Morgenstern, and Peter Nelson. 2000. "On the Accuracy of Regulatory Cost Estimates." Journal of Policy Analysis and Management, 19(2): 297–332.

Hatry, Harry P. 1999. Performance Measurement. Washington, DC: Urban Institute Press.

Lavoie, Don. 1990. "Hermeneutics, Subjectivity, and the Lester-Machlup Debate: Toward a More Anthropological Approach to Empirical Economics," in Warren J. Samuels (ed.), Economics As Discourse: An Analysis of the Language of Economists. Norwell, MA: Kluwer Academic Publishers: 167-84.

Maddala, G. S. 2001. Introduction to Econometrics. West Sussex, England: John Wiley and Sons, Ltd.

McTigue, Maurice, Henry Wray, and Jerry Ellig. 2009. 10th Annual Performance Report Scorecard: Which Federal Agencies Best Inform the Public? Arlington, VA: Mercatus Center at George Mason University.

Morgenstern, Richard D., ed. 1997. Economic Analysis at EPA: Assessing Regulatory Impact. Washington, DC: Resources for the Future.

Office of Management and Budget. 2010. 2009 Report to Congress on the Benefits and Costs of Federal Regulations and Unfunded Mandates on State, Local, and Tribal Entities. Washington, DC: Office of Information and Regulatory Affairs.

________. 2009. Federal Regulatory Review, Request for Comments. Federal Register 74:37 (Feb. 26), 8819.

________. 2008. 2008 Report to Congress on the Benefits and Costs of Federal Regulations and Unfunded Mandates on State, Local, and Tribal Entities. Washington, DC: Office of Information and Regulatory Affairs.

________. 2005. Validating Regulatory Analysis: Report to Congress on the Costs and Benefits of Federal Regulations and Unfunded Mandates on State, Local, and Tribal Entities. Washington, DC: Office of Information and Regulatory Affairs.

________. 2004. Final Information Quality Bulletin for Peer Review. (December 16).

________. 2003. Circular A-4 (Sept. 17).

Schütz, Alfred. 1953. "Common-Sense and Scientific Interpretation of Human Action," Philosophy and Phenomenological Research 14:1 (Sept.): 1-38.

Williams, Richard. 2008. "The Influence of Regulatory Economists in Federal Health and Safety Agencies," Mercatus Center Working Paper (July), https://www.mercatus.org/publication/influence-regulatory-economists-fe….