"Between the idea and the reality, between the motion and the act, fall the environmental, cost-benefit, and small entity analyses." T.S. Eliot didn't write exactly that ("falls the shadow"), but it applies to federal regulation.
Various statutes and executive orders require environmental analysis, cost-benefit analysis, economic analysis of the impacts of a regulation on small businesses, governments, and non-profits, and more, before a good (or bad) idea can become regulatory reality. Most of these requirements are informational: a regulation isn't generally precluded because it may fail a cost-benefit analysis, or have an adverse impact on small entities.
These statutes and orders are an attempt to deal with a principal-agent problem. Civil servants focus on a problem defined by their agency mission. Highway administrators want to build highways - they believe in highways, and their constituency groups also want highways. But the public has a wider range of interests: it wants highways, but it also wants to protect the environment and to provide opportunities for small businesses. These analysis requirements attempt to force agencies to confront some of these broader issues.
The lineage of the cost-benefit requirement can be traced back to the Nixon Administration. It took its modern form early in the first Reagan administration, in Executive Order 12291. That order served through the Reagan and first Bush administrations and was superseded by E.O. 12866 early in the Clinton administration. Despite Reagan-era liberal concerns that cost-benefit analysis was simply a tool to prevent needed social and environmental regulation, the Clinton administration continued the practice. The current Bush administration has not replaced E.O. 12866, although it did introduce new guidelines for 12866 analysis last fall.
Does the cost-benefit requirement do any good? Winston Harrington and Richard Morgenstern ("Evaluating Regulatory Impact Analyses") survey the literature on the different ways of approaching this question. The survey was prepared for an Organisation for Economic Co-operation and Development (OECD) conference on evaluation of regulation last fall.
Are the analyses done well? (Presumably a bad analysis won't accomplish much good.) Do they contain the basic things you should find in a good analysis, such as a selection of alternatives and an executive summary? (an "extensive content test"). Are the requisite parts not only present but well done? (an "intensive content test"). Are their predictions borne out? (an "outcome test"). Do they make a difference in regulatory outcomes? (a "function test"). More to the point, do they improve the functioning of the regulatory process? (a more demanding version of the function test).
Robert Hahn and Patricia Dudley do an "intensive content" test in a recent AEI-Brookings Joint Center for Regulatory Studies report, "How Well Does the Government Do Cost-Benefit Analysis?". They review 55 EPA studies from 1982 to 1999 to see to what extent they include the basic elements of a good analysis. Hahn and Dudley focus on analyses of economically significant regulations - those with costs or benefits over $100 million a year.
Hahn and Dudley found that:
- Not all studies included estimates of total costs; costs to industry were reported more often than costs to federal and state governments.
- Benefit estimates were not presented as systematically as cost estimates; benefits were less likely to be monetized.
- Comparisons of costs and benefits (through net benefit or cost effectiveness analysis) were often not made even when they could be made: "Of the rules in the sample that monetized benefits, only 54 percent calculated net benefits. Of the rules in the sample that quantified benefits, only 69 percent calculated cost effectiveness or net benefits." (page 11)
- The percentage of analyses including at least one alternative dropped from 89% in the Reagan administration to 74% in the Clinton administration. It was uncommon to calculate net benefits or cost effectiveness for the alternatives.
- There was no clear trend in the quality of the analyses across administrations.
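The third finding turns on a distinction worth making explicit: calculating net benefits requires that benefits be monetized (expressed in dollars), while cost-effectiveness only requires that they be quantified in some physical unit. A minimal sketch with hypothetical numbers (none taken from the Hahn-Dudley report):

```python
# Hypothetical regulation, for illustration only.
annual_cost = 120_000_000          # total annual compliance cost, in dollars
monetized_benefit = 150_000_000    # benefits expressed in dollars
quantified_benefit = 300           # benefits in physical units (e.g., cases of illness avoided)

# Net benefits: possible only when benefits are monetized.
net_benefits = monetized_benefit - annual_cost

# Cost effectiveness: possible whenever benefits are quantified,
# even if no dollar value has been assigned to them.
cost_per_unit = annual_cost / quantified_benefit

print(net_benefits)   # dollars of net benefit per year
print(cost_per_unit)  # dollars per case avoided
```

Since quantification is a weaker requirement than monetization, an analysis that can compute net benefits can always compute cost effectiveness, but not vice versa - which is why Hahn and Dudley report the two percentages separately.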
A useful 79-point checklist in the appendix shows the elements Hahn and Dudley would like to have seen in these analyses.
Harrington, Morgenstern and Nelson performed an "outcome" test in the RFF discussion paper "On the Accuracy of Regulatory Cost Estimates." This is a nice article - I posted on it in October 2002 in "Do government economists make mistakes?".
Harrington and Morgenstern survey the literature on "function testing." They note that examining individual documents can be revealing: the presence of genuine alternatives suggests the document was meant for use in designing the regulation, while the absence of alternatives (or alternatives that are obvious strawmen) suggests it was not.
Statistical studies relating the characteristics of analyses to regulatory outcomes may also be helpful. Do the regulations differ systematically with characteristics of the regulatory analysis? This may suggest that one affects the other, although whether for better or worse may be unclear. Harrington and Morgenstern point to two studies. One found that higher quality analysis was associated with more stringent (although not necessarily better) environmental regulation. A second found that analysis "had at best a slight effect on cost-effectiveness."
Case studies may also identify relations between analysis and regulatory outcomes, and meta-analyses of many case studies can begin to suggest patterns. Harrington and Morgenstern point to a volume of case studies of EPA regulation (Morgenstern, Richard D., editor. Economic Analyses at EPA: Assessing Regulatory Impact). The authors of these case studies - generally "closely connected with the regulatory process they were writing about" - felt that the analyses had improved the ultimate regulations by leading to changes that reduced costs or increased benefits. A clear lesson from these studies was:
"...the critical importance of timing to the usefulness of RIAs [cost-benefit analyses - Ben]. Several case-study authors mentioned the fact that many RIAs are not initiated until after the regulatory process is well under way, often after the preferred alternative has been selected...In this situation, the usefulness of the RIA is obviously undermined. Worse, it puts pressure on the analyst not to deliver bad news about benefits and costs, especially about the preferred alternative, leading to cynicism about the role of RIAs in the regulatory process. Most analysts believe the RIA should begin before the regulatory process begins, in order to develop information useful in decisionmaking."
Minor editorial revisions 4-14-04