
The Problem of Unquantified Benefits

 


I. Introduction

[Cost-benefit analysis] minimizes decision costs through the magic of quantification. Once valuations are obtained from the marketplace and surveys . . . decisions are relatively automatic. – Jonathan Masur & Eric Posner[1]

When important benefits and costs cannot be expressed in monetary units, [cost-benefit analysis] is less useful, and it can even be misleading, because the calculation of net benefits in such cases does not provide a full evaluation of all relevant benefits and costs. – Office of Management and Budget[2]

It’s a simple idea. Before issuing regulations, the government should first add up all the social costs and the social benefits and compare them.[3] But the devil is in the details. Drawing meaningful conclusions from a comparison of costs and benefits is difficult—and sometimes maybe impossible—unless you can quantify both sides in a common metric. If costs are measured in dollars, then the best way to accomplish a comparison is to measure the benefits in dollars as well.

And there’s the rub. While regulatory costs tend to involve values that are relatively easy to measure and express in monetary terms—the cost of installing a scrubber on a smokestack, for example—regulatory benefits tend to involve things that are hard to quantify, and even harder to monetize.[4] They include things like effects on human health, premature death, degradation of ecosystems, extinction of species, and so on. And if costs are completely (or relatively completely) monetized, but benefits only partially so, then drawing any meaningful conclusion from a comparison becomes problematic.

This is hardly a new insight. Indeed, most of the criticisms raised by those who are skeptical of cost-benefit analysis (CBA) in agency rulemaking relate in some way to the difficulties posed by the quantification and monetization of regulatory benefits. The list of reasons that benefits may be left unquantified or under-counted in CBA is long. And many of these reasons implicate deep theoretical and normative issues that have spawned an extensive literature over many decades.[5] But the difficulties posed by quantification also raise a straightforward empirical question that has been largely ignored: how often and to what extent does the problem of unquantified benefits actually arise in practice?[6]

Asking that empirical question also brings into focus a more prosaic problem that is frequently mentioned but rarely analyzed in any depth—the problem of insufficient data.[7] Putting aside the perhaps more intellectually exciting problems of incommensurability, endowment effects, wealth effects, discount rates, and so on, benefits are sometimes (perhaps quite often) left unquantified and under-quantified in CBA for the simple reason that the relevant data don’t exist.[8]

CBA skeptics almost always list the missing data problem in an initial catalogue of CBA’s shortcomings, but then usually move on to tackle deeper theoretical issues. Proponents of CBA often acknowledge the problem also, but then shrug and move on as though it doesn’t really matter, or is, perhaps, of trivial enough magnitude to be safely ignored. But when we tackle the empirical question of the frequency and magnitude of unquantified benefits in the real world—as I did in the original empirical study presented below—it turns out that the missing data problem looms large and, as I argue, calls into question not just the practice of CBA but the intellectual foundations on which it rests.

All of this matters, particularly now. The Trump Administration has declared war on the regulatory state.[9] A series of executive orders have promised to reduce regulatory burdens, and the President has pledged to undo a litany of Obama-era regulations aimed at protecting public safety and the environment—rules on climate change,[10] highway safety,[11] worker protections,[12] wetlands preservation,[13] and a host of other pressing issues.[14] In this war, CBA will play a central role,[15] as it has since an earlier icon of anti-regulatory zeal, President Ronald Reagan, first imposed a CBA requirement on federal agencies nearly four decades ago.[16] Since then, CBA has been embraced by both Democratic and Republican administrations, but in academic and policy circles, it continues to spark fierce debate: is it a valuable technocratic tool that harnesses “the magic of quantification” to meaningfully evaluate the quality and desirability of regulations, or a smokescreen that cloaks a garbage-in-garbage-out analysis in a veneer of scientific objectivity? Tackling the question of unquantified benefits empirically, it turns out, begins to shed new light on these questions.

So how big is the problem of unquantified benefits? Anecdotal evidence suggests that it may be significant.[17] Case studies of individual CBAs show large and significant aspects of benefits that are left uncounted. Cass Sunstein, for example, found that in its CBA on the regulation of arsenic in drinking water, the United States Environmental Protection Agency (EPA) left unquantified the effects of five of the seven different kinds of cancer associated with arsenic, along with a host of other health effects, including “pulmonary, cardiovascular, immunological, neurological, and endocrine effects.”[18] The CBA accompanying EPA’s 2011 mercury and air toxics rule for power plants monetized only one narrow human health endpoint: IQ losses suffered by children exposed to mercury in utero when their mothers ate fish caught recreationally in U.S. waters.[19] It thus excluded the vast bulk of exposures to pregnant women—all exposures from commercially caught fish and from fish caught in non-U.S. waters.[20] It also left out numerous other impacts, including IQ losses in other populations, other neurological effects, potential cardiovascular, genotoxic, and immunotoxic effects, all ecological effects, and all other toxics besides mercury.[21] Similarly, EPA’s CBA of its rule governing cooling water intakes at power plants was roundly criticized for leaving entirely unquantified the aquatic ecosystem benefits of the rule, and for leaving out all but two percent of the fish populations it did try to count.[22]

Although these case studies and anecdotal accounts are important, this Article tackles the question of unquantified benefits more systematically through an empirical study of a set of forty-five CBAs conducted by EPA over a recent thirteen-year period. I chose to focus on EPA because it is the agency that is usually held up as the gold standard for agency conduct of CBA.[23] My data set included the CBAs conducted by EPA in connection with each of the major rules (primarily those with effects on the economy of $100 million or more) issued between 2002 and 2015.[24]

While this empirical project has embedded within it a paradox—it seeks to measure what the agency has deemed immeasurable—I was nonetheless able to uncover some evidence as to the magnitude of the benefits left unquantified in these CBAs. In thirty-six out of the forty-five CBAs I analyzed (80%), EPA described as “important,” “significant,” or “substantial” categories of benefits that the agency excluded as unquantifiable due to data limitations.[25]

Indeed, in certain instances, the monetized benefits estimate left out the value of ameliorating the very harm at which the rule itself was aimed. Thirteen of the rules had the explicit purpose of reducing emissions of hazardous air pollutants and yet the CBAs failed to monetize the value of reducing those pollutants at all.[26] Virtually all of the monetized benefits came instead from the salutary fact that emissions controls aimed at reducing hazardous air pollutants also happen to produce the ancillary benefit of reducing a different pollutant: particulate matter.[27]

While admittedly preliminary, these data suggest that the problem of unquantified benefits is a big one that deserves more attention than it has received. One consequence of significant benefits remaining unquantified, for example, is that it becomes impossible for the agency to perform formal CBA of the sort called for in the executive orders and guidance memos governing agency use of CBA. Rather than identifying the efficient level of regulation, the analyst can draw only limited conclusions. Accordingly, these results suggest that formal CBA is even further unmoored from its foundations in welfare economics and Kaldor-Hicks efficiency than most of its defenders have assumed.[28]

For environmental regulation, there are other standards—feasibility and health-based standards, in particular—with long track records in agency practice that don’t require comprehensive monetization of regulatory benefits.[29] These standards have been criticized for being insufficiently grounded in efficiency and welfare economics.[30] But if CBA’s own grounding in efficiency is itself called into question, then it no longer has that leg-up over alternative tools, and perhaps these alternatives deserve a closer look. There is some sense, after all, in setting standards based on the information you have, rather than the information you wish you had.

This Article proceeds in three parts. Part II.A begins by laying out an analytic framework for understanding the role that quantification plays in CBA. At the outset it is important to clarify that a variety of methods are often lumped together under the umbrella term “cost-benefit analysis,” not all of which require the quantification or monetization of benefits. The framework provided here sets the stage for understanding the role that quantification and monetization of benefits play in various forms of CBA and the constraints that significant unquantified benefits place on the kind of CBA that agencies can meaningfully perform.

Part II.B then examines the law of CBA—primarily the executive orders and guidance documents that govern federal agency conduct of CBA—to explore the legal constraints these documents impose with respect to the level of formality and quantification expected or required of agencies. It turns out that formal CBA (of the sort that finds its normative grounding in economic theory and notions of social welfare maximization) is the expectation and the norm for agencies, though it is very rarely accomplished in practice because of the problem of unquantified benefits. Part II.C concludes by reviewing the existing empirical literature on CBA, which, while frequently acknowledging the problem of unquantified benefits, has given it very little sustained attention.

Part III explains the methods used and the results obtained in my empirical study, including both the overall quantitative results as well as qualitative descriptions of some of the individual CBAs underlying those results. Finally, Part IV brings the empirical results and the conceptual framework (Parts II and III) together to argue that the problem of unquantified benefits is a significant one that raises fundamental concerns about the soundness of CBA’s normative foundations. The results of the empirical study suggest that in developing environmental rules, agencies are rarely ever able to legitimately conduct formal CBA of the sort called for in the executive orders, and that even the more informal varieties of CBA they can conduct will produce only limited conclusions at best. This suggests that the connection between CBA and its normative foundations in efficiency or welfare is even more tenuous than most of its defenders have assumed, and that alternatives to CBA that have been criticized for not closely tracking the efficiency norm deserve a second, less jaundiced look.

II. Background

This Part reviews first (in II.A) the variety of forms that CBA can take on a spectrum from informal to formal, explains the common grounding and defense of formal CBA in welfare economics, and describes the processes by which regulatory benefits are typically quantified and monetized. Part II.B then examines the legal constraints imposed on agencies by the executive orders and guidance documents governing the CBA requirement. Finally, Part II.C examines the small amount of previous literature that has taken an empirical approach to quantification in CBA.

A. Formal and Informal CBA

The term “cost-benefit analysis” can refer to any decision-making tool that involves weighing and comparing the costs and the benefits of a course of action.[31] There are many different varieties of CBA that fall on a spectrum from informal to formal. And not all of them require the quantification or monetization of benefits (or costs). In previous work I have explored this spectrum in some depth, developing a typology of the varieties of CBA and considering some of the implications of that typology for the ongoing debate about the use of CBA in agency rulemaking.[32] Here, it will be sufficient to merely touch on some of the highlights of that analysis.

On the informal end of that spectrum is what I have previously called “Ben Franklin CBA,” which involves essentially drawing a line down the center of a page, listing pros and cons qualitatively described in each column, and then performing an ad hoc, intuitive comparison.[33] On the other end of the spectrum is what I call “economic CBA,” a highly technical and formal analytic method grounded in economic theory that attempts to fully quantify and monetize all of the social costs and benefits of a whole range of regulatory options and then, by calculating the point at which the marginal benefits curve intersects the marginal costs curve, identify the economically efficient level of regulation.[34]

  1. The Normative Grounding of Formal CBA in Welfare Economics

I refer to the CBA on the most formal end of the spectrum as “economic CBA” precisely because this form of CBA is almost always normatively anchored in welfare economics. That is, it is most often defended as a good decision-making method on the grounds that it identifies the economically efficient level of regulation—in the Kaldor-Hicks sense.[35] A government regulation meets the criterion of Kaldor-Hicks efficiency if those who stand to benefit from the regulation could fully compensate those who stand to lose from it and still be better off.[36]

Economic CBA aims to identify the regulatory alternative that is optimally efficient in the Kaldor-Hicks sense by finding the regulatory alternative that maximizes overall net social benefit to all members of society in the aggregate.[37] This involves estimating the total social costs and total social benefits of a whole range of incrementally varying alternative regulations and finding the point of net benefits maximization (which is also the point at which marginal benefits are just equal to marginal costs).[38] Making this precise calculation requires that both costs and benefits be expressed in the same metric (which in practice is money).
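Stated compactly (the notation here is mine, introduced only for illustration), the calculation that economic CBA contemplates is:

\[ \max_{s} \; NB(s) = B(s) - C(s), \qquad \text{with the optimum } s^{*} \text{ satisfying } B'(s^{*}) = C'(s^{*}), \]

where B(s) and C(s) denote total social benefits and total social costs at regulatory stringency s, so that the net-benefit-maximizing alternative is the one at which marginal benefits just equal marginal costs.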

The Kaldor-Hicks test runs up against salient moral objections stemming from the potential disconnect between individual preferences and actual welfare (the drug addict’s preference for drugs, for example), and its failure to ensure actual compensation of losers by winners.[39] Formal cost-benefit analysis essentially adapts the Kaldor-Hicks test to real-world implementation by using money as the measure of welfare, but in so doing it generates additional objections relating to wealth effects,[40] the willingness-to-pay standard,[41] and the devaluation of future benefits.[42]

Most CBA practitioners simply ignore these objections or carry on in spite of them. But some proponents of CBA in regulatory decision making have in recent years sought to decouple it to some degree from its normative foundations in Kaldor-Hicks efficiency. The most notable example is Matthew Adler and Eric Posner’s groundbreaking book, New Foundations of Cost-Benefit Analysis,[43] widely viewed as the most sophisticated and credible defense of CBA in the literature to date.[44] In it, they reject the Kaldor-Hicks test as “simply not an attractive normative criterion” and seek to tether CBA instead to a broader utilitarian notion of overall welfare.[45] But the tether is a long one. Rather than conceptualizing it as a direct measure of overall welfare, they defend CBA as a “decision procedure” that provides simply a “rough and ready proxy” for overall welfare.[46] This allows them to largely sidestep many of the central critiques of welfare economics, although some they address through proposed modifications of standard CBA, like the laundering of preferences to eliminate distorted preferences and distributional weightings to counteract wealth effects.[47]

In sum, the kind of CBA that emerges out of welfare economics is grounded in the notion of Kaldor-Hicks efficiency or welfare maximization. It is also highly formal, complex and technical, and requires both costs and benefits to be quantified and expressed in monetary terms.

  2. Quantification

Formal economic CBA requires an arithmetic calculation. Ideally, the analyst can plot marginal costs and marginal benefits for a whole range of incrementally varying regulations on a graph and locate where the two curves cross. Or, at a minimum, she is able to subtract total costs from total benefits in order to calculate a single number representing net benefits for each alternative and to identify the alternative with the largest net benefits. This kind of calculation requires both the costs and the benefits of multiple alternatives to be quantified and to be expressed in the same metric. In practice, that metric is always money.
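In code, the core of that calculation is nothing more than a subtraction and a comparison across alternatives. The sketch below uses invented dollar figures purely to illustrate the mechanics; it assumes, as formal CBA must, that the benefit and cost totals are complete and fully monetized:

```python
# Hypothetical monetized totals (in $ millions) for three regulatory alternatives.
alternatives = {
    "lax":       {"benefits": 400,  "costs": 100},
    "moderate":  {"benefits": 900,  "costs": 350},
    "stringent": {"benefits": 1100, "costs": 800},
}

# Net benefits = total benefits minus total costs, for each alternative.
net_benefits = {name: v["benefits"] - v["costs"] for name, v in alternatives.items()}
best = max(net_benefits, key=net_benefits.get)

print(net_benefits)                       # {'lax': 300, 'moderate': 550, 'stringent': 300}
print(f"Maximizes net benefits: {best}")  # moderate
```

Note that on these invented numbers all three alternatives show positive net benefits, but only one maximizes them; that distinction becomes important in the discussion of litmus-test CBA below.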

What we might generically refer to as quantification is, then, actually a two-step process.[48] First, the relevant value must be quantified (e.g., lives saved, acres of wetlands preserved, pounds of pollution averted).[49] Second, that quantified value must be translated into monetary terms.[50] Note that I use the term “quantify” to refer specifically to the first step, but also sometimes (as in the title of this Article and this Subpart) to refer generically to the whole two-step process. These two steps are described in more detail below.

a. Quantitative Risk Assessment

In the context of human health (where the vast bulk of the energy and resources devoted to providing monetized values for CBA has been directed), the first step has been formalized into a process called Quantitative Risk Assessment.[51] This process involves four stages: 1) hazard identification, 2) dose-response evaluation, 3) exposure assessment, and 4) risk characterization.[52] Hazard identification is typically conducted through epidemiological studies or long-term animal bioassays in order to determine whether there is some causal link between a given pollutant and some adverse health effect.

Dose-response evaluation is the process by which that causal relationship is reduced to quantitative terms.[53] This requires that the relevant epidemiological and animal bioassay studies be of sufficient number and quality to support quantitative estimates of the likely incidence of disease at various dose or exposure levels.[54] In most instances, the exposure levels for which the studies provide direct evidence of disease incidence are substantially higher than the levels anticipated to occur from pollution. Dose-response evaluation then requires making certain assumptions in order to extrapolate the dose-response curve to lower exposure levels.[55] Where the studies are all or primarily animal bioassays, the analyst also needs to make a set of assumptions in order to extrapolate from animal data to likely effects on humans.[56]

Exposure assessment involves estimating the extent to which human populations will be exposed to a particular hazard.[57] This requires data or modeling of ambient levels of pollution and how these are affected by weather patterns and other factors.[58] It may require data about how many people live near a facility that emits harmful pollutants into the air and how much time they spend outside. Or it might require data on how many people drink water from particular groundwater sources likely to become contaminated or eat fish caught from contaminated waters, along with data about how toxins enter the food chain and become concentrated in the tissues of particular fish species, and so on.

Finally, the risk characterization stage involves combining the results from the second and third stages in order to derive a numerical estimate of population health risk.[59] This essentially means multiplying the dose response ratio by the exposure level.[60] This is usually expressed as an individual’s incremental increase in the risk of dying from some particular disease (or of simply contracting that disease) at the level of exposure likely to result from the pollutant in question. Thus, a risk characterization might conclude that under specified working conditions, an individual worker’s lifetime probability of developing cancer would increase by 1 in 1,000.[61] That number might be translated into an amount of harm to society as a whole by multiplying by the number of people expected to be exposed at that level. Thus, under the previous example, if there were 1,000 workers employed under the specified conditions, the chemical would be found to produce an increased population risk of one excess case of cancer. In this way an analyst can calculate a total number of deaths a particular increase in some pollutant might be expected to cause over some designated geographic area and time period—or conversely, the number of lives that would be saved by a regulation decreasing levels of that pollutant.
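A minimal sketch of that arithmetic, using the hypothetical numbers from the worker example above (the risk estimates, population size, and function name are illustrative only and are not drawn from any actual EPA analysis):

```python
def excess_cases(individual_risk: float, exposed_population: int) -> float:
    """Risk characterization: combine a dose-response-derived individual risk
    with an exposure estimate to produce a population-level harm estimate."""
    return individual_risk * exposed_population

# Worker example from the text: a 1-in-1,000 lifetime cancer risk spread
# across 1,000 exposed workers yields one expected excess case.
baseline = excess_cases(individual_risk=1 / 1_000, exposed_population=1_000)

# A regulation that (hypothetically) cut that individual risk in half would,
# on these stylized numbers, be credited with averting half an excess case.
with_rule = excess_cases(individual_risk=0.5 / 1_000, exposed_population=1_000)
cases_averted = baseline - with_rule

print(baseline, with_rule, cases_averted)  # 1.0 0.5 0.5
```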

Thus, through Quantitative Risk Assessment, an analyst can arrive at some number, or set of numbers, representing a society-wide benefit expected to accrue from some particular increment of regulatory protection: lives saved, cases of chronic bronchitis averted, and so on. In attempting to quantify ecological harms, an analyst might well go through analogous steps, though the process is far less formalized and routinized than it has become in the human health sphere. Rather than quantifying lives saved or illnesses averted, the analyst might come up with an estimate of acres of wetlands or habitat preserved, numbers of fish deaths averted, and so on.[62]

But these values are inevitably expressed in units of “apples” that cannot be directly compared with the “oranges” of social costs. Thus, a formal economic CBA requires the final step of monetization: translating a quantified estimate of social benefit into monetary terms, so that it can be directly compared to social costs. The next Subpart describes this process.

b. Monetization

In some instances, where the quantified benefit involves some good traded in markets, monetization is relatively straightforward. Where, for example, the benefit involves improving a commercial fishery and the increase in the number of fish expected to be caught by commercial fishermen has been estimated, it is a relatively simple matter to multiply that number of pounds of fish by the retail price of that particular variety in order to obtain a monetary value.[63]

More often, however, environmental benefits involve non-market goods for which divining a monetary value is far less straightforward. What is the dollar value of saving a human life, preventing a painful and debilitating illness, or pulling a species back from the brink of extinction? Some argue that the entire enterprise of trying to monetize such values is misguided because they are fundamentally incommensurable with money.[64]

Despite these objections, economists have developed a number of techniques for trying to divine the monetary value of things not traded in markets.[65] “Revealed preference” techniques attempt to infer a dollar value for nonmarket goods by observing things that are traded in markets and are thought to reflect (or “reveal”) the unpriced value.[66]

These techniques often take the form of hedonic valuation methods, which attempt to disaggregate environmental or health attributes from other goods with which they are bundled in the market in order to infer the value people place on them.[67] For example, economists compare the wages paid to workers in jobs associated with a high risk of death to the wages paid to workers in less risky jobs in order to infer the dollar value people attach to avoiding some particular risk of death.[68] They can then use this value to calculate the “value of a statistical life” (VSL), a dollar value associated with the avoidance of one death in the population as a whole.[69] (If the average person is willing to give up $10 in wages, for example, to avoid a one-in-a-million risk of death, the VSL would be $10 million.) Alternatively, economists try to measure how much value people attach to unspoiled open space by comparing the prices of properties located adjacent to such areas with those that are not.[70]
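Written out (the notation is mine), the wage-risk arithmetic in the parenthetical above is simply:

\[ \mathrm{VSL} \;=\; \frac{\text{willingness to pay for a small risk reduction}}{\text{size of the risk reduction}} \;=\; \frac{\$10}{1/1{,}000{,}000} \;=\; \$10{,}000{,}000. \]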

The other primary revealed preference technique, the travel cost method, involves inferring the value of environmental amenities from the costs people incur to travel to them.[71] Thus, an economist might measure the recreational “use value” attached to natural resources by measuring the admission fees and travel costs hikers pay to visit a national park.[72]

Alternatively, where values can’t be “revealed” through actual market transactions, economists turn to “stated preference” methods. “Contingent valuation” surveys, for example, attempt to measure people’s willingness to pay for nonmarket goods by simply asking them.[73] These are essentially sophisticated public-opinion polls that give respondents information about a particular natural resource or medical condition and then ask them how much they would be willing to pay to preserve the resource or avoid the disease. One such survey, for example, concludes that California households are, on average, willing to pay $18.14 per year to increase gray whale populations by 100%.[74] Another pegs the average person’s willingness to pay to avoid contracting chronic bronchitis at $457,000.[75]

All of these methods are controversial and produce highly contestable results.[76] One problem, for example, is the endowment effect.[77] Even though measuring willingness to pay (to buy) versus willingness to accept (to sell) yields different values for the same good, economists have yet to come up with any principled basis for choosing between the two. This makes stated-preference surveys, which are almost always designed to measure willingness to pay, vulnerable to criticism that they underestimate the values they try to measure.

c. A Typology of Unquantified Benefits

In sum, as the preceding pages have set forth, translating regulatory benefits into monetary terms involves several discrete steps, each of which poses significant challenges. Obstacles at any of these steps along the way may result in an agency being unable to monetize a regulatory benefit in a way that allows for a formal CBA. Accordingly, we can sketch out a typology of potential reasons that quantification might fail:

 

  1. Insufficient information to conduct:
    a. Hazard identification (establishing a causal link between the regulated activity and human/ecological health)
    b. Dose-response evaluation (collecting sufficient data to credibly establish a dose-response curve, describing the causal link in quantitative terms)
    c. Exposure assessment (establishing how many people—or ecosystem components—are likely to be exposed to a particular hazard and to what degree, based on actual data or modeling techniques)
  2. Insufficient data/models to monetize

 

To these, we can add another: The agency might consider benefits unquantifiable because of commensurability concerns. It may take the position that a particular benefit is, as a philosophical matter, simply not reducible to monetary terms.[78] Finally, it is perhaps worth separating out lack of money or resources as a separate reason that might be cited in conjunction with any of the others. Thus, we can add two more reasons to those listed above:

 

  3. Incommensurability
  4. Lack of money, time, and/or resources

 

While an analyst may not always differentiate among all of these reasons—particularly among the first three (1a-c)—this typology will nonetheless be helpful to keep in mind as we proceed.

  3. The Formality Spectrum

On the other end of the spectrum from the formal economic CBA described above is the intuitive comparison of qualitatively described pros and cons, which I have called “Ben Franklin CBA.”[79] Based on that spectrum, this section describes a typology of formality and informality in CBA.

We can conceptualize the CBA spectrum as breaking down into three distinct but related axes, each of which extends from informality on the left to formality on the right:

Figure 1. The formality-informality spectrum.[80]

As illustrated by the light gray lines between axes, the three axes are related such that a move along one may require or imply a parallel move along another. Where, for example, all costs and benefits are only described in qualitative terms (the left-most position on Axis 1), then the analyst can perform only a rough comparison (left-most position on Axis 2) and is likely to apply the analysis to only a single alternative or a handful of alternatives at most (left two boxes, Axis 3). At the other extreme, where all costs and benefits are fully monetized (right-most position on Axis 1), then an analyst can either precisely compare them for a single alternative (middle box on Axis 2; left-most box on Axis 3), or, if evaluating a whole range of incrementally varying alternatives (right-most box on Axis 3), she can perform a formal economic CBA, pinpointing the economically efficient alternative for which marginal costs are just equal to marginal benefits (right-most box on Axis 2). [81]

Where some but not all benefits are monetized (intermediate positions on Axis 1), the analyst may employ a less precise balancing test (intermediate left positions on Axis 2), such as a “wholly disproportionate” standard. Or she may conduct a “break-even analysis,” which involves calculating the amount by which the monetized benefits estimate falls short of the monetized costs estimate and then making an intuitive judgment about whether the unquantified benefits are likely big enough to make up the difference. (More on this later.)[82]
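The mechanics of a break-even analysis are simple enough to sketch in a few lines of code. The figures below are invented solely to illustrate the calculation, and the final step, judging whether the unquantified benefits plausibly exceed the gap, remains an intuitive one that no code can supply:

```python
def breakeven_gap(monetized_costs: float, monetized_benefits: float) -> float:
    """Return the amount by which the monetized benefits estimate falls short
    of the monetized costs estimate; the unquantified benefits must plausibly
    be worth at least this much for the rule to break even."""
    return monetized_costs - monetized_benefits

# Hypothetical rule: $900M in monetized costs, $600M in monetized benefits.
gap = breakeven_gap(monetized_costs=900e6, monetized_benefits=600e6)
print(f"Unquantified benefits must be worth at least ${gap / 1e6:.0f}M "
      "per year for the rule to break even.")  # $300M
```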

  4. Standard-Setting vs. Litmus-Test CBA

One important insight that arises from this typology is that formal and informal CBA play fundamentally different roles in decision making. Formal economic CBA measures the costs and benefits of many incrementally differing alternatives and then chooses one perfect option from the whole range.[83] Accordingly, it has the capacity (theoretically, at least) to locate the precise level of regulatory stringency that will achieve economic “efficiency” (maximization of overall social welfare).[84] An informal CBA, on the other hand, simply provides a binary go-or-no-go answer for a single option.[85] And while it may give policy makers a vague idea of whether a given regulation is desirable in relation to the status quo, it can’t tell them whether it is efficient or optimal in any sense.[86]

Thus, at the formal end of the spectrum, CBA acts as a standard-setting tool, telling the agency exactly where, among a whole range of options, to set the regulatory standard.[87] More informal CBAs, on the other hand, act as a sort of litmus test—a secondary check on a standard-setting decision that has been made initially by other means.[88] These informal, “litmus-test CBAs” include the kind that agencies perhaps most frequently do, where the analyst simply asks whether benefits outweigh costs for a single alternative. [89]

It is worth noting that a litmus-test CBA provides a pretty inaccurate proxy for efficiency.[90] While it is probably true that a regulation that flunks a litmus-test CBA (produces more total costs than total benefits) is inefficient, the converse is not true. Just because a single regulation passes a litmus-test CBA does not necessarily mean that it is efficient. Indeed, a regulation can pass such a test and be very far indeed from the point of efficiency.

If, for example, the efficient level of regulation would reduce aggregate emissions of some pollutant from 100 to 25 tons and produce $15 billion in net benefits, a far less stringent regulation that reduced emissions only from 100 to 99 tons and produced only $1 billion in net benefits would still pass a litmus test CBA with flying colors. But it would be very far from the efficient level of regulation, producing only a small fraction of the emissions reductions and net benefits that would occur under the efficient regulation.

To see the same point graphically, imagine a scenario in which reducing pollution levels by some small amount will be fairly inexpensive and deliver substantial benefits but where, as pollutant levels are reduced further and further toward zero, the marginal costs (the cost for each unit of pollutant removed) will gradually increase and the marginal benefits gradually decrease. This is a fairly reasonable assumption that probably captures, at least in general terms, how marginal cost and benefit curves most frequently behave.[91] On such assumptions, a stylized version of the marginal cost and marginal benefit curves might look something like those depicted in Figure 2:

Figure 2: Litmus-test versus standard-setting CBA as a measure of efficiency.

The total costs and benefits for any particular level of pollution control (any point along the X-axis) are the areas under the respective curves. At the point of efficiency, total benefits clearly exceed total costs (up to dotted line “e,” the area under the benefit curve is bigger than the area under the cost curve). At any point to the left of e, total benefits would still exceed total costs, but not by quite as much as they would at point e. One can also see intuitively that regulating up to point z creates an area under the benefit curve that is roughly equal in size to the area under the cost curve. Thus, regulating to that point would produce zero net benefit: total costs would be just equal to total benefits. At any more stringent level of regulation (to the right of point z), the area under the benefit curve would be smaller than the area under the cost curve, meaning that total costs would exceed total benefits (net benefits would be negative). At any less stringent level of regulation (to the left of point z), the area under the benefit curve would be larger than the area under the cost curve, and net benefits would be positive. Thus, any regulation anywhere within the range of the crosshatched area would pass a litmus-test CBA. And that includes levels of regulation both far more stringent and far more lax than the efficient level (e). Litmus-test CBA, then, provides only a very rough estimate of efficiency.
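A stylized numerical version of Figure 2 makes the same point. The linear marginal-benefit and marginal-cost curves below are assumptions chosen only to mimic the general shape described in the text, not estimates drawn from any real rule; on these numbers, the efficient level of control sits at the midpoint of the range, yet a litmus test is passed everywhere from almost no control up to twice the efficient stringency:

```python
import numpy as np

# Stylized, assumed curves: marginal benefits fall and marginal costs rise
# as pollution control (q, units of pollutant removed) becomes more stringent.
q = np.linspace(0, 100, 1001)
dq = q[1] - q[0]
marginal_benefit = 10 - 0.08 * q   # falls from 10 toward 2
marginal_cost = 1 + 0.10 * q       # rises from 1 to 11

# Total benefits and costs at each stringency level are the areas under the
# marginal curves, approximated here with a simple running sum.
total_benefit = np.cumsum(marginal_benefit) * dq
total_cost = np.cumsum(marginal_cost) * dq
net_benefit = total_benefit - total_cost

efficient = q[np.argmax(net_benefit)]   # point "e": marginal benefit = marginal cost
litmus_pass = q[net_benefit >= 0]       # the crosshatched range: total benefits >= total costs

print(f"Efficient stringency (e): {efficient:.0f} units removed")
print(f"Litmus test passed from {litmus_pass.min():.0f} to {litmus_pass.max():.0f} units removed")
```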

In sum, there are multiple varieties of CBA that can be conceptualized on a spectrum from informal to formal. Conceptualizing CBA in this way brings into focus important distinctions between informal and formal types of CBA. In particular, it highlights the fact that informal and formal CBA play very different roles in the decision-making process. Formal economic CBA operates (in theory) as a standard-setting tool, allowing the analyst to choose the maximally efficient level of regulation from a whole range of alternatives. Less formal “litmus-test” varieties of CBA, on the other hand, act only as a secondary filter on a standard-setting decision that has been initially made by other means. While formal economic CBA identifies the efficient level of regulation, litmus-test CBA provides a very poor proxy for efficiency. It will be important to keep these limitations on informal, litmus-test CBA in mind as we proceed.

These important distinctions between formal and informal varieties of CBA raise a further question: which of these varieties are federal agencies required or expected to perform? The next section takes up this question.

B. Agencies’ Legal Obligations Regarding Formal CBA

Most agencies perform CBA not pursuant to any statutory command, but under a set of executive orders that require federal agencies to perform CBA on all “significant” regulations (primarily, those costing at least $100 million per year).[92] Indeed, most of our environmental health and safety statutes actually eschew CBA and direct agencies to use some alternative standard-setting tool instead.[93] The most prominent of these alternatives are feasibility standards, which direct the agency to set standards at the most stringent level that is economically and technologically feasible, and health-based standards, which direct agencies to set standards at the level requisite to protect public health and/or the environment, without considering costs.[94] This reflects the significant congressional skepticism toward CBA that held sway in the 1970s, when most of these statutes were passed—a skepticism that stemmed in large part from the same concerns about the undercounting of benefits that animate this study.[95]

As noted in the introduction, President Reagan first imposed a CBA requirement on agencies through executive order in 1981, after sweeping into office on promises of economic stimulus through deregulation.[96] Some version of that requirement has persisted, through Democratic and Republican administrations, ever since.[97] In cases of conflict, the statutory standard, of course, prevails over the executive orders’ CBA requirement. So agencies often find themselves in the anomalous position of having to perform a CBA under the executive orders, upon which they are not permitted to rely in their actual decision making.[98]

Despite the primacy of other decision-making tools in U.S. environmental law, however, CBA remains an important topic of study and discussion for several reasons. First, Republicans have for many years been trying to pass a statutory CBA “super-mandate,” which would effectively re-write all the statutes that currently direct agencies to base regulatory decision making on other standards and instead create a statutory command making CBA the rule of decision.[99] Indeed, in 1995, such a bill came within two votes of surviving a Senate filibuster to become law.[100] Second, the Supreme Court has in recent years shown an increasing willingness to read ambiguous statutes to allow, or possibly even require, CBA.[101]

Third, the executive order CBA requirement has been in place now for nearly four decades. And over that time, perhaps spurred on by institutional pressures from the small group of economists at the White House’s Office of Information and Regulatory Affairs (OIRA) who are tasked with overseeing and administering it, the CBA mandate has begun to have an effect on agency culture.[102] Lisa Heinzerling, for example, who served as Associate Administrator of EPA’s Office of Policy during the Obama Administration, reports that “OIRA’s fine cost-benefit sieve leads EPA personnel to be deeply wary of developing rules that have very high costs in relation to their quantified and monetized benefits.”[103] Thus, there is reason to believe that the CBA requirement has an informal effect on agency decision making that goes beyond its formal legal effect.

With respect to the formality of the CBA agencies are required to perform under the executive order, there is a tension: on the one hand, the executive orders requiring CBA and the guidance documents interpreting them clearly hold up formal CBA as the goal and the norm. On the other hand, they acknowledge the inevitability of unquantified benefits and instruct agencies to take those benefits into account. This is an awkward tension because, as noted above, where any significant portion of benefits are unquantified, formal economic CBA is actually impossible to perform.

The primary CBA executive order in place today, Executive Order 12,866, was signed by President Clinton in 1993.[104] It replaced Reagan’s 1981 order, but, while arguably a little kinder and gentler, kept the basic CBA mandate in place. Like the Reagan order before it, the Clinton order requires agencies to “choos[e] among alternative regulatory approaches” so as to “select those approaches that maximize net benefits.”[105] This reference to net benefits maximization appears to set up formal economic CBA as the goal. In other places, however, the language in the Clinton Order seems to contemplate a litmus-test CBA, such as that requiring a determination that a regulation’s “benefits . . . justify its costs.”[106] It also makes several references to the difficulties inherent in attempting to quantify certain values, directing that costs and benefits “be understood to include both quantifiable measures (to the fullest extent that these can be usefully estimated) and qualitative measures of costs and benefits that are difficult to quantify, but nevertheless essential to consider.”[107]

President Obama considered revoking Executive Order 12,866 soon after coming into office in 2009 and even solicited public comment on the idea.[108] Ultimately, he left the prior order in place, however, and instead simply issued Executive Order 13,563, “Improving Regulation and Regulatory Review,” which “is supplemental to and reaffirms” Executive Order 12,866.[109] This order reiterates some of the key provisions in Executive Order 12,866, including language that points both toward a simple litmus-test CBA (requiring a showing that a regulation’s “benefits justify its costs” recognizing that “some benefits and costs are difficult to quantify”) and toward a more formal economic CBA (requiring agencies to “select, in choosing among alternative regulatory approaches, those approaches that maximize net benefits”).[110] But in language that shifts even further toward formality with no analogue in the Clinton order, the Obama order also unambiguously holds out full quantification and monetization as the goal, stating that “each agency is directed to use the best available techniques to quantify anticipated present and future benefits and costs as accurately as possible.”[111] It follows that statement with an acknowledgment of the difficulties inherent in attempting to quantify certain values, but ultimately makes the directive that agencies discuss unquantifiable values permissive, not mandatory: “Where appropriate and permitted by law, each agency may consider (and discuss qualitatively) values that are difficult or impossible to quantify, including equity, human dignity, fairness, and distributive impacts.”[112]

The CBA requirement in these executive orders has been further refined and clarified in OMB Circular A-4, issued by OIRA in 2003.[113] Like the executive orders, this document is clear in setting up formal economic CBA as the goal, stating that “[b]y measuring incremental benefits and costs of successively more stringent regulatory alternatives, you can identify the alternative that maximizes net benefits.”[114] It also notes that CBAs should be “consistent with economic theory,”[115] and repeatedly uses the language of economics, making several references to “market” or “economic efficiency,”[116] and directing agencies to measure costs and benefits in terms of “opportunity costs” and “willingness-to-pay.”[117] Finally, it clearly contemplates full monetization as the goal and the norm: “[a] distinctive feature of BCA [Benefit-Cost Analysis][118] is that both benefits and costs are expressed in monetary units, which allows you to evaluate different regulatory options with a variety of attributes using a common measure.”[119]

On the other hand, Circular A-4 also acknowledges that “[i]t will not always be possible to express in monetary units all of the important benefits and costs” and that “[w]hen it is not, the most efficient alternative will not necessarily be the one with the largest quantified and monetized net-benefit estimate.”[120] In such circumstances, the Circular directs agencies to “exercise professional judgment in determining how important the non-quantified benefits or costs may be[,]” and, to carry out a breakeven analysis if they are determined to be “important.”[121] But at another point, it acknowledges that in such circumstances CBA “is less useful,” and “can even be misleading because the calculation of net benefits in such cases does not provide a full evaluation of all relevant benefits and costs.”[122]

EPA’s Guidelines for Preparing Economic Analyses are similarly geared toward a highly formal CBA.[123] From the outset, the introduction frames the CBA enterprise in the language of economic theory: “[the Potential Pareto] criterion is the foundation of BCA, requiring that a policy’s net benefits to society be positive. . . . The policy that maximizes net benefits is considered the most efficient.”[124] And a detailed appendix provides a textbook introduction to the fundamentals of economic theory.[125] Like Circular A-4, EPA’s Guidelines require that “[b]enefits and costs should be reported in monetary terms whenever possible” and that “[b]enefits and costs that cannot be monetized should, if possible, be quantified.” They acknowledge that “[i]n reality . . . there are often effects that cannot be monetized, and the analysis needs to communicate the full richness of benefit and cost information beyond what can be put in dollar terms.”[126] But ultimately, despite these caveats, the Guidelines take a hard line, requiring a strict numerical comparison of costs against benefits in order “to determine a regulation’s net benefits, even if important benefits or costs cannot be monetized.”[127]

In sum, the CBA executive orders and the guidance documents interpreting them clearly set up formal economic CBA as the goal. They make repeated reference to economic theory, net benefits maximization, and to the importance of quantifying costs and benefits “as accurately as possible.”[128] At the same time, there’s a tension running throughout these documents: While they hold up formal CBA as the goal and the norm, they also all acknowledge that, in at least some instances, quantification and/or monetization of all costs and benefits will be impossible.

As to what to do when important benefits are unquantifiable, the message is mixed: The OMB Circular directs agencies to conduct a breakeven analysis, but then later states that the use of CBA in such circumstances may be “misleading.”[129] The EPA Guidelines direct the agency to “communicate the full richness” of unquantified benefits, but then—in an instruction that seems at best to undermine that directive and at worst to produce highly misleading results—also require the analyst to calculate “net benefits, even if important benefits or costs cannot be monetized.”[130] In any event, it appears clear that the executive orders and guidance documents put significant pressure on agencies to produce quantified estimates of costs and benefits.[131]

C. Previous Literature

The problem of unquantified benefits is a subset of a much larger set of problems that lead generally to the undercounting of benefits in CBA. Indeed, most of the theoretical objections to CBA involve problems that often cause benefits estimates to be too low. These include the problems of discounting, the endowment effect, and wealth effects.[132] These issues, along with the problem of incommensurability, implicate deep philosophical and normative issues and have generated a substantial literature spanning many decades.[133] Benefits that remain unquantified due to data deficiency, in contrast, have generally received little sustained attention in the debate over CBA.[134]

CBA skeptics often mention the problem in passing but then promptly move on to more meaty theoretical problems.[135] Proponents, on the other hand, often acknowledge the problem in introductory remarks but then assume it away in their analysis.[136] They largely seem to implicitly assume that in most cases unquantified benefits are insignificant and do not prevent a meaningful comparison of monetized benefits to monetized costs.[137] They may, for example, in discussing a particular CBA, acknowledge that significant categories of benefits were omitted from the benefits calculation but then insist that the (relatively complete) costs estimate can still be subtracted from the (relatively incomplete) benefits estimate to obtain a meaningful measure of net benefits. This constitutes a corruption of CBA that I have previously dubbed “false formality,” in which the analyst inappropriately and illogically combines an informal position on Axis 1 with a formal position on Axis 2.[138]

This kind of false formality recurs in the annual, statutorily prescribed ritual by which OIRA reports on the costs and benefits of federal regulation.[139] In this report, OIRA adds up all the monetized cost and benefit figures contained in the CBAs for major regulations issued by federal agencies during the previous ten-year period and calculates a number purporting to represent the net benefits of all federal regulation. The 2014 report, for example, announced that the annual net benefits of federal regulation during President Obama’s first term were $200 billion.[140] You have to read several pages further before you see the important caveat that these results “are neither precise nor complete,” and that “it is not always possible to quantify or to monetize relevant benefits or costs of rules in light of limits in existing information.”[141]

Outside observers compound the problem by continuing to ignore the limitations posed by unquantified benefits. Hahn and Tetlock, for example, looked at OIRA’s 2006 report to Congress and concluded that for “the 95 major rules from 1995 to 2005 for which substantial benefits and costs were monetized in the regulatory analysis, . . . 14 of the 95 [were] likely to fail a benefit-cost test.”[142] They offered no explanation for their confidence that the “substantial” monetized benefits were complete enough to warrant a strict comparison of costs to benefits and the conclusion that a rule “failed” a CBA when the (presumably relatively complete) costs estimate outweighed the (possibly partial) benefits estimate.[143] The report itself simply stated that “in many instances, agencies were unable to quantify all benefits and costs . . . [and that] [t]he monetized estimates . . . present[ed] necessarily exclude these unquantified effects.”[144] Earlier in the same article, Hahn and Tetlock themselves “readily acknowledge[d]” the “difficulties in placing a monetary value on certain key benefits,” and “le[ft] open the possibility that in particular cases unquantifiable costs or benefits may tip the balance.”[145] But despite this disclaimer, they failed to note the inconsistency between the potential existence of significant unquantified benefits and the drawing of definitive conclusions about net benefits.[146]

A similar disconnect appears in an empirical study by Hahn and Dudley that examined seventy-four EPA CBAs published between 1982 and 1999 with the aim of assessing their quality.[147] Not surprisingly, in many instances, benefits were not monetized at all.[148] But 50% of the CBAs in the sample monetized “at least some benefits.”[149] The study made no assessment of the magnitude or significance of the benefits left unmonetized.[150] Instead, by the next page, the authors appeared to have forgotten that there were any unquantified benefits at all, faulting the agencies for not calculating net benefits in all those instances in which “at least some benefits” were quantified or monetized: “Of the rules in the sample that quantified benefits, only 74 percent calculated . . . net benefits. This suggests that comparisons of costs and benefits are not occurring in a large number of cases for which the necessary data are actually available.”[151] But, of course, where the most that can be said is that “at least some benefits were monetized,” there is no reason at all to believe that “the necessary data are actually available” to calculate net benefits.

In a 2012 study, Shapiro and Morall set out to determine whether rules with good CBAs have higher net benefits.[152] They conducted an empirical analysis of 109 rules issued by federal agencies between 2000 and 2009.[153] They restricted their data set to rules “that included data on the monetized costs and benefits of regulation,” but gave no indication whether these CBAs monetized all or even a significant portion of costs and benefits.[154] Nonetheless, the authors calculated the “net benefits” of each rule based on the numbers provided by the agencies (or OIRA) and made that their “key dependent variable.”[155] They then “scored” each CBA along a six-point scale by answering a set of six yes/no questions.[156] One question related to quantification/ monetization and simply asked “[d]oes the analysis quantify and monetize benefits and costs of proposed action?”[157] To receive a “yes” on this question “monetization did not need to be comprehensive, just thorough enough to demonstrate that major benefits and costs had been monetized.”[158] They then compared their calculated “net benefits” with the quality scores each rule received on the six-point scale to determine whether there was a correlation between the quality of the analysis and the net benefits of the rule.[159]

Treating the “net benefit” numbers that they calculated according to the agency’s monetized cost and benefit estimates as the “key dependent variable,” of course, required an implicit assumption that those numbers actually bore some relationship to the actual net social benefits of the rules, which, in turn, required an implicit assumption that any benefits (or costs) left unquantified or unmonetized were insignificant.[160] The authors appeared surprised and perhaps a little disappointed by their results: “[w]e are forced to conclude that the level of detail in the analysis has little impact on the net benefits of the rule.”[161] Nowhere was the possibility entertained, however, that their measure of the rules’ “net benefits” might itself be unreliable and bear no consistent relationship to the actual net benefits of the rules to society.[162]

Cass Sunstein appeared to make a similar analytic leap in his 2014 article on unquantified benefits, when he mentioned offhand in a footnote, that where some benefits are unquantified, agencies should “to the extent permitted by law, . . . select the approach that maximizes net benefits.”[163] Yet, in those instances in which the benefits left unquantified are significant, a meaningful calculation of net benefits for any alternative is, of course, impossible, as is identification of the point of net benefits maximization.[164]

Even those who recognize the possibility of significant unquantified benefits as a problem tend to discount the frequency of its occurrence, treating it as the unusual or exceptional case.[165] This typically leads to the optimistic suggestion that the problem can be solved simply by a renewed commitment to research aimed at quantifying unquantified benefits.[166]

One exception is a recent article by Jonathan Masur and Eric Posner, Unquantified Benefits and Bayesian Cost-Benefit Analysis.[167] In it, they described the results of an empirical analysis of the CBAs associated with 106 major regulations issued by federal agencies from 2010 through 2013.[168] They found that the agencies were only able to “fully quantify” the costs and benefits for two rules.[169] For forty-eight they were able to partially quantify costs and benefits.[170] And for fifty-six, the agency was unable to attach any number to costs or benefits or both.[171] (Most of those—forty-seven—involved entirely unquantified benefits.) In over 74% of the regulations, the reason for the lack of quantification was what Masur and Posner called “empirical uncertainty—missing data, modeling difficulties, or other related effects.”[172]

Masur and Posner made no effort to systematically evaluate the magnitude of the unquantified benefits, though they did remark that “in some cases, it appears that the unquantified benefits could be quite large.”[173] Despite the apparent magnitude and pervasiveness of the problem, however, Masur and Posner remained optimistic about the ability of agencies to quantify the unquantified benefits by uncovering and analyzing more existing data, generating new data, and where those approaches fail, employing Bayesian analysis to make reasonable guesses about the benefits of regulation “based on the experience and latent knowledge of the agency staff.”[174]

This literature raises an important empirical question: How often are the benefits that agencies leave unquantified in CBA of a sufficient magnitude that they cannot simply be dismissed as de minimis? The next Part describes an empirical study in which I attempted to compile at least some preliminary data on that question.

III. Methods and Results

In order to gain a better understanding of the role that unquantified benefits actually play in the practice of CBA, I conducted an empirical study of forty-five CBAs of major rules conducted over a thirteen-year period. Unlike Masur and Posner, who included in their data set all federal agencies issuing major regulations, I confined my study to a single agency: EPA. I chose to focus on EPA because it is often held up as the federal agency most practiced in performing CBAs and most sophisticated in its approach.[175] Internationally, the United States is, in turn, held up by analysts in other countries as the world leader in the practice of CBA.[176] Accordingly, while the direct implications of my findings relate specifically to EPA, they also likely have indirect implications for the use of CBA in agency rulemaking more generally, at least in the environmental arena.

A. The Data Set

The data set with which I began consisted of fifty-two CBAs conducted in connection with the major final rules issued by EPA for which OIRA review was completed between October 1, 2002 and September 30, 2015.[177] I used the list of rules identified in OMB’s annual Reports to Congress on the Costs and Benefits of Federal Regulation for 2013 and 2016.[178] (Since each report examines the rules from the previous ten fiscal years, the 2016 report added an additional three years of rules to the ten years analyzed in the 2013 report.) For purposes of these reports, OMB defines a major rule as one that is expected to have an annual effect on the economy of $100 million or more, or exhibits some other indicia of significant economic impact.[179]

I excluded from the data set seven rules that were qualitatively different in kind from the others. Two were issued jointly with other agencies;[180] I excluded them simply to keep the study cleanly focused on EPA. Another rule was never finalized by EPA.[181] The remaining four were deregulatory in nature.[182] In deregulatory rules, the costs and benefits are essentially flipped. Costs to industry that will be avoided due to the removal of regulatory restrictions become benefits. Any benefits to public health or safety that will be foregone due to the loosening of regulation become costs. Accordingly, the kinds of challenges that ordinarily make quantification of regulatory benefits so difficult in the environmental context are missing in such rules (or arise on the cost side instead), rendering deregulatory rules qualitatively different in kind from the others.

This left forty-five CBAs in the final data set. Of those, the vast majority (thirty-three) were Clean Air Act rules. Six were promulgated under the Clean Water Act, two under the Safe Drinking Water Act, two under the Resource Conservation and Recovery Act, and two under the Toxic Substances Control Act.

B. Axis 1: Quantification

  1. Significant Categories of Benefits Unquantified

All of the forty-five CBAs in that final data set left multiple categories of benefits unmonetized. None indicated that the monetized benefits estimate was complete or included all significant benefits. In three instances, the agency was unable to monetize any benefits at all.[183] In thirty-six out of forty-five CBAs (80%), EPA excluded from its monetized benefits estimate categories of benefits that the agency itself described as actually or potentially “important,” “significant,” “substantial,” or the equivalent.[184] In twenty-seven cases, that characterization was unequivocal. In the other nine cases, the characterization of the magnitude of the omitted benefit categories was accompanied by some degree of equivocation (e.g., words like “potentially,” “likely to be,” or “possibly” significant).[185] These results are depicted graphically below in Figure 3:

Figure 3. CBAs of major EPA rules: Oct. 1, 2002 through Sept. 30, 2015.

EPA used a “+B” to represent the unquantified benefits in the final tally only sporadically, in thirteen instances. One of those, discussed below, was among the nine CBAs that did not explicitly characterize unquantified benefits as significant, although the use of the “+B” term suggests that the agency did, in fact, view them that way.[186]

For most of the rules in the data set, EPA produced a monetized benefits estimate that exceeded the monetized costs. But in fifteen instances, benefits fell short of costs, either definitively or, where costs and benefits were expressed in ranges, because the ranges overlapped enough to create at least the possibility of a benefits shortfall. These shortfalls were generally quite substantial—in all but three instances amounting to more than 50% of total costs in at least part of the range. Yet in eight of the fifteen instances, the agency described the missing benefits as “important,” “significant,” or “substantial,” suggesting an inconclusive litmus-test CBA. Nonetheless, the agency conducted a breakeven analysis for only three of those eight rules.[187]

  2. Reasons for Lack of Quantification

First, it may be helpful to recall the typology of potential reasons for lack of quantification developed in Part I.A.2:

  1. Insufficient information to conduct:
    a. Hazard identification
    b. Dose-response evaluation
    c. Exposure assessment
  2. Insufficient data/models to monetize
  3. Incommensurability
  4. Lack of time, money, or resources

For one of the rules, we were unable to find any explanation as to why some benefits were not quantified. For all of the remaining rules in the data set, EPA cited lack of information, data, or methods, or lack of time or resources (categories 1, 2, or 4), or some combination of those, as its reasons for failing to quantify certain benefits. Surprisingly, we found no references at all to incommensurability or the intangible nature of the benefits.[188]

The CBAs frequently described the reasons only in generic terms (e.g., “methodology and data limitations”[189]), so that it was impossible to discern the specific obstacles that prevented quantification. These are categorized as “1 or 2” in the chart below. But in some instances the CBAs were more specific. Category 4 (lack of time, money, or resources) was the most commonly cited specific reason, appearing in over two-thirds of the CBAs.[190] But inadequacies in all of the specific elements of risk assessment (insufficient information to conduct 1a—hazard identification, 1b—dose-response evaluation, and 1c—exposure assessment) were also cited with some frequency, in roughly a third to a half of the CBAs. Finally, insufficient data or models to monetize (category 2) was cited in just over a third of the CBAs.

Figure 4. Number of CBAs citing various reasons for lack of quantification of benefits. (Note: Numbers do not sum to 45 because, in most instances, EPA cited multiple rationales.)

C. Axis 3: Alternatives

The number of alternatives for which costs and benefits were estimated ranged from one to fourteen, with the vast majority falling on the low end of that spectrum. Figure 5, below, depicts these results graphically. For over a third of the rules (eighteen), the CBA examined only a single alternative, that is, the regulation as promulgated. In another fourteen instances, the number of alternatives was either two or three.[191] Thus, over two-thirds of the CBAs considered fewer than four alternatives.

Figure 5. Number of CBAs considering varying numbers of alternatives.

D. Digging Deeper: The Story Behind the Numbers

  1. The Outsized Role of Particulate Matter and the Undersized Role of Toxics

Once we start digging beneath the numbers to look at the kinds of benefits that were quantified and left unquantified, one particular benefit category stands out head and shoulders above the rest: particulate matter—or, more specifically, the public health benefits of reducing particulate matter. Of the thirty-three Clean Air Act CBAs in the sample, fifteen quantified only particulate matter benefits, even though, in each instance, the rule was also expected to reduce multiple other harmful pollutants—typically ozone, sulfur dioxide, oxides of nitrogen, volatile organic compounds, and/or hazardous air pollutants.[192] In four other instances, particulate matter benefits amounted to at least 99% of total monetized benefits.

Indeed, in many instances, the very pollutants at which the rule was specifically aimed were not monetized at all. In these instances, all of the benefit dollars were attributable to the ancillary benefits (or “co-benefits”) arising from the fact that the measures taken to reduce the targeted pollutants would also happen to reduce particulate matter levels. This was particularly true of the rules aimed at air toxics, or, in the parlance of the Clean Air Act, “Hazardous Air Pollutants” (also known as “HAPs”).[193] Fifteen of the rules in the final data set were Clean Air Act rules specifically aimed at the control of HAPs. With only two exceptions, these CBAs left completely unmonetized all of the benefits of reducing the very harm against which the standard was aimed: HAPs. Three did not monetize any benefits at all. But of the twelve that did monetize at least some benefits, all but two monetized solely the co-benefits of particulate matter reductions that happen to occur as a byproduct of pollution controls aimed at HAPs.[194]

The HAPs for which benefits remained unquantified include a long list of chemicals known to be dangerous to human health, like benzene, formaldehyde, polycyclic organic matter (POM), toluene, MTBE, arsenic, chromium, lead, and many more. Many of these compounds have been listed for many years as known or probable carcinogens.[195] Many are also linked to a variety of other impacts on human health, including immunotoxicity, genotoxicity, nervous system disorders, respiratory effects, and reproductive and developmental effects.[196] Through dynamics that are even less well studied, these compounds also have deleterious effects on other species (plant and animal) and therefore on ecosystem structure and function.[197]

The two instances in which EPA did include some of the benefits of HAP reduction in their benefits estimate were the two iterations of the EPA’s rule limiting emissions of mercury and other toxics from power plants (the Bush administration’s Clean Air Mercury Rule and the Obama administration’s Mercury and Air Toxics Standards).[198] In the CBAs accompanying each of these rules, EPA did monetize some of the benefits of reducing emissions of one of the HAPs (mercury), but it was only a very small slice of those benefits.[199] First, while there is evidence that mercury causes a range of nasty impacts on human health—from decreased neurological function and loss of fine motor skills to adverse cardiovascular effects, genotoxic effects, and immunotoxic effects—EPA included just one endpoint in its analyses: decreases in IQ in children exposed prenatally to mercury from their mother’s consumption of fish.[200] Yet, there’s reason to believe the other effects may well be substantial. One study suggests that the cardiovascular benefits of mercury reductions, for example, may be on the order of seventeen times the IQ benefits.[201]

Second, even though “commercial fish consumption constitutes a large portion of exposure to methylmercury,”[202] EPA reduced the population accounted for in its analysis by a factor of fifteen by counting only consumers of recreationally caught fish from inland waters.[203] Third, in a startling descent into Alice-in-Wonderland logic, the agency discounted the benefit of reducing mercury poisoning (and avoiding IQ loss) to account for the fact that, had the benefited kids actually lost IQ points due to mercury poisoning, they could have saved money on tuition, because kids with lower IQs attend fewer years of school.[204] Thus, there are many reasons to conclude that, in both instances, the dollar figures the agency arrived at for the benefits of mercury reduction significantly under-estimated the actual benefits.

In sum, even though EPA regulates hundreds of different air pollutants under the Clean Air Act, the vast majority of the benefits the agency is able to monetize come from just one: particulate matter. With only very limited exceptions, the benefits flowing from reductions of the hundreds of other air pollutants at which EPA’s Clean Air Act regulations are aimed are left unquantified.

  2. Missing Benefits: Ecological Effects

Ecological benefits were similarly under-quantified. Only seven CBAs (six of them Clean Water Act rules) included any quantification of ecological benefits at all, and these quantifications were in all cases incomplete. An additional thirty-five contained some qualitative discussion of ecological benefits, ranging from a generic mention of the fact that such benefits might exist to an extensive (multi-page) qualitative description, but made no effort to quantify them. Three CBAs made no mention of ecological benefits at all. (A search for the terms “ecology,” “ecological,” or “ecosystem” came up empty.)

  3. The Other Rules

As noted above, none of the CBAs contained quantified benefits estimates that were complete or near complete. There were nine that did not specifically characterize unquantified benefits as “significant,” “important,” or “substantial,” but even these presented monetized estimates that left out whole categories of benefits.[205] And when one digs into the details of these, it appears at least plausible that the benefits left unquantified in these CBAs were also significant. The agency just didn’t happen to describe them in precisely those terms. I discuss in detail below five representative examples of these “other rules” for which EPA did not specifically characterize the magnitude of the benefits left unquantified.

Two involved rules aimed at reducing levels of airborne lead—the National Ambient Air Quality Standard for lead and a rule amending the regulation of renovations involving lead-based paint.[206] Airborne lead is associated with a whole host of adverse health impacts, including reproductive, developmental, cognitive, neurobehavioral, cardiovascular, renal, immunotoxic, genotoxic, and carcinogenic effects.[207] But in each of these two CBAs, only a narrow slice of these regulatory benefits was actually monetized.

For the lead paint rule, the CBA monetized “avoided losses in expected earnings due to IQ drop [in children under the age of six]” and avoided medical costs for cardiovascular diseases in adults, but left out all other health effects, many of which are very well established.[208] It also left out certain groups of people who would receive IQ and cardiovascular benefits. For example, it did “not include individuals living in adjacent (detached) houses or children who spend time in a friend or relative’s house renovated under the rule.”[209] Nor did the number for IQ benefits include people other than children under six.[210] The CBA for the lead NAAQS monetized only two categories of benefits: avoided IQ losses for children under seven (based on future earnings impacts) and the co-benefits of reducing direct emissions of particulate matter.[211] (Airborne lead usually takes particulate form.) But all of the other health effects of lead were left out.[212]

Two of the rules involved measures to reduce water pollution. In the CBA for the rule setting water quality standards for nitrogen and phosphorus in Florida, EPA went through an elaborate process to try to monetize the largely ecological benefits using contingent valuation surveys.[213] But the agency candidly acknowledged that “these . . . estimates [did] not account for all potential economic benefits.”[214] Indeed, EPA appeared to have so little confidence in the accuracy of this estimate that it never actually put it in a side-by-side comparison with costs.[215] Had it done so, the comparison would have shown monetized costs outweighing monetized benefits. The fact that the agency decided to go forward despite this is perhaps implicit evidence that it viewed the unquantified benefits as substantial.[216]

The CBA accompanying EPA’s rule aimed at water pollution from Concentrated Animal Feeding Operations (CAFOs) also relied on contingent valuation studies to quantify benefits to surface water quality and reduced contamination of private drinking water wells.[217] Additionally, it monetized some benefits involving commercial goods—reduced die-offs of cattle from drinking impaired waters and improved commercial shellfish harvesting.[218] But the list of benefit categories that EPA was unable to monetize due to data limitations and monetization difficulties was long and encompassed both human health and ecological effects, including “[r]educed human illness due to pathogen exposure,” and “[reduced] [e]utrophication of coastal and estuarine waters,” among many others.[219] Indeed, EPA thought these unmonetized benefits were significant enough that it used a “+B” to represent them in the final tally.[220]

The last of these five rules was the 2008 New Source Performance Standards for petroleum refineries under the Clean Air Act, which reduced emissions of particulate matter (PM), sulfur dioxide (SO2), nitrogen oxides (NOx), and volatile organic compounds (VOCs).[221] The only benefits the agency quantified were those stemming from reduced PM2.5[222]—both PM2.5 emitted directly and PM2.5 formed in the atmosphere from the “precursor emissions” SO2, NOx, and VOCs. This left out not only the direct health and ecological benefits of reducing PM10, SO2, NOx, and VOCs (which include a number of toxic compounds), but also the important health benefits of reducing ozone, the primary component of smog, which is known to have significant adverse effects on public health and which is formed in the atmosphere by chemical reactions involving NOx and VOCs.[223]

These five examples are representative of the nine rulemakings in which EPA did not actually describe the unquantified benefits as “significant,” “important,” or “substantial.” In each case, the monetized benefits estimates were far from complete. They left out whole categories of benefits, which may well have been significant, even if not explicitly described by EPA in those terms. The CBAs for the other four rules in this category all provide indications that the benefits left unquantified in those analyses were likely significant as well.[224]

IV. Implications: The Problem of Unquantified Benefits

These findings strongly suggest that in the practice of CBA, unquantified benefits occur with significant frequency and magnitude. The next question is, does it matter to the enterprise of CBA? The discussion in Part II about the multiple varieties of CBA suggests that it may not matter, depending on the kind of CBA the analyst is pursuing. Some varieties of CBA at the informal end of the spectrum do not require quantification at all.[225] As we have also seen, however, the CBA executive orders and accompanying guidance documents charge the federal agencies with conducting formal economic CBA.[226] And, for formal economic CBA, unquantified benefits pose fundamental problems.

A. The Constraints Imposed by Unquantified Benefits

If we think in terms of the diagram in Figure 1 (Part II.A.3), we can see that significant unquantified benefits of any type put an analysis toward the left end of Axis 1 and therefore constrain the type of CBA that an analyst can perform. A formal economic CBA that calculates the point of economic efficiency where marginal costs and benefits are equal, for example, is impossible in such circumstances. Indeed, no standard-setting CBA of any variety is possible. Significant unquantified benefits constrain the analyst instead to a litmus-test CBA (at best).[227]

Another way to see this is to begin with the observation that significant unquantified benefits prevent a calculation of net benefits for any alternative. Imagine the most favorable case, where the partial estimate of benefits exceeds the complete estimate of costs.[228] Here the analyst can adopt the middle position on Axis 2 and apply a benefits-exceed-costs test, since she can comfortably infer that the true benefits also exceed the true costs.[229] But she cannot know by how much benefits exceed costs and therefore cannot calculate net benefits. Since the analyst cannot calculate net benefits for any alternative, she also cannot use CBA to locate the net-benefits-maximizing alternative; only a litmus-test CBA is possible.[230]
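
To make the asymmetry concrete, consider a minimal numerical sketch of the favorable case. All of the dollar figures below are invented for illustration and are not drawn from the study data:

```python
# Minimal sketch of a litmus-test CBA with complete costs but only partial benefits.
# All figures are hypothetical and chosen purely for illustration.

total_costs = 100.0            # complete monetized cost estimate ($ millions)
monetized_benefits = 120.0     # partial monetized benefits estimate ($ millions)
# Unquantified benefits are real but unknown; all we can say is that they are >= 0.

# Because true benefits = monetized benefits + unquantified benefits >= monetized
# benefits, the rule passes the benefits-exceed-costs (litmus) test:
passes_litmus_test = monetized_benefits >= total_costs        # True

# But net benefits cannot be calculated; only a lower bound is available:
net_benefits_lower_bound = monetized_benefits - total_costs   # 20.0
# True net benefits lie anywhere in [20.0, infinity), so the analyst cannot rank
# alternatives or identify the net-benefits-maximizing option.
```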

In the data set examined above, then, EPA would have been limited to a litmus-test CBA at best in at least 80% of the cases.[231] Accordingly, the results of this study suggest that all of the distinctions described above between a more formal standard-setting CBA on the one hand and a less formal litmus-test CBA on the other are of particular importance to the actual practice of CBA at EPA. Moreover, EPA’s analysis is in most instances hobbled by the substantial limitations of litmus-test CBA, including the fact that it provides only a very blunt instrument for evaluating the efficiency of regulations.

And, of course, as noted above, a successful litmus-test CBA is only the best-case scenario.[232] If the (complete) costs are greater than the (incomplete) benefits, it is much harder to reach a definitive conclusion.[233] The unmonetized benefits might or might not be big enough to make up the difference. Thus, a complete-costs-partial-benefits CBA produces an asymmetry: If the monetized benefits exceed the monetized costs, it provides a definitive result (at least with respect to litmus-test CBA), but if costs exceed benefits, it does not—or at least, it provides a harder case.[234]

In the data set described above, monetized benefits exceeded monetized costs in most instances, but there were still fifteen cases in which they did not, and eight of those were in the set of CBAs that explicitly labeled the unquantified benefits as “significant,” “important,” or “substantial.” Thus, in at least eight instances (and probably more),[235] EPA had no way of knowing whether unquantified benefits would be large enough to tip the scales and was therefore unable to reach a definitive result, even under a litmus-test CBA.

These are the harder cases. OIRA encourages agencies to conduct what it calls a “breakeven” or “threshold” analysis in these circumstances, as the next Subpart discusses.[236]

B. Breakeven Analysis

Breakeven (or “threshold”) analysis subtracts the incomplete benefits estimate from the (complete) costs estimate and then asks the analyst to make an intuitive judgment about whether the remaining unquantified benefits are likely large enough to make up the difference.[237]
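
In arithmetic terms, the exercise reduces to computing a shortfall and then posing a question the data cannot answer. A minimal sketch, again with invented figures:

```python
# Minimal sketch of a breakeven (threshold) analysis. All figures are hypothetical.

total_costs = 100.0          # complete monetized cost estimate ($ millions)
monetized_benefits = 60.0    # partial monetized benefits estimate ($ millions)

shortfall = total_costs - monetized_benefits   # 40.0 -- the breakeven threshold

# The analysis then asks: are the unquantified benefits plausibly worth at least
# $40 million? Because those benefits are, by definition, unquantified, the answer
# rests on an intuitive, apples-to-oranges judgment rather than on data.
```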

Breakeven analysis is essentially an informal, Ben Franklin CBA—using intuitive, apples-to-oranges balancing to ask whether a set of benefits described in qualitative and/or non-monetized quantitative terms is likely to outweigh some dollar cost. Despite its informality, breakeven analysis enjoys support in quarters where formal CBA ordinarily holds sway.  As noted above, OMB Circular A-4 directs agencies to conduct breakeven analysis in certain circumstances.[238]  And Cass Sunstein, who has generally shown a preference for more formal varieties of CBA,[239] also endorses its use. He argues that agencies can employ certain methods to bring more “discipline” and “rigor” to breakeven analysis, so that it is not based only on “intuition, dogma and anecdote.”[240]

Sunstein suggests two specific methods for accomplishing this. First, the analyst can try to establish monetized upper and/or lower bounds on the benefits estimate.[241] If the data themselves do not allow for such estimates, he suggests that the analyst try to create such upper and lower bounds (or floors and ceilings) by analogizing to other values for which monetized estimates have been developed.[242] Thus, for a rule that would prevent one hundred people from becoming partially paralyzed, for example, one might create an upper-bound benefits estimate by analogy to the value of a statistical life, which most agencies currently set at around $9 million.[243] Under this logic, since preventing one hundred deaths would be valued at $900 million, and since partial paralysis presumably imposes less of a welfare loss than death, $900 million would serve as the upper bound for estimating the value of preventing one hundred cases of partial paralysis.

Second, the analyst can use quantification to break the problem into smaller units that might allow for more meaningful intuitive judgment.[244] Sunstein gives as an example a hypothetical regulation that would cost $200 million to provide modest improvements in water quality with no ramifications for human health. If the analyst knew the regulation would affect twenty water bodies and knew something about those water bodies—that they were relatively small and had little recreational or aesthetic value—then she might be able to make the intuitive apples-to-oranges judgment more meaningful by breaking the problem down and conceptualizing it on a per-water-body basis.[245]
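
Taken together, Sunstein’s two methods can be expressed as simple arithmetic. In the sketch below, the $9 million value of a statistical life, the one hundred paralysis cases, the $200 million cost, and the twenty water bodies track the hypotheticals recounted above; the shortfall figure is invented solely to show how the comparison would run:

```python
# Sketch of Sunstein's two devices for "disciplining" breakeven analysis.
# The VSL, case count, rule cost, and water-body count follow the hypotheticals
# described in the text; the shortfall is an invented placeholder.

# Method 1: bound the unquantified benefit by analogy to a monetized value.
vsl = 9_000_000.0                              # value of a statistical life ($)
cases_prevented = 100                          # cases of partial paralysis avoided
upper_bound_benefit = vsl * cases_prevented    # $900 million ceiling, since paralysis < death

shortfall = 1_200_000_000.0                    # hypothetical gap between costs and monetized benefits
if upper_bound_benefit < shortfall:
    # Even the most generous plausible valuation cannot close the gap, so the
    # breakeven question is resolved against the rule.
    conclusion = "unquantified benefits cannot make up the difference"
else:
    # The bound does not settle the question; intuition must do the rest.
    conclusion = "inconclusive"

# Method 2: disaggregate the cost to make the intuitive judgment more tractable.
rule_cost = 200_000_000.0                      # $200 million total cost
water_bodies_affected = 20
cost_per_water_body = rule_cost / water_bodies_affected   # $10 million per water body
# The analyst then asks the narrower question whether modestly improving one small,
# little-used water body is plausibly worth $10 million.
```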

Each of these methods, of course, requires some ability to meaningfully quantify benefits (but not in a monetary metric). Sunstein’s notion of “disciplining” the analysis or making it more “rigorous” appears to involve primarily shoehorning some amount of quantification back into the analysis.[246] Where that is not possible, or where the quantification that can be accomplished is not of a type that facilitates meaningful comparison to costs, he acknowledges that “breakeven analysis may not be a great deal more than a conclusion or a hunch.”[247] But even here, he insists that breakeven analysis is nonetheless “helpful” because “it explains what information is missing and why some cases are especially difficult.”[248]

How often such guideposts will actually be available to “discipline” the analysis is unclear. In the data set described above, EPA conducted a breakeven analysis in only three of the fifteen instances in which it was called for (where monetized costs exceeded monetized benefits).[249] In one, guideposts along the lines described by Sunstein were available, and while they by no means definitively resolved the uncertainty, they did provide at least some support for the agency’s conclusion that “there is a reasonable chance that the benefits of these rules would exceed the costs.”[250] In the other two instances—both involving rules aimed at limiting the number of fish and other aquatic organisms killed by cooling water intake structures at power plants—no such guideposts were available.[251] The benefits that EPA left unquantified for those two rules included, for example, whole categories of aquatic organisms for which it simply had no data, some of which were species that might play crucial roles in the food chain and other aspects of the aquatic ecosystem—phytoplankton, zooplankton, endangered sea turtles, and even certain commercially valuable species, like shrimp, lobsters, crabs, and mussels.[252] But EPA did not have the data even to begin to quantify any aspect of the losses to these species that the rules would avert.

Even where such guideposts are available, pushing analysts to look for them could in some instances be counterproductive. Dan Farber worries that “[i]n situations that involve the most moral perplexity—situations with unknown risks of catastrophic outcomes or intangible values like human indignity—the worst mistake of all may be to think that the solution is too easy.”[253] In his view, Sunstein may be putting more confidence than is warranted in the location of the error bars.[254] Farber is particularly concerned about tail risks: even where the most probable level of harm avoided might be well below the cost, there may be a small but significant probability of catastrophic harm that would dwarf the costs.[255] By pushing analysts to create upper and lower bounds on their estimates, Sunstein’s vision of breakeven analysis might encourage them to cut the tails of the probability distribution curve too short, thus failing to account for unlikely but disastrous outcomes.[256]

In any event, while reasonable minds can obviously differ on the utility of breakeven analysis, it certainly confronts some significant challenges. And all agree that at least in some instances, significant unquantified benefits will render even a litmus-test CBA inconclusive (and a standard-setting CBA impossible).

C. The Broader Implications

Thus, significant amounts of unquantified benefits impose serious constraints on the kind of CBA an analyst can perform. The root of the problem is the inability to meaningfully calculate net benefits where significant benefits are unquantified. The most one can definitively say in such circumstances is that benefits outweigh costs—if it so happens that the (incomplete) benefits estimate exceeds the (complete) costs estimate. But even then, one cannot say by how much benefits exceed costs (i.e., what the net benefits are). And without a meaningful measure of net benefits, it is impossible to identify the alternative that maximizes net benefits and so impossible to conduct a formal economic CBA.

This means that for at least thirty-six out of the forty-five EPA rules in my data set, any meaningful calculation of net benefits was impossible. Even had the agency expended the substantial resources necessary to generate cost and benefit estimates for a large number of alternatives, those would have been no more than partial estimates, and a formal economic CBA that identified the efficient level of regulation was therefore a logical impossibility. The best EPA could do was a litmus-test CBA. As the discussion in Part II.A.4 above demonstrates, however, a litmus-test CBA is a very blunt instrument for locating the efficient level of regulation. All it tells you is that you are moving generally in the direction of efficiency, but not how close you are. Indeed, a regulation may pass a litmus-test CBA and yet be very far from the point of efficiency.[257]

And even a successful litmus-test CBA was contingent on the happenstance of monetized benefits exceeding monetized costs. To the extent the numbers came out the other way (as they did in fifteen instances in my data set), the agency was limited to conducting a breakeven analysis—a crude instrument that, in the absence of meaningful information, depends on intuition and guesswork, and that is, at best, a far cry from the purported accuracy and objectivity that ordinarily make CBA seem attractive in the first place and, at worst, a recipe for controversy, confusion, and political manipulation.[258]

Thus, the results of this study suggest that unquantified benefits due to data limitations pose real problems for CBA. And while this study was limited to EPA, because EPA’s practice with respect to CBA is so often held up nationally and even internationally as a model, these findings have implications for the use of CBA in agency rulemaking more generally, at least in the environmental arena. Accordingly, there is reason to suppose that the problems revealed here occur no less frequently—and quite possibly more so—in other agencies, both here and abroad. To the extent that is so, these data paint a far less sanguine picture of the practice of CBA than the usual accounts.

First, these results suggest that the presence of unquantified benefits of a magnitude that rises above a de minimis threshold may well be a systemic and pervasive problem that cannot be ignored in broader conversations about CBA as a regulatory tool. Second, and more specifically, these results suggest that most of the time, due to the presence of significant unquantified benefits, the kind of formal economic standard-setting CBA that the executive orders and guidance documents hold up as the norm is quite simply impossible to perform or even approximate.[259] The agencies are instead confined to litmus-test CBA, at best, and a breakeven analysis at worst. By rendering a formal standard-setting CBA a logical impossibility, the pervasive existence of significant unquantified benefits further distances CBA from the normative criteria to which it is moored, whether Kaldor-Hicks efficiency or Adler and Posner’s welfarism.[260]

As detailed above, Adler and Posner shored up the normative defense of CBA a decade ago by framing it not as a direct measure of efficiency or welfare, but rather as a decision-making tool that tracked overall welfare reasonably well—at least better than any of the alternatives.[261] Drawing on the legal literature about rules versus standards, Adler and Posner had the crucial insight that a decision-making tool that in theory measured welfare directly would fail to do so in practice, much as a legal standard that directly implements a legislative goal (“Motorists must drive safely”) may often be less effective at achieving that goal than a specific rule (“Motorists must not drive faster than fifty-five miles per hour”).[262] The rule “deviates, in some instances, from the underlying legislative goal, but it is easier to apply.”[263] Like rules, they argued, CBA may actually be more successful in practice at tracking the moral criterion we care about precisely because it is one or two steps removed from that criterion.[264]

In Adler and Posner’s view, “direct implementation” of the welfarist criterion would inevitably flounder on a whole host of obstacles:

To begin, the “direct implementation” procedure would have massive decision costs, both direct costs . . . and delay costs. . . . [It would] tell agencies to collect all information relevant to determining the probability of the different possible outcomes of policy choices and to determining the level of overall welfare in each outcome . . . . [It] is also poorly designed to check decision-maker error or opportunism.[265]

In short, “direct implementation” would be so complicated that it would be hopelessly expensive and time consuming to carry out, nontransparent to all but the technically trained, and, as a result, vulnerable to political manipulation and corruption.

Thus, the argument walks a precarious tightrope: to succeed, CBA must be distant enough from actual welfare to avoid the pitfalls of “direct implementation,” yet close enough to serve as a convincing and meaningful proxy.[266] One can question whether CBA actually does avoid the pitfalls of direct implementation or is itself hopelessly complex, expensive, non-transparent, and vulnerable to manipulation,[267] and many, including this author, have.[268] But for my purposes here, the concern is the other side of the tightrope: whether CBA in practice provides a sufficiently accurate proxy for overall welfare to yield meaningful results. Adler and Posner posit that it does, and they rest their entire argument on that premise. As they ultimately acknowledge at the end of the book, however, this question of the accuracy of CBA in tracking welfare “is at bottom an empirical question” that their largely theoretical book cannot answer.[269]

In the study described above, I have attempted to approach that empirical question. And the results, though preliminary, are not promising for CBA. They suggest that CBA in practice fares quite poorly with respect to its accuracy in tracking what we care about, whether that is overall welfare or Kaldor-Hicks efficiency. To be sure, this study involved a small sample and produced results that need to be tested by further empirical work. Nonetheless, the results paint an intriguing picture of CBA in the real world as even further unmoored from its normative foundations than most of its defenders have assumed.

Some will undoubtedly respond to these findings by calling on agencies to devote more personnel and resources to fixing CBA, by doing the studies, gathering the data, and developing the models necessary to quantify the unquantified benefits.[270] Although the question of the feasibility of such an enterprise was not the specific focus of this study, if the CBAs in this sample are any guide, it appears that this will be no easy (or inexpensive) task.

The benefits left unquantified are numerous, wide-ranging, and diverse. They include many effects on ecosystems and human physiology for which even the basic causal mechanisms are only dimly understood. Meanwhile, the vast majority of quantified benefits are attributable to a single air pollutant, particulate matter, for which we have been fortunate to have an unusually large and extensive network of monitors generating data for decades—data that generations of epidemiologists have now mined for an abundance of studies on the health effects of this single pollutant. In contrast, for numerous other pollutants (including, for example, the 189 hazardous air pollutants listed in the Clean Air Act), the information base is slim to nonexistent. Thus, any suggestion that CBA should be rehabilitated faces difficult questions about whether such an enterprise is feasible at all within any reasonable time frame and, if so, whether the necessary commitment of resources would be worth the opportunity costs.

Another approach takes the view that you should set environmental standards with the information you have, not the information you wish you had. CBA is not the only standard-setting method in the book. The two most prominent alternatives, feasibility-based and health-based standard setting, both have a long history of implementation at EPA[271] and require a substantially less extensive data set. In particular, neither approach requires monetization of benefits.

Feasibility analysis aims at locating the most stringent level of pollution control or safety that is technologically and economically feasible.[272] It typically begins with a threshold finding of regulatory benefit, but this is usually just a cursory look, without detailed explication or quantification, simply to ensure that there is some harm worth regulating (i.e., that regulation will produce some significant benefit).[273] The focus is instead on the cost side of the equation, where feasibility analysis performs a fairly granular and often quantified analysis of the costs and technological feasibility of pollution control technologies. Rather than comparing costs to benefits, as CBA does, it keeps the focus on costs, gauging their magnitude by comparing them to the overall financial capacities of the industry.[274] Thus, feasibility analysis avoids the necessity of comprehensively quantifying and monetizing benefits by essentially assuming—based on a cursory threshold finding of some benefit—that benefits sufficient to justify costs actually exist (at least up to the feasibility limit).[275]

Health-based standard-setting focuses solely on regulatory benefits, but two aspects make it far less demanding than CBA. First, it is not as dependent on comprehensive measurement of all social benefits for its success. And second, it does not require regulatory benefits to be converted into monetary terms.[276]

Both of these alternatives have been frequently dismissed as insufficiently tethered to an efficiency or welfarist criterion.[277] But if, in practice, CBA itself is unmoored from those normative foundations, then that objection no longer holds and CBA’s competitors perhaps deserve a second look. In short, if close adherence to an ideal standard of efficiency or welfare turns out in practice to be a holy grail, then perhaps we need a different measuring rod.

V. Conclusion

Benefits that are left unquantified due to the prosaic problem of inadequate data have been frequently mentioned but rarely analyzed in any depth in the vast literature on CBA in environmental rulemaking. Yet the study results described above—categories of benefits described by the agency as “important,” “significant,” or “substantial” left unquantified in 80% of CBAs—suggest that unquantified benefits due to data deficiency represent a systemic and pervasive problem that we can no longer ignore in broader conversations about CBA as a regulatory tool. These results also suggest that most of the time, due to the presence of significant unquantified benefits, the kind of formal economic standard-setting CBA that the executive orders and guidance documents hold up as the norm is quite simply impossible to perform or even approximate. The agencies are instead confined to litmus-test CBA at best (a blunt instrument for identifying efficient regulations) and breakeven analysis at worst (a crude instrument dependent on intuition and guesswork). Finally, and more broadly, these results suggest that the link between CBA and efficiency or welfare may be even more tenuous than most of CBA’s supporters have assumed. This in turn suggests that a fresh look at CBA’s primary competitors might be in order.

About Author

James E. Beasley Professor of Law, Temple University. Thanks to Pam Bookman, Dan Cole, David Driesen, Rick Greenstein, Hosea Harvey, Duncan Hollis, Sanne Knudsen, Tom Lin, Michael Livermore, Greg Mandel, Marcia Mulkey, Catherine O’Neill, Lauren Ouziel, Arden Rowell, Colleen Shanahan, Sid Shapiro, Todd Wildermuth, and participants in faculty colloquia at the University of Washington School of Law and Temple Law for helpful comments on earlier drafts. Special thanks also to the many RAs who worked on this project, including especially my supervising RAs, Mary Jones, Matt Kaslow, Shana Miles, and Holly W. Smith.

 

Footnotes

  1. Jonathan S. Masur & Eric A. Posner, Against Feasibility Analysis, 77 U. Chi. L. Rev. 657, 700 (2010).
  2. Office of Mgmt. & Budget, Circular A-4, To the Heads of Executive Agencies and Establishments: Regulatory Analysis 10 (2003) [hereinafter Circular A-4].
  3. See, e.g., Richard L. Revesz & Michael A. Livermore, Retaking Rationality: How Cost-Benefit Analysis Can Better Protect the Environment and Our Health 13–16 (2008); Cass R. Sunstein, The Cost-Benefit State: The Future of Regulatory Protection 19–20 (2002).
  4. While costs are generally far more amenable to quantified (and monetized) estimation, the simplicity of quantifying costs should not be overstated. The CBA literature has been rightly criticized for paying too little attention to the difficulties attendant to cost estimation. See Adam M. Finkel, The Cost of Nothing Trumps the Value of Everything: The Failure of Regulatory Economics to Keep Pace with Improvements in Quantitative Risk Analysis, 4 Mich. J. Envtl. & Admin. L. 91, 92–95 (2014); see also Whitman v. Am. Trucking Ass’n, 531 U.S. 457, 492–93 (2001) (Breyer, J., concurring) (noting that cost estimates are necessarily “speculative, for they include the cost of unknown future technologies”).
  5. Douglas A. Kysar, Regulating from Nowhere: Environmental Law and the Search for Objectivity 104–05 (2010); Frank Ackerman & Lisa Heinzerling, Priceless: On Knowing The Price Of Everything And The Value Of Nothing 35–40 (2004); Sidney A. Shapiro & Robert L. Glicksman, Risk Regulation at Risk: Restoring a Pragmatic Approach 32, 39–40 (2003); Elizabeth Anderson, Value in Ethics and Economics 194–95 (1993); Mark Sagoff, The Economy of the Earth: Philosophy, Law, and the Environment 1–5 (2nd ed. 2008); Robert Dorfman, Forty Years of Cost-Benefit Analysis, in Econometric Contributions to Public Policy 268, 268–70 (Richard Stone & William Peterson eds., 1978); Arthur Smithies, The Budgetary Process in the United States 344–46 (1955); Duncan Kennedy, Cost-Benefit Analysis of Entitlement Problems: A Critique, 33 Stan. L. Rev. 387, 401–07 (1981).
  6. See, e.g., Robert W. Hahn et al., Assessing Regulatory Impact Analyses: The Failure of Agencies to Comply with Executive Order 12,866, 23 Harv. J. L. & Pub. Pol’y 859, 869–70 (2000) (“Determining whether the benefits that agencies chose not to quantify represent a significant portion of the total benefits was beyond the scope of this analysis, although it is an important issue.”).
  7. But see Jonathan S. Masur & Eric A. Posner, Unquantified Benefits and the Problem of Regulation under Uncertainty, 102 Cornell L. Rev. 87, 104 (2016) (reporting results of empirical study of CBAs of federal regulations in which for over 74% of the regulations the agency “could not quantify all of the relevant benefits or costs because of empirical uncertainty—missing data, modeling difficulties, or other related effects”).
  8. See Thomas O. McGarity, Reinventing Rationality: The Role of Regulatory Analysis in the Federal Bureaucracy 134 (1991) (“Inadequate data, inaccurate models, and the infirmities of quantitative analysis all combine to leave regulatory analysis swimming in a sea of uncertainties.”); Ronnie Levin, Lead in Drinking Water, in Economic Analyses at EPA: Assessing Regulatory Impact 205, 230 (Richard D. Morgenstern ed., 1997) (“Serious gaps in data and methodology constrain the utility of [CBA]. Typically, only a few potential health or other benefits can be quantified, and even fewer can be valued monetarily. Consequently, when the sum of the limited subset of benefits that can be quantified and monetized is shown to be less than the estimated costs, it is often impossible to conclude anything about the relative magnitude of the full benefits.”); Al McGartland et al., Estimating the Health Benefits of Environmental Regulations: Changes Needed for Complete Benefits Assessment, 357 Sci. 457, 457 (2017) (describing how health benefit analysis has often excluded relevant factors, such as reductions in “birth defects, neurodevelopmental effects, and cardiovascular disease” because the data don’t exist); David Driesen, Is Cost-Benefit Analysis Neutral?, 77 U. Colo. L. Rev. 335, 369–77 (2006) (finding numerous instances in which significant benefits were left unquantified based on an empirical study of Office of Information and Regulatory Affairs review of twenty-five rules).
  9. Philip Rucker & Robert Costa, Bannon Vows a Daily Fight for ‘Deconstruction of the Administrative State’, Wash. Post (Feb. 23, 2017), https://perma.cc/L5MG-4JKA.
  10. See Promoting Energy Independence and Economic Growth, Exec. Order No. 13,783, 82 Fed. Reg. 16,093 (Mar. 31, 2017).
  11. Alan Levin, Killer-Truck Fix Hits Roadblock in Trump’s Quest to Cut Rules, Bloomberg News (July 5, 2017), https://perma.cc/55H7-8REK.
  12. Barry Meier & Danielle Ivory, Under Trump, Worker Protections Are Viewed with New Skepticism, N.Y. Times (June 5, 2017), https://perma.cc/NZ7U-2JRX.
  13. See Restoring the Rule of Law, Federalism, and Economic Growth by Reviewing the “Waters of the United States,” Exec. Order No. 13,778, 82 Fed. Reg. 12,497 (Mar. 3, 2017).
  14. See Reducing Regulation and Controlling Regulatory Costs, Exec. Order No. 13,771, 82 Fed. Reg. 9,339 (Feb. 3, 2017); Enforcing the Regulatory Reform Agenda, Exec. Order No. 13,777, 82 Fed. Reg. 12,285 (Mar. 1, 2017).
  15. For example, in defending its proposal to repeal one particularly contentious Obama-era rule protecting wetlands, Trump’s EPA has already put CBA front and center, issuing a new CBA that reduces the Obama administration’s benefits estimate 85% to 90% by categorizing previously quantified benefits as unquantifiable. The result is that the monetized benefits of the Obama rule no longer outweigh the costs, thus paving the way for repeal. See Ariel Wittenberg, Trump Analysis Slashes WOTUS’s Economic Benefits, E&E News (July 7, 2017), https://perma.cc/Z2B4-92GA. Trump’s most notorious executive order on “regulatory reform,” Executive Order 13,771 (also known as the one-in-two-out order) in some ways represents a radical departure from the CBA approach that has dominated “regulatory reform” efforts for decades. The Trump order attempts to impose on agencies a cost-only regulatory budget that caps the amount of regulatory costs each agency can impose through regulation with no reference to regulatory benefits. This approach has been roundly criticized by economists and other academics and policy makers covering a wide swath of the political spectrum. See Arianna Skibell, 95 Scholars Urge Trump to Revamp ‘Misguided’ 2-for-1 Order, E&E News (May 24, 2017), https://perma.cc/WHQ5-JKJV. But while it is antithetical to and arguably incompatible with CBA, the Trump Administration has nonetheless repeatedly insisted that it will continue to respect and enforce the pre-existing CBA mandate. Dominic J. Mancini, Acting Administrator, Office of Information and Regulatory Affairs, Implementing Executive Order 13771, Titled “Reducing Regulation and Controlling Regulatory Costs” (Apr. 5, 2017), https://perma.cc/XN2G-5K4S (“EO 12,866 remains the primary governing EO regarding regulatory planning and review. Accordingly . . . agencies must continue to assess and consider both the benefits and costs of regulatory actions . . . and issue regulations only upon a reasoned determination that benefits justify costs.”). Indeed, the EPA apparently views the CBA mandate as important enough that the agency has announced plans to re-vamp it. See Advanced Notice of Proposed Rulemaking, Increasing Consistency and Transparency in Considering Costs and Benefits in the Rulemaking Process, 83 Fed. Reg. 27,524, 27,524 (June 13, 2018). As the above examples illustrate, CBA continues to play a central role in public debates about regulatory decision making.
  16. Exec. Order No. 12,291, 46 Fed. Reg. 13,193 (1982). See generally Revesz & Livermore, supra note 3, at 24–29.
  17. See Mark L. Plummer, Welcome to the Data-Poor Real World: Incorporating Benefit-Cost Principles into Environmental Policymaking, 23 Res. L. & Econ. 103, 127 (2007) (“As is often the case for other forms of environmental policy making, data on the monetary benefits of critical habitat designation for West Coast salmon and steelhead were insufficient to pursue a standard benefit-cost approach.”); see also Andrea Renda, Impact Assessment in the EU: The State of the Art and the Art of the State 62–63 (2006) (reporting, based on a score card analysis of the first seventy CBAs completed by the European Commission, that “some” benefits were monetized in only 28% of CBAs, and “all or nearly all” benefits were quantified in only 14%); Revesz & Livermore, supra note 3, at 11 (identifying “ignored benefits” as one of the “substantive biases” in CBA that make it less protective of the environment); Stuart Shapiro & John F. Morrall III, The Triumph of Regulatory Politics: Benefit-Cost Analysis and Political Salience, 6 Reg. & Governance 189, 193 (2012) (in an empirical study of 109 CBAs by federal agencies, stating that “[f]or many of these rules, agencies do not monetize all the benefits”); Daniel H. Cole, Law, Politics, and Cost-Benefit Analysis, 64 Ala. L. Rev. 55, 61 n.31 (2012) (“As a practical matter, non-market environmental goods are still frequently assigned a value of zero because many agency CBAs, including those of the EPA, exclude the more difficult to evaluate environmental benefits . . . of regulatory proposals.”); Richard W. Parker, Grading the Government, 70 U. Chi. L. Rev. 1345, 1383 (2003) (noting that many CBAs either don’t quantify any benefits or fail to quantify “whole categories of important benefits”); Masur & Posner, supra note 7, at 101 (reporting results of an empirical study of 106 agency CBAs, in which only two fully quantified both costs and benefits, thirty-six did not quantify any benefits, and forty-eight partially quantified costs and benefits); Driesen, Is Cost-Benefit Analysis Neutral?, supra note 8, at 364–78, 401 (observing, based on an empirical study of twenty-five federal environmental health and safety rules, that “the data suggest that the . . . problem of unquantifiable benefits is pervasive”).
  18. Cass R. Sunstein, The Arithmetic of Arsenic, 90 Geo. L.J. 2255, 2272–74 (2002); Thomas O. McGarity, Professor Sunstein’s Fuzzy Math, 90 Geo. L.J. 2341, 2352 (2002).
  19. U.S. Envtl. Prot. Agency, Regulatory Impact Analysis for the Mercury and Air Toxics Standards 4-30, 8-1 (2011) [hereinafter Mercury and Air Toxics Standards]. See infra notes 198–204.
  20. Mercury and Air Toxics Standards, supra note 19, at 8-1.
  21. David A. Evans, The Clean Air Mercury Rule, in Reforming Regulatory Impact Analysis 82, 99 (Winston Harrington et al. eds., 2009); Catherine A. O’Neill, The Mathematics of Mercury, in Reforming Regulatory Impact Analysis 108, 112, 118–19 (Winston Harrington et al. eds., 2009) (calling the CBA of EPA’s 2005 Clean Air Mercury Rule “a complete cost-incomplete benefit analysis”).
  22. See Amy Sinden, Cost-Benefit Analysis, Ben Franklin, and the Supreme Court, 4 U.C. Irvine L. Rev. 1175, 1177, 1195 (2014). The EPA expressed concern from the outset of this rulemaking that CBAs under the Clean Water Act have generally been “limited in the range of benefits assessed,” thus “hinder[ing] EPA’s ability to compare . . . benefits and costs . . . comprehensively.” National Pollutant Discharge Elimination System—Proposed Regulations to Establish Requirements for Cooling Water Intake Structures at Phase II Existing Facilities, 67 Fed. Reg. 17,122, 17,191 (proposed Apr. 9, 2002); see also Control of Emissions from Marine Compression-Ignition Engines Less Than 30 Liters per Cylinder, 74 Fed. Reg. 44,442, 44,509 (proposed Aug. 28, 2009) (“Limitations of the scientific literature often result in the inability to estimate quantitative changes in health and environmental effects, such as potential increases in premature mortality associated with increased exposure to carbon monoxide. Deficiencies in the economics literature often result in the inability to assign economic values even to those health and environmental outcomes which can be quantified.”). One of the greatest public health triumphs of all time, EPA’s phase-out of lead in gasoline, might never have happened had EPA been required to produce a quantified CBA to defend its rule. Ironically, the data that eventually allowed epidemiologists to quantify the dramatic impact that spewing lead from millions of car tailpipes had on children’s health were only possible to produce once the phase-out was already well underway, creating, in essence, a real-world control group. See Frank Ackerman et al., Applying Cost-Benefit to Past Decisions: Was Environmental Protection Ever a Good Idea?, 57 Admin. L. Rev. 155, 161 (2005).
  23. See Alan Krupnick & Richard Morgenstern, The Future of Benefit-Cost Analyses of the Clean Air Act, 23 Ann. Rev. Pub. Health 427, 427–28 (2002).
  24. To view the data from this study, visit http://elawreview.org/data/sinden-data/ [hereinafter Data].
  25. See Figure 3.
  26. Data, supra note 24; see infra notes 191–202 and accompanying text.
  27. Data, supra note 24.
  28. See infra notes 248–261 and accompanying text.
  29. See infra notes 262–267 and accompanying text.
  30. See infra note 277 and accompanying text.
  31. Richard A. Merrill, Risk-Benefit Decisionmaking by the Food and Drug Administration, 45 Geo. Wash. L. Rev. 994, 996 (1977) (“‘Risk-benefit analysis’ . . . includes any technique for making choices that explicitly or implicitly attempts to measure the potential adverse consequences of an activity and to predict its benefits.”). Cf. Steven Kelman, Cost-Benefit Analysis: An Ethical Critique, Regulation, Jan./Feb. 1981, at 33 (“At the broadest and vaguest level, cost-benefit analysis may be regarded simply as systematic thinking about decision-making.”).
  32. See Amy Sinden, Formality and Informality in Cost-Benefit Analysis, 2015 Utah L. Rev. 93, 172–73 (2015).
  33. Id. at 116.
  34. Id. at 107–09.
  35. See Anthony E. Boardman et al., Cost-Benefit Analysis: Concepts and Practice 32 (4th ed. 2014); E.J. Mishan, Cost-Benefit Analysis 390–91 (1976); Richard A. Posner, Economic Analysis of Law 17–20 (7th ed. 2007).
  36. See Mishan, supra note 35, at 390; Boardman et al., supra note 35, at 32.
  37. Edith Stokey & Richard Zeckhauser, A Primer for Policy Analysis 137 (1978); Boardman et al., supra note 35, at 13, 33; Office of Mgmt. & Budget, supra note 2, at 9–10.
  38. Edward M. Gramlich, A Guide to Benefit-Cost Analysis 2 (2d ed. 1990); Tom Tietenberg, Environmental and Natural Resource Economics 25, 66 (5th ed. 2000); Richard D. Morgenstern, Conducting an Economic Analysis: Rationale, Issues, and Requirements, in Economic Analyses at EPA: Assessing Regulatory Impact 25, 40 (Richard D. Morgenstern ed., 1997).
  39. See Kysar, supra note 5, at 103–05; Matthew D. Adler & Eric A. Posner, New Foundations of Cost-Benefit Analysis 125, 142–46 (2006); Arthur A. Leff, Economic Analysis of Law: Some Realism About Nominalism, 60 Va. L. Rev. 451, 460–64 (1974); Daniel Kahneman & Amos Tversky, Choices, Values, and Frames, 39 Am. Psychologist 341, 343, 345, 349 (1984).
  40. See Adler & Posner, supra note 39, at 125, 142–46. See generally C. Edwin Baker, The Ideology of the Economic Analysis of Law, 5 Phil. & Pub. Aff. 3 (1975); Arthur Allen Leff, Economic Analysis of Law: Some Realism About Nominalism, 60 Va. L. Rev. 451 (1974).
  41. See generally Tuba Tunçel & James Hammitt, A New Meta-Analysis on the WTP/WTA Disparity, 68 J. Envtl. Econ. & Mgmt. 175 (2014); John K. Horowitz & Kenneth E. McConnell, A Review of WTA/WTP Studies, 44 J. Envtl. Econ. & Mgmt. 426 (2002); Jack L. Knetsch, Environmental Policy Implications of Disparities Between Willingness to Pay and Compensation Demanded Measures of Values, 18 J. Envtl. Econ. & Mgmt. 227 (1990); Cass R. Sunstein, After the Rights Revolution: Reconceiving the Regulatory State 40–42 (1990); Mark Sagoff, Price, Principle, and the Environment (2004).
  42. See generally Douglas A. Kysar, Discounting . . . On Stilts, 74 U. Chi. L. Rev. 119 (2007); Richard L. Revesz, Environmental Regulation, Cost-Benefit Analysis, and the Discounting of Human Lives, 99 Colum. L. Rev. 941 (1999); Lisa Heinzerling, Discounting Our Future, 34 Land & Water L. Rev. 39 (1999).
  43. Adler & Posner, supra note 39, at 23; see also Cass R. Sunstein, The Cost-Benefit State: The Future of Regulatory Protection 19–20 (2002).
  44. Amy Sinden et al., Cost-Benefit Analysis: New Foundations on Shifting Sand, 3 Reg. & Governance 48, 50 (2009).
  45. Adler & Posner, supra note 39, at 23.
  46. Id. at 25.
  47. Id. at 149–53.
  48. Louise B. Russell & Anushua Sinha, Strengthening Cost-Effectiveness Analysis for Public Health Policy, Am. J. Preventive Med., 50(5S1), 2016, at S8–S10.
  49. Id.
  50. Id.
  51. See Alon Rosenthal et al., Legislating Acceptable Cancer Risk from Exposure to Toxic Chemicals, 19 Ecology L.Q. 269, 277 (1992) (outlining the basic process for conducting a quantitative risk assessment). For more information visit EPA’s Risk Assessment Portal at Risk Assessment, U.S. Envtl. Protection Agency, https://perma.cc/SF72-8RCP (last visited Feb. 21, 2019).
  52. Rosenthal et al., supra note 51, at 278.
  53. See id.
  54. See generally European Chemicals Agency, 3 Guidance for Human Health Risk Assessment 225–26 (2013), https://perma.cc/CD78-VLBZ (discussing how human risk assessments require “representative and reliable monitoring data” from animal testing).
  55. See John R. Fowle III & Kerry L. Dearfield, Sci. Pol’y Council, EPA 100-B-00-002, Risk Characterization Handbook, at D-11 to D-12 (2000), https://perma.cc/UE9B-VW32 (explaining how EPA used a dose-response study from a rat bioassay to arrive at a “low-dose extrapolation”).
  56. See Wendy E. Wagner, The Science Charade in Toxic Risk Regulation, 95 Colum. L. Rev. 1613, 1626 (1995).
  57. Catherine A. O’Neill, Exposed: Asking the Wrong Question in Risk Regulation, 48 Ariz. St. L.J. 703, 713 (2016).
  58. See Fowle & Dearfield, supra note 55, at C-22, C-23.
  59. Rosenthal et al., supra note 51, at 278.
  60. Id. at 293.
  61. Methylene Chloride and N-Methylpyrrolidone; Regulation of Certain Uses Under TSCA Section 6(a), 82 Fed. Reg. 7,464, 7,471 (Jan. 17, 2017) (proposed rule).
  62. See Fowle & Dearfield, supra note 55, at B-9 to B-10.
  63. In theory, it’s not quite so simple. The social benefit should include not just price, but consumer surplus in order to capture all of the increase in social welfare. But this detail is routinely ignored. Price is taken to be a reasonable approximation of actual increase in welfare. See Mark Sagoff, Price, Principle, and the Environment 81 (2004) (“For example, if you get an infection, you need an antibiotic, which may be inexpensive. The price you pay, then, may be less than you would be willing to pay, given the great benefit you may receive from the pills.”).
  64. Many people, for example, balk at the prospect of attaching a dollar figure to the loss of a human life, the destruction of a pristine natural area, the extinction of a species, or a regulation’s impact on the dignity of people in wheelchairs, because they view such a measure as flattening and cheapening the richness and diversity of human experience in a way that at best provides a grossly incomplete and inaccurate representation of human values and at worst leads to morally unjustified outcomes. See, e.g., Kelman, supra note 31, at 33–36. See generally Elizabeth Anderson, Value in Ethics and Economics (1993); Cass R. Sunstein, Incommensurability and Valuation in Law, 92 Mich. L. Rev. 779, 785–86 (1994); Laurence H. Tribe, Ways Not to Think About Plastic Trees: New Foundations for Environmental Law, 83 Yale L.J. 1315, 1317–21 (1974). Or they may view monetization of such values as wrong because it confuses people’s preferences as consumers with the values they hold as citizens. See Mark Sagoff, The Economy of the Earth: Philosophy, Law, and the Environment 93–94 (1988).
  65. See generally David W. Pearce & Anil Markandya, Environmental Policy Benefits: Monetary Valuation (1989) (discussing various direct and indirect benefit valuation techniques, including hedonic and contingent valuation methods).
  66. See generally David S. Brookshire et al., Valuing Public Goods: A Comparison of Survey and Hedonic Approaches, 72 Am. Econ. Rev. 165, 165–67 (1982); Boardman et al., supra note 35, at 357, 361–63; Philip E. Graves, Benefit-Cost Analysis of Environmental Projects: A Plethora of Biases Understating Net Benefits, 3 J. Benefit-Cost Analysis, 2012, at 1, 12–19.
  67. See Graves, supra note 66, at 12.
  68. W. Kip Viscusi, Value of Life, in The New Palgrave Dictionary of Economics 586, 586 (2d ed. 2008).
  69. Id.; Mortality Risk Valuation, U.S. Envtl. Protection Agency, https://perma.cc/7CKH-B6AS (last updated Feb. 8, 2018).
  70. See, e.g., Richard Ready & Charles Abdalla, The Impact of Open Space and Potential Local Disamenities on Residential Property Values in Berks County, Pennsylvania (Pa. State U. Dep’t of Agric. Econ., Soc., & Educ. Staff Paper No. 363, 2003), https://perma.cc/Q89C-KRK6.
  71. See Boardman et al., supra note 35, at 358–65.
  72. See Shi-Ling Hsu & John Loomis, A Defense of Cost-Benefit Analysis for Natural Resource Policy, 32 Envtl. L. Rep. 10,239, 10,242 (2002); Graves, supra note 66, at 18.
  73. See Boardman et al., supra note 35, at 357; Hsu & Loomis, supra note 72, at 10,242; Thomas H. Stevens et al., Measuring the Existence Value of Wildlife: What Do CVM Estimates Really Show?, 67 Land Econ. 390, 392–99 (1991). For a critique, see generally John M. Heyde, Is Contingent Valuation Worth the Trouble?, 62 U. Chi. L. Rev. 331, 332, 362 (1995).
  74. John B. Loomis & Douglas M. Larson, Total Economic Values of Increasing Gray Whale Populations: Results from a Contingent Valuation Survey of Visitors and Households, 9 Marine Resource Econ. 275, 282 (1994).
  75. See W. Kip Viscusi et al., Pricing Environmental Health Risks: Survey Assessments of Risk-Risk and Risk-Dollar Trade-Offs for Chronic Bronchitis, 21 J. Envtl. Econ. & Mgmt. 32, 47, 50 (1991).
  76. See David W. Pearce & R. Kerry Turner, Economics of Natural Resources and the Environment 141–58 (1990); Leonard Shabman & Kurt Stephenson, Environmental Valuation and Its Economic Critics, 126 J. Water Resources Plan. & Mgmt. 382, 382–84 (2000). A prominent example is EPA’s Cooling Water Intake rules, for which the agency has struggled for years (not particularly successfully) to come up with noncontroversial methods for monetizing the harms to aquatic organisms and ecosystems against which the rule is aimed. See Sinden, Cost-Benefit Analysis, Ben Franklin, and the Supreme Court, supra note 22, at 1195–96.
  77. See generally Tunçel & Hammitt, supra note 41; Horowitz & McConnell, supra note 41; Knetsch, supra note 41. There is some recent debate in the literature about whether these findings are valid or reflect results skewed by subject misconceptions. See generally Charles R. Plott & Kathryn Zeiler, The Willingness to Pay—Willingness to Accept Gap, the “Endowment Effect,” Subject Misconceptions, and Experimental Procedures for Eliciting Valuations, 95 Am. Econ. Rev. 530 (2005); Jack L. Knetsch & Wei-Kang Wong, The Endowment Effect and the Reference State: Evidence and Manipulations, 71 J. Econ. Behav. & Org. 407 (2009); Andrea Isoni et al., The Willingness to Pay—Willingness to Accept Gap, the “Endowment Effect,” Subject Misconceptions, and Experimental Procedures for Eliciting Valuations: Comment, 101 Am. Econ. Rev. 991 (2011).
  78. See supra note 64 and accompanying text.
  79. See Sinden, Formality and Informality in Cost-Benefit Analysis, supra note 32, at 107–08; Sinden, Cost-Benefit Analysis, Ben Franklin, and the Supreme Court, supra note 22, at 1176, 1186.
  80. From Sinden, Formality and Informality in Cost-Benefit Analysis, supra note 32, at 113.
  81. This puts aside the theoretical objections to Kaldor-Hicks efficiency and the willingness-to-pay standard. See supra notes 39–42 and accompanying text.
  82. See infra notes 219–235 and accompanying text.
  83. Sinden, Formality and Informality in Cost-Benefit Analysis, supra note 32, at 118.
  84. Id.
  85. See Jonathan Cannon, The Sounds of Silence: Cost-Benefit Canons in Entergy Corp. v. Riverkeeper, Inc., 34 Harv. Envtl. L. Rev. 425, 454 (2010) (describing informal CBA, what he calls “the weak form of CBA,” as a tool for “screen[ing] for irrational outcomes”); Daniel H. Cole, Law, Politics, and Cost-Benefit Analysis, 64 Ala. L. Rev. 55, 57 (2012) (noting that CBA “is viewed as a kind of filter designed to capture welfare-reducing proposals, while allowing welfare-enhancing proposals to pass through”); see also Boardman et al., supra note 35, at 13 (distinguishing between the decision rule for litmus-test CBA and more formal CBA); Richard E. Just et al., The Welfare Economics of Public Policy: A Practical Approach to Project and Policy Evaluation 642 (2004) (arguing for welfare maximization approach to CBA).
  86. See Tietenberg, supra note 38, at 66 (observing re: litmus-test CBA that “[w]hile [this test] guarantee[s] that no activity which confers more costs on society than benefits will be undertaken, [it] do[es] not guarantee efficiency . . . [E]fficiency is attained when the marginal value of benefits equals the marginal value of costs”); Nathaniel O. Keohane, The Technocratic and Democratic Functions of the CAIR Regulatory Analysis, in Reforming Regulatory Impact Analysis 49 (Winston Harrington et al. eds., 2009) (“Simply calculating total benefits and costs does not shed light on marginal benefits and costs, which—as any economics student knows—must be equated to satisfy efficiency.”).
  87. See, e.g., Keohane, supra note 86, at 47 (noting that a CBA that considers only one option “fails to meet the most basic requirement of sound economic policy analysis: namely, the consideration of multiple alternatives”).
  88. Id. (“A document that considers the costs and benefits of the proposed policy only relative to the status quo cannot possibly have been used to design that policy.”); see also David M. Driesen, Two Cheers for Feasible Regulation: A Modest Response to Masur and Posner, 35 Harv. Envtl. L. Rev. 313, 320–21 (2011) (criticizing Jonathan Masur and Eric Posner for confusing these two different forms of CBA).
  89. Keohane, supra note 86, at 34 (noting that EPA’s CBA for the Clean Air Interstate Rule was “essentially an up-or-down assessment of the final rule versus the status quo”).
  90. See supra notes 35–38 and accompanying text.
  91. See Office of Mgmt. & Budget, supra note 2, at 5 (“[M]arginal costs generally increase with stringency, whereas marginal benefits may decrease.”).
  92. Executive Order No. 12,866, 58 Fed. Reg. 51,735, 51,736 (Oct. 4, 1993), requires federal agencies to submit to OIRA “an assessment of the potential costs and benefits” of all “significant regulatory actions.” Id. at § 6(a)(3)(B)(ii). The executive order defines “significant regulatory action” as any regulatory action that is likely to result in a rule that may: 1) Have an annual effect on the economy of $100 million or more or adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local, or tribal governments or communities; 2) Create a serious inconsistency or otherwise interfere with an action taken or planned by another agency; 3) Materially alter the budgetary impact of entitlements, grants, user fees, or loan programs or the rights and obligations of recipients thereof; or 4) Raise novel legal or policy issues arising out of legal mandates, the President’s priorities, or the principles set forth in this Executive order. Id. at § 3(f). For regulatory actions falling under § 3(f)(1) (annual economic effect of $100 million or more, etc.), often dubbed “economically significant” regulations, the order requires a more detailed CBA, including quantification of costs and benefits “to the extent feasible,” and assessment and analysis of the costs and benefits of “potentially effective and reasonably feasible alternatives” to the regulation. Id. at § 6(a)(3)(C).
  93. See Shapiro & Glicksman, supra note 5, at 32–33; Thomas O. McGarity, Media-Quality, Technology, and Cost-Benefit Balancing Strategies for Health and Environmental Regulation, 46 Law & Contemp. Probs., no. 3, 1983, at 160–61.
  94. Amy Sinden, Cost-Benefit Analysis, in Edward Elgar Encyclopedia of Environmental Law, Vol II, Environmental Decision Making (Glicksman & Paddock eds., 2016).
  95. See Robert V. Percival et al., Environmental Regulation: Law, Science, and Policy, 363–64 (4th ed. 2003); Subcomm. on Oversight & Investigations of the Comm. on Interstate & Foreign Commerce, 94th Cong., Federal Regulation and Regulatory Reform 510–15 (1976).
  96. Exec. Order No. 12,291, 46 Fed. Reg. 13,193 (Feb. 19, 1981); see also Revesz & Livermore, supra note 3, at 24–25.
  97. Revesz & Livermore, supra note 3, at 29–31.
  98. See, e.g., Primary National Ambient Air Quality Standards for Sulfur Dioxide, 75 Fed. Reg. 35,520, 35,587 (June 22, 2010) (final rule) (“EPA prepared a Regulatory Impact Analysis (RIA) of the potential costs and benefits associated with this action. However, the CAA and judicial decisions make clear that the economic and technical feasibility of attaining the national ambient standards cannot be considered in setting or revising NAAQS, . . . . Accordingly, although an RIA has been prepared, the results of the RIA have not been considered by EPA in developing this final rule.”).
  99. See, e.g., The Regulatory Accountability Act of 2017, H.R. 5, 115th Cong. (2018). See generally Assessing the Regulatory Accountability Act, Regulatory Rev. (May 30, 2017), https://perma.cc/KQD4-DMUT.
  100. See The Risk Assessment and Cost-Benefit Act of 1995, H.R. 9, Title II Division D, 104th Cong. (1995); Fred Anderson et al., Regulatory Improvement Legislation: Risk Assessment, Cost-Benefit Analysis, and Judicial Review, 11 Duke Envtl. Law & Pol’y F. 89, 98–99 (2000).
  101. See Michigan v. U.S. Envtl. Prot. Agency, 135 S.Ct. 2699, 2711 (2015) (holding that the EPA was required to consider cost before making a decision to regulate power plants under § 7412(n)(1)(A) of the Clean Air Act); see also Amy Sinden, A “Cost-Benefit State?” Reports of Its Birth Have Been Greatly Exaggerated, 46 Envtl. L. Rep. 10,933, 10,950 (2016) (noting that Michigan v. U.S. Envtl. Prot. Agency was the first time the Supreme Court held that an environmental statute required, rather than permitted, an agency to consider costs) [hereinafter Sinden, Reports of Its Birth].
  102. Lisa Heinzerling, Inside EPA: A Former Insider’s Reflections on the Relationship Between the Obama EPA and the Obama White House, 31 Pace Envtl. L. Rev. 325, 333 (2014).
  103. Id. at 352; see also Cass R. Sunstein, Valuing Life: Humanizing the Regulatory State 37 (2014) (“Within the federal government, agencies are acutely aware of the cost-benefit requirement of Executive Order 13563 and Executive Order 12866.”); Rena Steinzor, The Case for Abolishing Centralized White House Regulatory Review, 1 Mich. J. Envtl. & Admin. L. 209, 243–44 (2012) (discussing the dynamic set up by centralized OIRA review of agency rules, which gives OIRA significant power and sway over agency rulemaking); Cass R. Sunstein, The Real World of Cost-Benefit Analysis: Thirty-Six Questions (and Almost as Many Answers) 8 (Harvard Law Sch. Pub. Law & Legal Theory Working Paper Series, Paper No. 13-11, 2013) (noting that where a regulation’s monetized benefits are less than monetized costs, “the agency is unlikely to attempt to go forward with this regulation”).
  104. Regulatory Planning and Review, Exec. Order No. 12,866, 3 C.F.R. §§ 638, 639 (1994), reprinted in 5 U.S.C. § 601 (2012).
  105. Id. at § 1(a). A subsequent section of the Executive Order also requires the agency to submit to OIRA “[a]n assessment, including the underlying analysis, of costs and benefits of potentially effective and reasonably feasible alternatives to the planned regulation.” Id. at § 6(a)(3)(C)(iii). The Reagan Order similarly stated that “[r]egulatory objectives shall be chosen to maximize the net benefits to society” and “[a]mong alternative approaches to any given regulatory objective, the alternative involving the least net cost to society shall be chosen.” Federal Regulation, Exec. Order No. 12,291, 3 C.F.R. §§ 127, 128 (1982) at § 2(d).
  106. Regulatory Planning and Review, Exec. Order No. 12,866, 3 C.F.R. §§ 638, 639 (1994), reprinted in 5 U.S.C. § 601 (2012) at § 1(b)(6).
  107. Id. at § 1(a); see also id. at § 1(b)(6) (“recognizing that some costs and benefits are difficult to quantify”).
  108. See Memorandum: Regulatory Review, 74 Fed. Reg. 5,977, 5,977 (Jan. 30, 2009) (directing OMB to produce recommendations for a new executive order on regulatory review); Federal Regulatory Review, Request for Comments, 74 Fed. Reg. 8,819, 8,819 (Feb. 26, 2009) (requesting public comment on those recommendations).
  109. Improving Regulation and Regulatory Review, Exec. Order No. 13,563, 76 Fed. Reg. 3,821, 3,821 (Jan. 21, 2011) at § 1(b).
  110. Id.
  111. Id. at § 1(c).
  112. Id. (emphasis added). President Trump has kept Executive Order 12,866 in place and has not issued any supplementary orders specifying how agencies are to conduct CBA. His executive order on “regulatory reform” (E.O. 13,771), requiring agencies to withdraw two existing regulations for every new one they promulgate, has stirred up considerable controversy for taking a very different approach to controlling regulatory costs. But in subsequent implementing guidance, OIRA has made clear that it expects agencies to continue to adhere to Executive Order 12,866’s CBA requirement and OMB Circular A-4. See Mancini, supra note 15 (“[A]gencies must continue to assess and consider both the benefits and costs of regulatory actions . . . .”).
  113. See generally Office of Mgmt. & Budget, supra note 2.
  114. Id. at 10. Earlier, it makes a similar point, stating that “[w]here all benefits and costs can be quantified and expressed in monetary units, [CBA] provides decision makers with a clear indication of the most efficient alternative, that is, the alternative that generates the largest net benefits to society.” Id. at 2.
  115. Id. at 23.
  116. Id. at 6, 14.
  117. Id. at 18; see also id. at 33–34 (requiring costs and benefits to be discounted at both 3% and 7%).
  118. The term “benefit-cost analysis” (BCA) has the same meaning as CBA and is preferred by a number of CBA’s proponents. See, e.g., Kenneth J. Arrow et al., Is There a Role for Benefit-Cost Analysis in Environmental, Health, and Safety Regulation?, 272 Sci. 221 (1996) (using the term BCA in the same manner in which the term CBA is used in this article).
  119. Office of Mgmt. & Budget, supra note 2, at 10; see also id. at 27 (“You should monetize quantitative estimates wherever possible.”).
  120. Id. at 2.
  121. Id.
  122. Id. at 10. Similar references to unquantified benefits are peppered throughout the document. For example, at another point, the Circular says “you should be able to assess quantitatively benefits and costs,” but then immediately goes on to qualify that statement: “A complete regulatory analysis includes a discussion of non-quantified as well as quantified benefits and costs.” Id. at 3; see also id. at 18 (directing agencies to “describe benefits and costs you cannot quantify”); Id. at 26–27 (“Sound quantitative estimates of benefits and costs, where feasible, are preferable to qualitative descriptions . . . . However, some important benefits and costs (e.g., privacy protection) may be inherently too difficult to quantify or monetize given current data and methods.”).
  123. See U.S. Envtl. Prot. Agency, Guidelines for Preparing Economic Analyses 1-2 (2010), https://perma.cc/A7ZQ-R5YS [hereinafter Guidelines for Preparing Economic Analyses]. Starting in 1983, EPA issued a series of Guidelines for preparing CBAs. The agency released its most recent version in December 2010. This document was prepared by economists at EPA and subsequently peer reviewed by EPA’s Science Advisory Board. Id. at 1-1.
  124. Id. at 1-4.
  125. Id. at Appendix A.
  126. Id. at 11-2.
  127. Id. at 11-3 (emphasis added). In contrast, the European Commission’s Impact Assessment Guidelines make a finding that “the most significant part of both costs and benefits can be quantified and monetized” a prerequisite to requiring full CBA. European Comm’n, Impact Assessment Guidelines 45 (Jan. 15, 2009), https://perma.cc/K5JQ-Z9AE. The European Commission’s Impact Assessment process also employs something called “the proportionate level of analysis,” which essentially varies the degree of quantification and monetization required in a CBA according to the significance of the action. See id. at 13–14; Anne Claartje Margreet Meuwese, Impact Assessment in EU Lawmaking 62 (2008).
  128. Improving Regulation and Regulatory Review, Exec. Order No. 13,563, 76 Fed. Reg. 3,821, 3,821 (Jan. 21, 2011).
  129. Office of Mgmt. & Budget, supra note 2, at 2, 10.
  130. Guidelines for Preparing Economic Analyses, supra note 123, at 11-2 (emphasis added).
  131. See Heinzerling, supra note 102, at 333.
  132. See supra notes 40–42 and accompanying text. Theoretically, wealth effects could cause benefits estimates to be too high or too low, depending on whether the beneficiaries are rich or poor. Since environmental degradation and health and safety hazards tend to disproportionately impact poor communities, wealth effects often skew benefits lower.
  133. See supra note 5 and accompanying text.
  134. But see Cass R. Sunstein, The Limits of Quantification, 102 Calif. L. Rev. 1369, 1376 (2014) (focusing on the problem of benefits that can’t be quantified “simply because we lack relevant information”); Richard L. Revesz, Quantifying Regulatory Benefits, 102 Calif. L. Rev. 1423, 1430 (2014) (responding to Sunstein and focusing on the problem of benefits that are unquantifiable due to data limitations); Daniel A. Farber, Breaking Bad? The Uneasy Case for Regulatory Breakeven Analysis, 102 Calif. L. Rev. 1469, 1485 (2014) (same); Lisa Heinzerling, Quality Control: A Reply to Professor Sunstein, 102 Calif. L. Rev. 1457, 1457 (2014) (same); Masur & Posner, supra note 7, at 92, 104 (reporting results of empirical study finding “countless examples . . . where agencies fail to fully monetize the benefits and costs of regulations,” and in over 74% of instances, the lack of quantification was attributed to empirical uncertainty); John C. Coates IV, Cost-Benefit Analysis of Financial Regulation: Case Studies and Implications, 124 Yale L.J. 882, 887, 997 (2015) (exploring, through six case studies, the difficulties involved in employing formal CBA to analyze financial regulations, highlighting difficulties of quantification).
  135. See, e.g., Amy Sinden, In Defense of Absolutes: Combatting the Politics of Power in Environmental Law, 90 Iowa L. Rev. 1405, 1425–26 (2005) [hereinafter Sinden, In Defense of Absolutes].
  136. See, e.g., John Graham, The Evolving Regulatory Role of the U.S. Office of Management and Budget, 1 Rev. Envtl. Econ. & Pol’y 171, 188 (2007) (“The information base on which we made multibillion-dollar decisions was often remarkably slim.”).
  137. See, e.g., infra notes 138–151 and accompanying text; Jonathan Masur & Eric Posner, Against Feasibility Analysis, 77 U. Chi. L. Rev. 657, 700 (2010) (“[Cost-benefit analysis] minimizes decision costs through the magic of quantification. Once valuations are obtained from the marketplace and surveys—fixed costs that can be spread across multiple regulations—decisions are relatively automatic.”).
  138. Sinden, Formality and Informality in Cost-Benefit Analysis, supra note 32, at 97; Sinden, Cost-Benefit Analysis, Ben Franklin, and the Supreme Court, supra note 22, at 1177.
  139. OMB is required to submit this report annually to Congress. 31 U.S.C. § 1105 (2012). OMB delegates the task to OIRA. Office of Mgmt. & Budget, Information and Regulatory Affairs, WhiteHouse.gov, https://perma.cc/V6U2-7LD9 (last visited Feb. 16, 2019). OIRA appears in recent years to have de-prioritized this mandate. According to OMB’s website, the last final report that was issued was for 2015. See Office of Mgmt. & Budget, OIRA Reports to Congress, WhiteHouse.gov, https://perma.cc/7939-FH7R (last visited Feb. 16, 2019) (showing only draft reports for 2016 and 2017 and no report for 2018).
  140. Office of Mgmt. & Budget, Exec. Office of the President, 2014 Rep. to Congress on the Benefits and Costs of Fed. Reg. and Unfunded Mandates on State, Local, and Tribal Entities 3 (2014) [hereinafter 2014 Rep. to Congress]; see also Sunstein, supra note 103, at 36 (using net benefits figures from the OMB annual reports to assert that the net benefits of all economically significant regulations issued during President Obama’s first three years (when Sunstein was OIRA administrator) were twenty-five times as high as during the Bush Administration and six times as high as during the Clinton administration).
  141. 2014 Rep. to Congress, supra note 140, at 6–7; see also id. at 19 (noting that “many of these major rules have important non-quantified benefits and costs”).
  142. Robert W. Hahn & Paul C. Tetlock, Has Economic Analysis Improved Regulatory Decisions?, J. Econ. Persp., Winter 2008, at 67, 71.
  143. Id.
  144. Office of Mgmt. & Budget, Exec. Office of the President, 2006 Rep. to Congress on the Costs and Benefits of Fed. Reg. and Unfunded Mandates on State, Local, and Tribal Entities 2 n.9 (2006).
  145. Hahn & Tetlock, supra note 142, at 69.
  146. Id.; see also Robert W. Hahn & Robert E. Litan, Counting Regulatory Benefits and Costs: Lessons for the US and Europe, 8 J. Int’l. Econ. L. 473, 483–84 (2005) (acknowledging, in a study of five OMB annual reports, that “most regulations [in one case 23 of 34] either do not provide enough information to compare costs and benefits, or there is a large enough range of uncertainty in the agencies’ estimates to put the regulations in a gray area,” but subsequently asserting “[a]ggregate net benefits can provide useful information on whether a particular set of regulations or programs are enhancing economic welfare”).
  147. Robert W. Hahn & Patrick M. Dudley, How Well Does the U.S. Government Do Benefit-Cost Analysis?, 1 Rev. Envtl. Econ. & Pol’y 192, 193 (2007).
  148. Id. at 200.
  149. Id.
  150. Id. at 201.
  151. Id. (emphasis added).
  152. Shapiro & Morrall, supra note 17, at 190.
  153. Id. at 193.
  154. Id. at 194.
  155. Id.
  156. Id. at 196.
  157. Id. at 195. The other questions asked about a statement of need for the regulation, analysis of alternatives, discounting, and presentation of uncertainties. Id.
  158. Id. at 197.
  159. Id.
  160. Id. at 194.
  161. Id. at 198.
  162. There have been other empirical studies of agency CBAs that didn’t examine the issue of unquantified benefits. See, e.g., Art Fraas & Randall Lutter, The Challenges of Improving the Economic Analysis of Pending Regulations: The Experience of OMB Circular A-4, 3 Ann. Rev. Resource Econ. 71, 73 (2011) (providing an empirical study of the CBAs associated with fourteen major EPA rulemakings completed between 2005 and 2009 that “[did] not consider . . . whether EPA [had] developed quantitative estimates of benefits and costs”).
  163. Sunstein, The Limits of Quantification, supra note 134, at 1392 n.79.
  164. This would only make sense if one were to adopt Arden Rowell’s position that non-monetizable benefits should be entirely ignored in CBA. See Arden Rowell, Partial Valuation in Cost-Benefit Analysis, 64 Admin. L. Rev. 723, 741 (2012) (arguing that where benefits are unquantifiable due to incommensurability, they should simply be excluded and CBA conducted using only monetizable costs and benefits: “there is no room to allow non-monetizable benefits to affect the outcome of a monetary cost-benefit analysis”). But that’s not a position Sunstein appeared to endorse in the rest of his article. He did, however, at a later point in the article, acknowledge that where significant benefits are unquantified the agency might have “a great deal of difficulty in deciding which approach maximizes net benefits.” Sunstein, The Limits of Quantification, supra note 134, at 1394.
  165. See Farber, supra note 134, at 1485 (“Even if . . . arguments [for more qualitative approaches] are rejected as applied to more typical regulatory problems . . . they may have additional force in situations where uncertainties admittedly loom large.”).
  166. Revesz, supra note 134, at 1436; John Graham, Saving Lives through Administrative Law and Economics, 157 U. Penn. L. Rev. 395, 526 (2008). Graham also suggests adopting a star rating system indicating the relative importance of unquantified benefits. See id. at 524–26; John Graham et al., Managing the Regulatory State: The Experience of the Bush Administration, 33 Fordham Urban L. J. 953, 992–93 (2006).
  167. Masur & Posner, supra note 7; see also McGartland et al., supra note 8.
  168. Masur & Posner, supra note 7, at 100–01.
  169. Id. at 101.
  170. Id.
  171. Id.
  172. Id. at 104.
  173. Id. at 108.
  174. Id. at 92. For other empirical studies noting the frequency with which benefits remain unquantified in CBA, see Robert W. Hahn et al., Assessing Regulatory Impact Analyses: The Failure of Agencies to Comply with Executive Order, 23 Harv. J.L. & Pub. Pol’y 859, 869–70 (2000) (finding in an empirical study of forty-eight agency CBAs of environmental health and safety regulations that only 28% monetized all of the stated benefits but noting that estimating the magnitude of the unmonetized benefits was “beyond the scope of this analysis”); U.S. Gov’t Accountability Office, GAO-14-519, Environmental Regulation: EPA Should Improve Adherence to Guidance for Selected Elements of Regulatory Impact Analyses 20 (2014) (finding in an empirical study of seven CBAs of EPA rulemakings that the agency repeatedly failed to monetize “benefits . . . related to the primary purpose or key impacts of the regulatory actions” due to “limited data, modeling capabilities, and time and resource constraints”); Driesen, Is Cost-Benefit Analysis Neutral?, supra note 8, at 364–78, 401 (finding “pervasive” unquantified benefits in an empirical study of twenty-five federal environmental health and safety rules).
  175. Masur & Posner, supra note 7, at 113–15 (holding up EPA as one of three federal agencies most practiced at CBA and producing more quantified estimates of benefits than other agencies); Krupnick & Morgenstern, supra note 23, at 427–28 (noting that two EPA studies on the costs and benefits of the Clean Air Act “are widely seen as the ‘gold standard’ of benefit-cost studies conducted by government, both in the United States and abroad”).
  176. Renda, supra note 17, at 7–8 (calling the U.S. experience with CBA “the polar star” for E.U. policymakers).
  177. Data, supra note 24.
  178. Office of Mgmt. & Budget, Exec. Office of the President, 2013 Rep. to Congress on the Benefits and Costs of Fed. Reg. and Unfunded Mandates on State, Local, and Tribal Entities (2013) [hereinafter 2013 Rep. to Congress]; Office of Mgmt. & Budget, Exec. Office of the President, 2016 Draft Rep. to Congress on the Benefits and Costs of Fed. Reg. and Unfunded Mandates on State, Local, and Tribal Entities (2016). I used the draft 2016 report because a final 2016 report was never published. See Office of Mgmt. & Budget, OIRA Reports to Congress, WhiteHouse.gov, https://perma.cc/4Q7U-ZDBX (last visited Feb. 16, 2019).
  179. See 2013 Rep. to Congress, supra note 178, at 3. For purposes of the report, OMB defines major rule to include any rule that: 1) falls in the definition of “major rule” under the Small Business Regulatory Enforcement Fairness Act of 1996, 5 U.S.C. § 804(2); 2) meets the analysis threshold under the Unfunded Mandates Reform Act of 1995, 2 U.S.C. § 1532(a); or 3) falls in the definition of “economically significant” under section 3(f)(1) of Executive Order 12,866. Id.
  180. 2017 and Later Model Year Light-Duty Vehicle Greenhouse Gas Emissions and Corporate Average Fuel Economy Standards, 77 Fed. Reg. 62,624, 62,624 (Oct. 15, 2012) (codified at 40 C.F.R. § 85) (issued jointly with the Department of Transportation); Clean Water Rule: Definition of “Waters of the United States,” Final Rule, 80 Fed. Reg. 37,054, 37,054 (June 29, 2015) (issued jointly with the Army Corps of Engineers).
  181. See 2014 Standards for the Renewable Fuel Standard Program, 78 Fed. Reg. 71,732, 71,732 (Nov. 29, 2013) (proposed rule); Andrew Childers, EPA Won’t Finalize Renewable Fuel Standards in 2014, Cites Lengthy Delays, Bloomberg News (Nov. 24, 2014), https://perma.cc/SXB7-QXKR.
  182. See generally Oil Pollution Prevention; Spill Prevention, Control, and Countermeasure (SPCC) Rule—Amendments for Milk and Milk Product Containers, 76 Fed. Reg. 21,652 (April 18, 2011) (codified at 40 C.F.R. § 112); Oil Pollution Prevention; Spill Prevention, Control, and Countermeasure (SPCC) Rule—Amendments, 74 Fed. Reg. 58,784 (Nov. 13, 2009) (codified at 40 C.F.R. § 112); Revisions to the Definition of Solid Waste, 73 Fed. Reg. 64,668 (Oct. 30, 2008) (codified at 40 C.F.R. § 260); Oil Pollution Prevention; Spill Prevention, Control, and Countermeasure Plan Requirements—Amendments, 71 Fed. Reg. 77,266 (Dec. 26, 2006) (codified at 40 C.F.R. § 112).
  183. U.S. Envtl. Prot. Agency, Regulatory Impact Analysis: Final New Source Performance Standards and Amendments to the National Emissions Standards for Hazardous Air Pollutants for the Oil and Natural Gas Industry, 4-1 to 4-2 (April 2012); U.S. Envtl. Prot. Agency, Regulatory Impact Analysis for the Plywood and Composite Wood Products NESHAP, Final Report, EPA-452/R-04-005, ES-1 (Feb. 2004); U.S. Envtl. Prot. Agency, Economic Impact Analysis, Petroleum Refineries, Amendments to National Emissions Standards for Hazardous Air Pollutants and New Source Performance Standards, EPA-HQ-OAR-2010-0682 (Sept. 2015).
  184. See, e.g., U.S. Envtl. Prot. Agency, Regulatory Impact Analysis (RIA) for Existing Stationary Compression Ignition Engines NESHAP 7-11 (Feb. 2010) [hereinafter Stationary CI NESHAP RIA] (“Data, resource, and methodological limitations prevented EPA from quantifying or monetizing the benefits from several important benefit categories, including benefits from reducing carbon monoxide and hazardous air pollutants, ecosystem effects, and visibility impairment.”). This is consistent with Masur and Posner’s findings that in some cases, at least, “it appears that the unquantifiable benefits [of federal regulations] could be quite large.” Masur & Posner, supra note 7, at 108. In all but two instances, the agency used one of the three words indicated (“important,” “significant,” or “substantial”). In those two instances, the language used was slightly different but amounted to the same thing. See U.S. Envtl. Prot. Agency, Regulatory Impact Analysis of the Final Revisions to the National Ambient Air Quality Standards for Ground Level Ozone, EPA-452/R-15-00, at tbl. 6A-1, 6A-12 (Sept. 2015) (characterizing “unquantified ozone health benefit categories” as leading to an “underestimate” with a “magnitude of impact on monetized benefits” that is “high”); U.S. Envtl. Prot. Agency, Regulatory Impact Analysis of the Final Clean Air Mercury Rule 10-143 (Mar. 2005) (“[T]he benefits of reduced cardiovascular effects (from fatal and non-fatal heart attacks) if quantified could possibly be many times larger than those we are able to quantify in this section of the report due to the potential for mortality effects (monetized with the value of a statistical life which is much higher in value than IQ loss).”) [hereinafter RIA of the Final Clean Air Mercury Rule]. In two of the thirty-six CBAs that used this kind of language indicating unquantified benefits of a significant magnitude, the agency was unable to quantify any benefits at all. U.S. Envtl. Prot. Agency, Regulatory Impact Analysis for the Plywood and Composite Wood Products NESHAP (Feb. 2004); U.S. Envtl. Prot. Agency, Regulatory Impact Analysis: Final New Source Performance Standards and Amendments to the National Emissions Standards for Hazardous Air Pollutants for the Oil and Natural Gas Industry 4-1 (April 2012). In another of these cases, the RIA used a benefits transfer approach—so that the CBA itself (which was short) did not make a statement about unquantified benefits, but it incorporated by reference the CBA from another rule, which made many such statements. U.S. Envtl. Prot. Agency, Regulatory Impact Analysis of the Standards of Performance for Stationary Compression Ignition Internal Combustion Engines 6-1, 6-3 (June 2006).
  185. See, e.g., U.S. Envtl. Prot. Agency, Regulatory Impact Analysis for Particulate Matter National Ambient Air Quality Standards 5-85 to 5-86 (Sept. 2006) (noting the “omission of potentially significant benefit categories” and that “the benefits associated with [certain unquantified] nonhealth benefits categories are likely significant”) [hereinafter RIA for Particulate Matter NAAQS].
  186. See infra note 219 and accompanying text.
  187. See infra notes 236–256 and accompanying text.
  188. This is generally consistent with Masur and Posner’s findings that in more than 74% of instances in which agencies failed to fully quantify benefits or costs, they cited “empirical uncertainty” as the reason and that agencies asserted benefits to be “unquantifiable in principle” in only 9 out of 106 instances. Masur & Posner, supra note 7, at 104.
  189. See, e.g., U.S. Envtl. Prot. Agency, Regulatory Impact Analysis for the Final Mercury and Air Toxics Standards 69, 72 (Dec. 2011) (“Due to methodology and data limitations, we were unable to estimate the benefits associated with the hazardous air pollutants that would be reduced as a result of these rules.”) [hereinafter RIA for the Final Mercury and Air Toxics Standards].
  190. See infra Figure 4.
  191. See infra Figure 5.
  192. Two of those fifteen rules were aimed directly at particulate matter pollution. See National Ambient Air Quality Standards for Particulate Matter, 78 Fed. Reg. 3,086, 3,086 (Jan. 15, 2013); National Ambient Air Quality Standards for Particulate Matter, 71 Fed. Reg. 61,144, 61,144 (Oct. 17, 2006). But even these two rules were expected to reduce other pollutants as well. See RIA for Particulate Matter NAAQS, supra note 185, at 5-1 to 5-2 (Sept. 2006) (“Reductions in certain PM2.5 precursors such as NOx and VOC may also lead to changes in ambient concentrations of ozone. These changes in ozone will also have health and welfare effects. However, for this RIA, because the majority of the illustrative strategies evaluated do not affect NOx and VOC emissions (with the exception of nonattainment areas in parts of the western U.S., where we do not currently have adequate modeling data for ozone), we focus on estimating the health and welfare effects associated with changes in ambient PM2.5.”); U.S. Envtl. Prot. Agency, Regulatory Impact Analysis for the Final Revisions to the National Ambient Air Quality Standards for Particulate Matter, EPA-452/R-12-003, chapter 6 (Dec. 2012) (qualitative discussion of climate co-benefits, as well as ecological co-benefits of reducing NOx, SO2, mercury and ozone).
  193. See U.S. Gov’t Accountability Office, Report to Congressional Requesters: Environmental Regulation: EPA Should Improve Adherence to Guidance for Selected Elements of Regulatory Impact Analyses 20 (2014) (noting EPA’s failure to monetize benefits other than particulate matter benefits in CBAs of air toxics rules).
  194. Of the thirteen CBAs that left HAPs completely unmonetized, three failed to monetize any benefits at all. Of the remaining ten, eight monetized only PM co-benefits. Of the remaining two, one was a motor vehicle emission standard aimed at the control of PM and ozone as well as HAPs. See Control of Air Pollution from Motor Vehicles: Tier 3 Motor Vehicle Emissions and Fuel Standards, 79 Fed. Reg. 23,414, 23,611, 23,612 (Apr. 28, 2014). While most of the monetized benefits in that CBA were attributable to PM (74%–93%), ozone benefits were also monetized. See U.S. Envtl. Prot. Agency, Control of Air Pollution from Motor Vehicles: Tier 3 Motor Vehicle Emission and Fuel Standards Final Rule 8-1, 8-6 to 8-7 (2014). The other was aimed specifically at HAPs, see National Emission Standards for Hazardous Air Pollutants for Stationary Reciprocating Internal Combustion Engines, 69 Fed. Reg. 33,474, 33,474 (June 15, 2004), but monetized a small amount of ozone co-benefits along with the particulate matter co-benefits. Particulate matter benefits still accounted for 99.5% of total monetized benefits. See U.S. Envtl. Prot. Agency, Regulatory Impact Analysis of the Stationary Internal Combustion RICE NESHAP, Final Report, 8-52 (Feb. 2004) (Table 8-13 & Note a allows calculation of the percentage of overall monetized benefits attributable to PM and ozone).
  195. See, e.g., Benzene, U.S. Envtl. Protection Agency, https://perma.cc/L3VR-UMN8 (last updated Jan. 2012); Formaldehyde, U.S. Envtl. Protection Agency, https://perma.cc/8PB3-3EQK (last updated Jan. 2000); Arsenic Compounds, U.S. Envtl. Protection Agency, https://perma.cc/XGC3-YJZY (last updated Jan. 2000).
  196. See, e.g., Benzene, supra note 195; Formaldehyde, supra note 195; Arsenic, supra note 195.
  197. See U.S. Envtl. Prot. Agency, Regulatory Impact Analysis for the Stationary Internal Combustion Engine (RICE) NESHAP 7-3 (2004).
  198. See RIA of the Final Clean Air Mercury Rule, supra note 184, at 11-1; RIA for the Final Mercury and Air Toxics Standards, supra note 189, at 4-1; see also Catherine A. O’Neill, The Mathematics of Mercury, in Reforming Regulatory Impact Analysis 108, 112 (Winston Harrington et al. eds., 2009); Lisa Heinzerling & Rena I. Steinzor, A Perfect Storm: Mercury and the Bush Administration, Part II, 34 Envtl. L. Rep 10,485, 10,490 (2004).
  199. See RIA of the Final Clean Air Mercury Rule, supra note 184, at 11-13 to 11-14; RIA for the Final Mercury and Air Toxics Standards, supra note 189, at ES-1.
  200. RIA for the Final Mercury and Air Toxics Standards, supra note 189, at 4-1 to 4-2; RIA of the Final Clean Air Mercury Rule, supra note 184, at 2-8 to 2-9. Mercury emitted from power plants is carried by winds through the air and eventually deposited on water and land. Once in the water, some mercury is transformed to methylmercury, a highly toxic form of the chemical, which is ingested by organisms low on the aquatic food chain and eventually bioaccumulates in fish. There is considerable evidence that children exposed to mercury in utero from their mothers’ consumption of contaminated fish exhibit decreases in IQ. Mercury obviously can also be expected to have impacts on other species and ecosystems. EPA made no effort to quantify the ecological effects of mercury emissions, however. Id. at 2-8.
  201. See Glenn Rice & James K. Hammitt, Economic Valuation of Human Health Benefits of Controlling Mercury Emissions from U.S. Coal-Fired Power Plants 18 (2005) (estimating IQ benefits at between $194 million and $288 million annually, and cardiovascular benefits at between $3.3 billion and $4.9 billion annually).
  202. See Mercury RIA 2005, supra note 200, at 4-1.
  203. See id. at 4-46.
  204. See id. at 10-46 to 10-47.
  205. In addition to the five discussed below, these rules included Revising Underground Storage Tank Regulations—Revisions to Existing Requirements and New Requirements for Secondary Containment and Operator Training: Final Rule, 80 Fed. Reg. 41,566 (July 15, 2015); Hazardous and Solid Waste Management System; Disposal of Coal Combustion Residuals from Electric Utilities; Final Rule, 80 Fed. Reg. 21,302 (Apr. 17, 2015); Petroleum Refinery Sector Risk and Technology Review and New Source Performance Standards, Final Rule, 80 Fed. Reg. 75,178 (Dec. 1, 2015); Effluent Limitations Guidelines and Standards for the Steam Electric Power Generating Point Source Category, 80 Fed. Reg. 67,838 (Nov. 3, 2015).
  206. Lead; Amendment to the Opt-out and Recordkeeping Provisions in the Renovation, Repair, and Painting Program, 75 Fed. Reg. 24,802 (May 6, 2010) (final rule); National Ambient Air Quality Standards for Lead, 73 Fed. Reg. 66,964 (Nov. 12, 2008) (final rule).
  207. U.S. Envtl. Prot. Agency, Economic Analysis for the TSCA Lead Renovation, Repair, and Painting Program Opt-out and Recordkeeping Final Rule for Target Housing and Child-Occupied Facilities 5-12 n.5 (Apr. 2010) [hereinafter Lead Opt-out EA]; U.S. Envtl. Prot. Agency, Regulatory Impact Analysis of the Proposed Revisions to the National Ambient Air Quality Standards for Lead 5-10 to 5-11 (Oct. 2008) [hereinafter Lead NAAQS RIA].
  208. Lead Opt-out EA, supra note 207, at 5-12 n.5, 5-20; see, e.g., Lead Opt-out EA, supra note 207, at ES-8 (“Removal of the opt-out provision will provide additional protection for women who do not know they are pregnant. . . . This is particularly important because the transplacental transfer of lead in humans is well documented.”).
  209. Id. at 5-6 (“The population groups discussed below do not reflect all of the individuals protected by removing the opt-out, but instead represent groups for which calculations can be readily made.”).
  210. Id. at 5-22; 75 Fed. Reg. 24,802, 24,810, 24,811.
  211. Lead NAAQS RIA, supra note 207, at ES-7 to ES-8.
  212. Id. at ES-14. See Lead NAAQS RIA, supra note 207, at 5-11 (“Although a number of adverse health effects have been found to be associated with lead exposure, this benefits analysis only includes a subset.”); id. (“This human health benefits analysis does not attempt to estimate the changes in lead-related health effects among adults. Several key data limitations prevented EPA from quantifying these important endpoints.”). The monetized benefits were smaller than the monetized costs (though the agency was not technically supposed to consider that fact in its decision making). See id. at 1-1; Whitman v. American Trucking Ass’ns, 531 U.S. 457, 468, 471 (2001).
  213. This process involved translating expected improvements in water quality into a single numerical “water quality index,” and then using contingent valuation surveys to estimate how much residents of Florida would be willing to pay for improvements in that index. See U.S. Envtl. Prot. Agency, Economic Analysis of Final Water Quality Standards for Nutrients for Lakes and Flowing Waters in Florida 13-1, 13-10 (Nov. 2010) [hereinafter Florida WQS EA].
  214. Id. at ES-12; see also Water Quality Standards for the State of Florida’s Lakes and Flowing Waters, 75 Fed. Reg. 75,762, 75,802 (Dec. 6, 2010) (final rule) (same language). One respect in which the monetized benefits estimate was incomplete was its geographic scope: In analyzing benefits of the rule, EPA estimated benefits from nutrient reductions to Florida households only. Although residents of other states may hold values for water resources outside of their home state, if such resources have personal, regional, or national significance, EPA did not have sufficient information to estimate WTP for water quality improvements in Florida for out of state residents. As a result, the population considered in the benefits analysis of the rule does not represent all the households that are likely to hold values for water resources in the state of Florida. Even if per household WTP for out-of-state residents are small they can be substantial in the aggregate if these values are held by a substantial fraction of the population. Florida WQS EA, supra note 213, at 13-16 (emphasis added).
  215. EPA seemed to view it instead as simply a tool to give a sense of the general magnitude of the potential benefits. See Florida WQS EA, supra note 213, at ES-12 (“Although these monetized benefits estimates do not account for all potential economic benefits, they help to demonstrate the economic importance of restoring and protecting Florida waters from the impacts of nutrient pollution.”); Water Quality Standards for the State of Florida’s Lakes and Flowing Waters, 75 Fed. Reg. at 75,802 (same language).
  216. The provisions of the Clean Water Act under which this rule was promulgated are not among those, discussed supra notes 93–95 and accompanying text, that clearly prohibit the agency’s reliance on CBA in setting standards. See 33 U.S.C. § 1313(c) (2012).
  217. U.S. Envtl. Prot. Agency, Environmental and Economic Benefit Analysis of Final Revisions to the National Pollutant Discharge Elimination System Regulation and the Effluent Guidelines for Concentrated Animal Feeding Operations 3-9, 4-17 to 4-25 (Dec. 2002).
  218. Id. at 1-1, 8-7.
  219. National Pollutant Discharge Elimination System Permit Regulation and Effluent Limitation Guidelines and Standards for Concentrated Animal Feeding Operations (CAFOs), 68 Fed. Reg. 7,176, 7,240 (Feb. 12, 2003). As described by EPA in the preamble to the rule, unquantified benefits included: [Reduced] [h]uman health and ecological effects of metals, antibiotics, hormones, salts, and other pollutants associated with CAFO manure. [Reduced] [e]utrophication of coastal and estuarine waters due to both nutrients in runoff and deposition of ammonia volatilized from CAFOs. Reduced human illness due to pathogen exposure during recreational activities in estuaries and coastal waters. Improvements to soil properties due to reduced overapplication of manure, together with increased acreage receiving manure applications at agronomic rates. Reduced pathogen contamination in private drinking water wells. Reduced cost of commercial fertilizers for non-CAFO operations. Id.
  220. U.S. Envtl. Prot. Agency, Economic Analysis of the Final Revision to the National Pollutant Discharge Elimination System Regulation and the Effluent Guidelines for Concentrated Animal Feeding Operations ES-5, ES-18 (Dec. 2002). The agency also characterized the monetized benefits as comprising “only [a] subset” of the total benefits of the rule. Id. at 1-13.
  221. Standards of Performance for Petroleum Refineries, 73 Fed. Reg. 35,838, 35,840 (June 24, 2008) (final rule).
  222. Id. at 35,844. PM2.5 refers to small particles with a diameter of less than 2.5 micrometers. PM10 refers to PM 10 micrometers or less in diameter. Both are associated with significant adverse health effects.
  223. Id. at 35,862 (“It should be noted that the range of benefits estimates provided above does not include ozone-related benefits from the reductions in VOC and NO emissions expected to occur as a result of this final rule, nor does this range include benefits from the portion of total PM emissions reduction that is not PM 2.5. We do not have sufficient information or modeling available to provide such estimates for this rulemaking.”).
  224. See U.S. Envtl. Prot. Agency, Benefit and Cost Analysis for the Effluent Limitations Guidelines and Standards for the Steam Electric Power Generating Point Source Category, EPA-821-R-15-005, at 2-4 (Sept. 2015) (“[T]he total monetized human health benefits included in this analysis represent only a subset of the potential health benefits that are expected to result from the [rule].”); U.S. Envtl. Prot. Agency, Regulatory Impact Analysis: EPA’s 2015 RCRA Final Rule Regulating Coal Combustion Residual (CCR) Landfills and Surface Impoundments at Coal-Fired Electric Utility Power Plants 6-8 (Dec. 2014) (“[Q]uantified human health benefits represent only a subset of the potential health benefits expected to result from the CCR rule.”); U.S. Envtl. Prot. Agency, Assessment of the Potential Costs, Benefits, and Other Impacts of the Final Revisions to EPA’s Underground Storage Tank Regulations 4-29 (April 2015) (monetizing various avoided costs, but leaving unmonetized all human health and ecological benefits; “some larger releases may have significant cancer and non-cancer risks associated with them. Although these health effects are not able to be reliably quantified with available data, they represent additional potential benefits of the regulation”); U.S. Envtl. Prot. Agency, Economic Impact Analysis, Petroleum Refineries, Final Amendments to the National Emissions Standards for Hazardous Air Pollutants and New Source Performance Standards, EPA-HQ-OAR-2010-0682, 3-21 (Sept. 2015) (explaining that the rule was expected to reduce air pollutant emissions by “approximately 3,900 tons per year of HAP and 33,000 tons per year of VOC,” but providing no monetization of these (or any) benefits of the rule).
  225. See discussion supra Part II.A.
  226. See supra notes 92–98 and accompanying text.
  227. Uncertainty in benefits estimation poses a related problem and similarly leads to constraints on the formality of the CBA that can be meaningfully performed. Driesen, Two Cheers for Feasible Regulation, supra note 88, at 330.
  228. This was true for most of the rules in the data set. See supra notes 183–188 and accompanying text.
  229. This, of course, assumes that one is comfortable with the monetized values assigned to begin with.
  230. This constraint on CBA with unquantified benefits is sometimes overlooked. See, e.g., Sunstein, Limits of Quantification, supra note 134, at 1392 n.79 (suggesting that an analysis might be able to “select the approach that maximizes net benefits” even where significant benefits are unquantified); but see id. at 1394 (recognizing that an agency might have “a great deal of difficulty” using a breakeven analysis to “decid[e] which approach maximizes net benefits”).
  231. Indeed, the percentage could be even higher. Recall that even in the nine cases in which EPA did not explicitly describe the unquantified benefits as “important,” “significant,” or “substantial,” it appeared that they might well have been. See supra notes 206–224 and accompanying text. Moreover, EPA was also limited in most cases to a “litmus test CBA” because of the small number of alternatives the agency considered. See supra notes 83–89, 191 and accompanying text.
  232. Rather than acknowledge this challenge, agencies often simply zero out unquantifiable benefits. Indeed, in striking down the asbestos ban, the Fifth Circuit Court of Appeals encouraged the EPA to do exactly that when it famously said, “Unquantified benefits can, at times, permissibly tip the balance in close cases. They cannot, however, be used to effect a wholesale shift on the balance beam.” Corrosion Proof Fittings v. U.S. Envtl. Prot. Agency, 947 F.2d 1201, 1219 (5th Cir. 1991). Where benefits are non-monetizable solely due to incommensurability problems, Arden Rowell takes a similar view, arguing that the unmonetizable benefits should be excluded from the analysis altogether. See Rowell, supra note 164, at 741 (“[T]here is no room to allow non-monetizable benefits to affect the outcome of a monetary cost-benefit analysis.”). This is not because they are “worthless; it is because monetary cost-benefit analyses deal with money, and non-monetizable benefits, by definition, have no value that can be expressed in dollars.” Id. at 731–32. She is also careful to make clear, however, that she does not necessarily think CBA “should be the sole determinant of legal policy.” Id. at 741. Thus, in her view, incommensurable non-monetized values can be considered by agencies, just not within “the ledgers of monetized cost-benefit analysis.” Id. at 732.
  233. Some argue that a determination about whether benefits “justify” or “outweigh” costs can still be made in such an instance by performing a rough, intuitive apples-to-oranges comparison. See infra notes 237–248 and accompanying text. But there are arguments against this idea from both ends of the spectrum. Compare Driesen, Is Cost-Benefit Analysis Neutral?, supra note 8, at 401 (noting that CBA advocates “have not explained how agencies can integrate non-quantified benefits into a cost-benefit framework”), and Levin, supra note 8, at 230 (“[W]hen the sum of the limited subset of benefits that can be quantified and monetized is shown to be less than the estimated costs, it is often impossible to conclude anything about the relative magnitude of the full benefits.”), with Rowell, supra note 164, at 741 (arguing that where benefits are unquantifiable due to incommensurability, they are irrelevant and should be excluded).
  234. And of course, in such an instance there can be no meaningful calculation of net benefits, except perhaps to establish a lower bound.
  235. Even in the cases in which EPA did not explicitly label the unquantified benefits as significant, there was reason to believe that they might be. See supra Part III.D.3.
  236. OMB Circular A-4, supra note 2, at 2; see generally Sunstein, Limits of Quantification, supra note 134; Revesz, supra note 134; Farber, supra note 134; Heinzerling, supra note 134.
  237. OMB Circular A-4, supra note 2, at 2. Lisa Heinzerling argues that breakeven analysis has been selectively deployed by OIRA to “relax the evidentiary requirements of the cost-benefit test,” but only for certain favored types of rules. It is used often to loosen the requirements for terrorism and transportation safety rules, but only rarely for environmental rules. Heinzerling, supra note 134, at 1459–60.
  238. See supra notes 120–121 and accompanying text.
  239. Sinden, Reports of its Birth, supra note 101, at 10,940–41.
  240. Sunstein, Limits of Quantification, supra note 134, at 1380.
  241. Id. at 1392.
  242. Id. at 1396.
  243. Id. at 1392.
  244. Professor Sunstein warns here about the perils of what he calls “micro-disaggregation,” which he argues can “mask” the true magnitude of costs or benefits. Id. at 1402 (“It is easy to frame an expensive regulation in terms that make it appear appealing—as in, for example, the idea that for a particular rule, one that reduces real risks, every American will have to pay merely $4 per day . . . . [S]uch a rule would cost over $360 billion per year, and a rule of that kind would have adverse economic consequences whether or not it is ultimately justified.”).
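  To make the aggregation Sunstein has in mind concrete, the arithmetic is roughly the following (assuming, purely for illustration, on the order of 250 million payers, which is approximately what his “over $360 billion” total implies):
\[
\$4 \text{ per person per day} \times 365 \text{ days per year} \times 2.5 \times 10^{8} \text{ people} \approx \$3.65 \times 10^{11} \text{ per year} \approx \$365 \text{ billion per year}.
\]
  The point of his “micro-disaggregation” warning is simply that the per-person-per-day framing and the aggregate annual framing describe the identical burden, yet can produce very different intuitions about whether the rule is worth its cost.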
  245. Here, Sunstein asserts it would be easy to conclude that it would not be worth spending $10 million per water body for such modest improvements. Sunstein, Limits of Quantification, supra note 134, at 1387–88. But his assertion is arguably too facile. Imagine, for example, that the water bodies provide breeding grounds for a species of dragonfly that provides free ecological services in the form of mosquito control to a nearby heavily populated area. In such circumstances, $10 million per water body might well seem a reasonable price. See Farber, supra note 134, at 1477.
  246. To this extent, for at least some kinds of benefits, his argument is vulnerable to criticism on incommensurability grounds. See Heinzerling, supra note 134, at 1464 (“Subjecting rules that spring from statutes that are all about human dignity to cost-benefit analysis, with a dollop of dignity thrown into the mix, disrespects both the statutory scheme and dignity itself.”).
  247. Sunstein, Limits of Quantification, supra note 134, at 1404.
  248. See id. at 1393.
  249. There were two other CBAs in the data set in which EPA conducted breakeven analyses, but these analyses were aimed at accounting for uncertainty in the benefits estimate rather than benefits that were entirely unquantified. See U.S. Envtl. Prot. Agency, Economic Analysis for the Final Long Term 2 Enhanced Surface Water Treatment Rule 8-10 to 8-11 (Dec. 2005); U.S. Envtl. Prot. Agency, Economic Analysis for the Final Stage 2 Disinfectants and Disinfection Byproducts Rule 9-11 to 9-12 (Dec. 2005). These were both Safe Drinking Water Act regulations aimed at preventing contamination of public water systems by Cryptosporidium and other microbial pathogens. In both instances, the monetized benefits estimate exceeded the monetized cost estimate by a sizable margin. But because of the considerable uncertainties associated with the benefits estimates (significantly greater than the uncertainties in the cost estimates), EPA used breakeven analyses to show that, even at the 5% and 95% confidence intervals, benefits were, under most assumptions, large enough to exceed costs.
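  The logic of that kind of bounding exercise is simple: compare the monetized cost estimate against the low and high ends of the benefits distribution and ask whether the conclusion flips. The following is a minimal sketch of that comparison; every dollar figure in it is assumed solely for illustration and is not drawn from the EPA analyses cited above.

```python
# A purely illustrative sketch of a bounding (breakeven-under-uncertainty) check:
# do monetized benefits still exceed monetized costs even at the low end of the
# benefits estimate's confidence interval? All figures below are hypothetical.

annual_cost = 100_000_000  # assumed annualized cost estimate ($/year)
benefits_bounds = {
    "5th percentile": 120_000_000,   # assumed low-end benefits estimate ($/year)
    "95th percentile": 900_000_000,  # assumed high-end benefits estimate ($/year)
}

for label, benefits in benefits_bounds.items():
    verdict = "exceed" if benefits > annual_cost else "fall short of"
    print(f"At the {label}, monetized benefits {verdict} costs "
          f"(${benefits:,} vs. ${annual_cost:,}).")
```

  If benefits exceed costs even at the pessimistic bound, the uncertainty in the benefits estimate does not change the bottom-line conclusion; if the conclusion flips between bounds, the analysis is indeterminate.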
  250. U.S. Envtl. Prot. Agency, Regulatory Impact Analysis, Final New Source Performance Standards and Amendments to the National Emissions Standards for Hazardous Air Pollutants for the Oil and Natural Gas Industry 6-3 (Apr. 2012). This rule set air pollution standards for oil and natural gas production facilities, and EPA’s analysis provided at least some guideposts of both types described by Sunstein. First, while EPA did not have sufficient information on the locations of affected facilities to perform an exposure assessment, EPA was able to estimate quantitatively the aggregate amounts by which emissions of various pollutants would be reduced (12,000 tons of Hazardous Air Pollutants, 190,000 tons of Volatile Organic Compounds (VOCs), and so on). Id. at 5-2. It used these numbers to break the problem into smaller units, by calculating the dollars of benefit each ton of pollution reduction would have to produce in order to break even. Id. at 6-1 to 6-3. This was complicated by the fact that multiple pollutants were involved. Id. at 6-4 to 6-6. So EPA treated each pollutant separately, calculating the dollars of benefit per ton each pollutant reduction would have to produce in order to break even, holding the benefits of the other pollutants at zero. Id. at 6-1 (e.g., “[i]f we assume the health benefits from HAP emission reductions are zero, the VOC emissions would need to be valued at $2,900 per ton . . . for the benefits to exceed the costs”). The agency acknowledged, however, that “[o]f course, it is inappropriate to assume that the value of reducing any of these pollutants is zero,” and thus noted that “the real breakeven estimate is actually lower than the estimates provided above because the other pollutants each have non-zero benefits that should be considered.” Id. The second “guidepost” EPA employed took the form of upper and lower bounds. The agency compared breakeven dollar-per-ton values to the results of studies that had calculated approximate per ton benefit values for reductions of the same pollutants. For VOC reductions, for example, the relevant study reported the associated PM2.5 health benefits as ranging from $280 to $7,000 per ton. Id. at 1-7. That range was obviously quite large and not conclusive, since the breakeven amount ($2,900) was in the middle. Id. Furthermore, since the benefit amount depends on population size in the locality affected and the study results were drawn from a different set of geographical locations (eight urban areas) than those affected by the rule, EPA had little confidence in the accuracy of those upper and lower bounds for this context, and concluded that the study results were “not appropriate to calculate monetized benefits of these rules, even as a bounding exercise.” Id. at 4-21 to 4-22 (“While these ranges of benefit-per-ton estimates provide useful context for the breakeven analysis, the geographic distribution of VOC emissions from the oil and gas sector are not consistent with emissions modeled in Fann, Fulcher, and Hubbell (2009).”). Nonetheless, the agency concluded that “even if VOC emissions from oil and natural gas operations result in monetized benefits that are substantially below the average modeled benefits, there is a reasonable chance that the benefits of these rules would exceed the costs, especially if we were able to monetize all of the benefits associated with ozone formation, visibility, HAP, and methane.” Id. at 6-3.
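  The mechanics of that per-pollutant breakeven calculation are simple enough to sketch. The following is a minimal illustration of the method the RIA describes: the emission-reduction totals are those quoted in the note, but the net annualized cost figure, and therefore the outputs, are assumed purely for illustration and are not the RIA’s numbers.

```python
# A minimal sketch of a per-pollutant breakeven calculation of the kind
# described in note 250. Emission-reduction totals are from the note; the
# net annualized cost is a hypothetical placeholder, not the RIA's figure.

reductions_tons_per_year = {
    "HAP": 12_000,    # tons/year of hazardous air pollutant reductions (from note 250)
    "VOC": 190_000,   # tons/year of volatile organic compound reductions (from note 250)
}

net_annual_cost = 550_000_000  # assumed annualized cost net of monetized benefits ($/year)

def breakeven_value_per_ton(net_cost: float, tons_reduced: float) -> float:
    """Dollars of benefit each ton of this pollutant's reduction would have to
    be worth for total benefits to equal costs, holding the benefits of every
    other pollutant at zero (the simplifying assumption EPA describes)."""
    return net_cost / tons_reduced

for pollutant, tons in reductions_tons_per_year.items():
    value = breakeven_value_per_ton(net_annual_cost, tons)
    print(f"{pollutant}: each ton reduced must be worth about ${value:,.0f} to break even")
```

  Because each pollutant’s breakeven value is computed with the other pollutants zeroed out, the true breakeven thresholds are lower once the others’ nonzero benefits are credited, which is the caveat EPA itself flags in the passage quoted above.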
  251. See National Pollutant Discharge Elimination System—Final Regulations to Establish Requirements for Cooling Water Intake Structures at Phase II Existing Facilities, 69 Fed. Reg. 41,576 (July 9, 2004) (to be codified at 40 C.F.R. pts. 9, 122, 123, 124, and 125); National Pollutant Discharge Elimination System—Final Regulations To Establish Requirements for Cooling Water Intake Structures at Existing Facilities and Amend Requirements at Phase I Facilities, 79 Fed. Reg. 48,300 (Aug. 15, 2014) (to be codified at 40 C.F.R. pts. 122 and 125).
  252. See Sinden, Cost-Benefit Analysis, Ben Franklin, and the Supreme Court, supra note 22, at 1196; 79 Fed. Reg. at 48,410.
  253. Farber, supra note 134, at 1473.
  254. Id. at 1474–76.
  255. Id. at 1474.
  256. Id.
  257. See supra notes 83–91 and accompanying text.
  258. Sinden, In Defense of Absolutes, supra note 135, at 1454–57.
  259. I am assuming that EPA is not being disingenuous about its inability to quantify these benefits. But see Masur & Posner, supra note 7, at 92 (suggesting that agencies may have an incentive “to claim unquantified benefits even when they can be quantified” in order to argue that they are large enough to tip the balance).
  260. These results also suggest at least the possibility that, beyond the unquantified benefits identified and described by analysts conducting CBAs, there may also be a significant number of what we might call “unknown unknowns”—impacts of which we are simply unaware, and about which we do not even know to try to collect data. See DoD News Briefing Sec’y Rumsfeld and Gen. Myers, U.S. Dep’t Def. (Feb. 12, 2002), https://perma.cc/JLZ8-7CSV. If, as the above study results suggest, the known gaps in our knowledge base are sufficiently pervasive and widespread to produce “significant,” “substantial,” and “important” unquantified benefits most of the time, it may be reasonable to assume the existence of additional knowledge gaps about which we are not even aware. But see Sunstein, Limits of Quantification, supra note 134, at 1380 (suggesting that such circumstances are rare). It is not hard to imagine, for example, that at present, many consequences of ecosystem degradation remain off the radar screen as unknown unknowns. Both the services that ecosystems provide to humans and the consequences to ecosystems of pollution and other forms of environmental degradation remain vastly understudied and inadequately understood, as do the human health effects of an enormous number of chemical compounds. See Jody Freeman & Andrew Guzman, Climate Change and U.S. Interests, 109 Colum. L. Rev. 1531, 1556–60 (2009) (valuation of biodiversity loss from climate change left out of most economic models and very problematic in those models that attempt it); Michael Livermore, Can Cost-Benefit Analysis of Environmental Policy Go Global?, 19 N.Y.U. Envtl. L.J. 146, 172 (2011) (discussing the limited value of CBA in contexts involving impacts to natural resources and ecosystems because of the “particular difficulties” of valuation in this sphere). Where unquantified benefits take the form of unknown unknowns, CBA may fail altogether—that is, meaningful comparison of costs and benefits, even of the informal, Ben Franklin variety, becomes impossible. Sinden, Formality and Informality in Cost-Benefit Analysis, supra note 32, at 116.
  261. Adler & Posner, supra note 39, at 62.
  262. Id. at 65.
  263. Id.
  264. Id. at 68.
  265. Id. at 66–67.
  266. Sinden et al., supra note 44, at 58.
  267. See id.
  268. See, e.g., Sinden, In Defense of Absolutes, supra note 130, at 1432.
  269. Adler & Posner, supra note 39, at 190.
  270. Revesz, supra note 134, at 1436, 1445–51, 1456; Graham et al., supra note 166, at 526. Masur and Posner suggest that agencies actually face incentives to under-quantify in order to use unquantified benefits as a justification to promulgate rules that otherwise fail CBA. See Masur & Posner, supra note 7, at 91–92. In part on that basis, they argue that agencies should more aggressively use existing information to quantify benefits, and, failing that, pursue additional studies and/or employ Bayesian probabilities to make reasonable guesses based on staff intuition and expertise. In previous work, I have suggested that agencies seem to face the opposite incentive—to quantify more and more in order to gain credibility with the public and OIRA. See Sinden, Formality and Informality in Cost-Benefit Analysis, supra note 32, at 97. That pull toward formality and quantification can get agencies into trouble. See Sinden, Cost-Benefit Analysis, Ben Franklin, and the Supreme Court, supra note 22, at 1179. I worry that resorting to Bayesian probabilities would invite improper political motivations into the analysis, or at least the perception of them, and ultimately mire CBA in intractable controversy and gridlock.
  271. See Michael A. Livermore & Richard L. Revesz, Rethinking Health-Based Environmental Standards, 89 N.Y.U. L. Rev. 1184, 1190 (2014).
  272. See David M. Driesen, Distributing the Costs of Environmental, Health, and Safety Protection: The Feasibility Principle, Cost-Benefit Analysis, and Regulatory Reform, 32 B.C. Envtl. Aff. L. Rev. 1, 3 (2005); Jason R. Bent, Health Theft, 48 Conn. L. Rev. 637, 646–47 (2016); Dov Waisman, Equity and Feasibility Regulation, 50 U. Rich. L. Rev. 1263, 1263, 1269–70 (2016); Driesen, Two Cheers for Feasible Regulation, supra note 88, at 314; Jonathan S. Masur & Eric A. Posner, Against Feasibility Analysis, 77 U. Chi. L. Rev. 657, 657, 663 (2010); Sidney A. Shapiro & Thomas O. McGarity, Not So Paradoxical: The Rationale for Technology-Based Regulation, 1991 Duke L.J. 729, 737; Wendy E. Wagner, The Triumph of Technology-Based Standards, 2000 U. Ill. L. Rev. 83, 89–90.
  273. Sinden, Reports of Its Birth, supra note 101, at 10,937–39.
  274. Id. at 10,937–38. Note, too, that feasibility analysis defines costs more narrowly than does formal CBA, counting simply compliance costs imposed on the regulated industry, in contrast to formal CBA’s attempt—at least in theory—to count all costs to society as a whole.
  275. Id. at 10,939.
  276. See generally Livermore & Revesz, supra note 271, at 1188–89; Sinden, In Defense of Absolutes, supra note 134, at 1416, 1457–58.
  277. Masur & Posner, Against Feasibility Analysis, supra note 272, at 705 (arguing that there is no “theoretically coherent normative basis for feasibility analysis”); Adler & Posner, supra note 39, at 73–100 (arguing that CBA is merely a proxy for welfare, but that it tracks welfare more accurately than any of its competitors (except perhaps informal CBA)).