The Art of the Unsolvable: Locating the Vital Center of Science for Environmental Law & Policy

 


Abstract

Environmental science is limited by both the power of its methods and the characteristics of its subject matter. Ideal scientific problems are ones with sufficient complexity and generality to make them interesting, but not so much that they become intractable. Scientists working in fields relevant to environmental law are rarely able to select problems with an optimal balance of broad implications and potential solutions. Issues ranging from the toxicity of industrial chemicals, to the protection of endangered species, to the projected magnitude of global warming transcend existing scientific knowledge. This complexity poses an unsettling question: If scientific uncertainty is so pervasive, what exactly do scientific methods contribute to environmental policymaking? Resolving this question has proven to be exceedingly difficult, both because of the technical challenges and the high stakes. Further, the image of science that has emerged from debates over environmental policy has been distorted by expectations that are simultaneously too great and too modest. By clinging to a classical vision of science, critics set environmental science up for failure; by presuming that scientific results are primarily the product of ideology, they risk trivializing their value. This Article looks beyond the domain of environmental law to identify appropriate benchmarks for the role of science in environmental policymaking. A unique contribution of the Article is identification of finance theory and modeling as an exemplar for effective application of sophisticated scientific methods.

I. Introduction

Wildlife management is a problem that, on its face, science ought to be able to resolve handily. Yet, successful wildlife management has proven to be far from simple and is exemplary of the complex dynamics that can emerge from simple biological interactions. Wildlife populations, for example, can be modeled using a formula with just one variable,[1] but this analytic simplicity is deceptive. Non-linear feedbacks, such as the responses of predators, can cause populations to crash unpredictably.[2] This dynamic is reflected in the formula, which is stunningly sensitive to minor variations in its single parameter: a difference of just one tenth of one percent can lead to widely divergent predictions for the same management decision.[3]
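To make this sensitivity concrete, the short sketch below iterates a single-parameter population model of the familiar logistic type. It is offered only as an illustration of the sensitivity described above, not as the specific model discussed in the cited sources, and the parameter values are hypothetical.

```python
# Minimal sketch (not the cited study's model): the discrete logistic map,
# a standard single-parameter population model, showing how shifting the
# growth parameter r by one tenth of one percent yields divergent long-run
# predictions for the same starting population.

def simulate(r, x0=0.5, steps=60):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

baseline = simulate(r=3.9)           # parameter in the chaotic regime
perturbed = simulate(r=3.9 * 1.001)  # r shifted by 0.1 percent

# After a few dozen generations the two "predictions" bear little resemblance
# to each other, even though the model and starting population are identical.
for step in (10, 30, 60):
    print(step, round(baseline[step], 3), round(perturbed[step], 3))
```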

This example highlights a basic truth that is often overlooked. Science is limited by both the power of its methods and the characteristics of its subject matter. Ideal scientific problems are ones with sufficient complexity and generality to make them interesting, but not so much that they become intractable. Identifying good scientific problems is therefore essential to success as a scientist and to successful science. In this light, “[i]f politics is the art of the possible, [scientific] research is surely the art of the soluble.”[4]

Scientists working in fields relevant to environmental law are rarely able to select problems with an optimal balance of broad implications and potential solutions. Escaping from the aridity of the laboratory comes at a steep price: the inchoate swamp of the natural world. Issues ranging from the toxicity of industrial chemicals, to the protection of endangered species, to the projected magnitude of global warming transcend existing scientific knowledge.

This complexity poses an unsettling question: if scientific uncertainty is so pervasive, what exactly do scientific methods contribute to environmental policymaking? Resolving this question has proven to be exceedingly difficult, because of both the technical challenges and the high stakes involved. Typically, it is answered in the negative (folks know bad science when they see it), which more often than not simply involves dissecting the inevitable gaps in an opponent’s scientific methods.[5]

The resulting war of attrition has spawned a corrosive brand of skepticism fueled by vague terms, such as “sound” or “junk” science, that are used to label science as either good or bad.[6] More recently, it has led to dubious legislative actions, such as the Data Access Amendment (or “Shelby Amendment”) and the Information Quality Act, that purport to be good-science reforms.[7] Both of these laws give the appearance of enhancing peer review and oversight of regulatory science, but their primary utility is as tools for partisan challenges to agency science. They appear, if anything, designed to heighten strife and to create new barriers to the effective use of science in regulatory decision making.[8]

These types of reforms succeed, in part, because of long-standing misconceptions about science. Critics on both sides of the debate, for example, baldly challenge environmental science for being reductive (a position akin to criticizing a painting by Picasso for its failure to represent its subject matter realistically) and ignore the unavoidable epistemological constraints.[9] Arthur Leff has framed the dilemma incisively: “the less [a scientist] accepts as relevant, the less he can say that is not misleading; the more he accepts as relevant, the less he can say at all.”[10] Environmental science is vulnerable to attack because striking this balance so often rests on tenuous grounds.[11]

Determining the proper role of science is complicated further by the thorny moral questions that are interwoven with methodological considerations.[12] Most risk assessments, for instance, focus on certain risks of human mortality, such as contracting cancer, while omitting other mortality risks and only rarely considering morbidity.[13] Yet, regardless of whether the relevant data are obtainable, undercounting potential risks to human health will skew the analysis. This blurring together of difficult methodological and moral judgments has exacerbated controversies over environmental science.

The image of science that has emerged from this debate is distorted by expectations that are simultaneously too great and too modest. By clinging to a classical vision of science, critics set environmental science up for failure; by presuming that scientific results are primarily the product of ideology, they risk trivializing their value.[14] These polarized views have left the debate mired between a world of inviolable, deterministic science and an overly cynical one in which science cannot be trusted unless it is purified of all corrupting influences.

This Article develops an alternative account of what science offers environmental policy. As prefigured above, the simple answer is that the power of science depends on the nature of the problem and the strength of the tools available to analyze it. Good science ranges from the highly precise and accurate methods found in the hard sciences[15] to heuristic models based on objective aggregating methods that expose general patterns in complex systems.[16] Science is thus inherently pluralistic, as the different scientific disciplines attest, and a unitary conception of environmental science is neither a desirable end nor a viable goal.[17]

It follows from this pluralistic view that a general standard for judging scientific results does not exist. Even the most widely accepted scientific convention, that empirical findings be statistically significant at the ninety-five percent confidence level, is not universal.[18] To give just one example, subtle effects can matter a lot in environmental policy. Just as stealing a penny from every bank account in the United States would make you rich, weak effects spread over large populations can, in the aggregate, have significant consequences. In such cases, the threshold for statistical significance will rarely be met, but this failure only confirms the subtlety of the effect, not its absence. This does not diminish the value of statistical testing; it shows only that scientific standards cannot be applied mechanically and that, similar to legal rules, exceptions to them will exist.
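A small simulation can make the penny-stealing point concrete. The figures below are hypothetical, drawn neither from this Article’s sources nor from any actual study; they simply show how an excess risk too weak to reach conventional significance in a feasible study can still translate into a large number of cases across a national population.

```python
# Illustrative only; the rates and sample sizes are hypothetical. A small
# excess risk that a feasible study cannot distinguish from chance can still
# mean many cases when spread across a large population.
import math
import random

random.seed(1)

baseline_risk = 0.010      # assumed background rate of some harm
excess_risk = 0.001        # assumed additional risk from an exposure
study_size = 5_000         # exposed subjects in a hypothetical study
population = 300_000_000   # size of the exposed population

# Simulated study: observed cases among the exposed vs. expected background.
cases = sum(random.random() < baseline_risk + excess_risk for _ in range(study_size))
expected = baseline_risk * study_size
# Rough z-score against the background rate (normal approximation).
z = (cases - expected) / math.sqrt(study_size * baseline_risk * (1 - baseline_risk))
print(f"observed {cases} vs. expected {expected:.0f} cases, z = {z:.2f}")
# With these numbers, z usually falls short of 1.96, the conventional
# threshold for statistical significance.

# Yet the same excess risk, spread over the whole population:
print(f"implied additional cases nationwide: {excess_risk * population:,.0f}")
```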

This Article seeks to identify benchmarks for science that respect the contingencies of environmental problems (and policies) without lapsing into a self-defeating form of scientific relativism. The Article begins by examining the controversy over the role of science in environmental law and placing it in a broader context by drawing on parallel debates in finance theory and ecology. It then argues that relatively simple models that embody aggregate patterns observed in a system, supplemented by narrower, more realistic assessments, are essential to understanding even the most complex environmental problems. The Article concludes by briefly identifying misconceptions that unnecessarily exacerbate the gulf often perceived between social values and quantitative methods.

II. Science Beyond Environmental Law and Policy

The risks posed by industrial chemicals represent an extreme example of how intractable scientific problems can be. The methods available for testing chemicals are hampered, above all, by the complex biology of chemical toxicity and its sensitivity to context.[19] Matters are made worse by the subtlety of the effects, which frequently involve harms that are manifest in one person out of thousands.[20] The absence of effective testing methods has, in turn, impeded scientific understanding of the mechanisms underlying toxic responses that could aid in developing new experimental protocols or strengthening existing ones.

The stark nature of these uncertainties, combined with the human drama associated with toxic chemicals, has made toxics regulation a particularly salient issue politically. Failed or faulty regulation of industrial toxins has been the poster child, and at times the whipping post, for the false promise of science in environmental policy.[21] Thus, if commentators wish to expose the evils of “junk science” or to dramatize the significance of value judgments in technocratic approaches to policymaking, toxic risk assessment is the example of choice.[22]

Toxics issues have had a powerful effect on the current understanding of environmental science because of this high visibility. Rachel Carson’s seminal book Silent Spring described the environmental harms of pesticides; the Three Mile Island meltdown threatened to release radioactive materials into the environment; and both Love Canal and Bhopal involved widespread exposure to toxic chemicals.[23] Current debates over genetically modified foods and nanotechnology also implicate toxic chemicals.[24] Unfortunately, the salience and uncertainties of toxics issues have polarized the debate and fueled the misperception that science is binary, either good or bad, when science actually spans a broad spectrum of degrees of accuracy and precision.

This section begins with a short discussion of the debate over the science of toxic substances and then looks beyond the domain of environmental law to identify appropriate benchmarks for the role of science in environmental policymaking. The logic of this strategy is straightforward. Just as complex problems are made more accessible by studying simple variants, so too will it be easier to evaluate scientific methods by studying them when their use is not limited by political pressures or resources.

A unique contribution of this Article is identification of such a field: finance theory and modeling. Financial markets provide an exemplary test bed for the practical application of sophisticated scientific methods. They equal, or exceed, the complexity of many natural systems, as suggested by the long history of economics and ecology influencing each other.[25] Equally important, the quantitative skill of financial modelers is superlative, and the resources on Wall Street are unrivaled. Even the strict instrumental rationality of financial analysts is a virtue, as it rigorously selects for scientific methods that work. Unlike in environmental science, though, the moral implications of financial models are remote. Collectively these factors create a relatively unconstrained, pragmatic context for applying scientific methods to complex problems, and therefore for understanding the limits of their potential value.

A. Environmental Science in the Shadow of the Toxics Debate

Toxics regulation is intertwined with the long-standing debate over risk assessment, which is the broad analytical framework in which toxicological studies are utilized to establish environmental standards. The uncertainties inherent in risk assessment methods and chemical toxicology are infamous.[26] More than twenty years ago, the National Research Council identified almost fifty decision points in risk assessments for which “inference options” necessitate choosing between several scientifically plausible alternatives that cannot be resolved given existing uncertainties.[27] Legal scholars have frequently pointed to these inferential gaps to challenge the scientific authority of risk assessment methods and to object to them as implicating social values that transcend scientific expertise.[28]

These uncertainties and the resulting regulatory delays have very real consequences.[29] Failures to protect European workers against asbestos exposures before 1980, for example, may result in 250,000 additional cancers.[30] This failure, like many others, demonstrates the limits of epidemiological methods.[31] For example, even among the most heavily used chemicals, toxicity testing is sparse; there are no publicly available toxicity data for forty-three percent of the chemicals used in the highest volumes, and developmental toxicity testing is available for a mere seven percent.[32] Moreover, scientists are pessimistic about the prospects of achieving major advances through improvements in current toxicological test methods.[33]

Criticism of risk assessment methods crosses political lines. Supreme Court Justice Stephen Breyer is among those who have chronicled the many limitations and assumptions built into toxicological test methods.[34] Noting that animal studies, which dominate toxicological testing, are often subject to greater uncertainties than studies of humans, Justice Breyer goes on to describe their specific weaknesses:

The investigator applies a high dose of a supposed carcinogen to the animals; if they develop a higher than average number of tumors, the analyst tries to extrapolate backward to low doses in humans. What assumptions shall be made in doing so? What extrapolation model should be used? Risk analysts tend to use, for both animal and epidemiological studies, a linear model, which extrapolates backward on a straight line . . . . Critics argue that to use such mathematical models is like saying “If ten thousand men will drown in ten thousand feet of water, then one man will drown in one foot of water.”

The critics are right, in that there is no consistent scientific rationale for assuming a linear relation between dose and response. Some substances, such as cyanide, are proportionately as deadly in small doses as large ones; others, such as butter, are harmful only when consumed in large quantities; while still others, such as iodine, kill in high doses, are harmless in small doses, and in tiny doses are necessary for life. Science very often does not tell us which of these examples best applies.[35]

Justice Breyer’s characterization amounts to a clear indictment: it suggests that chemical toxicity estimates, at a certain point, are little more than educated guesses.

In truth, the steps that Justice Breyer discusses are just the beginning of a longer process in which qualitative judgments are often determinative. Most assessments of whether a chemical is harmful are based on multiple studies, each with its own limitations and qualitative differences. Aggregating the results of different studies therefore entails making difficult judgments about the relative weight to be given to each study,[36] and these judgments are made independently of the preceding quantitative analysis.[37] Agencies use a simple strategy to overcome such knowledge gaps: scientific decree by consensus, in which committees of scientists review the existing studies and produce a consensus opinion on the numerical potency of a toxic chemical.[38]

Notwithstanding Justice Breyer’s criticisms, the methods embodied in toxic risk assessment are standard scientific fare, although applied in a context of extremely limited data and marginal knowledge. Scientific uncertainty comes in two basic forms: 1) uncertainties in the mathematical form of the model selected to represent a phenomenon, here chemical toxicity, and 2) small, random uncertainties that derive from the limits of the experimental methods and the many low-level influences that may affect the phenomenon (e.g., variation in individual susceptibility and attenuated environmental effects).[39]

Justice Breyer focuses on model uncertainty and the dubious grounds for the simple linear model used in toxic risk assessments.[40] As he notes, the uncertain basis of this model can result in predictions that err by several factors of ten thousand.[41] In toxic risk assessments, model uncertainty dwarfs the random uncertainties that are the focus of standard statistical methods.[42] These dramatic uncertainties arise because toxic risk data provide little or no information about the nature of the relationship between exposure level and harm.[43] The only types of studies currently possible are conducted at high exposure levels (typically the only “low” exposure level is zero), and no basis consequently exists for discriminating between the competing models.[44]

This problem is analogous to attempting to determine the driving distance between two cities without any information about the topography between them. Clearly, the answer will vary markedly according to whether the landscape is flat or mountainous and whether there are any intervening bodies of water. The model used in toxic risk assessment is the equivalent of drawing a straight line between the two cities and treating this as the presumptive driving distance. It is important to understand that this lack of information implies that model uncertainty cannot be systematically quantified.[45] All that one can do, as Justice Breyer does, is provide estimates using a range of different models and then compare their results. Moreover, as the complexity of a phenomenon increases, so too does the range of potential models and, presumably, the differences between their predictions.
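The sketch below illustrates this comparative exercise. It calibrates three plausible dose-response shapes to the same hypothetical high-dose result and compares their predictions at a low dose; the shapes and numbers are illustrative only and are not drawn from any actual risk assessment.

```python
# A sketch of the model-comparison exercise described above: three candidate
# dose-response curves, each calibrated to the same (hypothetical) high-dose
# observation, give sharply different answers at a regulatory low dose.
high_dose, observed_risk = 100.0, 0.20   # hypothetical animal-study result
low_dose = 0.1                           # exposure level of regulatory interest

def linear(d):
    # Linear, no-threshold extrapolation through the high-dose point.
    return observed_risk * d / high_dose

def threshold(d, t=1.0):
    # Assumes no harm below a threshold dose t.
    return 0.0 if d < t else observed_risk * (d - t) / (high_dose - t)

def sublinear(d):
    # A power-law (dose-squared) shape calibrated to the same point.
    return observed_risk * (d / high_dose) ** 2

for name, model in [("linear", linear), ("threshold", threshold), ("sublinear", sublinear)]:
    print(f"{name:>10}: predicted low-dose risk = {model(low_dose):.2e}")
# All three fit the single high-dose data point exactly, yet their low-dose
# predictions differ by several orders of magnitude; this is the model
# uncertainty that the available data cannot resolve.
```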

Standard statistical methods, by contrast, allow scientists to discriminate between hypotheses, but only where the data are sufficiently robust.[46] When the uncertainties in a given data set are much greater than the small, random effects that statistical methods are designed to overcome, as is the case for many chemicals, numerous hypotheses will be indistinguishable. The upshot of these experimental constraints is that while statistical methods can help to determine whether an effect is likely to exist at all, they cannot resolve the actual relationship between chemical exposure level and harm with a determinate degree of precision or accuracy. Moreover, unlike the cities example, where a straight-line estimate at least provides a lower bound on the distance, toxicity estimates are only bounded on the low end by zero because the test conditions (very high exposure levels) tend to be so extreme.

A second kind of “model” is also used in toxic risk assessments. Most toxicity testing is conducted on “animal models,” where animals function as living models of humans.[47] Animal testing has been controversial, however, due to the many differences between humans and animals, most notably in metabolic processes that break down chemicals.[48] Use of animal models thus adds an inferential leap. Scientists have to assume not only that the nature of harmful effects at high doses is the same at low doses, but also that a chemical will have the same type and magnitude of harmful effects in a test animal as in a human.

Use of animal models embodies a different type of statistical aggregation: aggregation by class. In this case the class is mammals believed to have susceptibilities to toxins similar to those of humans. Use of animal models also introduces a new source of uncertainty. While human susceptibility varies across populations, the much broader class of human-like organisms can be subject to far greater heterogeneity, which makes identifying reliable associations even more difficult. In essence, the inferences that scientists must make are two steps removed, mediated by biological and analytical models of uncertain validity.

These gaps reflect an inescapable barrier. Scientists will never be able to observe the effects of most toxic chemicals at low doses using current test methods. Even with much larger studies involving many thousands of test animals, scientists would run into the problem of having to control for innumerable low-level effects that would also be present. The direct harm caused by most chemicals at exposure levels relevant to regulation is just too subtle to drag out of the background noise.[49] Only identification of entirely different test methods, such as a test that measures the activation of a harmful biological process, will enable scientists to discover the nature and magnitude of such low-level effects.

The preceding observations lead back to the question that motivated this Article, namely, what purpose science is serving. I would argue that it serves a quite limited, but nevertheless important, one. The observations of harm in test animals subjected to high exposures of a chemical provide important objective evidence of harm, and as such the results of a carefully conducted animal study can be extremely valuable.[50] Thus, at the very least, these studies provide some, albeit limited, solid ground from which to work. Perhaps more importantly, they also provide an initial basis and motivation for conducting further work toward understanding the underlying phenomena.

The real challenge is in determining how to use these studies, that is, in deciding what inferences are justifiable. In the short run, the only honest answer is probably that very few inferences can be ruled out and that no inferences can be made with much confidence. The catalog of chemical effects that Justice Breyer describes above is representative of the potential relationships between dose and harm, and thus illustrates the range of potential inferences one could make from animal studies in the absence of more specific knowledge.[51] Sometimes additional knowledge does exist, for example, in the case of certain cancers. More often, however, these inferences are driven by non-scientific factors, such as the desire to make conservative assumptions that are protective of public health, or by economic considerations.

This is admittedly a slim reed upon which to base a major regulation. However, as the preceding laundry list of assumptions and qualifications suggests, toxic risk assessment represents a worst-case scenario for the application of scientific methods. The nature of the problem (small effects and large threatened populations) requires scientists to adopt numerous simplifying assumptions and abstract from detailed dynamics using the few observable patterns of harm that are available.[52] The long-standing focus of the debate over environmental science and chemical toxicity is thus analogous to evaluating an athlete solely based on the sport at which she is weakest. While this example illustrates the limits of science, it is also one-sided and can obscure the power that scientific methods have to address even very complex problems. The sections that follow seek to demonstrate this often overlooked capacity.

B. Scientific Pragmatism on Wall Street

To avoid any misperceptions, I will address at the outset the disparate contexts in which environmental science and financial modeling are conducted. In the financial sector, maximizing profits is the basic objective around which financial models are built.[53] Technical disagreements will often exist, but they will be testable and, at least in principle, resolvable. By contrast, the objectives of environmental policy are hotly debated and fraught with difficult value judgments.[54] This overlay leads to intense disagreement over the assumptions embedded in environmental science, such that a good model for one person may be viewed as disastrous by another.[55] The debate over the assumptions found in toxic risk assessment exemplifies the contentiousness of these battles.

These differences, while important, are tangential to my use of financial modeling here. While they clearly add a layer of constraints to environmental science, my objective with this Article is to identify, in effect, a high-water mark for what we can expect of science in complex regulatory settings. In other words, if regulators were not limited by resources, expertise, or conflicting objectives, how would science be used and what approaches would dominate? Consequently, for my purposes, the single-mindedness of the financial sector is a virtue that does not invalidate the parallels that I am seeking to draw between the two disciplines. What I will show is that the basic methods are essentially identical to those used in environmental science.

It may nevertheless still seem incongruous, on scientific grounds, to discuss financial modeling in conjunction with environmental science. Yet the similarities are much greater than one might expect, as evidenced by the long and notable history of interactions between the biological sciences and economics.[56] Market theory, for example, owes a great debt to the “survival of the fittest” concept drawn from Darwin’s theory of natural selection.[57] Similarly, mathematical theories of optimization, particularly game theory, have been exported from economics to biology, where they have proven to be tremendously influential.[58]

These connections are sustained by a number of core analytical problems. Financial analysts, like biologists, must contend with enormously complex systems that vary over time, contain highly heterogeneous elements, and involve many non-linearities.[59] As we saw in the introduction, feedbacks can make systems highly sensitive to local conditions that, in turn, may be subject to significant temporal variations. Put another way, each component is both subject to the influence of surrounding elements and part of the changing environment that these other elements experience. As a consequence, the dynamics are more those of a crowd than the bounded motion of a ball moving down an inclined plane.

A byproduct of this complexity is the largely unconstrained behavior that results. Just as legal discretion expands with the number of factors a judge is permitted to consider, so too does the vast number of variables at play in economic systems cause their evolution to be effectively unbounded. This open-ended nature introduces an element of contingency and limits the role of traditional scientific methods. A basic premise of traditional scientific methods is that natural systems operate according to certain laws and that they display a discrete set of patterns that reflect the characteristics of the laws that govern them.[60] By contrast, movements of stock prices are dependent on “unique historical ‘accidents’ that cannot, in principle, be predicted.”[61]

None of these obstacles has impeded the use of quantitative methods in finance theory or day-to-day financial analysis. To the contrary, financial modeling is increasing in importance on Wall Street.[62] During the 1980s and 1990s, there was an infusion of quantitative analysts with Ph.D.s in physics, mathematics, and computer science.[63] The development of more exotic financial instruments, particularly the growth in options trading and hedge funds, has added further impetus to using quantitative methods.[64]

Financial analysts acknowledge that their models cannot be fully verified and that they are inevitably partly true and partly false. They self-consciously distinguish their methods from those used by hard scientists and engineers:

In engineering . . . optimization is sensible because each scenario is precisely understood, and you’re trying to find the best one. In financial theory, in contrast, each scenario is imprecisely wrong . . . . While averaging may cancel much of the [errors in a model], optimization tends to accentuate your lack of knowledge.[65]

In other words, if you begin with a set of assumptions that you know to be partially false, seeking the putatively optimal solution based on them stands to magnify the effects of these starting imperfections.
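A toy simulation illustrates the point. The parameters below are hypothetical; the example simply shows that when the inputs to a model are estimated with error, selecting the putatively optimal choice chases that error, while averaging across assets largely cancels it out.

```python
# Illustrative simulation (hypothetical parameters) of why optimization can
# "accentuate your lack of knowledge": when expected returns are estimated
# with error, putting all weight on the asset that looks best chases noise,
# while a simple average across assets largely cancels the noise out.
import random
import statistics

random.seed(7)
n_assets, true_return, noise = 20, 0.05, 0.04   # all assets truly identical

optimized, averaged = [], []
for _ in range(2_000):
    estimates = [true_return + random.gauss(0, noise) for _ in range(n_assets)]
    best = max(range(n_assets), key=lambda i: estimates[i])
    # "Optimal" portfolio: believe the estimates and hold only the winner.
    optimized.append(estimates[best])        # what the optimizer expects to earn
    # Naive portfolio: average the estimates across all assets.
    averaged.append(statistics.mean(estimates))

print(f"true return available:       {true_return:.3f}")
print(f"optimizer's expected return: {statistics.mean(optimized):.3f}")
print(f"averaged expected return:    {statistics.mean(averaged):.3f}")
# The optimizer systematically overstates what it will earn (every asset's
# true return is 0.05) because it selects on estimation error; averaging
# does not.
```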

Financial modelers rely on a combination of phenomenological models, which are derived by analogy from preexisting models for other systems, and statistical methods.[66] In the former, scientists identify well-understood phenomena with similar characteristics and then revise the parameters of an associated model using financial data.[67] A vivid example of this approach in physics is the use of the equations that describe a drop of water as a model for the nucleus of an atom.[68] Only where an analogous system is unavailable will modelers resort to statistical methods, which, being the bluntest analytical tools, have the lowest fidelity.[69]

One of the most successful models in finance is the Black-Scholes model for option pricing,[70] which resolved a notoriously difficult problem in finance theory.[71] The simplicity and elegance of the model are stunning, given the complexities. The Black-Scholes model is based on several standard economic metrics, including the price of the stock on which an option is based, the volatility of this stock, the duration of the option, and the current interest rate on treasury bills.[72] In fact, in part because of this parsimony, Black and Scholes had difficulty publishing the original paper describing it.[73]

The two economists succeeded by using an aggregation method analogous to those used in environmental science. Their model is based on two key insights: 1) that the market price of a stock could be used as a surrogate for the risk premium on an option,[74] and 2) that the price of an option could be derived by constructing a portfolio consisting of stocks and risk-neutralizing options.[75] These insights allowed them to identify surrogate variables and, using these variables, to derive a simple formula for option pricing.
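For concreteness, the sketch below implements the standard Black-Scholes formula for a European call option using essentially the inputs described above, together with the option’s contractual strike price. The example numbers are hypothetical.

```python
# The standard Black-Scholes formula for a European call option, using the
# inputs the text describes (stock price, volatility, duration, risk-free
# rate) plus the contractual strike price. Example values are hypothetical.
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(stock, strike, years, rate, volatility):
    """Fair value of a European call under the Black-Scholes assumptions."""
    N = NormalDist().cdf
    d1 = (log(stock / strike) + (rate + 0.5 * volatility**2) * years) / (volatility * sqrt(years))
    d2 = d1 - volatility * sqrt(years)
    return stock * N(d1) - strike * exp(-rate * years) * N(d2)

# A $100 stock, $105 strike, six-month option, 5% treasury rate, 20% volatility.
print(round(black_scholes_call(stock=100, strike=105, years=0.5, rate=0.05, volatility=0.20), 2))
```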

Their approach necessitated using a portfolio of stocks and options, as it was only by considering them in aggregate that stable associations were detectable; the inherent variability of individual stock prices otherwise obscured these relationships.[76] Similar to the scaling up in dose and averaging over numerous test animals in toxic risk assessment, financial modelers studied aggregates of stocks and options to identify stable associations. An important difference emerges at this point, however, as financial analysts routinely work with large portfolios of stocks and options. Thus, unlike in the highly artificial test conditions in the toxics example, the scale (level of aggregation) of the testing to validate the Black-Scholes model matches the scale at which the model is used in practice and decisions are made.

The Black-Scholes model, although empirically grounded, also reflects several idealizations of market dynamics. It falsely assumes, for example, that interest rates are fixed, and as financial modelers readily acknowledge, “[t]he real world violates most of the principles of options theory.”[77] In this respect, the Black-Scholes model operates much like the Coase theorem in law and economics, whose central assumption (costless transactions) is obviously false, but whose virtue is providing an intuitive framework for thinking about more realistic or specific scenarios.[78] Similar to the Coase theorem, it is the Black-Scholes model’s conceptual tractability, not its accuracy, that drives analysts to use it.[79]

Emanuel Derman, a former Wall Street investment banker turned professor of finance theory, explains the reasoning behind this success as follows:

[A]ll Black-Scholes requires of you is your opinion about future [price] volatility. It then converts your conceptual thoughts about future uncertainty into a fair dollar value. This is no black box or voodoo model; it’s reason transmuted to numbers, and that’s the right way for a model to work.

. . . .

. . . [B]etter to have market models with variables and factors you can name and whose nature you can grasp and opine about, than to have black-box models that dictate actions without a perceived structure.[80]

Derman’s point is twofold. First, because no financial model can be perfectly accurate, applying and using models will entail difficult interpretative judgments; model results cannot be read off mechanically. Second, analysts cannot interpret a model’s results without having an intuitive understanding of its parameters and its functional properties. As such, models like Black-Scholes operate as useful conventions against which judgments are made and other models constructed, not as rigid formulas for determining actions.

Truth in this context is pragmatic, and the success of a theory, at least initially, owes as much to persuasion and consensus as it does to a model’s putative efficacy.[81] In this mode, scientific methods function more as tentative forecasting tools that must be used skeptically and wisely. Derman describes them aptly as generating

[a] collection of parallel thought universes you can explore. Each universe should be consistent, but the real financial and human world is going to be much more complex than any of them. You’re always trying to shoehorn the real world into one of them to see how useful that approximation is.[82]

Nevertheless, models, like academic theorizing generally, can be taken to extremes or reified. The financial sector is as subject to such overreaching as any field of human endeavor. This tendency was displayed spectacularly in 1998 with the near meltdown of Long-Term Capital Management (LTCM), a hedge fund run by an elite group of financial analysts and economists that included two Nobel Laureates.[83] LTCM lost more than four billion dollars in six weeks following an unexpected economic retrenching of the Russian government.[84]

The story of the LTCM debacle is not a simple one, though. While there is certainly evidence that the principals had unfounded confidence in their models, it is by no means clear that similar lapses can be averted.[85] Indeed, several hedge funds have lost huge sums of money in subsequent years.[86] These failures may be par for the course; the underlying dynamics are extremely complex, data are scarce, and systematic testing is either limited or impossible.[87]

Much therefore depends on the wisdom of financial analysts. After all, no model can be taken literally, and interpreting models will inevitably be part art and part science. Derman, once again, captures the spirit of the enterprise and the difficult judgments inherent in it:

The success of options valuation is the story of a simple, asymptotically correct idea, taken more seriously than it deserved and then used extravagantly, with hubris, as a crutch to human thinking.

. . . .

. . . But the catastrophes of options valuation are the obverse side of the same coin, when people pay more attention to formulae than ideas, so that extravagance evolves into idolatry.

Somewhere between these two extremes, north of hubris but still south of idolatry, lies the wise use of models.[88]

The practice of financial modeling discussed above exposes a basic truth: realism is of relatively limited value in complex decision-making settings. Despite their high levels of sophistication, resources, and incentives, financial analysts use remarkably simple models to predict and understand the behavior of complex market dynamics. This strategy is driven by a practical insight: it is much harder to interpret and test the validity of a complex model than a simple one. As Wall Street analysts have learned, the better part of valor is to build models around the few simple patterns evident in complex systems; otherwise, the number of potential solutions and variables rapidly overwhelms the data and human cognitive capacities.

These limits should not be interpreted as implying that complexity can be ignored. To the contrary, experience in financial markets, as the LTCM case illustrated, reveals that tragic errors all too often follow from reifying simple models. Furthermore, while large financial losses are far from benign, they all ultimately involve money, and often the money of wealthy individuals who can withstand such losses. The same is not true of environmental policy, where human lives and irreplaceable resources are often at stake. As alluded to at the beginning of this section, these qualitative differences alter the risk calculus and thus not only how models are used, but the models themselves.

In either context, though, it is up to decision makers and analysts to remain cognizant of the limits of the models on which they rely and to be vigilant in determining whether changing conditions require that they be reassessed. In general, consideration of subtler or rare influences will be secondary in quantitative models, and difficult qualitative judgments will dictate when such influences need to be factored into an analysis. The difficulty of these judgments makes it all the more important for decision makers to understand how and why scientific models are used, as well as what makes them more or less reliable.

C. Bridging the Newtonian-Darwinian Divide in Environmental Science

Herbert Simon and Allen Newell long ago observed that scientific work is subject to two opposing pulls: “On the one side, a powerful attraction is exerted by ‘good problems’ . . . On the other side, strong pulls are exerted by ‘good techniques.’”[89] They then went on to warn that when these two pulls fall out of sync, “science is threatened by schism. Some investigators will insist on working on important problems with methods that are insufficiently powerful and that lack rigor; others will insist on tackling problems that are easily handled with the available tools, however unimportant those problems may be.”[90]

The difficult problems raised by environmental policy have promoted a similar schism in environmental science.[91] Missing in the current debate is a clear conception of the reliable, though still contingent, center where scientific methods have sufficient power and rigor to be useful. Further, without a clear conception of good science, environmental science is much more vulnerable to the political battles that dominate environmental law and policy.

Drawing on the preceding discussion, this section describes the basic contours of the scientific practices required to address complex environmental problems effectively. A basic premise of all models is that unimportant details must be suppressed (or averaged out) because they obscure the few stable patterns that can provide a conceptual foothold for understanding complex systems and predicting their behavior.[92] Put more simply, just as maps omit secondary roads and focus on primary routes for long distance travelers, so too do statistical methods, and scientific models more generally, focus on the variables with the greatest relevance and clearest associations.

Statistical methods are used to detect patterns at multiple levels.[93] As we have seen, a basic strategy in science is to scale up along different dimensions, such as time, space, or class of things.[94] For example, toxicological methods scale up by averaging over many lab animals (even though significant variation in susceptibility exists within a species) as well as by increasing the magnitude of the exposure (i.e., high doses).[95] Ecologists aggregate over time, space, and across species, while economists aggregate across industries, financial instruments, and time.[96] These different forms of aggregation serve the same purpose, namely to expose the few stable associations that may exist in a system by averaging out effects of only localized significance.

The patterns that scientists identify through this basic strategy are every bit as real as the natural laws discovered in the hard sciences. However, rather than being comprehensive, they extract the simple dynamics found in complex systems. Statistical models are just a generic form of mathematical model that scientists use when they lack mechanistic information. They function as dynamic maps or, perhaps more aptly, simple cartoons of reality that exploit the information contained in the stable patterns and important variables of a system. Models highlight what scientists believe (and hope) are the salient dynamics in a system or, expressed another way, summarize the useful information in a system.[97]

Echoing Derman’s comments above, Simon Levin, an ecological modeler and theorist, describes this approach with characteristic clarity:

This is the principal technique of scientific inquiry: by changing the scale of description, we move from unpredictable, unrepeatable individual cases to collections of cases whose behavior is regular enough to allow generalizations to be made. In so doing, we trade off the loss of detail or heterogeneity within a group for the gain of predictability; we thereby extract and abstract those fine-scale features that have relevance for the phenomena observed on other scales.[98]

It would be foolish, for example, to try to study the behavior of a gas by attempting to follow the motion of every single gas molecule. There are just too many of them. In this simplest of all systems, scientists aggregate; that is, they look at the characteristics (for example, temperature) of large “ensembles” of molecules to derive a set of variables that are representative of the group.[99] In this case, it is just the sheer number of objects that necessitates summarizing the dynamics in this manner. Further, the perfect fungibility of atoms means that the summary variables capture the relevant information about the system with a high degree of generality, accuracy, and precision; not much of importance is lost in translation.[100]
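A toy calculation illustrates the ensemble idea. The distribution below is a stand-in rather than a physical model of molecular motion; the point is only that summary variables computed over a large ensemble are stable even though individual values are not.

```python
# A toy version of the ensemble idea: individual molecular speeds are
# effectively unpredictable, but summary variables computed over a large
# ensemble barely change from sample to sample. Numbers are arbitrary.
import random
import statistics

random.seed(3)

def sample_speeds(n, scale=500.0):
    # Stand-in for a distribution of molecular speeds (not a physical model).
    return [abs(random.gauss(0, scale)) for _ in range(n)]

for trial in range(3):
    speeds = sample_speeds(200_000)
    # Any two individual molecules differ wildly...
    print(f"trial {trial}: first two speeds {speeds[0]:7.1f} {speeds[1]:7.1f}",
          # ...but the ensemble mean (a temperature-like summary) is stable.
          f"| ensemble mean {statistics.mean(speeds):7.1f}")
```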

An implication of this approach is that not all levels of abstraction for analyzing a problem are created equal.[101] For example, it may be futile to attempt to understand biodiversity by tracking populations of individual species. Consideration of scale is therefore essential to basic scientific understanding and to practical problems of effective environmental management. In fact, the two are inextricably linked because identification of strong associations through basic scientific studies makes environmental management possible.

The work of John Harte, another prominent ecological theorist, exemplifies this approach. Similar to the views expressed by financial modelers, Harte is skeptical of the current “infatuation” with highly complex models because they become as “inscrutable as nature itself,” and are immune to testing and refinement.[102] Like Derman, Harte believes that simple models have the virtue of being readily interpretable, which can provide scientists with an intuitive sense of when and how they are likely to go wrong.[103]

Identifying the stable patterns that exist in complex systems is just the start of a much longer process, however. Simple models on their own are of limited value if scientists do not also develop an understanding of the underlying mechanisms.[104] In the biological sciences, Harte argues that the primary means for obtaining this information is discrete field studies that “combine . . . the natural-history component of ecology[] with the experimental manipulations that are essential to testing putative mechanisms.”[105] Once confirmed, this mechanistic knowledge can then be fed back into general models to improve their reliability and predictive power, as well as to enhance scientists’ ability to interpret them reliably.[106]

Scientific efforts to resolve the primary drivers of lake eutrophication provide an illustrative example of this approach.[107] In the 1970s, it was unclear whether the explosive algal growth in lakes throughout the Midwest was part of a natural cycle or caused by runoff from farms and cities (e.g., phosphates from fertilizers).[108] Scientists from opposing sides of the debate constructed fantastically complicated models, some with literally hundreds of parameters, to support their opposing claims.[109] Yet data were available for only a few of the model parameters, so scientists in each camp had virtually free rein to adjust the remaining parameters to conform to whatever position they were predisposed to believe.[110]

A breakthrough occurred towards the end of the decade when scientists discovered an association between algal levels in certain lakes and phosphorus levels in the rivers feeding them.[111] Once again, aggregation was the key to success. In a reverse twist on the high exposure levels used in toxic risk assessment, it was only by studying multiple lakes with relatively low levels of phosphorus that the association was detected.[112] The reason for this was that algal growth quickly plateaued (i.e., reached a saturation point) at higher levels of phosphorus, which was why algal growth in highly contaminated lakes was not correlated with phosphorus levels.[113]

Scientists used this association to construct a simple mass-balance model for the Great Lakes system, which accounted for the aggregate flows into and out of the lakes.[114] This high-level model produced estimates of phosphorus levels for each of the Great Lakes with admirable accuracy and predicted substantial benefits from reducing runoff into them.[115] Its accuracy and generality were direct products of evaluating the problem at this high level of aggregation.[116] Furthermore, the model “output could not be fudged” because its “handful of parameters were all readily measured,”[117] and similar to financial modeling, the scale at which policy was set roughly matched the scale at which the problem could be effectively studied. The strength of these results gave scientists an objective basis for confidence in their predictions and policymakers a neutral ground on which to address the problem.[118]
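A generic one-box mass-balance sketch conveys the flavor of such a model. It is not the actual Great Lakes model described in the cited studies, and its parameter values are hypothetical; the point is that a handful of directly measurable quantities determines the prediction.

```python
# A generic one-box mass-balance sketch in the spirit of the model described:
# the lake's phosphorus concentration is set by a handful of measurable
# quantities (external loading, outflow, and net settling losses). This is an
# illustration, not the actual Great Lakes model from the cited studies, and
# the parameter values are hypothetical.

def steady_state_phosphorus(loading, volume, outflow, settling_rate):
    """
    Steady state of dC/dt = loading/volume - (outflow/volume + settling_rate) * C.

    loading        external phosphorus input (kg/yr)
    volume         lake volume (m^3)
    outflow        water outflow (m^3/yr)
    settling_rate  first-order loss to sediments (1/yr)
    Returns concentration in kg/m^3.
    """
    return (loading / volume) / (outflow / volume + settling_rate)

# Hypothetical lake: compare current loading with a 50% reduction in runoff.
current = steady_state_phosphorus(loading=5e6, volume=5e11, outflow=2e11, settling_rate=0.5)
reduced = steady_state_phosphorus(loading=2.5e6, volume=5e11, outflow=2e11, settling_rate=0.5)
print(f"current loading:  {current * 1e6:.1f} mg/m^3")
print(f"reduced loading:  {reduced * 1e6:.1f} mg/m^3")
# Because every parameter is directly measurable, the prediction cannot be
# fudged: cutting the loading in half halves the steady-state concentration.
```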

In more complex settings, the tradeoffs between tractability and accuracy can become much more acute. The deep problems encountered with toxics regulation exemplify the difficult tradeoffs that can exist.[119] The use of average global surface temperature in climate change research raises similar issues.[120] A global average is a useful fiction that obscures a great deal of local variability that is of paramount importance to individuals living in regions likely to be hit the hardest.[121] It is nevertheless a useful, if not essential, metric to start with because a global average is far less subject to the large, chaotic fluctuations that obscure evidence of changing climatic conditions at the regional level.

These tradeoffs highlight the importance of analyzing environmental problems at multiple levels. The dynamics of environmental problems often span many scales, whether temporal, spatial, or organizational, and the stable patterns that are essential to decision making and constructing effective regulations may also exist at multiple scales. Science must consequently be viewed as an iterative process of learning and refinement, where conceptually simple models are used in concert with localized studies (and models) to ensure that important details are not overlooked. Notably absent from this vision of science are rigid thumbs-up-thumbs-down standards for assessing validity.

III. Scientific Bias Versus Social Value

Many critics of environmental science are likely to be unsatisfied with the model of science described in the preceding sections. Two potential classes of critics stand out in this regard: those concerned about perceived informational gaps and those who object to these methods on moral grounds. Methodological critiques are all of a piece: they question the realism of an analysis, with the most common variety challenging starting assumptions for their disregard of a problem’s complexity. Moral objections focus on the biases inherent in scientific methods. Quantitative methods, for example, are criticized for failing to consider factors that are not easily quantified (for example, certain values), an omission that can tip the scales against regulation.[122]

Essayist and scientist Stephen Jay Gould frames this dilemma from a scientist’s perspective, focusing instead on the ways in which science can be shaped by social norms, ideology, and politics:

Yet the history of many scientific subjects is virtually free from constraints of fact . . . . some topics are invested with enormous social importance but blessed with very little reliable information. When the ratio of data to social impacts is so low, a history of scientific attitudes may be little more than an oblique record of social change.[123]

Gould’s point, in part, is that the aura of objectivity associated with science is often accepted uncritically and that when this occurs, science both loses its primary compass, namely, a critical mode of inquiry, and becomes vulnerable to the prevailing biases of the day.[124] Uncertain science, or science for which little empirical or theoretical support exists, therefore requires a particularly high level of vigilance to protect it against overreaching.

Much of the criticism of environmental science is driven by a concern that science, particularly when subject to large uncertainties, is being leveraged beyond what it can reasonably support or co-opted for political ends.[125] To the extent that critiques address instances of overreaching or outright manipulation, I have no quarrels with them. To the contrary, such efforts are essential to maintaining the integrity and reliability of environmental science.[126]

All too often, however, allegations of junk science or overly reductive methods involve blanket criticisms that could be made of any scientific work, good or bad. They function merely as a form of mudslinging to tarnish the credibility of otherwise solid, albeit imperfect, scientific studies. Standard tactics include highlighting sources of uncertainty that are of marginal importance to the analysis or the questions at issue[127] or picking apart the limitations of individual studies in an effort to remove them one by one.[128] Among other oversights, such critiques are devoid of an appreciation for the composite power of evaluating scientific studies collectively.

I have two distinct objectives in addressing these critiques. With regard to the methodological objections, I want to draw on the preceding discussion to inject some context and common sense into the debate. With regard to the moral concerns, I want to try to dissect the muddle that typifies the blending of moral and methodological considerations in the debate over environmental science. This discussion is not intended to resolve these issues. Instead, it is intended to help clarify some of the tensions and tradeoffs that exist, and thereby to enable decision makers to make judgments in a more thoughtful manner. More ambitiously, I hope that it will help refocus the debate over science in environmental law and policy.

It should be clear by now that all scientific methods can be criticized for what they leave out, as all models sacrifice realism for tractability. Yet, one of the standard criticisms of scientific methods is that they disregard so-called “soft variables,” that is, variables that are difficult to measure or quantify.[129] Critics raise concerns that “the result [is] not only that ‘soft’ variables tend to be ignored or understated but also that entire problems tend to be reduced to terms that misstate their underlying structure and ignore the ‘global’ features that give them their total character.”[130] According to this view, the rhetorical power of quantitative assessments “dwarf[s] [] soft variables” and biases environmental policies, particularly in favor of the most readily measured standard economic factors.[131]

This characterization is perfectly accurate so far as it goes. However, it ignores an important point, namely, that scientists’ inability to represent complex systems accurately is not unique to their methods.[132] Verbal representations may be just as deficient as standard scientific ones, or even less effective. Like any conceptual framework, “the problem of choosing the model is that of choosing the human point of view . . . . One is going to be driven insane if one tries to find a nonhuman point of view from within a hopelessly human problem.”[133]

In this light, selecting a model can be analogized to the process of selecting a map; both depend on the purposes for which they are being used. If traveling from Tucson, Arizona, to Washington, D.C., one would select a map with few details, as one will travel on interstate highways and have no need to see the secondary and tertiary roads. By contrast, for a trip from Madison, Wisconsin, to the Leopold Memorial Reserve, one would select a detailed map that includes secondary roads. Neither map, of course, is a perfect representation of the road systems; only the roads themselves are. Instead, each is suited to the trip planned.

Models differ in their level of dynamical detail, that is, the range of interactions they are able to represent. Like a map, a model is selected according to the needs at hand. Unlike a map (particularly in an era of comprehensive geographical data), the types of potential models are limited by the nature of the problem, available data, and current understanding. Because of these limitations, researchers utilize a variety of approaches that vary the degree of generality, realism, and precision in a model according to the nature of the questions they are seeking to answer.[134] No single model will be the “best” for all questions, or even for a single question, and most scientists will employ a variety of models, strategies, and assumptions to create a composite picture of a complex phenomenon.[135]

Blanket objections that a model ignores soft variables overlook these constraints and risk becoming demands for an unattainable level of objectivity. Soft variables are “soft” for a reason. Typically, a variable is omitted because it is difficult to quantify (say, the social value of maintaining species diversity) or because it is difficult to obtain the underlying data.[136] However, as I have argued above, these constraints are often driven more by the impossibility of synthesizing the relevant information than by the cost or difficulty of obtaining the data.[137] The scale at which a problem is analyzed often matters a lot in this respect. Just as there is no way to describe the trajectories of trillions of atoms in a gas, so too may it be impossible to describe the effects of various environmental impacts on a single species; the interactions may simply be too complex and numerous to reduce to a tractable form.

In the absence of stable patterns that allow scientists to reduce the informational demands of describing a system, all that scientists can do is describe it in much the same way that we would list the events that comprise our daily experience. This is not science in any meaningful sense; it is mere description or “witnessing” of the events as they occur.[138] Sometimes these impediments can be overcome by analyzing the problem at a different scale, aggregating over time or space, but not always. The important point to grasp here is that our inability to reduce descriptions and to gain understanding of complex systems is often not primarily a function of our methods, but of the nature of the system itself. Not all problems, as toxic risk assessment attests, are equally soluble.

This is not to say that scientific and linguistic methods are interchangeable. Clearly, there will be times when it is more appropriate-and effective-to describe things using words rather than numbers, and vice versa. My point here is only that the characteristics of an environmental problem that make it difficult to represent using scientific methods will also often make it hard to reduce to a tractable verbal form. Put another way, scientific laws and verbal metaphors are both reliant on the existence of stable patterns and associations; without them, nature (like history) simply becomes “one damn[] thing after another.”[139]

I worry that those of us engaged in debates over environmental policy frequently fail to take this next step and examine whether the alternatives to existing scientific methods do any better or, at the very least, whether they offer a perspective on environmental issues sufficiently coherent to inform decision making meaningfully. Questions about the viability of the precautionary principle reflect similar concerns.[140]

The complement of ignoring soft variables is reifying the hard ones, and this too is a common criticism of scientific methods.[141]Critics worry that when policymakers think or claim they are basing their decisions on reality, they are actually seeing only what the available metrics reveal and missing a great deal of potentially important information.[142]While a valid criticism and a reasonable fear, reification is clearly not unique to quantitative methods. Ideas and concepts are just as susceptible to reification as numbers.[143]One need only consider powerful ideologies ranging from neoconservative economics, with its elevation of markets, to Marxism, with its fixation on labor theory, to appreciate that non-quantitative theories also single out and reify certain factors to the exclusion of many other important ones.[144]

Good scientific models, like powerful literary metaphors or political theories, are susceptible to reification because they capture important concepts in a compelling manner.[145]Neither scientific methods nor quantification per se is the primary problem, but rather the human tendency to find or infer simple deterministic causes where none exist. Quantitative models and numbers operate as a rhetorical vehicle for giving voice to such tendencies. Yet, simple models or metaphors, whether numerical or otherwise, are essential tools for addressing and debating complex problems of all kinds.[146]

The use of simple models and metaphors crosses all languages, modes of expression, and domains of human understanding; both are means for understanding more complex processes. The central danger, as we saw with the Wall Street failure of Long-Term Capital Management, not to mention innumerable lapses in environmental policy, lies in treating them as more than they are: useful generalizations that abstract from a much more complex reality and, as such, are inherently fallible in their predictions.[147]

IV. Conclusions

The challenges posed by environmental problems put policymakers in a seemingly paradoxical position: by undermining the apparent objectivity of science and foreclosing simple benchmarks for trustworthiness, these challenges deprive environmental science of the authority and rhetoric needed to sway public opinion.

An important implication of the model of science presented here is that this dilemma is illusory, for two primary reasons. First, scientific methods, particularly those that analyze problems at multiple levels of aggregation, can and do identify objectively stable patterns in even the most complex systems. Objectivity in environmental science is thus far from a lost cause. Second, insofar as scientific results are subject to several equally plausible interpretations, the only way to take advantage of such results is to recognize the contingent nature of interpreting them. On balance, it is far better to trade the dubious rhetorical authority of “science” for a much more expansive-and human-view of scientific methods.

Ultimately, scientists and policymakers face a simple choice: either acknowledge the limitations of scientific methods, and accept some diminished authority in a complex world, or risk undermining the value of the objective results that science can produce and compromising its core principles of objectivity and skepticism.


* Associate Professor, James E. Rogers College of Law, University of Arizona. The author would like to thank Greg Mandel, Marc Miller, Carol Rose, and Amy Sinden for their comments on earlier drafts of the Article, as well as the participants in the roundtable meeting “Law, Science, and the Environment Forum: A Meeting of the Minds,” for their helpful comments and suggestions.

[1]See Daniel B. Botkin, Discordant Harmonies: A New Ecology for the Twenty-First Century 38 (1990).

[2]See id. at 47.

[3]See Donald N. McCloskey, History, Differential Equations, and the Problem of Narration, 30 History and Theory 21, 28-29 (1991); see also Robert M. May, Simple Mathematical Models With Very Complicated Dynamics, 261 Nature 459, 459-60 (1976).

[4] P. B. Medawar, The Act of Creation, in The Art of the Soluble 85, 87 (1967). Medawar portrays science as a pragmatic enterprise: “[g]ood scientists study the most important problems they think that they can solve. It is, after all, their professional business to solve problems, not merely to grapple with them.” Medawar, Introduction, in The Art of the Soluble, supra.

[5] Holly Doremus, Science Plays Defense: Natural Resource Management in the Bush Administration, 32 Ecology L.Q. 249, 251-53 (2005) (discussing the “strident pitch” of the debate over the quality of science used in environmental policymaking).

[6]See, e.g., Stephen Milloy, Bad Climate Science Yields Worse Economics, Fox News, Nov. 26, 2006, http://www.foxnews.com/story/0,2933,225719,00.html (last visited Nov. 18, 2007) (referring to climate modeling science that “pile[s] guess upon guess” as “junk science”); Chris Mooney, Beware “Sound Science.” It’s Doublespeak for Trouble, Wash. Post, Feb. 29, 2004, at B02 (discussing origins and uses of the term “sound science”).

[7] Wendy E. Wagner, The “Bad Science” Fiction: Reclaiming the Debate over the Role of Science in Public Health and Environmental Regulation, 66 Law & Contemp. Probs. 63, 68-69 (2003) [hereinafter Wagner, The “Bad Science” Fiction] (describing the Shelby Amendment as providing “regulatory participants with access to the data underlying studies produced by federal agencies” and the Data Quality Act as establishing “a process by which parties can lodge petitions for the correction of information, including scientific studies, disseminated by the agency”).

[8]Id. at 95-96 (noting that expanding peer review to stakeholders “is not likely to improve the quality of scientific research”).

[9]See Seth Shulman, Undermining Science: Suppression and Distortion in the Bush Administration 3 (2006) (noting how critics “on both the left and right of the political spectrum . . . make astute points about the inherent biases that can taint scientific research”); Wagner, The “Bad Science” Fiction, supra note 7, at 64 (discussing how science is limited by the requirement that there be “a hypothesis capable of being tested in a replicable way or the use of methods that scientists have generally accepted as valid”).

[10] Arthur A. Leff, Economic Analysis of the Law: Some Realism About Nominalism, 60 Va. L. Rev. 451, 477 (1974). Leff’s characterization parallels the technical terminology perfectly: the fewer variables one considers, the greater the potential bias; the more variables one considers, the higher the variance. See also Peter Kennedy, A Guide to Econometrics 107-09 (5th ed. 2003) (discussing the correlation between increasing numbers of variables and increasing variance); Philip E. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? 18-19 (2005) (“[I]f we only accept evidence that confirms our worldview, we will become prisoners of our preconceptions, but if we subject all evidence, agreeable or disagreeable, to the same scrutiny, we will be overwhelmed.”).

[11]See Doremus, supra note 5, at 252-53 (commenting that readily contestable policy judgments are integral to the vast majority of environmental problems). See also Laurence H. Tribe, Technology Assessment and the Fourth Discontinuity: The Limit of Instrumental Rationality, 46 S. Cal. L. Rev. 617, 630 (1972) [hereinafter Tribe, Technology Assessment] (observing that the complexity of environmental problems precludes straightforward applications of scientific methods).

[12]See Wagner, The “Bad Science” Fiction, supra note 7, at 66 (“Since the zigzag nature of science and policy makes it easy to blur the respective roles of science and policy in regulatory decision making, these political checks and balances can be lost or at least impeded by the complex interweaving of technical and value decisions.”).

[13]Id. at 65.

[14]See, e.g., Doremus, supra note 5, at 259-63 (arguing that advocates on both sides of the environmental policy debate have taken advantage of “public misperception of science as a binary enterprise, essentially dividing scientific assertions neatly into two categories: those conclusively proven and those patently false”).

[15]See, e.g., Dean A. Morande, A Class of Their Own: Model Procedural Rules and Evidentiary Evaluation of Computer-Generated “Animations,” 61 U. Miami L. Rev. 1069, 1121 (2007) (discussing the reliability of scientific evidence and finding that Newtonian physics, a hard science, is accurate under any standard).

[16]See, e.g., Doremus, supra note 5, at 254 (describing the process of working with incomplete scientific data to draw regulatory conclusions).

[17] Using “standard experimental and quantitative procedures so well suited for simple, timeless, and repeatable events in conventional science” as the ideal for all science is a categorical error that ignores the fundamental differences between simple physical processes and more complex biological and social systems. Stephen J. Gould, The Hedgehog, The Fox, and the Magister’s Pox: Mending the Gaps Between Science and the Humanities 18 (2003). Mechanical reliance on reductive methods also misses their primary purpose. Reductionism is “the search strategy employed to find good points of entry into otherwise impenetrably complex systems. Complexity is what interests scientists in the end, not simplicity. Reductionism is the way to understand it.” Edward O. Wilson, Consilience: The Unity of Knowledge 54 (1998).

[18]See Kennedy, supra note 10, at 70-71 (observing that “there is no good reason why [a] 5% [error rate] should be preferred to some other percentage”).

[19]See Stephen Breyer, Breaking the Vicious Circle: Toward Effective Risk Regulation 42-50 (1993) (discussing “uncertainties in the technical regulatory process” resulting from the often complex interaction of many variables and disciplines).

[20]See Mark R. Powell, Science at EPA: Information in the Regulatory Process 5 (1999) (“Many environmental problems are cryptic . . . [t]he absence of a simple, direct, and immediate relationship between a pollutant and a . . . ‘body count’ does not dismiss the possibility that . . . problems result from complex processes or indirect interactions.”).

[21]See, e.g., Wendy E. Wagner, The Science Charade in Toxic Risk Regulation, 95 Colum. L. Rev. 1613, 1615 (1995) [hereinafter Wagner, The Science Charade] (noting that “[w]ith the agencies’ failure inevitable under science-based mandates, Congress has begun to abandon” attempts to regulate toxics based on science).

[22]See generally Peter W. Huber, Galileo’s Revenge: Junk Science in the Courtroom (1991). Huber draws on toxicology as an example of so-called “junk science,” which he characterizes as “a hodgepodge of biased data, spurious inference, and logical legerdemain, patched together by researchers whose enthusiasm for discovery and diagnosis far outstrips their skill.” Id. at 3. On the other end of the political spectrum, Wendy Wagner provides a thoughtful and provocative analysis of the limits of toxic risk assessment in her article The Science Charade. Wagner, The Science Charade, supra note 21, at 1628-31 (arguing that government agencies “engage in a ‘science charade’ by failing” to identify gaps in the scientific process and the policy decisions they must then rely upon to fill those gaps).

[23] Rachel Carson, Silent Spring 297 (Mariner Books 2002) (1962); R.O. Wooton et al., Analysis of the Three Mile Island Accident and Alternative Sequences, at v (1980); Allan Mazur, A Hazardous Inquiry: The Rashomon Effect at Love Canal 10 (1998); Jamie Cassels, The Uncertain Promise of Law: Lessons from Bhopal 1 (1993).

[24]See, e.g., Jamie A. Grodsky, Genetics and Environmental Law: Redefining Public Health, 93 Cal. L. Rev. 171, 269 (2005) (arguing that as new genetic modification technologies uncover earlier evidence of toxic exposure new definitions of “health” will become necessary); Linda K. Breggin & Leslie Carothers, Governing Uncertainty: The Nanotechnology Environmental, Health, and Safety Challenge, 31 Colum. J. Envtl. L. 285, 290 (2006) (discussing environmental opportunities and challenges of nanomaterials and nanotechnology).

[25]See Robert V. Percival, et al., Environmental Regulation: Law, Science, and Policy 24-35 (2003) (discussing the history of economic concepts and terminology in the environmental policy field).

[26]See, e.g., Donald T. Hornstein, Reclaiming Environmental Law: A Normative Critique of Comparative Risk Analysis, 92 Colum. L. Rev. 562, 563 (noting that even EPA’s Scientific Advisory Board has given only a qualified endorsement of risk assessment, acknowledging that data for conducting risk assessment can be “notoriously spotty” and that risk-bearing “can involve qualitative elements not easily indexed for comparison”).

[27] Nat’l Research Council, Risk Assessment in the Federal Government: Managing the Process 28-33 (1983).

[28]See, e.g., Howard Latin, Good Science, Bad Regulation, and Toxic Risk Assessment, 5 Yale J. on Reg. 89, 90 (1988) (“challeng[ing] the conventional view that scientific perspectives should dominate the risk-assessment process”); Wagner, The Science Charade, supra note 21, at 1629 (arguing that “[a]gency scientists and bureaucrats engage in a ‘science charade’ by failing first to identify the major interstices left by science in the standard-setting process and second to reveal the policy choices they made to fill each trans-scientific gap”).

[29] Percival et al.,supra note 25, at 334 (indicating that “[o]nly 6 percent of the [premanufacture notices] received annually by EPA have any toxicity test data at all”); Mark R. Powell, Science at EPA: Information in the Regulatory Process 3-4, 122, 339-46 (1999) (describing the delay of the EPA in meeting its statutory obligations and the “degree of centralization in the development and use of science”); Wendy Thomas, Note, Through the Looking Glass: A Reflection on Current Mercury Regulation, 29 Colum. J. Envtl. L. 145, 152 (2004) (noting that “there is no collective recognition of what constitutes an unacceptable amount of mercury contamination”); Russell S. Thomas et al.,Identification of Toxicologically Predictive Gene Sets Using cDNA Microarrays, 60 Molecular Pharmacology 1189, 1189 (2001) (pointing out that “a thorough analysis of each chemical requires $2 to 4 million and several years to complete”).

[30] Julian Peto, Cancer Epidemiology in the Last Century and the Next Decade, 411 Nature 390, 392 (2001).

[31]Id. at 392 (noting that only “[a]bout a dozen specific occupational exposures and several complex mixtures, particularly the combustion products of coal, have caused high risks of certain cancers (predominantly lung cancer) in heavily exposed workers”).

[32] Philip J. Landrigan et al., Environmental Pollutants and Diseases in American Children: Estimates of Morbidity, Mortality, and Costs for Lead Poisoning, Asthma, Cancer, and Developmental Disabilities, 110 Envtl. Health Persp. 721, 721 (2002). Approximately 3,000 high-volume chemicals are produced in or imported into the United States at over one million pounds per year. Percival et al., supra note 25, at 335.

[33] Gary Taubes, Epidemiology Faces Its Limits, 269 Sci. 164, 164 (1995).

[34]See generally Breyer, supra note 19.

[35]Id. at 44.

[36]See EPA, What is IRIS?, http://www.epa.gov/iris/intro.htm (last visited Nov. 18, 2007) (demonstrating that EPA conducts this global analysis of existing data under its Integrated Risk Information System (IRIS) program); see also Powell, supra note 29, at 31-34 (indicating that EPA uses potencies/reference doses and modeling methods to calculate regulatory standards for each of the chemicals it regulates. As such, the IRIS toxicological reviews provide the final toxicological information used by EPA to calculate regulatory standards for toxic substances).

[37] While a lower level of statistical significance may permit scientists to consider more data, it provides no guidance on the more important judgment of how the data are assessed relative to each other or as a whole. Randall Collins, Statistics Versus Words, 2 Soc. Theory 329, 336-37 (1984) (explaining that scientific judgments on the value of specific experimental results “count most, not some meeting of, or failure to meet, an arbitrary level of statistical ‘significance'”).

[38] Powell, supra note 29, at 33-34.

[39]See Collins, supra note 37, at 336-38; Powell, supra note 29, at 31-34.

[40] Breyer, supra note 19, at 43-45.

[41]Id. at 45.

[42]See id. at 43-45.

[43]See id. at 44.

[44]See id. at 45-47.

[45] In fact, standard statistical methods used to quantify uncertainty, typically “R2” values, are themselves based on a very simple model of experimental uncertainty, which may or may not hold in a particular study. For example, if the system being studied is subject to various nonlinearities, the assumptions of the standard normal distribution model often will be violated. Only limited means exist for characterizing uncertainties here.
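
To make the concern in this note concrete, the following sketch (synthetic, hypothetical data only; not drawn from any cited study) fits an ordinary least-squares line to data generated by a nonlinear process. The fit reports a respectably high R² even though its residuals are systematically structured, illustrating how the standard uncertainty summary can mislead when its underlying assumptions are violated.

```python
# Illustrative sketch only (synthetic data): a linear fit can report a
# high R^2 even when the true relationship is nonlinear, so the simple
# error model behind that statistic is violated.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 0.5 * x**2 + rng.normal(scale=2.0, size=x.size)  # quadratic truth plus noise

a, b = np.polyfit(x, y, deg=1)  # ordinary least-squares line y = a*x + b
y_hat = a * x + b

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"linear-fit R^2 = {1 - ss_res / ss_tot:.3f}")  # high despite the wrong form

# The residuals are systematically curved rather than random noise:
# negative in the middle of the range, positive at the extremes.
resid = y - y_hat
print("mean residual, low x :", round(resid[x < 3].mean(), 2))
print("mean residual, mid x :", round(resid[(x >= 3) & (x < 7)].mean(), 2))
print("mean residual, high x:", round(resid[x >= 7].mean(), 2))
```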

[46]Cf. Andrew F. Siegel, Statistics and Data Analysis 308-10 (1988).

[47] Mary Weideman, Toxicity Tests in Animals: Historical Perspectives and New Opportunities, 101 Envtl. Health Persp. 222, 222 (1993).

[48] Office of Tech. Assessment, U.S. Congress, Neurotoxicity: Identifying and Controlling Poisons of the Nervous System 121 (1990).

[49]See Toxicology Working Group of the 10X Task Force, U.S. EPA, Draft: Toxicology Data Requirements for Assessing Risks of Pesticide Exposure to Children’s Health 30 (1999), available at http://www.epa.gov/scipoly/sap/meetings/1999/may/10xtx428.pdf.

[50] Scientists, of course, hotly debate whether a study’s design and implementation are adequate, but I think that this is separable from the point that I am trying to make here. I am starting with the assumption that the experimental work has been conducted competently, as my focus is on determining what science can tell in such complex regulatory settings. Clearly, if scientists cannot agree even on how to construct reliable experiments, all bets are off, and science can tell little or nothing about the problem.

[51]See Breyer, supra note 19 and accompanying text.

[52] In such cases, one can try to account for other effects (i.e., confounding variables), but as the magnitude of the effect diminishes, the number of confounding variables increases. The difficulty of resolving weak effects is not simply a matter of resources-for example, increasing a study to 100,000 mice instead of 100-but attributable to the fundamental limits of the statistical methods themselves. The numbers are both strikingly large and numbingly small. It is this counterbalancing of large exposure and small effects that makes scientific understanding so difficult.
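
The arithmetic behind this point can be sketched with the standard two-proportion sample-size approximation (two-sided test at the 5% level, 80% power); the tumor incidence rates below are hypothetical and chosen only to show how quickly the required number of animals grows as the effect weakens.

```python
# Back-of-the-envelope sketch using the usual normal-approximation
# sample-size formula for comparing two proportions (alpha = 0.05
# two-sided, 80% power). The incidence rates are hypothetical.
from math import ceil

Z_ALPHA = 1.96    # two-sided 5% significance
Z_POWER = 0.8416  # 80% power

def n_per_group(p_control, p_exposed):
    """Approximate number of animals needed in each study arm."""
    variance = p_control * (1 - p_control) + p_exposed * (1 - p_exposed)
    return ceil((Z_ALPHA + Z_POWER) ** 2 * variance / (p_exposed - p_control) ** 2)

# A strong effect (background 10% incidence rising to 30%) needs only
# dozens of animals per group...
print(n_per_group(0.10, 0.30))    # roughly 60 per group

# ...but a weak effect (10% rising to 10.5%) needs tens of thousands
# per group, i.e., a study on the order of 100,000 animals in total.
print(n_per_group(0.10, 0.105))   # roughly 58,000 per group
```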

[53] Simon Z. Benninga, Financial Modeling 57 (2d ed. 2000).

[54]See, e.g., Percival et al., supra note 25, at 9-24.

[55]See id. at 10.

[56]See Eduard Heimann, History of Economic Doctrines: An Introduction to Economic Theory 18, 48, 78 (1945).

[57] Richard Levins & Richard Lewontin, The Dialectical Biologist 84 (1985) (noting the striking parallels between evolutionary theory and classical economic market theory).

[58]Id. at 25.

[59]See Donald G. Saari, Mathematical Complexity of Simple Economics, 42 Notices Am. Mathematical Soc’y. 222, 222 (1995).

[60] Gould, The Hedgehog, the Fox, and the Magister’s Pox, supra note 17, at 207.

[61]See id. at 202, 224-28.

[62] Gary Stix, A Calculus of Risk, 278 Sci. Am. 92, 92-93 (1998) (noting the growth of “financial engineering” or “econophysics” during the 1990s with the expansion of new financial instruments).

[63]See id. at 94-95.

[64]See Emanuel Derman, A Guide for the Perplexed Quant, 1 Quantitative Fin. 476, 476-77 (2001).

[65]Id. at 478.

[66]See id. at 477.

[67] For example, financial modelers have attempted to adapt the quantum mechanical theory to model the dynamics of financial markets, although with mixed results. See Stix, supra note 62, at 95-96.

[68]E.g., Derman, supra note 64, at 477.

[69] Statistical methods, at base, are premised on finding patterns in aggregates, which includes multiple observations of the same thing, the same type of thing (for example, large rat studies), or the same class of things (for example, all mammals). By design, statistical methods filter out most of the dynamical details by reducing most systems to simple linear models. This is equivalent to the process of reducing the physical characteristics of people in a television cartoon to their most prominent features (for example, Marge’s beehive hairdo or Homer’s bald head on The Simpsons). The objective is simply to identify the most important factors using the simplest assumptions.
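
As a rough illustration of this filtering (a sketch on synthetic data, not drawn from any cited study), the regression below deliberately imposes a simple linear form on a response that is actually nonlinear; the fit misstates the detailed shape of the relationship but still flags the dominant factor.

```python
# Illustrative sketch only (synthetic data): a deliberately simple
# linear model discards most of the dynamical detail but still
# identifies the dominant factor.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
dose = rng.uniform(0, 1, n)       # the factor of interest
weight = rng.normal(30, 5, n)     # a secondary influence
noise = rng.normal(0, 1, n)

# "True" process: nonlinear in dose, with an interaction buried in the noise
response = 3 * np.sqrt(dose) + 0.05 * weight + 0.5 * dose * noise + noise

# Ordinary least squares on the design matrix [1, dose, weight]
X = np.column_stack([np.ones(n), dose, weight])
coefs, *_ = np.linalg.lstsq(X, response, rcond=None)
print("intercept, dose, weight:", np.round(coefs, 2))
# The dose coefficient dwarfs the weight coefficient, even though the
# straight-line form is only a caricature of the true dose-response curve.
```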

[70] Stix, supra note 62, at 95. An option gives the owner a right to buy (or sell) a stock at a set price for a specified period of time. For example, an option might give the owner a right to buy a stock for $100 anytime within a six-month period of time. Intuitively, the more volatile the underlying stock (i.e., price variability) or the longer the owner has to purchase it, the higher the value of the option. This follows from the fact that the most the owner of an option can lose is the option price, whereas the price of a highly volatile stock can really spike-if the value of the stock is $500, the option owner has the right to purchase it for $100 and make a killing. Similarly, the longer the right to purchase a stock is open, the greater the odds are that the owner of an option will profit, as there is simply more time for the stock price to fluctuate.
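
The intuition in this note can be checked numerically with the standard Black-Scholes formula for a European call (discussed in the notes that follow). The sketch below is illustrative only; the $100 stock and strike, 5% risk-free rate, and the volatility and maturity values are hypothetical.

```python
# Illustrative sketch: the standard Black-Scholes call formula, used
# here only to show that option value rises with the underlying
# stock's volatility and with time to expiration.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, t_years, rate, vol):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    return spot * norm_cdf(d1) - strike * exp(-rate * t_years) * norm_cdf(d2)

# Hypothetical option: right to buy a $100 stock for $100, 5% risk-free rate.
for vol in (0.10, 0.30, 0.60):       # increasing volatility, fixed 6-month expiry
    print(f"6-month call, vol={vol:.0%}: ${bs_call(100, 100, 0.5, 0.05, vol):.2f}")
for t in (0.25, 0.5, 1.0):           # increasing time to expiry, fixed 30% volatility
    print(f"{t:.2f}-year call, vol=30%: ${bs_call(100, 100, t, 0.05, 0.30):.2f}")
```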

[71]Science & Nature: The Midas Formula (BBC television broadcast Dec. 2, 1999) (transcript available at http://www.bbc.co.uk/science/horizon/1999/midas_script.shtml) (describing the importance of the Black-Scholes theory in solving the problem of a model that correctly predicted the price of an option at any time).

[72] Hakan Erdogmus & John Favaro, Keep Your Options Open: Extreme Programming and the Economics of Flexibility, in Extreme Programming Perspectives ch. 43, at 16 (Giancarlo Succi et al. eds., 2002).

[73] Stix, supra note 62, at 94.

[74] The “risk premium” is simply the price discount on an option due to investor risk aversion. Black and Scholes realized that the underlying stock price itself reflects the average risk premium for an option. They based their claim on the observation that the price of a stock reflects investor discounting for its volatility, which they argued is a reasonable surrogate for the risk premium of an associated option as well. Id.at 93.

[75] Erdogmus & Favaro, supra note 72, at 35. A risk-neutralizing strategy entails purchasing two options for each stock (i.e., one giving the right to buy and one giving the right to sell the stock), so that if the individual stocks in the portfolio go below (or above) their original purchase price, the stock portfolio owner can exercise option rights that collectively offset any potential losses for the portfolio as a whole. This strategy is referred to as “hedging.” The Black-Scholes theory is based on creating an idealized risk-free portfolio using a hedging strategy. Science & Nature: The Midas Formula, supra note 71.

[76]Science & Nature: The Midas Formula, supra note 71.

[77] Derman, supra note 64, at 478.

[78]See Stix, supra note 62, at 94 (“The basic Black-Scholes formula made unrealistic assumptions about how the market operates.”). See also Derman, supra note 64, at 480 (“Good theories, like Black-Scholes, provide a theoretical laboratory in which you can explore the likely effect of possible causes. They give you a common language with which to quantify and communicate your feelings about value.”).

[79]See Derman, supra note 64, at 478 (“Most real things are too messy for a full theoretical treatment, and that’s why implied values, which mask so many unknowns in one effective calibration parameter, play such a large role.”).

[80]Id. See also Salih N. Neftci, Principles of Financial Engineering 437 (2005) (“the Black-Scholes formula is simple and depends on a small number of parameters. In fact, the only major parameter that it depends on is the volatility, σ. A simple formula has some advantages. It is easy to understand and remember. But, more importantly, it is also easy to realize where or when it may go wrong. A simple formula permits developing ways to correct for any inaccuracies informally by making subjective adjustments during trading.”).

[81]See Derman, supra note 64, at 480.

[82]Id.

[83] Franklin R. Edwards, Hedge Funds and the Collapse of Long-Term Capital Management, 13 J. Econ. Persp. 189, 199 (1999).

[84]Id.

[85]See Derman, supra note 64, at 478 (“[F]inancial valuation will always have much in common with art or antiques valuation, where knowledge and experience and street sense are as important as any formula.”).

[86]See, e.g., Jenny Anderson, After Loss, Hedge Fund Will Close, N.Y. Times, Sept. 30, 2006, at C1.

[87]See David Clarke, Gartmore Says Regulators Should Adopt Uniform Hedge Fund Rules, http://www.bloomberg.com/apps/news?pid=10000102&sid=aUt3sL4SLq9A&refer=uk (last visited Nov. 18, 2007) (stating that 72% of hedge funds are unprofitable and that the average hedge fund in 2005 lost 0.7% while others lost up to 28% of their value in the first four months of 2005).

[88] Derman, supra note 64, at 480.

[89] Allen Newell & Herbert A. Simon, Computer Simulation of Human Thinking, 134 Sci. 2011, 2011 (1961).

[90]Id.

[91]See generally id.

[92] Simon A. Levin, The Problem of Pattern and Scale in Ecology, 73 Ecology 1943, 1946-47 (1992) (“At very fine spatial and temporal scales, stochastic phenomena (or deterministically driven chaos) may make the systems of interest unpredictable. Thus we focus attention on larger spatial regions, longer time scales, or statistical ensembles [collections of things], for which macroscopic statistical behaviors are more regular.”).
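
Levin’s point about statistical ensembles can be illustrated with a toy simulation (simple random walks, not an ecological model): any single trajectory is erratic, but the ensemble’s average and spread follow regular, predictable patterns.

```python
# Toy illustration: individual noise-driven trajectories are erratic,
# but ensemble statistics settle into regular, predictable behavior.
import numpy as np

rng = np.random.default_rng(42)
steps = 1000
walkers = 10_000
increments = rng.choice([-1, 1], size=(walkers, steps))
paths = increments.cumsum(axis=1)

single = paths[0]                    # one fine-scale trajectory: erratic
ensemble_mean = paths.mean(axis=0)   # macroscopic average: hugs zero
ensemble_spread = paths.std(axis=0)  # grows like sqrt(t), a stable scaling law

print("final position of one walker      :", single[-1])
print("final ensemble mean (near 0)      :", round(ensemble_mean[-1], 2))
print("final ensemble spread vs sqrt(t)  :", round(ensemble_spread[-1], 1),
      "vs", round(steps ** 0.5, 1))
```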

[93]Id. at 1947 (indicating that efforts to understand complex systems, like ecosystems, “must revolve around attempts to discover patterns that can be quantified within systems, and compared across systems”).

[94]Id. (“As one moves up the hierarchy to larger and larger aggregates, one obtains more statistical predictability, while sweeping under the rug details of variation within an aggregate . . . . however one defines classes, there will be differential evolution among classes, and differential evolution within.”).

[95]See id.

[96]See id. See also Derman, supra note 64, at 480.

[97]Levin, supra note 92, at 1950 (“[T]he philosophy behind the application of models is not that the finer detail does not exist, but that it is irrelevant for producing the observed patterns.”).

[98]Id. at 1947.

[99]Id. at 1950.

[100]See id. (“[T]his suppression of detail . . . allows the demonstration that the observed ensemble behavior can be explained entirely without reference to the extra detail.”).

[101]Id. at 1960 (“That there is no single correct scale or level at which to describe a system does not mean that all scales serve equally well or that there are not scaling laws.”).

[102] John Harte, Toward a Synthesis of the Newtonian and Darwinian Worldviews, Physics Today, Oct. 2002, at 29, 31 (arguing for simple, falsifiable models that “capture the essence of the problem, but not all the details”).

[103]See id. at 32 (explaining how a simple model approach to studying lake eutrophication “cut through the confusion of [more complex] analyses,” and correctly identified phosphorus loading as the cause of the problem).

[104]See id. (“The knowledge of [mechanisms at work in specific environments] then provides the basis for formulating reliable generalizations at larger scales.”); Levin, supra note 92, at 1948 (“Once patterns are detected and described, we can seek to discover the determinants of pattern, and the mechanisms that generate and maintain those patterns. With understanding of mechanisms, one has predictive capacity that is impossible with correlations alone.”).

[105] Harte, supra note 102, at 32.

[106]Id. at 32-34 (describing three examples of detailed field data successfully fed into general models to improve predictive power).

[107]See id. at 32.

[108]Id.

[109]Id.

[110]Id.

[111]Id.

[112]Id.

[113]But cf. id. (describing the analysis and modeling done on uneutrophied lakes and on the Great Lakes that proved the eutrophication-phosphorus relationship).

[114]Id.

[115]Id.

[116]See id.

[117]Id.

[118]See id.

[119]See supra notes 19-23 and accompanying text.

[120]See Cong. Budget Office, Uncertainty in Analyzing Climate Change: Policy Implications 7 n.12 (2005), available at http://www.cbo.gov/ftpdocs/60xx/doc6061/01-24-climatechange.pdf.

[121]Id.at 8, 22.

[122]See Ronald W. Morris, Limitations of Quantitative Methods for Research on Values, 16 Can. J. Educ. 82, 82 (1991).

[123] Stephen J. Gould, The Mismeasure of Man 54 (rev. 1996).

[124]Id.; see also Thomas O. McGarity, Our Science is Sound Science and their Science is Junk Science: Science-Based Strategies for Avoiding Accountability and Responsibility for Risk-Producing Products and Activities, 52 U. Kan. L. Rev. 897, 898 (2004) (“Appeals to science are more politically saleable than appeals to economics. Talk of science raises the expectation that decisions will be determined by objective criteria, solid empirical data and rational analysis.”).

[125]See Donald Kennedy, An Epidemic of Politics, 299 Sci. 625, 625 (2003) (describing the lack of objectivity in scientific appointments under the Bush administration).

[126] The Bush Administration, for example, has been accused of “stacking” several high-profile committees, most notably the CDC’s Advisory Committee on Childhood Lead Poisoning and FDA’s Reproductive Health Drugs Advisory Committee. See Kennedy, supra note 125; Dan Ferber, Critics See a Tilt in a CDC Science Panel, 297 Sci. 1456, 1457 (2002) (analyzing the appointment of new members to the Center for Disease Control’s National Center for Environmental Health); Rick Weiss, HHS Seeks Advice to Match Bush Views, Wash. Post, Sept. 17, 2002, at A1 (describing the restructuring of multiple scientific advisory committees). See generally Michael Specter, Political Science, New Yorker, Mar. 13, 2006, at 58, 63 (contending the Bush administration imposed a social ideology into areas traditionally reserved for science).

[127] McGarity, supra note 124, at 919 (citing argument by Wendy Wagner that industry “manufactures uncertainty” about scientific studies).

[128]Id. at 922 (detailing how data and studies are picked off one-by-one rather than evaluated holistically).

[129] Tribe, Technology Assessment, supra note 11, at 627.

[130]Id.

[131] Laurence H. Tribe, Policy Science: Analysis or Ideology, 2 Phil. & Pub. Aff. 69, 97 (1972) [hereinafter Tribe, Policy Science]; Wagner, The “Bad Science” Fiction, supra note 7, at 122.

[132]See McCloskey, supra note 3, at 35; Tribe, Policy Science, supra note 131, at 76-77.

[133] McCloskey, supra note 3, at 35.

[134] A scientist seeking to understand the habitat needs of an endangered species may, for example, start with a specific case, say a particular area of critical habitat, and extrapolate from this example. In doing so, the scientist undertakes an analysis that becomes more general through a process of relaxing assumptions, which necessarily leads to a loss of realism and precision. Richard Levins, The Strategy of Model Building in Population Biology, 54 Am. Scientist 421, 422 (1966).

[135] Harte, supra note 102, at 34 (“[P]articularity and contingency, which characterize the ecological sciences, and generality and simplicity, which characterize the physical sciences, are miscible, and indeed necessary, ingredients in the quest to understand human kind’s home in the universe.”).

[136] Tribe, Technology Assessment, supra note 11, at 627.

[137]See supra Part II.C.

[138] William James, Pragmatism 88 (1975) (noting that under such circumstances, “[e]xperiences might have all been singulars, no one of them occurring twice. In such a world logic would have had no application; for kind and sameness of kind are logic’s only instruments.”).

[139] McCloskey, supra note 3, at 21. Or, as McCloskey puts it more artfully, “[a]imless comparison is bad poetry and bad engineering; one damned thing after another is bad fiction and bad history.” Id.

[140]See, e.g., David Kriebel et al., The Precautionary Principle in Environmental Science, 109 Envtl. Health Persp. 871, 872 (2001).

[141]See, e.g., Tribe, Policy Science, supra note 131, at 97.

[142]See, e.g., Wagner, The “Bad Science” Fiction, supra note 7, at 122.

[143] Gould, The Mismeasure of Man, supra note 123, at 181 (noting that John Stuart Mill once observed that it is a logical error “to believe that whatever received a name must be an entity or being, having an independent existence of its own”).

[144] Daniel Bell, The End of Ideology: On the Exhaustion of Political Ideas in the Fifties 405 (2000). Daniel Bell is one of the most eloquent critics of the ways in which social and political theories have distorted reality. In this seminal book, he argues that “ideologists are ‘terrible simplifiers'” and that “[o]ne simply turns to the ideological vending machine, and out comes the prepared formulae.” Id. Bell’s criticism of ideology mirrors those of scientific methods when he observes that “all ideology is bound to self-destruct. Ideology is a reification, a frozen mimicry of reality, a hypostatization of terms that gives false life to categories.” Id. at 444. While perhaps more extreme in his rejection of simplifying theories and models than I am, Bell does not reject them altogether; instead, he cautions that such “metaphors, at best, are only suggestive, and at worst, dissolve all history into cloudy abstractions.” Id. at 100.

[145] McCloskey, supra note 3, at 31-32 (“The common opinion of those educated in a rhetoric of linear differential equations is that large results must have large causes.” This bias “is not particular to quantification. What one admires in Marx or Tocqueville is precisely their insight into the large causes of large events.”).

[146]Id. at 36 (“Narration in a [complex] world is difficult regardless of whether the problem is numerical or not. One does not avoid nonlinearities by not knowing what they are called . . .  when variables feedback into themselves, we have an exciting story to tell, but unless we know its metaphors [i.e., simple relations] already we have no way to tell it.”).

[147] Tetlock, supra note 10, at 214. Philip Tetlock describes these tradeoffs eloquently as follows:

[W]e are continually making decisions about how to decide, about how best to mix theory-driven and imagination-driven modes of thinking. Theory-driven thinking confers the benefits of closure and parsimony but desensitizes us to nuance, complexity, contingency, and the possibility that our theory is wrong. Imagination-driven thinking sensitizes us to possible worlds that could have been but exacts a price in confusion and even incoherence.

Id.

 
