Will More, Better, Cheaper, and Faster Monitoring Improve Environmental Management?



I. Introduction

Natural resource management of any kind faces twin challenges. First is the “data problem,” in which we lack the raw observational data one might want before making a decision.[1] For example, data on fish distribution, abundance, and catch rates are simply absent for many fisheries, making it difficult to assess the status and trends of such data-poor fish stocks.[2] The second, related challenge is the “model problem,” in which we have a less-than-optimal understanding of the ways in which the world’s living resources are interconnected, including the crucial interactions between human activities and their environmental effects.[3] Developing such an understanding—i.e., a working model—of ecosystem structure and function depends upon the existence of raw data, as well as upon the secondary understanding of the data collected, and hence the second challenge is nested within the first.

As costs of gathering data rise, both challenges worsen and ultimately become insurmountable. At the extreme, where data[4] is infinitely expensive, we can know nothing new about the world’s resources, and so all decisions are made in the dark. How many fish there are and how quickly they reproduce; where natural gas deposits or sites that maximize wind energy are located; how frequent hurricanes have been historically and might be this year, and so on—we would make all decisions without any of these kinds of data.[5] We would accordingly lack the essential feedback mechanism of environmental policy, i.e., the ability to gauge the effect of management decisions on their target natural resources.

It is more difficult to say what the effects of decreased data costs on management might be. What does a world of high-quality, inexpensive data look like?[6] Answering this question depends upon the particular purposes for which we gather data in the first place. Professor Eric Biber has discussed two ends of an information-gathering continuum, ambient monitoring and compliance monitoring, distinguished by their purposes: “The monitoring of ‘ambient environmental conditions,’ i.e., the state of the environment at the local, regional, national, or global scale, contrasts with ‘compliance monitoring,’ which focuses on compliance with a legal standard or regulation.”[7] Hence, both enforcement of environmental laws, via compliance monitoring, and basic understanding of earth and ecosystem processes, via ambient monitoring, depend upon sustained efforts to gather environmental data.

Past waves of emerging technology have made both ambient and compliance monitoring more powerful and more attractive to the federal agencies most strongly associated with environmental management. Satellite tracking and other remote sensing technologies, for example, have made possible compliance monitoring of waste transport and automobile emissions,[8] while the same techniques have driven fundamental improvements in environmental sciences via ambient monitoring of climate and other earth processes.[9]

These historical advances in monitoring have tended to focus on physical and chemical measurements such as sea surface temperature, speed and direction of surface currents, atmospheric pressure patterns, and so on.[10] Biological data—on which decisions about natural resources use and management often turn—have tended to lag behind, in part because of the difficulty of gathering information about the living world.[11] However, genetic monitoring techniques now coming to the fore are likely to drive the cost of biological data downward, making biological monitoring both cheaper and more effective. Improving the resolution and cost-effectiveness of biological data speaks to both ambient and compliance monitoring; my purpose in this Article is to illustrate why this matters.

These emerging techniques use deoxyribonucleic acid (DNA) collected from the environment—water, dirt, etc., and collectively known as environmental DNA (eDNA)—to detect and perhaps quantify the species living nearby.[12] Although recovering tiny quantities of species’ invisible genetic material from surrounding habitats may sound like science fiction to a lay audience, research groups around the world are making real-world eDNA monitoring applications a reality. This includes work on applications such as detecting invasive species,[13] counting hard-to-find endangered species in the wild,[14] and detecting human pathogens in waters near the beach.[15] These advances offer an unprecedented look at the environments upon which humanity depends.

Genetic monitoring is likely to make biological monitoring much more powerful than the present labor-intensive methods of manual biological surveys—for example, for fisheries stock assessments, or for detecting invasive species in ballast water. As genetic methods pass out of research labs and into practical use—as is already beginning to happen[16]—genetic surveys will offer a less-invasive, easier alternative to many forms of data collection about the living world. And because the costs of sequencing continue to plummet, genetic methods will very likely be the cheapest means of biological sampling.

This Article therefore focuses on the implications of more, better, and cheaper environmental monitoring for U.S. law and policy. I focus the discussion on laws relevant to the marine environment, which is generally a challenge to monitor because of its size, complexity, and hostility to instrumentation,[17] but virtually all of the following discussion applies to the collection of environmental data more generally. In Part II, I briefly review the science behind monitoring as an enterprise and behind genetic monitoring tools more specifically, to the extent this science is relevant for understanding the legal and policy implications of these emerging methods. Part III then looks at environmental monitoring applications under selected U.S. statutes, addressing key shortfalls in data collection and in linking human actions to their environmental consequences. Parts IV and V put these applications in a broader context of administrative law and the political realities of environmental management.[18] Part VI concludes by linking the foregoing discussion to the larger goals of environmental management. Throughout this Article I ask whether new tools help to solve the data and model problems, and if so, whether this is likely to lead to better substantive outcomes for natural resource management.

II. A Lawyer’s Introduction to Environmental Monitoring and Genetic Monitoring

This Part provides the scientific basis for the remainder of the Article, briefly surveying the relevant techniques of environmental monitoring for a legal audience.

A. The Science of Monitoring

Monitoring has long been the neglected stepchild of basic science, with many research scientists seeing monitoring as far from the front lines of “real” research.[19] The distinction goes as follows: basic science tests hypotheses about the world by generating new data and new insight, seeking to answer both “known unknowns” and “unknown unknowns.”[20] These are the front lines, the frontiers of knowledge, with emerging results debated among members of the scientific community in the form of peer-reviewed literature in specialty journals.

Meanwhile, monitoring holds down the fort at home, collecting bits of information about what we think are “known knowns,” or at least ongoing and prosaic unknowns.[21] That is, monitoring accrues a pile of data addressing questions we already know enough to ask. The results of routine environmental monitoring—such as climate and weather data, fisheries landings, concentrations of air pollutants, etc.—often appear in grey literature produced by government agencies, or appear online in ever-growing datasets. Monitoring outputs may be raw data such as real-time sea surface temperature readings from buoys in the water,[22] or more processed information such as estimates of ecosystem conditions based upon selected indicators.[23]

One might see the difference between basic science and routine monitoring as the difference between a research meteorologist and a weatherman.[24] A research meteorologist is a scientist who studies climatic phenomena using networks of data-collecting stations, computer modeling, and the like, seeking to improve the basic understanding of atmospheric science.[25] A weatherman, by contrast, reports and contextualizes the weather observations and forecasts, which in turn are generated using techniques that research meteorologists developed.[26] The research meteorologist pushes the boundaries of the known world; the weatherman fills in blanks that we already know are there.

And so proceeds the centuries-old false dichotomy between basic and applied research. One reason the basic-versus-applied distinction breaks down in the context of environmental monitoring is the Catch-22 of monitoring: we have to know a lot about what we are monitoring—say, the state of an estuary—before we can intelligently collect information about its status and trends. One cannot simply “monitor,” just as one cannot simply “cook”; one must be cooking something in particular. Taking the example of an estuary, knowing how to measure its ecological state requires having a substantial amount of information about which variables influence the estuary’s function—for example, chlorophyll content, degree of vertical mixing, and nutrient levels—and which variables may responsibly be ignored.[27] One must know which questions to ask, which in turn requires a substantial amount of scientific groundwork before monitoring comes into play.[28]

Nevertheless, ambient monitoring remains deeply unattractive to many scientists.[29] One way of rehabilitating monitoring in the eyes of research scientists is to frame monitoring not as maintenance, but rather as hypothesis testing. In order to effectively address its aims, environmental monitoring must seek to answer a well-posed question, just as a well-designed experiment must test a hypothesis.[30] From this perspective, even truly ambient monitoring has a clear direction and a clear purpose to which the resulting data must speak.[31] Similarly, adaptive management—the oft-discussed (but rarely implemented) strategy for environmental decision making in the context of omnipresent uncertainty, which accelerates the need for environmental monitoring[32]—is best defined as an explicit test of a policy hypothesis with newly generated data.[33]

In short, good monitoring is science. And in the sense that responsible management recognizes its successes and failures by collecting information and responding to that information, good management is science. Consequently, new scientific tools have important ramifications for public management of natural resources, as those tools make it possible to know more about the world (the data problem), and about our effects on it (the model problem).

Such new tools can even bring tangible economic and social benefits. Weather and climate monitoring provide useful examples of technological improvements that have increased data resolution while decreasing cost, directly leading to better management decisions.[34] Farmers, for example, benefit substantially from long-term climate forecasting[35] that has improved in recent years as a result of better modeling and better remote sensing abilities.[36] Hurricane tracking has helped avoid billions of dollars in damages and has saved many lives.[37]

The weather and climate examples underscore an important point about the distinction between data and information. For present purposes, a dataset is the raw material from which a human might draw conclusions. Those conclusions, in turn, constitute information. The critical element here is the human, who interprets data in light of context and background understanding to develop new information. It is the information, rather than the dataset itself, that is the immediately useful product.[38] In the hurricane example above, the raw satellite data on a hurricane’s location required a human to make it into useful information.

Because good monitoring asks and answers a well-honed question, by extension, good monitoring produces information and not merely data. The irreducible human element of monitoring means that, even as new tools provide more powerful ways of finding out about the world, the familiar questions of administrative law—discretion, judgment, expertise, and so forth—will remain.[39] Most notably, the question of which questions to ask—where to point the new, powerful tools—never completely recedes.[40]

B. Introduction to Genetic Monitoring for Lawyers, and Why It Matters

Improved technology does not mean improved understanding; neither does improved understanding necessarily translate to more responsive policy.[41] However, when technological advances can both reduce the cost of gathering data and improve its resolution, administrative agencies will likely take notice.[42] I outline below the science behind eDNA processing insofar as it is necessary for understanding the legal and policy discussion that follows. Although this Section focuses on a particular method of environmental detection, note that many emerging technologies, including improved remote sensing and image processing,[43] speak to the same issues covered in the subsequent law and policy Sections.

Virtually all living things produce DNA, with nearly every cell containing a complete genetic code unique to that species and to that individual.[44] Similarly, all living things shed some amount of DNA via metabolic waste, sloughed cells, damaged tissues, and the like.[45] Hence, all living creatures give off DNA into the environments in which they live. And because the genetic signature of each species is unique and identifiable,[46] any given environment contains diagnosable traces of all of the species that live there.[47] We can then read this cast-off material like a compendium of which species live where.[48]

The past decade’s advances in DNA sequencing technology have made detecting this eDNA feasible, and research on genetic monitoring techniques is well on its way from experimental to routine bench science. For the purposes of this Article, I define genetic monitoring as the suite of tools that use DNA from a sampled environment to make inferences about the species living nearby.[49] The aims of genetic monitoring are detecting, mapping, and quantifying the abundances of species of management concern.[50] These aims distinguish genetic monitoring from related DNA-based techniques for discovering unknown biological diversity, such as documenting the bacteria living in the nooks and crannies of the human body[51] or deep-sea volcanoes.[52]

The elements that make this technological feat possible are: first, polymerase chain reaction (PCR); second, small pieces of DNA flanking the target regions of interest; and third, high-throughput DNA sequencing. PCR is the simple, powerful process of making more copies of a target region of DNA, also known as “amplification,” through a series of heating and cooling cycles with the help of a heat-stable enzyme.[53] Like a photocopier, PCR can make duplicate copies of a stretch of genetic code—some combination of the bases adenine, guanine, cytosine, and thymine—creating enough copies to make sequencing or subsequent analysis possible.[54] Unlike a photocopier, however, PCR makes millions or billions of highly accurate reproductions.
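For the technically inclined reader, the photocopier analogy can be made concrete with a back-of-the-envelope calculation: under idealized conditions, each heating-and-cooling cycle doubles the number of copies of the target region, so copy number grows exponentially with the number of cycles. The sketch below is illustrative only—the function name and the efficiency parameter are invented for this example, and real PCR reactions run at less than perfect efficiency and eventually plateau as reagents are consumed.

```python
def pcr_copies(starting_copies: int, cycles: int, efficiency: float = 1.0) -> int:
    """Estimate copy number after PCR, assuming a fraction `efficiency`
    of templates is duplicated each thermal cycle (1.0 = perfect doubling)."""
    copies = float(starting_copies)
    for _ in range(cycles):
        copies *= 1 + efficiency
    return int(copies)

# A single template molecule after a typical 30-cycle run:
print(pcr_copies(1, 30))  # 2**30 = 1,073,741,824 — over a billion copies
```

This exponential growth is why PCR can take the vanishingly small quantities of DNA recovered from a water sample and produce enough material for sequencing.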

Second on the list of elements for eDNA analysis are short pieces of DNA that help begin the PCR reaction, acting as seeds for the synthesis of the copied fragment of genetic material, and hence known as “primers.” Suppose you were interested in a particular human gene such as BRCA1, mutations in which may lead to a hereditary form of breast cancer.[55] To amplify this gene via PCR, you would use, or design, primers based upon the DNA sequences flanking the BRCA1 gene region.[56] The PCR reaction would then make copies of everything in between the two primer regions—that is, copies of the gene of interest.[57]

In the context of eDNA—in which DNA fragments from thousands or millions of different species living in a common environment are mixed together—primers also function as fishing hooks, pulling out fragments from only the species of interest.[58] To give three examples, a researcher can design primers to amplify strictly animal DNA—thus excluding bacterial, fungal, plant, or other DNA common in many environments including the ocean—or strictly fish DNA, or only the DNA of a single species of fish such as bluefin tuna.[59] By calibrating the specificity of the primers, a researcher can both isolate and amplify genes of interest from target species to meet his or her needs.
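The fishing-hook function of primers can be cartooned in a few lines of code: from a pool of mixed fragments, only those carrying both primer sites are “amplified,” and the product is the region between (and including) the primer sites. Everything here is invented for illustration—the sequences and primer sites are made up, and real primer binding involves complementary strands, melting temperatures, and tolerance for mismatches that this sketch ignores.

```python
def amplify(fragments, forward_primer, reverse_primer):
    """Return the primer-flanked region from each fragment that carries
    both primer sites; fragments lacking either site are ignored."""
    products = []
    for frag in fragments:
        start = frag.find(forward_primer)
        end = frag.find(reverse_primer)
        if start != -1 and end > start:
            # PCR copies everything between the two primer sites, inclusive
            products.append(frag[start:end + len(reverse_primer)])
    return products

pool = [
    "TTAGGCATCGTACGGA",      # lacks the primer sites -> not amplified
    "AACGTACGTTGGCCATAGCA",  # carries both sites -> amplified
]
print(amplify(pool, "ACGT", "AGCA"))
```

Only the second fragment is returned: the mixed pool goes in, and copies of the targeted region alone come out, which is exactly the selectivity that lets researchers pull a single species’ signal from a bottle of seawater.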

The final element of eDNA detection is the high-throughput sequencing that has become available in recent years. Once a researcher has selected and amplified the target DNA using PCR and specialized primers, the result is a mixture of synthetic gene fragments reflecting the mix of genetic material in the sampled environment.[60] If the researcher used primers specific to animals, for example, the result would not contain any bacteria or fungi or plants, but would still be a mixture of animal DNA fragments. Prior to about five years ago, the presence of such a mixture—rather than a pure sample of a single creature’s genetic material—was a significant hurdle to analysis, requiring labor-intensive lab work to separate out unique fragments and read them individually.[61] It was impossible to read a jumble of different sequences simultaneously, akin to reading all the pages of a book at once. However, the past five years have seen commercial high-throughput sequencers become widely available, making it possible to generate millions of DNA sequences, mixed or not, for a cost that continues to plummet.[62]

This cost curve bears noting because it contributes substantially to eDNA’s attractiveness as a tool for monitoring. Figure 1 shows the cost of DNA sequencing in dollars per megabase (or one million base pairs (bp), the letters of the genetic code). Most current work on eDNA for animal species uses fragments of DNA that are 300bp or shorter.[63] At 300bp per sequence, one megabase of DNA would provide more than 3000 sequences.[64] As of 2014, these sequences would cost a total of five cents, which is down from more than $5,292 in September 2001.[65] As DNA sequencing continues to become cheaper, obtaining genetic information from the environment—and thereby tracking any number of policy-relevant species or ecosystem variables—becomes increasingly tractable. The emergence of cheap, high-throughput sequencing has allowed researchers to survey bacteria, Archaea, and fungi for the past decade, uncovering hidden and unnamed diversity in a wide variety of environments.[66] Only over the past three years has similar work on animals risen to prominence,[67] and it is this work that is likely to have an impact on environmental monitoring, regulation, and management.
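The arithmetic behind these figures is worth making explicit. Using only the numbers quoted in the text—300 bp reads and the per-megabase prices for September 2001 and 2014—the cost per individual sequence falls from roughly a dollar and a half to a tiny fraction of a cent:

```python
# Back-of-the-envelope restatement of the cost figures quoted in the text.
BP_PER_MEGABASE = 1_000_000
READ_LENGTH_BP = 300  # typical upper bound for animal eDNA fragments

reads_per_megabase = BP_PER_MEGABASE // READ_LENGTH_BP  # 3333: "more than 3000"
cost_2001 = 5292.00 / reads_per_megabase  # dollars per sequence, Sept 2001
cost_2014 = 0.05 / reads_per_megabase     # dollars per sequence, 2014

print(reads_per_megabase)   # 3333 sequences per megabase
print(round(cost_2001, 2))  # about $1.59 per 300 bp sequence in 2001
print(cost_2014)            # about $0.000015 per sequence in 2014
```

A roughly 100,000-fold price drop in thirteen years is what makes routine, repeated environmental sampling thinkable as a regulatory tool rather than a research luxury.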

Important details remain to be worked out—for example, how much DNA a given individual sheds, how quickly the genetic material degrades, the relationship between number of DNA molecules detected and the number of individual animals or plants present, and so on. But the trajectory of genetic tools as a monitoring technique is clear, and such improved access to environmental information has important implications for natural resource management, pollution control, environmental impact assessment, and a host of related environmental policy issues.[68]

The primary federal agencies that conduct life-science work—National Oceanic and Atmospheric Administration (NOAA)/National Marine Fisheries Service (NMFS),[69] U.S. Fish and Wildlife Service (FWS),[70] U.S. Forest Service (USFS),[71] U.S. Geological Survey (USGS),[72] National Aeronautics and Space Administration (NASA),[73] and others—each already run or contract with genetics laboratories in pursuit of agency-specific goals. The emerging genetic monitoring tools are therefore already within practical grasp of the most relevant agencies. Moreover, NMFS and FWS already routinely employ genetics data to aid regulatory decisions, such as listing determinations under the Endangered Species Act (ESA).[74]

III. Applications of Genetic Monitoring in U.S. Federal Environmental Law

The leap from basic science to application happens in the details: Which provisions of which laws will benefit from a new technology? Where are the legal hooks that require, allow, or prohibit the use of emerging technologies to make natural resources decisions? We can map these specific questions onto the larger-scale challenges of responsible natural resource management—not enough relevant information (the data problem) and insufficient means of assessing impacts of human activities (the model problem)—taking on each in turn.

A. Solving the Data Problem: Three Genetic Monitoring Applications Under U.S. Federal Law

This Part sets out three specific examples in which genetic monitoring might provide critical data to inform environmental decisions.

1. Genetic Monitoring, Public Health, and the Clean Water Act

The much-noted goals of the Clean Water Act (CWA)[75] include ensuring fishable, swimmable waters in the United States and along its coast.[76] The teeth of the CWA, however, lie in its point source provisions and, to a lesser extent, in the water quality standards that define state regulatory floors for the condition of a state’s water bodies.[77] The idea is to regulate point sources through source-specific, technology-based National Pollution Discharge Elimination System (NPDES) permits.[78] Where the quality of receiving waters still falls short of baseline water quality standards even after those technology-based controls rein in the point sources, states must then consider the combined effects of all pollutants on the receiving waters, parceling out permissible point and nonpoint source allocations via Total Maximum Daily Loads (TMDLs).[79] Both enforcement of point source permit conditions and the measurements necessary to develop TMDLs require a substantial amount of water quality data for each of the nation’s water bodies.[80]

By and large, the substantive aspects of CWA regulation focus on chemical and physical—rather than biological—parameters as measurable aspects of water quality.[81] These parameters include temperature, pH (acidity), dissolved oxygen, lead, and so on.[82] But much of what we mean by “clean” water is really about biology, rather than chemical or physical attributes of the water habitat. For example, we think of clean water as being without high concentrations of bacteria or parasites, rather than water of a particular temperature.[83] Most vividly, “fishable” waters require fish as well as the plants and animals that those fish eat and otherwise require.[84] Understanding the status and trends of each of these elements requires biological measurements that, taken together, help describe the quality of a water body.

The problem is that gathering biological data is generally more difficult than gathering physical or chemical data. Counting fish, marine mammals, small parasites, or any other biological parameter of interest requires a degree of manual human labor.[85] By contrast, automated equipment routinely takes temperature, dissolved oxygen, and other physical and chemical measurements.[86] Making biological measurement practical therefore requires, at a minimum, a feasible way of gathering and analyzing such data.

One present application of genetic monitoring illustrates the power of these techniques to drive down costs and increase feasibility of near real-time biological monitoring in coastal waters. To date, this effort has focused on public health, although in principle it can be extended to any species of interest. The idea is this: Blooms of some bacteria and harmful algae in near-shore waters make beachgoers sick and result in many hospital visits each year.[87] Beach closures and advisories routinely top 20,000 per year in the United States alone, largely due to levels of fecal indicator bacteria that exceed water quality standards.[88]

The regulatory response to these blooms—closing beaches—is limited by detection capability and time because the standard detection methods require samples to be cultured in the lab for eighteen to ninety-six hours[89] before confirming the presence of harmful microbes. By the time a test comes back positive, the offending beach should already have been closed for days, and by the time local authorities finally do close it, the contamination is often gone.[90] Constant monitoring and sampling is prohibitively expensive, and in any case, would not address the time-lag problem that culturing samples entails.[91]

Genetic monitoring solves the problem by routinizing sampling and by near real-time detection of harmful target species.[92] This latter fact relies upon selective amplification and detection of target species’ DNA rather than in-lab culturing; where harmful species’ DNA is in high abundance, the harmful species are also in high abundance.[93] The result is regulatorily relevant information on which state agencies could act immediately, saving hospital visits and generally safeguarding public health.[94]

Aside from the substantial public health benefits associated with improved beach monitoring, the CWA provides a readily applicable legal context for genetic monitoring in the form of Section 303(d) lists of impaired waters,[95] the mechanism by which states must take stock of the sum total of point and nonpoint sources causing a water body to fall below regulatory standards.[96] Routine genetic monitoring would allow a state or other interested parties to monitor water quality in near real-time. Watchdog groups have an incentive to adopt such technology to the extent it makes meeting their organizational goals easier and cheaper. State agencies have the same incentive to the extent genetic monitoring tools and other technologies lower their costs and perhaps allow them to remove water bodies from the section 303(d) list by demonstrating they meet or exceed prevailing standards.[97] Similarly, cheaper water monitoring methods would be attractive for NPDES permittees, easing their path to meeting permit requirements.[98] In the data-intensive context of CWA permitting and pollution allocation under TMDLs, genetic monitoring methods could therefore represent, in the foreseeable future, a quantum leap over present techniques.

2. Improved Stock Assessment Under the Magnuson–Stevens Act

Fisheries stock assessments are a second set of challenges posed by expensive biological data. Under the Magnuson–Stevens Act (MSA),[99] regional Fisheries Management Councils generate a Fishery Management Plan (Plan) for major commercial fisheries.[100] Each Plan must include a stock assessment or assessments for the species included in the Plan, an estimate of the status and trend of each exploited fish population.[101] Thus, the MSA requires biological data as a feedback mechanism for calibrating management decisions that go toward determining the allowable level of catch from year to year.

However, despite the heroic efforts and quantitative prowess of fisheries scientists, stock assessment remains as much art as science. Part of the challenge of assessing and predicting fish populations is that stock levels are to some extent inherently unpredictable, changing according to climatic processes as well as annual spawning, mortality, and growth rates.[102] But a large part of the stock assessment challenge is simply gathering data on how many fish are out there; the ocean is a big place, fish are mobile and patchily distributed, and reliable estimates of population size or biomass are accordingly difficult to come by and generally involve expensive ship-borne surveys to obtain fisheries-independent data.[103] Developing cheaper, more reliable ways of estimating fish populations would reduce the uncertainty in stock assessments, which are, after all, regulatory decisions impacting billions of dollars annually,[104] not to mention ecosystems that cover a majority of the nation’s territory.[105]

Rockfish provide one example of the potential for improved stock assessments.[106] Rockfish (genus Sebastes) include dozens of species, many of which live along the Pacific coast of North America.[107] Sixty-four rockfish species are included in the Pacific region’s commercial groundfish fishery,[108] and accordingly, these commercially exploited species require a Plan and therefore a stock assessment.

Faced with a mandate to assess the status and trends of individual fish stocks, NMFS relies on a handful of sampling techniques to provide the necessary population data. The technique relevant to rockfish and other species in the groundfish fishery is bottom trawling, in which a boat drags a large, weighted net along the seafloor, catching whatever fish species are living on or around the bottom.[109] Trawling depends upon a soft-sediment bottom, as rocky habitat will catch or destroy the trawl.[110]

The problem is that, as their name implies, rockfish often live near rocky-bottomed habitat, making it difficult for trawl surveys to sample rockfish populations.[111] Hence, obtaining reliable rockfish population estimates is difficult. Genetic monitoring could be useful here, sampling where trawls cannot, by using water samples from rocky habitats.[112] Automated water sampling and genetic monitoring could create an integrated and continuous record of the rise and fall of populations of each target species, and could do so by noninvasive means.

Genetic methods have already been used to document fish species’ presence in the marine environment,[113] and to assess the abundance of a target species as it varies over space.[114] Refining and applying these techniques to improve fisheries stock assessments could both improve fisheries management plans and better shield the Fisheries Management Councils from lawsuits alleging arbitrary and capricious decision making.[115] Moreover, because genetic techniques can collect noninvasive information about individual fish species, such advanced monitoring could reveal spatial or temporal differences between exploitable and ESA-listed species that could ultimately reduce or avoid the take of listed species.

3. Tracking Invasives and Monitoring Endangered Species

Two other straightforward applications for genetic monitoring are noteworthy, as they are both examples of data-hungry management and are proven uses of genetic detection techniques. The first is the use of DNA to identify and track invasive species, a critical management issue for both ecological and financial reasons. Invasive species are non-native plants, animals, or fungi introduced to the United States from elsewhere and that cause, or are likely to cause, ecological or economic harm.[116] Such species cause many billions of dollars in economic damage annually in the United States alone.[117] Early detection and careful tracking of these species—which are imported from other ecosystems and which can, in some cases, come to dominate the invaded environment—have the potential to save vast sums of economic and environmental resources.[118]

Scientists and fisheries managers are already using genetic monitoring techniques to detect the spread of invasive carp species—a handful of freshwater fish species, in this case from Asia—in the Great Lakes Basin,[119] in part because these methods are far more sensitive than manual censuses.[120] In the not-too-distant future, one can easily imagine applying similar techniques at ports and border crossings, sampling for a list of harmful invasive species in near real-time by looking for DNA from the target species. Such techniques might be particularly helpful where the target species are too small to detect by eye, or too similar to native species to reliably discern, and would help implement the variety of anti-invasive-species laws presently on the books.[121] These include, at least, the Nonindigenous Aquatic Nuisance Prevention and Control Act,[122] the injurious wildlife provisions of the Lacey Act,[123] the Brown Tree Snake Control and Eradication Act of 2004,[124] the Plant Protection Act,[125] and the Alien Species Prevention and Enforcement Act of 1992,[126] as well as Executive Order 13,112.[127]

The second existing and emerging use for genetic monitoring tools is to detect endangered species. These species are by nature rare—which is why they are endangered—and thus are difficult to count. In Europe, researchers have shown genetic methods to be effective in detecting selected species from rivers, streams, and lakes.[128] Those methods can easily be extended to the marine environment.[129] Genetic methods are also attractive for surveying rare species in terrestrial environments.[130]

The ESA is a blunt instrument, with the occurrence of a listed species triggering significant limitations on property uses for both private and public actors.[131] Consequently, the incentives to develop and use improved monitoring tools lie largely with regulators, rather than with regulated parties. Federal wildlife agencies might use high-resolution genetic monitoring—as others have done with satellite tags[132]—to track species’ populations by noninvasive means, for example, to identify occupied habitat relevant to a listed species’ critical habitat designation or to speed consultation with other federal agencies.[133] Cheaper data would allow agencies greater budgetary flexibility by lowering the costs of nondiscretionary duties, and could permit agencies to be proactive producers of information where budgets and incentives allow.

However, those whose actions are limited by ESA listings may also have an interest in generating cheaper data. For instance, one can imagine the Navy remotely monitoring the locations of high concentrations of listed marine mammals, perhaps reducing the need for on-deck observers during training exercises that could impact those species. Parties may similarly have an incentive to develop more cost-effective means of complying with monitoring requirements under an Incidental Take Permit issued under the ESA, or else to argue that a species has recovered and should be delisted.[134]

B. Solving the Model Problem: Measuring Human Impacts

A more ambitious agenda for advanced monitoring tools would speak to the model problem, in which we seek a deeper understanding of the ways in which the world works and in which human activities impact that world.[135] Fulfilling such an agenda depends upon cheap and powerful methods for biological quantification in concert with a degree of automation that does not yet exist, but which will likely arise in the coming years.

Picking up the fisheries example discussed above, NMFS does single-species stock assessments,[136] consistent with its requirements under the MSA.[137] However, the status and population trend of one species ineluctably impact those of other species: In an oversimplified example, if species A eats species B, the number of either species humans catch will impact the other species. This makes it impossible to accurately assess the effects of human activities—here, fishing—on real-world stocks with single-species assessments, particularly where many species interact[138]—hence the logic behind ecosystem-based management.[139] But ecosystem-based management depends entirely upon a working knowledge of the interactions of the ecosystem’s constituent parts, most notably species.[140] And an ecosystem has so many moving, interacting parts that it is impossible with current technologies to track the ebbs and flows of so many different populations of different species.[141] If cheaper and higher-resolution methods, such as genetic monitoring, eventually make it possible to routinely detect and quantify all major species in an ecosystem, disentangling these species interactions would become more tractable.
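
The species-A-eats-species-B point can be made concrete with a toy model. The sketch below is not any actual NMFS assessment model; it is a textbook-style predator-prey equilibrium with invented parameters, showing only that harvesting the predator shifts the prey’s equilibrium, a change that a single-species assessment of the prey would misread.

```python
# Toy predator-prey model (invented parameters; not an NMFS model).
# Prey B:     dB/dt = R*B*(1 - B/K) - E*A*B
# Predator A: dA/dt = C*E*A*B - (M + h)*A, with h the harvest rate on A.

R, K = 0.3, 1000.0   # prey growth rate and carrying capacity
E, C = 0.001, 1.0    # predator-prey encounter rate and conversion efficiency
M = 0.25             # predator natural mortality

def equilibrium(harvest_a=0.0):
    """Coexistence equilibrium (A*, B*) as a function of predator harvest."""
    b_star = (M + harvest_a) / (C * E)      # prey level that sustains the predator
    a_star = (R / E) * (1.0 - b_star / K)   # predators the prey surplus supports
    return a_star, b_star

a0, b0 = equilibrium(0.0)    # no fishing on predator A
a1, b1 = equilibrium(0.125)  # fishing predator A
# Catching A leaves more B (b1 > b0): prey abundance shifts for reasons a
# single-species assessment of B alone cannot see.
```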

Part III.B.1 details one distant-future application of high-resolution genetic monitoring that speaks to the interactions among data availability, model understanding, and environmental management.

1. Cumulative Impacts Under the National Environmental Policy Act

Any “major Federal actions significantly affecting the quality of the human environment” must undergo an impact assessment under the National Environmental Policy Act (NEPA).[142] Sixteen U.S. states also have equivalent laws,[143] such as the California Environmental Quality Act[144] and New York’s State Environmental Quality Review Act.[145] Even in states without analogous laws, tradeoffs between environment and economics are subject to case-by-case analysis for human health effects, nuisance, and other conflicts that arise when one party’s actions impact the use or value of another’s property, or of public property.[146] Many other countries have similar rules requiring environmental impact analysis,[147] either in wholesale (as in the case of NEPA and similar statutes)[148] or in retail fashion (as in the case of conflict resolution stemming from individual property rights). These assessments are limited by the ability to quantify both discrete and cumulative environmental impacts resulting from human activities.[149]

In part, the difficulty arises out of the messiness of the living world—ecosystems are emergent properties of billions of interactions among their constituent living and nonliving parts, and so there is no one way to measure an ecosystem or our effects on it.[150] Moreover, the living parts of ecosystems—species—interact differently across geographic regions, may have substantial natural temporal variability, show different patterns over different spatial scales, and may or may not appear to vary when compared to different baselines.[151] One result is that there is no standardized method for conducting environmental impacts assessments.[152]

The two core aspects of environmental assessment are defining an appropriate baseline and measuring an ecosystem’s departures from it.[153] Cumulative impacts analysis further requires information about the effects of previous or related human activities relative to that same baseline.[154] However, the scientific and legal practices of baseline selection for cumulative impacts measurement differ wildly. Scientific practice quite reasonably dictates use of a historical baseline; that is, the state of an ecosystem at some identified time in the past.[155] Legal practice, however, requires a baseline of current condition;[156] that is, at the time of the proposed action triggering environmental review.[157] In the legal world, the clock is constantly reset.

Nevertheless, even if parties could agree on a sensible baseline comparison, it would remain unclear how best to measure impacts relative to those baselines. Some notable efforts to catalogue the accumulation of human impacts have focused on counting stressors rather than effects themselves, factoring in a measure of ecosystem sensitivity to those stressors to estimate impact.[158] Effects-based assessments are more demanding, requiring extensive—and often nonexistent—baseline data against which to compare, as well as a reliable understanding of ecosystem processes in order to estimate human impacts on those processes.[159]

Genetic monitoring technologies may alleviate some of these difficulties in the not-too-distant future, and could prove a valuable tool for understanding the linkages between human and ecological systems. Any meaningful environmental effect of human activity will be reflected in the species composition and relative abundance that genetic techniques aim to document.[160] Measuring shifts in ecosystem composition and structure by means of genetic monitoring, therefore, might offer a way of quantifying environmental change, and of assessing just how biological conditions in impacted areas are different from those of less impacted areas. The permissible limits of ecological change would, of course, remain a question of law rather than of science. But determining whether the magnitude of the difference exceeded those permissible limits would become newly tractable. Assessing the impacts of the Deepwater Horizon oil spill, for example, has been a daunting task due to the large geographic extent of the problem, the many and subtle impacts, and the difficulty of finding adequate baseline data.[161] Improved technology for documenting species, trophic levels, and changes to ecosystem function would improve similar assessments in the future, and could add substance to calculations of environmental and economic damages.
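
One standard way to put a number on how different an impacted community is from a baseline is a community dissimilarity index, such as Bray-Curtis, computed over species-abundance profiles of the kind genetic surveys produce. The sketch below uses invented species counts purely for illustration; a real assessment would also confront sampling error and natural variability.

```python
# Bray-Curtis dissimilarity between two {species: count} profiles:
# 0.0 means identical composition; 1.0 means no species overlap at all.
# Species and counts below are invented for illustration.

def bray_curtis(site_a, site_b):
    species = set(site_a) | set(site_b)
    shared = sum(min(site_a.get(s, 0), site_b.get(s, 0)) for s in species)
    total = sum(site_a.values()) + sum(site_b.values())
    return 1.0 - (2.0 * shared) / total

baseline = {"shrimp": 40, "menhaden": 30, "blue crab": 30}
impacted = {"shrimp": 10, "menhaden": 5, "blue crab": 15, "polychaete": 40}

score = bray_curtis(baseline, impacted)  # roughly 0.65 for these counts
```

Whether a score of that magnitude exceeds a permissible limit would remain, as noted above, a question of law rather than of science.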

Some of the largest drivers of environmental change are diffuse—such as carbon dioxide emissions, principally from the energy and transportation sectors—and are associated with large-scale climate shifts and changes in ocean temperature and chemistry.[162] In the case of these drivers, tracing human actions to biophysical impacts at particular times and places is not a matter of increasing the precision of our measurements, but rather of redefining causation for purposes of environmental impact analysis. For example, must a petroleum company report the effects of burning the oil it is in the business of extracting, or merely the effects due to drilling for that oil? As a set of tools, genetic techniques do not speak directly to this causation problem, but by making a census of community composition cheaper, faster, and more sensitive, these techniques could help define the trends necessary to assess the overall effect of these large-scale drivers of ecosystem change.[163]

Part III focused on a small sample of practical applications of a suite of emerging tools for monitoring the living world. Part IV explains why such developments matter, examining the possible ramifications of more accessible data for agency behavior and the management of living resources, as well as the broader implications of more, better, cheaper, and faster data.

IV. Implications of More, Better, Cheaper, and Faster Monitoring for Agency Behavior

Given the foregoing discussion, genetic monitoring is poised to become better, faster, cheaper, and more sensitive than traditional monitoring techniques, with many applications under U.S. law. As the cost of data decreases, both the data problem and the model problem ease. Genetic monitoring thus makes it possible to find out more about the world, about more of the world, or both.

A good analogy for this technological shift—or any technological improvement that has similar effects—is that of a better microscope: genetic techniques give us a higher-resolution view of what we choose to look at. But what do we choose to look at? A great tool remains just a tool; the real power is in its use. Someone still needs to decide when and where to point this new, more efficient and powerful microscope. And as Part II briefly addressed above, converting raw data to useful information also requires a human to put data in context, adding additional decision points to the data-to-information pipeline.[164]

This Part explores some of the implications of improved monitoring technology for domestic environmental policy. Public agencies have nondiscretionary duties, including setting limits on, and assessing levels of, water and air pollution; monitoring the status of endangered species or of metrics of ecosystem state; and so on.[165] The mandatory nature of these jobs creates built-in incentives to employ efficient means of carrying them out while meeting other competing demands. In addition, some agencies have ongoing and long-term research agendas that, while discretionary, nevertheless represent commitments over multiple budget cycles.[166] If improved tools make it possible for agencies to fulfill their duties in a more cost-effective way, important downstream effects—including budgetary flexibility, calculation of the political value of information, and impacts to substantive decisions—likely follow.

Private parties, by contrast, have no relevant nondiscretionary duties. Cheaper and more powerful tools are likely to provide nongovernmental watchdog organizations, for example, with greater power to assess the actions of governmental or private-party actors, but probably do not fundamentally change the incentives driving the watchdog groups. A more powerful microscope will likely empower natural resources interest groups of all political stripes to continue doing what they are doing now, only more of it. I therefore focus on government agencies charged with natural resources management or research agendas, as these are the entities most likely to exhibit the qualitative effects that new monitoring techniques could bring.

A. Cheaper Data Means More Budgetary Flexibility

Generalizing about the effects of emerging technology on the budgets of a diverse suite of federal and state agencies is perhaps unwise. Nevertheless, it seems safe to suggest that saving time and money in carrying out necessary tasks would be welcome within any institution, administrative agencies included. For purposes of this discussion, I assume that agencies have some degree of rebudgeting authority within or between their existing programs, and that the agencies have the ability and incentive to obligate all of the money they are allotted in a given time period. Under such a scenario, a decrease in the cost of doing business frees up resources to be spent elsewhere within the agency.[167] Because agency mandates are unlikely to reach the granular level of line-item budgetary decisions, a relevant authority within the agency likely makes the decision about how to rebudget the cost savings. As a result, such savings increase budgetary discretion at the level of agency field offices. Put differently, cheaper data means more wiggle room.

In fairness, the amount of wiggle room genetic monitoring might buy is likely to be small relative to the total field office budgets. For example, NMFS spends hundreds of millions of dollars each year on fisheries research and management[168] and protected species research and management[169] alone. Some fraction of this money is allocated to monitoring and primary research that the emerging genetic techniques could make more efficient,[170] but the total savings is likely to be relatively small. Nevertheless, even half of one percent of $179.9 million—NOAA’s FY2014 request for Fisheries Research and Management Programs[171]—is $899,500, money that dwarfs the research budgets of most academic biological research operations.[172] The ability to rebudget funds of such aggregate magnitude could quickly translate to real results in an applied research context.
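
The arithmetic is easy to verify; a two-line check using the figures in the text:

```python
# Half of one percent of NOAA's $179.9 million FY2014 fisheries request
# (figures from the text above).
savings = 0.005 * 179.9e6
print(f"${savings:,.0f}")  # prints $899,500
```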

Increased discretion in the realm of environmental monitoring has two probable effects. First, the standard depth-versus-breadth tradeoff becomes less severe. Rather than counting ESA-listed salmon at more locations or over a longer period of time, for example, NMFS could do both. Easing this tradeoff generates more data—though not necessarily more understanding—for the subjects that the agency decides to focus on. The agency might also decide that, rather than generating more primary data, it would prefer to invest in an analyst to turn existing data into usable information. That is, budgetary discretion means the agency could decide to tackle the data problem (by producing more primary data) or the model problem (by greater analysis of existing data) of environmental management.

The second likely effect of budgetary discretion goes to the area of focus: The agency still must decide where to point the more powerful microscope.[173] Field-level career staffers probably make many such decisions, being the day-to-day operations personnel and simply being more knowledgeable about the scientific tasks at hand. But at the margins, one can imagine political appointees influencing the monitoring agendas of federal and state agencies.[174] In such a scenario, cheaper data and greater agency discretion may increase the scientific “wobble” that can attend a change of political administration following an election.[175] The result would be a systematic change in what we do and do not learn about the world around us, animated by top-down changes in political–administrative worldview.

B. Data Has Political Value

Another issue with a more powerful microscope is the potentially inconvenient data that the microscope might reveal. New findings tend to decrease agency discretion in the subject areas in which the agency chooses to collect data, while the agency retains more discretion in subject areas in which it collects little or no data.[176] As Professor Biber points out:

An agency might be reluctant to monitor not because it creates a specific, clear conflict with a current project, but because monitoring data might prove troublesome in the future. . . . Monitoring data are to some extent unpredictable or uncontrollable and might undermine an agency’s decision in the future.

The lack of information, on the other hand, generally gives an agency a tremendous amount of political or legal leeway.[177]

Ignorance maximizes discretion, and therefore has significant political value. After all, data collected by public agencies becomes part of an administrative record, publicly discoverable, and subject to second-guessing by interested parties.[178]

We might therefore expect incentives to differ between political appointees and career staff within an administrative agency.[179] While appointees might be more concerned with the possibility of binding their own hands through increased data collection, career staff is likely to be more focused on efficiently carrying out nondiscretionary duties.[180] These differing views may, in turn, lead to careerists being more willing to adopt emerging monitoring technology than their political counterparts, particularly where low-level budgetary decisions (in careerists’ hands) are at stake.

Such internal tensions in agency incentives are nothing new. As is the case with external watchdog organizations, agencies’ ready access to a deeper well of data points probably amplifies incentives that already exist, making potential conflicts more apparent. In this case, external safeguards on agency behavior—such as statutory Best Available Science mandates[181] or the Administrative Procedure Act’s[182] arbitrary and capricious standard[183]—are particularly valuable backstops to arrest any agency slide toward nakedly impermissible decision-making mechanisms.

C. Impacts to Substantive Agency Decisions

Finally, more, better, and cheaper data is necessary—but not sufficient—for creating better, more data-driven natural resources policy. For example, NMFS would very much like to know how many individual fish exist in each stock, so as to make more accurate assessments of how many fish we might be able to harvest in coming years. There is considerable political incentive for both career staff and appointees to better forecast fisheries stock abundances.[184] Such improvements will require the agency to address both the data problem (by diversifying the manner in which it collects raw data about fish populations) and the model problem (by improving its computer simulations of ecological interactions in the ocean). One can point to many such examples in environmental management, in which better information has legitimately yielded better decisions.[185]

Using information about the effects of one’s past actions to make future decisions is, of course, simply a rational way to do business. This is true whether the business in question is a coffee shop, an economy, or an ecosystem.[186] But in the case of environmental management, the past twenty years have seen an effort to formalize these kinds of information feedback loops into regulation within the framework of adaptive management.[187] The idea is that management can be and should be like science: explicitly testing hypotheses about environmental outcomes through structured trial, error, and iteration.[188]

New technology to generate more, better, and cheaper data could facilitate the kinds of rapid adaptive steps that public agencies often claim to seek,[189] by creating tighter feedback loops between management decisions and their effects. For agencies to reap these benefits, however, they will need to effectively deploy the new technology by asking the right questions, and pointing the improved microscope at a part of the world that will faithfully reflect the effects of their management decisions.[190] To do so successfully would be to turn raw data into useful information.

V. So What? Turning Data into Information

Of course, when you have a hammer, everything looks like a nail.[191] Genetic monitoring is a new, potentially powerful tool, but it is just one such tool; science will continue to develop new tools and better, cheaper data. Hard drives are littered with data that has never yielded understanding, let alone influenced a policy change. Genetics in particular may be susceptible to a poor data-to-information ratio.[192] The real trick will be to use existing and emerging tools to produce information so that we can make decisions that better comport with reality—that is, to turn data into information.[193]

One can imagine genetic monitoring (or analogous new tools) creating real value in at least three ways. Most obviously, the most salient data is that which answers a question that resource managers are already asking.[194] In the context of administrative agencies, such “questions” include the agencies’ nondiscretionary duties.[195] In effect, these are questions that agencies are required by statute or regulation to ask. “Is a protected species present?”[196] “Is this stock depleted?”[197] “How will this proposed federal action impact the surrounding environment?”[198] By making it cheaper and easier for agencies to carry out nondiscretionary duties, technological advances—including genetic monitoring—can avoid the danger of simply accumulating data that fails to speak to live questions. Further, aligning technology with existing agency agendas—to which agencies already dedicate staff time—maximizes the chances of efficient use.

Second, emerging techniques, like genetic monitoring, can speak to questions agencies want to ask, but cannot at present, due to some data or resource constraint. Many agencies have competitive processes that set or influence at least part of their research agendas. These may be intramural competitions—for example, an annual grant to develop improved methods of fisheries stock assessments at NMFS[199]—or extramural public grants, such as in the case of the Department of Energy.[200] Lower costs and higher-resolution data would presumably increase the breadth or depth of funded projects each year, ensuring that more worthy projects get funded.[201]

Third and finally, as the cost of data falls, it becomes possible to generate questions that agencies do not presently know enough to ask, but that they should be asking. This function—filling in the unknown unknowns and uncovering hidden aspects of the world—highlights the way in which work on monitoring technology can erode the distinction between basic and applied science. Importantly, as the frontiers of monitoring technology advance, scientists—including agency scientists—uncover more about the nature of the links between social and ecological systems.

Not all discoveries about the world are policy-relevant, of course. Policy-relevant information may arise when a scientist proceeds far enough to disseminate important findings to a policy audience that is sufficiently motivated to act upon that information. Examples include the observation of ocean acidification and sea level rise, discoveries of new viruses and other threats to human health, or more generally, any new discovery that fits into a category that demands action because it is sufficiently frightening or tantalizing, or because it fits into a preexisting area of policy concern—e.g., endangered species or harmful invasive species. In such cases, new data sheds light on the world, and the scientist must play the translator or broker, turning data into knowledge. At an institutional level, this kind of discovery-and-brokerage is a core function of science sub-agencies, including USGS,[202] FWS,[203] NMFS,[204] and NASA.[205]

New tools do not necessarily address information limitation. But where those tools—genetic monitoring among them, though hardly limited to it—can speak to existing and perceived needs, it seems likely they will be helpful. This same logic applies to monitoring in general: monitoring must speak to particular aims.

VI. Conclusion

Before concluding that improved technology and decreasing data costs might improve the process of environmental management, we first must ask what the goals of this process are. What is it that we are seeking to optimize? For the present purposes, I will frame the goal as more closely tying public agencies’ substantive resource-use decisions to the ecological limitations apparent in the world.[206] In short, the goal is informed, rational decision making surrounding management of scarce public resources.

Approaching such a goal requires technical and social progress on both environmental management’s model problem (understanding the behavior of the living world—and human impacts on it—to some degree) and the underlying data problem (raw observations on which to base understanding and subsequent decisions). Emerging tools such as genetic monitoring techniques have significant potential to mitigate at least the data problem, and perhaps also the model problem. To that extent, the more, better, and cheaper data that such tools will produce could prove a substantial benefit to management of earth’s natural resources.

Substantive decisions about resource use will remain in the hands of relevant agency officials, and the new tools will not speak to the values underlying such decisions. But increased information will limit, to some degree, the discretion that officials enjoy, ideally increasing transparency and tying decisions to some external reality. A better view of the world around us will, I hope, drive more informed and rational decisions about that world.


About Author

Assistant Professor, School of Marine and Environmental Affairs, University of Washington. J.D., University of California, Berkeley, School of Law (Boalt Hall), Ph.D., Columbia University. Email: rpkelly@uw.edu. Many thanks to Harry Sheiber, Holly Doremus, and other organizers and participants in the 2013 Law of the Sea Institute Conference in Berkeley, California, at which I presented a draft of the material I discuss here. Jesse Port, Kevan Yamahara, Ashley Erickson, Erin Prahler, Meredith Bennett, and Megan Mach at Stanford University’s Center for Ocean Solutions—and Philip Thomsen at the University of Copenhagen—contributed to discussions and drafts of this and related material, in particular focusing on cumulative impacts analysis. Thanks to Meg Caldwell, Larry Crowder, and Ali Boehm at Stanford and the Center for Ocean Solutions for consistent support on related scientific projects that have animated the work in this paper. Natalie Lowell and David Fluharty provided valuable editing and feedback on a later draft of this piece, which substantially improved the product. Finally, thanks to Kai Lee and Kate Wing for discussions on, and support of, early-stage ideas that led down interesting alleys, literal and figurative. This work was supported in part by a grant from the David and Lucile Packard Foundation.


Footnotes

  1. Eric Biber, The Problem of Environmental Monitoring, 83 U. Colo. L. Rev. 1, 4 (2011).
  2. See, e.g., Alec D. MacCall, Depletion-Corrected Average Catch: A Simple Formula for Estimating Sustainable Yields in Data-Poor Situations, 66 ICES J. Marine Sci. 2267, 2267 (2009) (“The problem of estimating sustainable yields for data-poor fisheries has been heightened in some fishery management systems, such as in the United States where recent legislation has required determination of annual catch limits without regard for adequate supporting data.”) (citation omitted).
  3. See Biber, supra note 1, at 80 (explaining that analysis and prediction is complicated by the “interaction of human pollution with biotic and abiotic systems”).
  4. Note that here and throughout, I treat the word “data” as singular rather than plural. While the plural is strictly correct (“datum” being the singular), I have opted for better readability for a legal, rather than scientific, audience.
  5. Or at least, we would make such decisions with only the data already in hand.
  6. We may think about the cost of data, in this context, as being measured by the financial outlay required to obtain one unit of information about the world. The overall cost of data may decline, therefore, by increasing the volume of data produced per unit of money, reducing the financial outlay required for a given volume of data, or both. Data are only valuable—and hence, only merit financial outlay—in proportion to their perceived present or future utility. Basic science often generates data of foundational and prospective utility, while applied science tends to generate data of more practical and immediate utility. See infra Part II.
  7. Biber, supra note 1, at 9–10 (citing Clifford S. Russell, Monitoring and Enforcement, in Public Policies for Environmental Protection 243, 244–45 (Paul R. Portney ed., 1990)); C.S. Russell, Monitoring, Enforcement, and the Choice of Environmental Policy Instruments, 2 Regional Envtl. Change 73, 74 (2001) (referencing C.S. Russell as drawing the distinction between ambient and compliance monitoring).
  8. See Todd A. Stewart, E-Check: A Dirty Word in Ohio’s Clean Air Debate—Ohio’s Battle Over Automobile Emissions Testing, 29 Cap. U. L. Rev. 265, 308 (2001) (discussing remote sensing among alternatives for emissions compliance on automobiles in Ohio); Allison F. Gardner, Environmental Monitoring’s Undiscovered Country: Developing a Satellite Remote Monitoring System to Implement the Kyoto Protocol’s Global Emissions-Trading Program, 9 N.Y.U. Envtl. L.J. 152, 152–53, 215 (2000) (discussing previous satellite tracking of hazardous waste and the possibility for emissions monitoring).
  9. See William Boyd, Ways of Seeing in Environmental Law: How Deforestation Became an Object of Climate Governance, 37 Ecology L.Q. 843, 843–44, 886–87 (2010) (discussing remote sensing and other means of detection as fundamentally altering environmental law and environmental study).
  10. See, e.g., Nat’l Oceanic & Atmospheric Admin., CoastWatch Browser, http://coastwatch.pfeg.noaa.gov/coastwatch/CWBrowser.jsp (last visited Nov. 22, 2014) (providing sea surface temperature and other satellite-derived data in near real-time).
  11. Imagine, for example, the difference between measuring the temperature of lake water over the course of a decade and measuring the number of frogs living in that lake during that decade. Collecting the temperature data requires a relatively straightforward set of equipment—a thermometer and a means of storing or transmitting its readings at some time interval. Counting frogs requires, well, counting frogs. One can imagine electronic monitoring of frog calls, for example, but actual counts of living things tend to be human labor-intensive.
  12. See Andrew D. Foote et al., Investigating the Potential Use of Environmental DNA (eDNA) for Genetic Monitoring of Marine Mammals, PLOS ONE, Aug. 29, 2012, at 1, 1, available at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0041781.
  13. See Christopher L. Jerde et al., “Sight-Unseen” Detection of Rare Aquatic Species Using Environmental DNA, 4 Conservation Letters 150, 150 (2011) (discussing eDNA as a detection tool for Asian carp in and near the Great Lakes); David M. Lodge et al., Conservation in a Cup of Water: Estimating Biodiversity and Population Abundance from Environmental DNA, 21 Molecular Ecology 2555, 2556 (2012) (discussing eDNA as a technique “to facilitate management goals for species and ecosystems”); Ryan P. Kelly et al., Harnessing DNA to Improve Environmental Management, 344 Sci. 1455, 1455 (2014) (discussing particular legal and policy applications of eDNA).
  14. See, e.g., Philip Francis Thomsen et al., Monitoring Endangered Freshwater Biodiversity Using Environmental DNA, 21 Molecular Ecology 2565 (2012) (explaining that rare or threatened freshwater species can be detected using eDNA); Foote et al., supra note 12 (discussing eDNA studies to monitor marine mammals in Denmark).
  15. See generally Reagan R. Converse et al., Correlation Between Quantitative PCR and Culture-Based Methods for Measuring Enterococcus spp. over Various Temporal Scales at Three California Marine Beaches, 78 Applied & Envtl. Microbiology 1237 (2012) (discussing quantitative polymerase chain reaction method to target DNA molecules); Kevan M. Yamahara et al., Occurrence and Persistence of Bacterial Pathogens and Indicator Organisms in Beach Sand Along the California Coast, 78 Applied & Envtl. Microbiology 1733 (2012) (discussing quantitative polymerase chain reaction (qPCR) method to target DNA molecules).
  16. For example, Denmark currently uses eDNA studies to monitor marine mammals, and researchers in the U.S. Great Lakes region use eDNA to detect invasive Asian carp. See supra notes 13–14.
  17. Genetic monitoring has special relevance in the marine environment, in which we expect species’ shed DNA to mix and remain present in the aquatic medium. There, monitoring challenges are compounded by the fact that the ocean is huge, heterogeneous, and dynamic. Consequently, the oceans are subject to significantly less biological monitoring than are other habitats. David M. Marsh & Peter C. Trenham, Current Trends in Plant and Animal Population Monitoring, 22 Conservation Biology 647, 649 (2008).
  18. By “environmental management,” I mean to include subsidiary functions of policymaking, regulation, and so on. Monitoring provides necessary data for each of these functions. See infra Part II. Natural resource management is also a subset of environmental management.
  19. This distinction between pure and applied science, with its attendant gradient of prestige, has existed since at least the 16th century, when pure scientists looked down on those who actually built things, such as engineers. See Philip Ball, Making Stuff: From Bacon to Bakelite, in Seeing Further: The Story of Science, Discovery, and the Genius of the Royal Society 295, 296 (Bill Bryson ed., 2010) (discussing the cultures of pure scientists versus those in the productive industry); see also Ulrich Wengenroth, Science, Technology, and Industry, in From Natural Philosophy to the Sciences: Writing the History of Nineteenth-Century Science 221, 221–22 (David Cahan ed., 2003) (explaining that “[e]ngineering appeared to be subordinate to enlightened progress in the ‘hard’ sciences” during the late eighteenth century).
  20. See Donald H. Rumsfeld, U.S. Sec’y of Def., U.S. Dep’t of Def., DoD News Briefing––Secretary Rumsfeld and Gen. Myers (Feb. 12, 2002), http://www.defense.gov/transcripts/transcript.aspx?transcriptid=2636 (last visited Nov. 22, 2014) (“[T]here are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.”); see also Richard A. Epstein, In Defense of the Contract at Will, 51 U. Chi. L. Rev. 947, 969 (1984) (discussing “known unknowns” and “unknown unknowns” in the context of contracts at will) (internal quotation marks omitted).
  21. See Josh Goldfarb, Known Knowns, An Analytical Approach (Mar. 7, 2011, 3:40 PM), http://ananalyticalapproach.blogspot.com/2011/03/known-knowns.html (explaining known knowns as something “we understand well and can firmly identify” in the context of network traffic monitoring).
  22. See, e.g., Nat’l Oceanic & Atmospheric Admin., supra note 10 (providing sea surface temperature and many other satellite-derived data in near real-time).
  23. See, e.g., Climate Prediction Ctr., Climate Diagnostics Bulletin, http://www.cpc.ncep.noaa.gov/products/CDB/ (last visited Nov. 22, 2014) (providing a monthly digest of climate-related data relevant to the El Niño Southern Oscillation climate pattern).
  24. Nat’l Careers Serv., Job Profiles: Meteorologist, https://nationalcareersservice.direct.gov.uk/advice/planning/jobprofiles/Pages/meteorologist.aspx (last visited Nov. 22, 2014) (discussing the two fields of meteorology). Here, I refer to a forecasting meteorologist as a weatherman.
  25. See Am. Meteorological Soc’y, Career Center: All About Careers in the Atmospheric and Related Sciences, http://www.ametsoc.org/careercenter/careers.html#meteorologist (last visited Nov. 22, 2014) (explaining what research meteorologists do and what technologies they develop).
  26. See id. (discussing the job of a weather forecaster).
  27. See Angel Borja et al., Overview of Integrative Tools and Methods in Assessing Ecological Integrity in Estuarine and Coastal Systems Worldwide, 56 Marine Pollution Bull. 1519, 1521–22 (2008) (discussing nutrient load limits and the use of chlorophyll as an indicator of overall eutrophic condition); Nat’l Oceanic & Atmospheric Admin., NOAA Ocean Service Education: Estuaries, http://oceanservice.noaa.gov/education/kits/estuaries/estuaries05_circulation.html (last visited Nov. 22, 2014) (listing vertically mixed estuaries among the classifications of estuaries).
  28. In addition, interpreting monitoring results requires information about the detection limits, sensitivity, and variance of the data. See Julien Martin et al., Importance of Well-Designed Monitoring Programs for the Conservation of Endangered Species: Case Study of the Snail Kite, 21 Conservation Biology 472, 473 (2007).
  29. See Biber, supra note 1, at 51 (noting that there is a widespread attitude that scientists should not spend too much time monitoring because it is not good for professional advancement).
  30. See Ryan P. Kelly & Margaret R. Caldwell, “Not Supported by Current Science”: The National Forest Management Act and the Lessons of Environmental Monitoring for the Future of Public Resources Management, 32 Stan. Envtl. L.J. 151, 209 (2013).
  31. See generally id. (discussing the problem of agency discretion in defining the aims of required monitoring programs in the context of “Management Indicator Species” under the National Forest Management Act).
  32. See Biber, supra note 1, at 18.
  33. See Kai N. Lee, Compass and Gyroscope: Integrating Science and Politics for the Environment 53 (1993) (“An adaptive policy is one that is designed from the outset to test clearly formulated hypotheses about the behavior of an ecosystem being changed by human use.”).
  34. See generally Richard M. Adams et al., Value of Improved Long-Range Weather Information, 13 Contemp. Econ. Pol’y 10 (1995).
  35. See generally Am. Meteorological Soc’y, Weather Analysis and Forecasting: An Information Statement of the American Meteorological Society (2007), available at http://www.ametsoc.org/policy/2007weatheranalysisforecasting.pdf (providing an overview of weather-related costs and benefits of weather forecasting).
  36. Adams et al., supra note 34, at 11–12. But see Steve Rayner et al., Weather Forecasts are for Wimps: Why Water Resource Managers Do Not Use Climate Forecasts, 69 Climatic Change 197, 199, 216 (2005) (discussing the institutional and social phenomena that reduce natural resource managers’ use of these improved data). See generally Economic Value of Weather and Climate Forecasts 109–10 (Richard W. Katz & Allan H. Murphy eds., 1997) (noting that weather-sensitive decisions can be accomplished through decision-analytic models).
  37. Nat’l Weather Serv., Nat’l Oceanic & Atmospheric Admin., Value of a Weather-Ready Nation 13, 16 (2011), available at http://www.ppi.noaa.gov/wp-content/uploads/PPI-Weather-Econ-Stats-10-13-11.pdf (describing the value of weather forecasts and their economic benefits to society); Jeffrey K. Lazo et al., Household Evacuation Decision Making and the Benefits of Improved Hurricane Forecasting: Developing a Framework for Assessment, 25 Weather & Forecasting 207, 207 (2010) (“These [hurricane] warnings have significantly reduced the number of hurricane-related fatalities in the last several decades.”).
  38. The further leap from information and knowledge to action is the subject of a growing body of literature. See, e.g., David W. Cash et al., Knowledge Systems for Sustainable Development, 100 Proc. Nat’l Acad. Sci. 8086, 8086, 8089 (2003), available at http://www.pnas.org/content/100/14/8059.full.pdf (arguing that efforts to mobilize science and technology for sustainability are more effective when the boundaries between knowledge and action are intelligently managed).
  39. See infra Parts IV–V.
  40. See infra Parts IV–V.
  41. See Holly Doremus, Listing Decisions Under the Endangered Species Act: Why Better Science Isn’t Always Better Policy, 75 Wash. U. L.Q. 1029, 1032–33, 1039 (1997) (explaining how both advocates and critics of the Endangered Species Act use science to support their contrary positions).
  42. Indeed, agencies already have begun to take notice. For example, I have had meetings with staff from NMFS and the California Ocean Science Trust, and fielded an inquiry from the Oregon Department of Fish & Wildlife about the use of environmental DNA in natural resource management. For examples of studies that have already triggered governmental interest, see Converse et al., supra note 15; Foote et al., supra note 12; Thomsen et al., supra note 14; Yamahara et al., supra note 15.
  43. See, e.g., Woody Turner et al., Remote Sensing for Biodiversity Science and Conservation, 18 Trends in Ecology & Evolution 306, 306, 309–10 (2003) (highlighting the ways in which new technologies in the field of remote sensing provide valuable information about biodiversity).
  44. Ryan P. Kelly et al., Using Environmental DNA to Census Marine Fishes in a Large Mesocosm, PLOS ONE, Jan. 15, 2014, at 1, 1, available at http://www.plosone.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0086175&representation=PDF (“All living things contain DNA . . . .”).
  45. Kelly et al., supra note 44, at 1 (“All living things contain DNA and generate waste (e.g., sloughed cells, metabolic waste) that persists in the environment for some period of time.”). For single-celled organisms, of course, the entire cell effectively becomes a waste product as the cell dies.
  46. It is possible to match environmental samples to known species’ DNA sequences using an annotated database. See Nat’l Inst. of Health, GenBank Overview, http://www.ncbi.nlm.nih.gov/genbank/ (last visited Nov. 22, 2014) (“GenBank is the [National Institutes of Health] genetic sequence database, an annotated collection of all publicly available DNA sequences.”) (citation omitted).
  47. Environments also contain traces of species that have lived there. Depending upon the climate—which influences DNA degradation rates—it is possible to sample environments and discern which species lived there in the recent past (months to years). Eske Willerslev & Alan Cooper, Review Paper, Ancient DNA, 272 Proc. Royal Soc’y B 3, 5–6 (2005), available at http://www.adelaide.edu.au/acad/publications/papers/RoyalSoc%20review.pdf. In some cases, it is possible to recover much older genetic information. See generally Kenneth Andersen et al., Meta-Barcoding of ‘Dirt’ DNA from Soil Reflects Vertebrate Biodiversity, 21 Molecular Ecology 1966 (2012) (discussing samples that are decades old); Willerslev & Cooper, supra, at 6 (discussing samples that are up to thousands of years old).
  48. It is possible to amplify and sequence DNA from environmental samples to detect individual animal species of interest. Foote et al., supra note 12, at 5; Teruhiko Takahara et al., Using Environmental DNA to Estimate the Distribution of an Invasive Fish Species in Ponds, PLOS ONE, Feb. 20, 2013, at 1, 1, available at http://www.plosone.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0056584&representation=PDF. Amplification and sequencing of DNA is possible even when target species are present at very low abundances—such as in the case of rare and endangered species. Thomsen et al., supra note 14, at 2565. Lab-based work has demonstrated a quantitative relationship between density of individuals of a species and amount of DNA present in the species’ habitat. Foote et al., supra note 12, at 5 (“The detection probability is likely to be dependent upon density of the target species . . . .”); Takahara et al., supra, at 1–4; see also Kelly et al., supra note 44, at 1 (discussing the use of eDNA as an efficient, cost-effective method of detecting and determining the abundance of individual animal species).
  49. These tools may involve recovering DNA sequence data from the sampled environment, or may avoid the costs of sequencing by using species-specific probes to reveal the presence of a particular target species’ DNA without requiring subsequent steps. See, e.g., Jerde et al., supra note 13, at 150–53 (discussing use of eDNA method of detecting fish species). Note that this definition differs from that of Schwartz and colleagues, who defined genetic monitoring on the basis of changes in population genetics. Michael K. Schwartz et al., Genetic Monitoring as a Promising Tool for Conservation and Management, 22 Trends Ecology & Evolution 25, 25 (2007) (“[W]e define genetic monitoring as quantifying temporal changes in population genetic metrics or other population data generated using molecular markers. We distinguish monitoring, which must have a temporal dimension, from assessment, which reflects a snapshot of population characteristics at a single point in time.”). I do not distinguish between monitoring and assessment in the present Article, although I agree that this is a meaningful distinction that may bear further thought; by definition, monitoring must have a temporal component.
  50. See Schwartz et al., supra note 49, at 25–28 (discussing uses of genetic monitoring).
  51. See generally Steven R. Gill et al., Metagenomic Analysis of the Human Distal Gut Microbiome, 312 Sci. 1355, 1355 (2006) (describing the use of DNA sequencing to document the diversity of microbes in the human gut, as opposed to mapping or quantifying the amount of bacteria).
  52. See generally Mitchell L. Sogin et al., Microbial Diversity in the Deep Sea and the Underexplored “Rare Biosphere”, 103 Proc. Nat’l Acad. Sci. 12116 (2006), available at http://www.pnas.org/content/103/32/12115.full.pdf (describing the use of gene sequencing to estimate microbial phylogenetic diversity in an underwater volcano).
  53. See generally Randall K. Saiki et al., Primer-Directed Enzymatic Amplification of DNA with a Thermostable DNA Polymerase, 239 Sci. 487 (1988) (a seminal paper describing the PCR reaction). See Garland Science, Polymerase Chain Reaction (PCR), YouTube (Apr. 16, 2009), http://www.youtube.com/watch?v=eEcy9k_KsDI, for a nice illustration of the way PCR works.
  54. Nat’l Human Genome Research Inst., Polymerase Chain Reaction (PCR), http://www.genome.gov/10000207 (last visited Nov. 22, 2014). See Kary Mullis, Dancing Naked in the Mind Field 5–6 (1998), for the story of PCR’s invention and one view of its quirky, Nobel prize-winning inventor.
  55. Yoshio Miki et al., A Strong Candidate for the Breast and Ovarian Cancer Susceptibility Gene BRCA1, 266 Sci. 66, 66 (1994).
  56. See id. at 71 n.26.
  57. See id. at 66–67; see also Saiki et al., supra note 53, at 487 (explaining PCR amplification).
  58. See, e.g., Tiayyba Riaz et al., EcoPrimers: Inference of New DNA Barcode Markers from Whole Genome Sequence Analysis, 39 Nucleic Acids Res. 1, 2 (2011). See generally Saiki et al., supra note 53, at 487 (describing the PCR amplification process and the role of primers).
  59. See Riaz et al., supra note 58.
  60. See Saiki et al., supra note 53, at 487.
  61. See generally Michael L. Metzker, Emerging Technologies in DNA Sequencing, 15 Genome Res. 1767, 1767 (2005) (“Current sequencing technologies are too expensive, labor intensive, and time consuming for broad application in human sequence variation studies.”).
  62. See Olena Morozova & Marco A. Marra, Applications of Next-Generation Sequencing Technologies in Functional Genomics, 92 Genomics 255, 256 (2008).
  63. See Riaz et al., supra note 58, at 7.
  64. See id. This figure comes from dividing the 1,000,000 base pairs in a megabase by the roughly 300 base pairs needed for animal DNA sequencing, which yields approximately 3,333 sequences per megabase.
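The division described in this note can be written out explicitly; the 300-base-pair read length is simply the figure assumed in the note itself:

```python
# Number of ~300-base-pair animal identification sequences obtainable
# from one megabase (1,000,000 base pairs) of sequencing output.
BASES_PER_MEGABASE = 1_000_000
BASES_PER_SEQUENCE = 300  # approximate length assumed for animal DNA identification

sequences_per_megabase = BASES_PER_MEGABASE // BASES_PER_SEQUENCE
print(sequences_per_megabase)  # 3333
```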
  65. Nat’l Human Genome Research Inst., Sequencing Cost Table, http://www.genome.gov/27541954 (last visited Nov. 22, 2014) (open “Sequencing Cost Table” hyperlink) (listing the cost per megabase at five cents in April 2014).
  66. See, e.g., J. Craig Venter et al., Environmental Genome Shotgun Sequencing of the Sargasso Sea, 304 Sci. 66, 66 (2004) (using whole-genome shotgun sequencing to find thousands of species in a sample of the Sargasso Sea); Susannah Green Tringe et al., Comparative Metagenomics of Microbial Communities, 308 Sci. 554, 554–55 (2005) (using high-throughput shotgun sequencing to analyze microbial diversity in samples “derived from agricultural soil and from three isolated deep-sea ‘whale-fall’ carcasses”).
  67. See supra notes 13–14, 48.
  68. Emerging technology such as genetic monitoring techniques must meet a baseline standard of reliability before being useful in administrative decision making. See, for example, Sharon Hatch Hodge, Satellite Data and Environmental Law: Technology Ripe for Litigation Application, 14 Pace Envtl. L. Rev. 691, 716, 719–20 (1997), for an earlier example focused on satellite-driven remote sensing. Despite calls for greater oversight of agency science by administrative law judges, the Federal Rules of Evidence—as interpreted by Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 578, 579 (1993)—do not control the evidence agencies may consider. Solutia, Inc., 10 E.A.D. 193, 211–12 n.22 (EAB 2001) (“The Region urges that the Board weigh the evidence in light of Federal Rule of Evidence 702 (Testimony by Experts) and the four factors delineated in the Supreme Court’s ruling in [Daubert]. The Region posits that the affidavit evidence presented by Petitioner fails to satisfy the Daubert admissibility standards and is inadequate as expert testimony. We disagree for several reasons. First, while it is appropriate for us to look to the federal rules and court guidance in determining the weight to be given the evidence presented, it is a well-settled rule that ‘[A]gencies are not bound by the strict rules of evidence governing jury trials.’ Thus, Rule 702 and the Daubert factors are not controlling principles.”) (citations omitted). The relevant standards of evidence therefore remain the “arbitrary and capricious” standard under the Administrative Procedure Act (APA), in addition to statute-specific “best available science” or similar requirements. Administrative Procedure Act, 5 U.S.C. § 706(2)(A) (2012) (detailing the APA standard of review); National Environmental Policy Act (NEPA), 42 U.S.C. § 4332 (2012) (detailing NEPA criteria generally referred to as the “best available science” standard). 
Every technology requires a breaking-in period, in which researchers work out the kinks before widespread commercial or regulatory application is possible. For genetic monitoring, we are still in the breaking-in period, despite interest from environmental monitoring agencies. However, the technology has progressed to a point at which it is appropriate to review its legal and policy implications.
  69. See Nw. Fisheries Sci. Ctr., Functional Genomics and Bioinformatics, http://www.nwfsc.noaa.gov/research/divisions/efs/bioinformatics.cfm (last visited Nov. 22, 2014).
  70. See U.S. Fish & Wildlife Serv. Forensics Lab., Science Professionals: Genetics Unit, http://www.fws.gov/lab/genetics.php (last visited Nov. 22, 2014).
  71. See U.S. Forest Serv., Placerville: Institute of Forest Genetics, http://www.fs.fed.us/psw/locations/placerville/ (last visited Nov. 22, 2014).
  72. See U.S. Geological Survey, Ecosystems––Genetics and Genomics, http://www.usgs.gov/ecosystems/genetics_genomics/ (last visited Nov. 22, 2014).
  73. See Nat’l Aeronautics & Space Admin., NASA Ames Genome Research Facility, https://phenomorph.arc.nasa.gov/ (last visited Nov. 22, 2014).
  74. 16 U.S.C. §§ 1531–1544 (2012). See generally Ryan P. Kelly, The Use of Population Genetics in Endangered Species Act Listing Decisions, 37 Ecology L.Q. 1107 (2010) (providing an overview of genetic data analyses used by federal agencies to make decisions under the ESA).
  75. Federal Water Pollution Control Act, 33 U.S.C. §§ 1251–1387 (2012).
  76. See id. § 1251(a)(2); 40 C.F.R. § 131.10(a) (2014).
  77. See Michael P. Healy, Still Dirty After Twenty-Five Years: Water Quality Standard Enforcement and the Availability of Citizen Suits, 24 Ecology L.Q. 393, 397–98 (1997).
  78. Id. NPDES permit limits take two forms: technology-based limitations and water quality-based limitations. However, water quality-based limitations apply only if the technology-based limits are insufficient to meet the overall water quality standards. 33 U.S.C. § 1311(b)(1)(C).
  79. 33 U.S.C. § 1313(d)(1)(C); see also U.S. Envtl. Prot. Agency, Water: Total Maximum Daily Loads (303(d)), http://water.epa.gov/lawsregs/lawsguidance/cwa/tmdl/overviewoftmdl.cfm (last visited Nov. 22, 2014) (providing an overview of TMDL procedures and requirements).
  80. See U.S. Envtl. Prot. Agency, supra note 79 (describing the data processes used for calculating TMDLs).
  81. See U.S. Envtl. Prot. Agency, National Recommended Water Quality Criteria: Aquatic Life Criteria Table, http://water.epa.gov/scitech/swguidance/standards/criteria/current/index.cfm (last visited Nov. 22, 2014).
  82. Id. Note, however, that bacterial counts are often taken as part of CWA regulatory efforts, and that these are a key biological, as opposed to physical or chemical, parameter. Id. These bacterial counts underscore a key point about water quality: ideas about cleanliness and water quality are closely tied to biology and to what lives in the water.
  83. See, e.g., U.S. Envtl. Prot. Agency, Basic Information About Pathogens and Indicators in Drinking Water, http://water.epa.gov/drink/contaminants/basicinformation/pathogens.cfm (last visited Nov. 22, 2014) (describing methods of eliminating disease-causing pathogens from drinking water).
  84. See U.S. Envtl. Prot. Agency, Water Quality Standards Handbook, Chapter 4: Antidegradation (40 C.F.R. 131.12), http://water.epa.gov/scitech/swguidance/standards/handbook/chapter04.cfm (last visited Nov. 22, 2014) (“The term ‘aquatic life’ would more accurately reflect the protection of the aquatic community that was intended in section 101(a)(2) of the [Clean Water] Act [than the expression ‘fishable’ conveys].”).
  85. See, e.g., Crater Lake Inst., Fishes and Stream Habitat in Tributaries of the Klamath River in Crater Lake National Park, with Reference to the Sun Creek Bull Trout: Fish Population Estimates, http://www.craterlakeinstitute.com/online-library/fish-stream/fish.htm (last visited Nov. 22, 2014) (explaining that “[e]stimates of fish abundance were made by direct observation by a single snorkel diver” in surveys of the Sun Creek bull trout population in Crater Lake National Park).
  86. See generally Richard J. Wagner et al., U.S. Geological Survey, Guidelines and Standard Procedures for Continuous Water-Quality Monitors: Station Operation, Record Computation, and Data Reporting 1 (2006), available at http://pubs.usgs.gov/tm/2006/tm1D3/pdf/TM1D3.pdf.
  87. See Cal. Dep’t of Pub. Health, Blue-Green Algae (Cyanobacteria) Blooms, http://www.cdph.ca.gov/healthinfo/environhealth/water/pages/bluegreenalgae.aspx (last visited Nov. 22, 2014).
  88. Mark Dorfman & Angela Haren, Natural Res. Def. Council, Testing the Waters: Executive Overview 3 (23d ed. 2013), available at http://www.nrdc.org/water/oceans/ttw/2013/ttw2013_Executive_Overview.pdf.
  89. Rachel T. Noble & Stephen B. Weisberg, A Review of Technologies for Rapid Detection of Bacteria in Recreational Waters, 3 J. Water & Health 381, 381–82 (2005); Sarah P. Walters et al., Persistence of Nucleic Acid Markers of Health-Relevant Organisms in Seawater Microcosms: Implications for Their Use in Assessing Risk in Recreational Waters, 43 Water Res. 4929, 4930 (2009) (citing a 24-hour time period). Note that genetic monitoring methods have been used to detect more serious viral diseases in drinking water. See, e.g., Soile Blomqvist et al., Detection of Imported Wild Polioviruses and of Vaccine-Derived Polioviruses by Environmental Surveillance in Egypt, 78 Applied & Envtl. Microbiology 5406, 5407 (2012) (finding systematic environmental surveillance can be used to monitor the wild poliovirus and vaccine-derived poliovirus circulation in populations to support polio eradication initiatives); S. Skraber et al., Survival of Infectious Poliovirus-1 in River Water Compared to the Persistence of Somatic Coliphages, Thermotolerant Coliforms and Poliovirus-1 Genome, 38 Water Res. 2927, 2928 (2004) (concluding that viral genome is an indicator of viral contamination and estimator of health hazard in river water, which is often used for drinking water).
  90. Noble & Weisberg, supra note 89, at 382.
  91. See id. at 381–82. Note that in some cases, states may close beaches or issue warnings as a result of heavy precipitation, rather than waiting to culture bacterial samples. See Cal. Envtl. Prot. Agency, State Water Res. Control Bd., California Beach Water Quality Background Information, http://www.swrcb.ca.gov/water_issues/programs/beaches/beach_water_quality/background.shtml (last visited Nov. 22, 2014) (describing California’s rain advisory policy).
  92. Fecal indicator bacteria are examples of harmful species that researchers are presently using genetic monitoring tools to detect. See supra note 89; see also Antonella Penna et al., Toxic Pseudo-nitzschia spp. in the Northwestern Adriatic Sea: Characterization of Species Composition by Genetic and Molecular Quantitative Analyses, 35 J. Plankton Res. 352, 353–54 (2013) (discussing monitoring harmful algae using molecular methods).
  93. See Penna et al., supra note 92, at 353–54, 357.
  94. To my knowledge, no regulatory agency yet routinely uses these genetic methods for beach monitoring. But agencies may soon be able to do so thanks to the work of Professor Ali Boehm at Stanford and many others. See, e.g., Alexandria B. Boehm et al., A Sea Change Ahead for Recreational Water Quality Criteria, 7 J. Water & Health 9, 10 (2009) (detailing the findings from a workshop made up of experts whose purpose was “to conceptualize the best approaches for [recreational water quality] criteria development and to identify research priorities which would assist [EPA] in developing criteria”); Converse et al., supra note 15, at 1237 (comparing qPCR and culture methods in measuring fecal indicator bacteria within a beach).
  95. See also Beaches Environmental Assessment and Coastal Health Act of 2000, Pub. L. 106-284, § 2, 114 Stat. 870, 870–71 (2000) (setting coastal recreation water quality criteria under the CWA). See generally Boehm et al., supra note 94, at 11 (discussing Section 303(d) lists and criteria for fecal indicator bacteria).
  96. State water quality laws may also provide relevant legal hooks. See Jan G. Laitos & Heidi Ruckriegle, The Clean Water Act and the Challenge of Agricultural Pollution, 37 Vt. L. Rev. 1033, 1056 (2013) (noting that a state has discretion “to adopt the allocations and enforce [TMDLs for pollutants entering Section 303(d) impaired waters] under state law, but the EPA cannot compel that result; nor may it enforce the allocations once they have been made by a state”).
  97. At some point in the future, genetic monitoring could be used for identifying and tracking nonpoint source pollution, essentially making nonpoint sources more like point sources in that they would become known discharges from particular geographic origins. The technology would use characteristic microbial fingerprints present in runoff from individual creeks or farms, for example, and use mathematical modeling––such as a Bayesian mixing model––to estimate the contribution of each nonpoint source into receiving waters, based upon the combined fingerprint of those receiving waters. As this scenario is plausible, but not yet in development as far as I am aware, I will not treat it further here, but will note the desirability of this kind of nonpoint source tracking.
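The mixing logic sketched in this note can be made concrete with a deliberately simplified example. The two-source setup and every fingerprint value below are hypothetical; the note envisions a Bayesian mixing model, and a directly solved two-source linear mix stands in here only to illustrate the underlying idea:

```python
# Hypothetical illustration of source apportionment from microbial
# "fingerprints" (relative abundances of two marker taxa). The receiving
# water's fingerprint is modeled as a weighted blend of two candidate
# nonpoint sources; solving for the weight recovers each contribution.
farm_a = (0.8, 0.2)      # fingerprint of runoff from a farm (assumed values)
creek_b = (0.3, 0.7)     # fingerprint of a creek (assumed values)
observed = (0.55, 0.45)  # fingerprint measured in receiving waters (assumed)

# Solve w * farm_a + (1 - w) * creek_b = observed for the mixing weight w,
# using the first marker taxon; the second taxon should then agree.
w = (observed[0] - creek_b[0]) / (farm_a[0] - creek_b[0])
print(round(w, 2), round(1 - w, 2))  # 0.5 0.5
```

A realistic application would involve many marker taxa and many candidate sources, with measurement error, which is why a statistical mixing model rather than exact algebra would be required.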
  98. The Southern California Coastal Water Research Project, a collaboration between the state and large dischargers (and NPDES permittees), Cal. Coastal Water Research Project, About SCCWRP, http://www.sccwrp.org/AboutSCCWRP.aspx (last visited Nov. 22, 2014), is interested in new methods of rapid water quality assessment, in part for these reasons. See S. Cal. Coastal Water Research Project, Project: Side-by-Side Beta Testing of Rapid Methods, http://www.sccwrp.org/ResearchAreas/BeachWaterQuality/SideBySideBetaTestingOfRapidMethods.aspx (last visited Nov. 22, 2014) (describing the development of rapid testing methods to monitor fecal indicator bacteria levels).
  99. Magnuson–Stevens Fishery Conservation and Management Act, 16 U.S.C. §§ 1801–1891d (2012).
  100. Id. §§ 1851, 1853.
  101. Id. § 1853(a)(2)–(3), (9).
  102. George Lapointe et al., Ctr. for Am. Progress, Counting Fish 101: An Analysis of Fish Stock Assessments 1, 3 (2012), available at http://cdn.americanprogress.org/wp-content/uploads/2012/09/FisheriesScienceBrief-3.pdf.
  103. See id. at 5.
  104. See Nat’l Marine Fisheries Serv., Fisheries Economics of the United States 2012: Economics and Sociocultural Status and Trends Series 6 (2014), available at http://www.st.nmfs.noaa.gov/Assets/economics/documents/feus/2012/FEUS2012.pdf.
  105. See Nat’l Oceanic & Atmospheric Admin., The United States is an Ocean Nation, http://www.gc.noaa.gov/documents/2011/012711_gcil_maritime_eez_map.pdf (last visited Nov. 22, 2014) (demonstrating that the U.S. Exclusive Economic Zone occupies more ocean area than the total U.S. land area).
  106. Thanks to Jameal Samhouri, Ole Shelton, and Phil Levin at NMFS for suggesting this example.
  107. See FishBase, Fish Identification, http://www.fishbase.org/identification/SpeciesList.php?genus=Sebastes (last visited Nov. 22, 2014) (a major online database of fish information).
  108. See Pac. Fishery Mgmt. Council, Groundfish: Background, http://www.pcouncil.org/groundfish/background/ (last visited Nov. 22, 2014) (including “widow, yellowtail, canary, and vermilion rockfish”).
  109. See Darin T. Jones et al., Evaluation of Rockfish Abundance in Untrawlable Habitat: Combining Acoustic and Complementary Sampling Tools, 110 Fishery Bull. 332, 332 (2012) (describing how bottom trawling is used to measure rockfish populations but can result in inaccuracy); Natural Res. Def. Council, Protecting Ocean Habitat from Bottom Trawling, http://www.nrdc.org/water/oceans/ftrawling.asp (last visited Nov. 22, 2014) (explaining the process of bottom trawling).
  110. Jones et al., supra note 109 (“The bottom trawl survey of the Gulf of Alaska . . . routinely encounters areas that are untrawlable because of rough substrate or known hazards to fishing gear on the sea-floor. When untrawlable substrate is located at a designated sampling station, an alternate location with suitable substrate is sought nearby. Mean estimates of species abundance from sampling stations are then extrapolated over the entire management area, including known untrawlable areas. Yet rockfish abundance between trawlable and untrawlable areas can vary considerably and is often lower in trawlable areas than in untrawlable areas.”) (citations omitted).
  111. Id.
  112. Stanford University, among others, is currently conducting experiments that will improve upon existing information about the rates of DNA shedding and degradation in marine waters, show how oceanographic parameters influence these rates, and model how to estimate the magnitude of transportation of DNA with ocean currents. See Julia Turan, Stanford Woods Inst. for the Env’t, New Tool Offers Near Real-Time Info About Marine Species, https://woods.stanford.edu/news-events/news/new-tool-offers-near-real-time-info-about-marine-species (last visited Nov. 22, 2014) (discussing the ramifications of eDNA as a tool to better monitor populations of oceanic species). For stock assessments over large spatial scales, it is likely that dozens of water samples, taken at different time points, would be necessary to estimate the location and relative abundances of fish species. More precise quantification would require a known relationship between number of individuals and concentration of eDNA, which is again the subject of research at Stanford, the University of Washington, and elsewhere. See id. These applications are consequently in the future, but the groundwork for them already exists.
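If the calibration research described in this note succeeds, converting a measured eDNA concentration into an abundance estimate would amount to inverting the fitted relationship. The log-linear form and all coefficients below are hypothetical placeholders, not results from the cited work:

```python
import math

def estimated_density(edna_copies_per_liter, a=2.0, b=1.0):
    """Invert a hypothetical log-linear calibration:
    log(copies per liter) = a + b * log(individuals per unit area).
    The coefficients a and b are placeholders, not fitted values."""
    return math.exp((math.log(edna_copies_per_liter) - a) / b)

# With these placeholder coefficients, 1,000 copies/L implies a density
# of roughly 135 individuals per unit area.
print(round(estimated_density(1000.0), 1))
```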
  113. See Philip Francis Thomsen et al., Detection of a Diverse Marine Fish Fauna Using Environmental DNA from Seawater Samples, PLOS ONE, Aug. 29, 2012, at 1, 1–2, available at http://www.plosone.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0041732&representation=PDF (“In this study we present the first recording of marine fish biodiversity using eDNA from seawater samples.”).
  114. See Takahara et al., supra note 48, at 1 (“We evaluated the distribution of bluegill in the ponds on the mainland and on islands in the Seto Inland Sea based on detection of eDNA and visual observation.”).
  115. NMFS is routinely sued over its use or nonuse of fisheries’ stock assessment data to set catch limits. See, e.g., Oceana, Inc. v. Bryson, 940 F. Supp. 2d 1029, 1033, 1048 (N.D. Cal. 2013) (“[The Fishery Management Plan at issue] does not specify an MSY [i.e., Maximum Sustainable Yield, a key variable based upon the population size of the fish stock] for the northern subpopulation of the northern anchovy, noting that the portion of the subpopulation in U.S. waters was unknown. The Federal Defendant argues that it lacked sufficient data to determine MSY for this population, and that the MSA does not require setting an MSY under such circumstances.”) (citation omitted); Flaherty v. Bryson, 850 F. Supp. 2d 38, 52 (D.D.C. 2012) (“Defendants fail to provide any explanation or analysis from which the Court can conclude that the delay in considering the composition of the fishery, which entailed exclusion of river herring, was reasonable.”).
  116. Exec. Order 13,112, 3 C.F.R. § 13112 (1999); 64 Fed. Reg. 6183 (Feb. 3, 1999) (defining “invasive species” as “alien species whose introduction does or is likely to cause economic or environmental harm or harm to human health,” and “alien species,” in turn, as “with respect to a particular ecosystem, any species, including its seeds, eggs, spores, or other biological material capable of propagating that species, that is not native to that ecosystem”). The Executive Order focused on preventing invasive species introductions and on mitigating the effects of already-introduced species. Id.
  117. David Pimentel et al., Update on the Environmental and Economic Costs Associated with Alien-Invasive Species in the United States, 52 Ecological Econ. 273, 282 (2005) (calculating a cost of $120 billion annually from invading alien species in the United States); U.S. Fish & Wildlife Serv., The Cost of Invasive Species, available at http://www.fws.gov/home/feature/2012/pdfs/costofinvasivesfactsheet.pdf (citing Pimentel et al.’s calculation of the annual cost with approval).
  118. Pimentel et al., supra note 117, at 273 (explaining that 42% of ESA-listed species are primarily threatened by invasive species). By extension, preventing further invasions—and stopping the spread of those invasives already entrenched—has great potential to mitigate pressures on endangered species and ecosystems.
  119. Jerde et al., supra note 13, at 2.
  120. In the case of invasive species detection, a false-negative result may be more costly than a false-positive result, especially where the species is anticipated to cause significant economic harm. That is, the cost of failing to detect a single individual may be very high, if that single individual is a gravid female capable of establishing a self-sustaining population on her own. As a result, it stands to reason that agencies would employ the most sensitive survey techniques possible in such situations, and this is indeed the case for agencies charged with detecting the spread of Asian carp species into the Great Lakes basin. See, e.g., id. (“We show that [eDNA] is more sensitive than traditional tools, has no risk of harming the species under study and effort can feasibly be increased for species management.”) (citations omitted).
  121. For further discussion of ocean-based biological invasions, see generally N. Bax et al., The Control of Biological Invasions in the World’s Oceans, 15 Conservation Biology 1234 (2001) (providing a decision-making framework to assist policymakers and other stakeholders in the control of alien marine species).
  122. Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990, 16 U.S.C. §§ 4701–4741 (2012).
  123. Lacey Act Amendments of 1981, 16 U.S.C. §§ 3371–3378 (2012).
  124. 7 U.S.C. §§ 8501–8507 (2012).
  125. 7 U.S.C. §§ 7701–7786 (2012).
  126. 39 U.S.C. § 3015 (2012).
  127. 64 Fed. Reg. 6183 (Feb. 3, 1999).
  128. See Thomsen et al., supra note 14, at 2565–66 (detecting and quantifying diversity using DNA collected from water samples taken from rivers, lakes, and ponds in Europe).
  129. See, e.g., Foote et al., supra note 12, at 1–2 (discussing the inherent challenges of eDNA monitoring in a marine environment, but finding the method overall to be feasible); Kelly et al., supra note 44 (commenting on the significant potential of eDNA to survey large portions of the marine environment); Thomsen et al., supra note 113, at 1–2 (remarking on the surprising success of early efforts to use eDNA for tracking of species in marine environments).
  130. See, e.g., Sam C. Banks et al., Demographic Monitoring of an Entire Species (the Northern Hairy-Nosed Wombat, Lasiorhinus krefftii) by Genetic Analysis of Non-Invasively Collected Material, 6 Animal Conservation 101, 101 (2003) (discussing the success of an experiment to estimate the population of the northern hairy-nosed wombat, one of the rarest terrestrial animals in the world).
  131. See, e.g., 16 U.S.C. § 1540(e) (2012) (giving agencies enforcement authority to seize property in violation of the ESA).
  132. See, e.g., Paul Wade et al., Acoustic Detection and Satellite-Tracking Leads to Discovery of Rare Concentration of Endangered North Pacific Right Whales, 2 Biology Letters 417, 418 (2006), available at http://rsbl.royalsocietypublishing.org/content/2/3/417.full.pdf+html (describing a successful effort by researchers to identify and track a pod of whales using a combination of satellite and genetic monitoring).
  133. 16 U.S.C. § 1536 (2012) (requiring interagency consultation to further the conservation of threatened and endangered species).
  134. For examples of the value of monitoring data in imperiled species management, see generally Julien Martin et al., supra note 28, at 473; and Leah R. Gerber et al., Gray Whales and the Value of Monitoring Data in Implementing the U.S. Endangered Species Act, 13 Conservation Biology 1215 (1999).
  135. See supra note 3 and accompanying text.
  136. See, e.g., Pac. Fishery Mgmt. Council, Stock Assessments, STAR Reports, STAT Reports, Rebuilding Analyses, Terms of Reference: By Species, http://www.pcouncil.org/groundfish/stock-assessments/by-species/ (last visited Nov. 22, 2014) (providing available stock assessments for groundfish).
  137. See Nat’l Marine Fisheries Serv., Fish Stock Assessment 101, at 2 (2012), available at http://www.st.nmfs.noaa.gov/Assets/stock/documents/Fish_Stock_Assessment_101.pdf.
  138. See id. at 8.
  139. See Katie K. Arkema et al., Marine Ecosystem-Based Management: From Characterization to Implementation, 4 Frontiers Ecology & Env’t 525, 525 (2006) (discussing ecosystem-based management in the marine context); E. K. Pikitch et al., Ecosystem-Based Fishery Management, 305 Sci. 346, 346 (2004) (discussing ecosystem-based management in the context of fisheries specifically).
  140. See Pikitch et al., supra note 139, at 346–47 (“In data-poor situations with little or no information about target species status or ecosystem processes, [ecosystem-based management] may simply involve using natural history and general knowledge to develop precautionary set-asides or safety margins . . . .”).
  141. Thanks to Professor Dave Fluharty for noting in our conversations that many fishery scientists and managers consider single-species assessments to be important aspects of ecosystem-based management precisely because of their simplicity. These assessments may function as indicators useful for gauging the status and trends of whole ecosystems of which the focal species is a part.
  142. 42 U.S.C. §§ 4321–4347 (2012); id. § 4332(C).
  143. Zhao Ma et al., Barriers to and Opportunities for Effective Cumulative Impact Assessment Within State-Level Environmental Review Frameworks in the United States, 55 J. Envtl. Plan. & Mgmt. 961, 965 (2012).
  144. California Environmental Quality Act, Cal. Pub. Res. Code §§ 21000–21165 (West 2007).
  145. State Environmental Quality Review Act, N.Y. Envtl. Conservation Law §§ 8-0101 to 8-0117 (McKinney 2005).
  146. See, e.g., Sec’y, Vt. Agency of Natural Res. v. Earth Constr., Inc., 676 A.2d 769, 772 (Vt. 1996) (outlining Vermont’s Act 250 test for whether substantial change has occurred by looking at potential significant impact such as effect on noise and air pollution, safety of roads, soil erosion, and effects on surrounding property). See generally Ma et al., supra note 143, at 966, 969 (discussing states that do not employ the NEPA model).
  147. For laws requiring environmental impact analyses, see, for example, Canadian Environmental Assessment Act, 2012, S.C. 2012, c. 19, s. 52, (Can.), available at http://laws-lois.justice.gc.ca/eng/acts/C-15.21/index.html; Kankyo Kihon Keikaku [Basic Environment Plan], Law No. 91 of 1993, pt. III, ch. 4, § 1, http://www.env.go.jp/en/policy/plan/basic/ (Japan); Neth. Comm’n for Envtl. Assessment, Environmental Assessment, http://www.eia.nl/en/environmental-assessment (last visited Nov. 22, 2014) (describing the Environmental Impact Assessment (EIA) and Strategic Environmental Assessment (SEA) processes); Resource Management Act 1991 (N.Z.).
  148. See, e.g., Canadian Environmental Assessment Act, c. 19, s. 52, available at http://laws-lois.justice.gc.ca/eng/acts/C-15.21/index.html (requiring consideration of multiple factors in environmental assessment including environmental effects of the designated project, alternative means of carrying out project and environmental effects of any such alternative means, and purpose of the designated project); Ma et al., supra note 143, at 962 (explaining that the Canadian Environmental Assessment Act requires assessment of any cumulative impacts likely to result from a project in combination with other projects, similarly to NEPA).
  149. See Ma et al., supra note 143, at 965 (identifying the two key problems with cumulative impacts analysis as a lack of time and resources to effectively perform cumulative impacts work, and a lack of data). My construction of the data and model problems of environmental management, in part, reflects Ma et al.’s observations.
  150. See David K. Gattie et al., Informing Ecological Engineering Through Ecological Network Analysis, Ecological Modelling, and Concepts of Systems and Engineering Ecology, 208 Ecological Modelling 25, 28, 35 (2007) (examining different modeling methods for ecosystem analysis).
  151. See Ma et al., supra note 143, at 964.
  152. Id. (describing “[t]he inability of NEPA and related regulations to favorably structure [cumulative impacts assessment] implementation”).
  153. U.S. Dept. of Commerce, Nat’l Oceanic & Atmospheric Admin., Guidance on Describing the Affected Environment in EAs and EISs 3 (2012).
  154. See 40 C.F.R. § 1508.7 (2014) (defining cumulative impact as “the impact on the environment which results from the incremental impact of the action when added to other past, present, and reasonably foreseeable future actions regardless of what agency (Federal or non-Federal) or person undertakes such other actions”). This is very curious given the case law surrounding NEPA and the California Environmental Quality Act, both of which require cumulative impacts analysis but appear to ignore all but the most proximate effects. See Cmtys. for a Better Env’t v. S. Coast Air Quality Mgmt. Dist., 226 P.3d 985, 993–94 (Cal. 2010) (specifying that the relevant baseline for calculating environmental impacts is the actual conditions existing at the time of environmental analysis; because each analysis uses a present-day baseline, the cumulative effects of previous permitted projects go unassessed).
  155. See Erin E. Prahler et al., It All Adds up: Enhancing Ocean Health by Improving Cumulative Impacts Analyses in Environmental Assessments, 33 Stan. Envtl. L.J. 351, 378–79 (2014) (explaining scientific preference for historical baselines in order to more accurately assess ecosystem trends).
  156. See, e.g., Cal. Code Regs. tit. 14, § 15125 (West 2014) (“An [environmental impact report] must include a description of the physical environmental conditions in the vicinity of the project, as they exist at the time the notice of preparation is published . . . .”); Fat v. Cnty. of Sacramento, 119 Cal. Rptr. 2d 402, 407 (Cal. Ct. App. 2002) (stating that “environmental conditions existing at the time environmental analysis is commenced ‘normally’ constitute the baseline for purposes of determining whether an impact is significant”); Loren McClenachan et al., From Archives to Conservation: Why Historical Data Are Needed to Set Baselines for Marine Animals and Ecosystems, 5 Conservation Letters 349, 349, 355 (2012) (“Relevant data from the past are often overlooked or discarded in extinction risk assessments, recovery target setting, and fisheries management.”) (citation omitted).
  157. See generally Prahler et al., supra note 155, at 379–80 (explaining that a baseline that includes existing conditions is the legal standard for an agency’s proposed action).
  158. See, e.g., Benjamin S. Halpern et al., A Global Map of Human Impact on Marine Ecosystems, 319 Sci. 948, 948, 950–51 (2008) (identifying certain human activities as “drivers” that correspond to specific impacts on ecosystems); Harry Spaling & Barry Smit, Cumulative Environmental Change: Conceptual Frameworks, Evaluation Approaches, and Institutional Perspectives, 17 Envtl. Mgmt. 587, 590–92 (1993) (discussing a stressor-based analysis).
  159. See Monique Dubé & Kelly Munkittrick, Integration of Effects-Based and Stressor-Based Approaches into a Holistic Framework for Cumulative Effects Assessment in Aquatic Ecosystems, 7 Hum. & Ecological Risk Assessment 247, 251 (2001).
  160. See supra text accompanying note 49.
  161. Nat’l Acad. of Sci. et al., An Ecosystem Services Approach to Assessing the Impacts of the Deepwater Horizon Oil Spill in the Gulf of Mexico 2 (2013), available at http://dels.nas.edu/resources/static-assets/materials-based-on-reports/reports-in-brief/Ecosystem-Services-Report-Brief-Final.pdf (“To judge the extent of damage caused by the spill, scientists need to understand the state of the ecosystem before the Deepwater Horizon disaster occurred. Depending on the ecosystem service being addressed, there are substantial differences in the amount and quality of data available. Furthermore, many ecosystems are rapidly changing and were already degraded even before the spill, making pre‑spill baselines more difficult to establish.”).
  162. See Thomas R. Karl & Kevin E. Trenberth, Modern Global Climate Change, 302 Sci. 1719, 1719–20 (2003) (explaining the global range of carbon dioxide from energy and other emissions along with the associated effects on climate).
  163. Another future, ecosystem-level use of genetic monitoring in the regulatory context is effects-based water quality monitoring under the CWA. The Environmental Protection Agency recommends effects-based criteria in the case of nutrient loadings; rather than applying a single numeric water quality criterion to a system with naturally dynamic background nutrient levels, states can set impairment thresholds based on the ecological response of a particular water body to nutrient pollution. See Federal Water Pollution Control Act, 33 U.S.C. § 1313(c)(2)(A) (2012). When a predetermined, quantitative ecological threshold is crossed, the water body is classified as impaired, and the listing process ensues. See id. § 1313(d)(1)(C). This more holistic, ecosystem-level focus is a move away from the single-stressor metrics of impairment that are the traditional focus of the CWA. One difficulty at present is the lack of ecosystem-level information that would enable state enforcement agencies to routinely use these kinds of effects-based criteria.
  164. See generally Holly Doremus, Data Gaps in Natural Resource Management: Sniffing for Leaks Along the Information Pipeline, 83 Ind. L.J. 407, 414, 417 (2008) (discussing the production of information for natural resource management by means of an analogy to the steps in hydrocarbon fuel production).
  165. See, e.g., Tony A. Sullins, ESA: Endangered Species Act 6 (2001) (“Section 4 of the ESA vests the Secretary with a ‘mandatory non-discretionary duty to list qualified species as endangered or threatened.’”) (footnote omitted); see also supra Part III.A.1 (describing nondiscretionary duties under the CWA).
  166. See, e.g., NOAA, FY 2014 Budget Summary 7-113 to -121 (2014), available at http://www.corporateservices.noaa.gov/nbo/fy14_bluebook/FINALnoaaBlueBook_2014_Web_Full.pdf (describing NOAA’s continued engagement in various research areas across fiscal years 2012, 2013, and 2014).
  167. Of course, this also assumes that higher-level budgets do not decrease accordingly as the cost of data decreases.
  168. Id. at 8-132.
  169. Id. at 8-131.
  170. See id. at 8-132 to -133 (allocating funds to “Survey and Monitoring Projects” and “Marine Resources Monitoring, Assessment & Prediction Program”).
  171. Id. at 8-132.
  172. Professor Dave Fluharty noted in a conversation that as large as this number is, relative to individual research labs, NMFS’s money has to cover the entire United States Exclusive Economic Zone.
  173. Note that this is a profound and often non-obvious decision point. Deciding what to monitor is a decision about what to know and what not to know. To responsibly make such a decision would seem to require that the management system’s “known unknowns” greatly outnumber its “unknown unknowns.” This is another reflection of monitoring’s Catch-22: we must already know enough to intelligently decide what to measure and what not to measure.
  174. See, e.g., Holly Doremus, Science Plays Defense: Natural Resource Management in the Bush Administration, 32 Ecology L.Q. 249, 250–51 (2005) (quoting Democratic Staff of the House Committee on Resources) (“Within hours of moving into the White House, the Bush Administration put a hold on numerous regulations that had been in the making for years.”).
  175. See, e.g., Phil Taylor, Endangered Species: Obama Settlement with Green Groups Spurred Major Change in Listing Decisions, Greenwire, Jan. 11, 2013, http://www.eenews.net/greenwire/stories/1059974669 (last visited Nov. 22, 2014) (“New endangered species listings dramatically fell under the George W. Bush administration, but they have risen more than threefold under the Obama administration . . . .”).
  176. Biber, supra note 1, at 48–49.
  177. Id. at 48; see also id. at 49–50 (citing an example of USFS grazing lands, in which the agency’s own monitoring data was being used against the Service in court).
  178. See, e.g., Endangered Species Act of 1973, 16 U.S.C. § 1533(a)(8) (2012) (requiring that a summary of data be published in the Federal Register for ESA determinations).
  179. Note also that conflicts remain between the permitting-and-research arms and the revenue-generating arms of agencies, as was most memorably the case in the former Minerals Management Service. See Drew Griffin et al., MMS Was Troubled Long Before Oil Spill, CNN (May 27, 2010, 2:24 p.m.), http://www.cnn.com/2010/POLITICS/05/27/mms.salazar/.
  180. One EPA career staffer voiced a similar tension, saying the agency’s central problem was not too much data or too little data, but rather the lack of a mechanism for setting priorities. The staffer noted that setting priorities was difficult, in part, because it requires senior agency officials to give up discretion, in that they have to agree to be bound by the priority-setting process and by the results of that process. Interview with EPA Employee, in Seattle, Wash. (Dec. 11, 2013).
  181. See, e.g., 16 U.S.C. § 1533(b)(1)(A) (2012) (“The Secretary shall make determinations required by . . . this section solely on the basis of the best scientific and commercial data available to him after conducting a review of the status of the species . . . .”).
  182. 5 U.S.C. §§ 551–559, 701–706, 1305, 3105, 3344, 4301, 5335, 5372, 7521 (2012).
  183. See id. § 706.
  184. In the same way, the National Weather Service has a clear interest in improving weather forecasts. These analyses are, in both the cases of NMFS and the Weather Service, core products of public interest, and large sums of money depend upon the forecasts in each case.
  185. For example, views have changed about shoreline armoring as a result of data and understanding of coastal erosion processes. See, e.g., Whatcom Cnty., Wash., Shoreline Management Program, tit. 23, ch. 100.13.A (Aug. 8, 2008), available at http://www.co.whatcom.wa.us/pds/naturalresources/shorelines/pdf/SMP_CountyApproved_EcologyApproved_090323_clean_000.pdf (“New or expanded structural shore stabilization for new primary structures should be avoided. Instead, structures should be located and designed to avoid the need for future shoreline stabilization where feasible.”). Another example involves changing guidance about the use of native versus nonnative plant species as ground cover to slow erosion near highways as a result of information about the secondary impacts of nonnative species. Fed. Highway Admin., U.S. Dep’t of Transp., Wildlife and Habitat: Federal Highway Administration Guidance on Invasive Species August 10, 1999, http://www.environment.fhwa.dot.gov/ecosystems/wildlife/inv_guid.asp (last visited Nov. 22, 2014) (providing guidance “to address roadside vegetation management issues” during road construction and maintenance activities). For an introduction to the literature on why some information leads to action, while other information does not, see generally Cash et al., supra note 38, at 8086.
  186. One might imagine the same conversation happening across different staff meetings: “What happened to our sales last time we ordered coffee beans from this distributor?”; “What happened to the unemployment rate last time we raised the prime interest rate?”; “What happened to the river last time we closed the upstream dam?”
  187. See Melinda Harm Benson & Ahjond S. Garmestani, Embracing Panarchy, Building Resilience and Integrating Adaptive Management Through a Rebirth of the National Environmental Policy Act, 92 J. Envtl. Mgmt. 1420, 1422 (2011) (describing what adaptive management is and providing examples of efforts to integrate adaptive management into regulatory regimes in the United States); Marj Nelson, The Changing Face of HCPs, Endangered Species Bull., July–Aug. 2000, at 4–5.
  188. See, e.g., Nelson, supra note 187, at 4–5 (explaining that the components of adaptive management in the ESA “include identifying potential uncertainties in the [habitat conservation plan], incorporating a range of alternatives . . . and establishing a feedback loop from the monitoring program that allows for change in the management strategies, if needed”). Putting adaptive management into actual practice, however, is of course much more difficult. In part, this is because setting out an explicit hypothesized outcome—which data will support or refute—is contrary to the discretion-protecting behavior of public agencies.
  189. See, e.g., Howard E. McCurdy, Faster, Better, Cheaper: Low-Cost Innovation in the U.S. Space Program 1–2 (2001) (discussing NASA, in particular).
  190. See, e.g., Kelly & Caldwell, supra note 30, at 193–94 (citing example of agency using ecological generalists—species unlikely to reflect changes in their environment—as indicators of habitat change, and concluding that those species make poor indicators).
  191. This metaphor, known as “the law of the instrument,” is variously attributed to Abraham Kaplan and Abraham Maslow. Abraham H. Maslow, The Psychology of Science 15 (2006).
  192. This is in part because genomes, consisting of DNA sequences, are simply enormous. The human genome is about 3 billion base-pairs (i.e., units) long. Nat’l Human Genome Research Inst., The Human Genome Project Completion: Frequently Asked Questions, http://www.genome.gov/11006943 (last visited Nov. 22, 2014). Keeping a few genome sequences on one’s hard drive occupies gigabytes. See, e.g., UCSC Genome Bioinformatics, Sequence and Annotation Downloads, http://hgdownload.soe.ucsc.edu/downloads.html (last visited Nov. 22, 2014).
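The storage figures in note 192 follow from simple arithmetic, sketched below under two assumptions flagged in the comments: that plain-text storage takes roughly one byte per base, and that a packed binary encoding needs two bits per base (four possible bases). These are standard back-of-the-envelope conventions, not figures from the cited sources.

```python
# Back-of-the-envelope arithmetic for note 192's genome-size claims.
# Assumptions: ~1 byte/base for plain-text (FASTA-style) storage;
# 2 bits/base is the theoretical minimum for a four-letter alphabet.

HUMAN_GENOME_BP = 3_000_000_000  # ~3 billion base pairs

plain_text_gb = HUMAN_GENOME_BP / 1e9        # 1 byte per base -> GB
two_bit_gb = HUMAN_GENOME_BP * 2 / 8 / 1e9   # 2 bits per base -> GB

print(f"plain text:   ~{plain_text_gb:.1f} GB per genome")   # ~3.0 GB
print(f"2-bit packed: ~{two_bit_gb:.2f} GB per genome")      # ~0.75 GB
```

Either way, a handful of genomes occupies gigabytes, which is consistent with the note's observation that keeping a few genome sequences on a hard drive is a nontrivial storage commitment.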
  193. See generally Jennifer Rowley, The Wisdom Hierarchy: Representations of the DIKW Hierarchy, 33 J. Info. Sci. 163, 163 (2007) (reviewing the literature on hierarchical concepts of “data, information, knowledge, and wisdom”).
  194. See, e.g., James N. Sanchirico et al., Investment and the Policy Process in Conservation Monitoring, 28 Conservation Biology 361, 361 (2013) (asserting that natural resource managers need information relevant to the decisions before them).
  195. See supra notes 75–79, 165–166 and accompanying text (noting some of EPA’s and FWS’s nondiscretionary duties).
  196. See Endangered Species Act of 1973, 16 U.S.C. § 1536(a) (2012) (requiring an agency to consult with FWS or NMFS when the action agency has reason to believe that an endangered or threatened species is present in the area of a federal project and will likely be affected by the project).
  197. See Marine Mammal Protection Act of 1972, 16 U.S.C. § 1361 (2012) (finding that marine mammal population stocks “should not be permitted to diminish beyond the point at which they cease to be a significant functioning element in the ecosystem of which they are part”).
  198. See National Environmental Policy Act of 1969, 42 U.S.C. § 4332 (2012).
  199. See, e.g., Julie Palakovich Carr & Nadine Lymn, Biological and Ecological Sciences in the FY 2014 Budget, in AAAS Report XXXVIII: Research and Development FY 2014, at 211, 215 (2013) (noting competitive research on fisheries funded by NOAA, NMFS, and National Ocean Service).
  200. See 10 C.F.R. § 605 (2014).
  201. Speaking to existing agency interests and incentives avoids the pitfall of producing surplus data without real information value. Doing so reliably, however, requires early buy-in on the part of the agency. For example, California must monitor the state’s marine protected areas under the Marine Life Protection Act. Cal. Fish & Game Code § 2851(e), (h) (West 2013); see also MPA Monitoring Enter., About Us, http://monitoringenterprise.org/about.php/ (last visited Nov. 22, 2014) (noting that California’s Ocean Science Trust (OST) Monitoring Enterprise is the lead organization implementing monitoring of the state’s marine protected areas). But OST has neither the budget nor the staff to effectively carry out monitoring itself; rather, the organization directs a modest amount of funding to contractors to maximize output while minimizing overhead. See, e.g., MPA Monitoring Enter. et al., North Coast MPA Baseline Program: List of Projects and Project Leads 1–2 (2014), available at http://monitoringenterprise.org/pdf/NorthCoast_ListofProjects&ProjectLeads.pdf (listing one region’s projects, none of which were led by OST staff). By consulting with OST to learn how the agency functions and what its institutional aims might be, one may design a genetic monitoring strategy that increases the probability OST would use data generated by genetic monitoring.
  202. See, e.g., Interagency Working Grp. on Ocean Acidification, Strategic Plan for Federal Research and Monitoring of Ocean Acidification 1–2 (2014), available at http://www.whitehouse.gov/sites/default/files/microsites/ostp/NSTC/iwg-oa_strategic_plan_march_2014.pdf (applying research to form policy recommendations on ocean acidification).
  203. See, e.g., id.
  204. See, e.g., Nw. Fisheries Sci. Ctr., Nat’l Marine Fisheries Serv., Ocean Acidification 1–2 (2011), available at http://www.nwfsc.noaa.gov/publications/documents/Ocean%20acidification.pdf (applying research to articulate real-world consequences of ocean acidification).
  205. See, e.g., Interagency Working Grp. on Ocean Acidification, supra note 202.
  206. Setting aside, for now, the larger question of optimum allocation of public resources among interested groups, I focus here only on the use of external environmental information in natural resource decisions.
