I just finished reading Kirk Wallace Johnson's The Fishermen and the Dragon: Fear, Greed, and a Fight for Justice on the Gulf Coast. Here's my book review (from the view of an Environmental Economist of course).
The Fishermen and the Dragon tells the true story of the evolution of the Texas Gulf Coast crab and shrimp fisheries in the post-Vietnam era. The era is important to the story because tension builds along the Texas coast when post-war Vietnamese refugees relocate to the Gulf Coast, creating economic competition and racial tension between the existing anglers and the newcomers. The racial tensions play out against the backdrop of the Ku Klux Klan's encroachment into post-war political asylum issues and the environmental problems that come along with the rapid industrialization of the Gulf Coast.
So why would an environmental economist care? Honestly, the story has a bit of everything: the tragedy of the commons, Coasian bargaining, Ostrom-like institution building, credible (and incredible) threats, industrial organization, the role of government in regulating the commons, natural resource damages (oil spills), environmental and health externalities, and social and environmental justice.
Part Erin Brockovich (an initially reluctant environmental crusader), part John Grisham-like southern legal thriller tinged with undertones of racism, part Pat Conroy-like description of the hardships of coastal life, this true story reads like a popular fiction thriller.
If I have one complaint, it is that there is a slight disconnect between the story of racial tensions between the refugees and local fishers and the story of corporate greed and environmental disaster. The main characters overlap, but the two stories seem separated in both time and presentation.
Nevertheless, it's a good read with Env-Econ lessons abounding.
I give it a solid four out of five beers raised.
There is a great new policy forum article in Science defending the use of the social cost of carbon (SCC) in climate policy analysis, written by four great environmental economists (Joe Aldy, Matt Kotchen, Robert Stavins, and James Stock). While I enjoyed reading the article, I am not sure that I endorse its conclusion.
The background: for the last thirty-plus years, the economics profession has been studying climate change through integrated assessment models (IAMs), pioneered by William Nordhaus, who was awarded the Nobel Prize in 2018. These IAMs include specifications of the costs and benefits of reducing greenhouse gases, and so they can be used to calculate the "efficient" path of emissions reductions (i.e. the path that maximizes net benefits according to the model) and to calculate the marginal damages of each unit of carbon emitted along the efficient path, the so-called SCC. Then, this SCC can be smacked on to carbon emissions in the real world to get the real world to behave like the efficient outcome in the model.
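To make the mechanics concrete, here is a minimal one-period sketch of that logic. The quadratic benefit and damage functions and all parameter values are my own illustrative assumptions, not taken from DICE or any actual IAM.

```python
# Toy one-period "IAM": benefits of emissions B(E) = b*E - 0.5*c*E^2,
# damages D(E) = 0.5*d*E^2. All forms and parameters are illustrative.

def marginal_benefit(E, b=100.0, c=2.0):
    return b - c * E          # B'(E)

def marginal_damage(E, d=0.5):
    return d * E              # D'(E): the "SCC" along the chosen path

def efficient_emissions(b=100.0, c=2.0, d=0.5):
    # Net benefits B(E) - D(E) are maximized where B'(E) = D'(E):
    # b - c*E = d*E  =>  E* = b / (c + d)
    return b / (c + d)

E_star = efficient_emissions()      # 40.0 tons in this toy economy
scc = marginal_damage(E_star)       # 20.0 dollars per ton
# A tax equal to the SCC decentralizes the efficient outcome: emitters
# stop emitting exactly where marginal benefit falls to the tax.
assert abs(marginal_benefit(E_star) - scc) < 1e-9
```

In a real IAM the damage side is dynamic and discounted over centuries, which is precisely where critiques about arbitrary discount rates bite.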
Since then, there have been critics of the use of IAMs and the SCC, including from within the economics profession. Robert Pindyck has been very critical of the false level of certainty implied by IAMs. (He has a paper titled "Climate Change Policy: What Do the Models Tell Us?", and the first sentence of the abstract answers: "Very little.") Pindyck writes:
These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g., the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models' descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.
More recently, Nicholas Stern and another Nobel-prize winner, Joseph Stiglitz, have released a working paper that also criticizes IAMs and the SCCs they produce for neglecting, among other things,
...the immense risks and impacts on distribution across and within generations; the many failures, limitations or absences of key markets; and the limitations on government, both in offsetting these failures and distributional impacts.
Stern and Stiglitz propose an alternative to the SCC, which can be called a "target-consistent price." The SCC uses the costs and benefits in the IAM to calculate the efficient level of emissions and associated price. The target-consistent price treats the target as exogenous, whether that target is a specific emissions reduction level, or a specific temperature threshold, and then uses the IAM to calculate the cost-effective (i.e. least-cost) path to achieve that target. The target-consistent price is the associated price of carbon for that cost-effective path. So the difference between the SCC and the target-consistent price is that the SCC uses a cost-benefit analysis while the target-consistent price uses a cost-effectiveness analysis.
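The distinction can be sketched in a few lines. Under an assumed linear marginal abatement cost curve (an illustration of mine, not anything from Stern and Stiglitz), the target-consistent price is just the marginal abatement cost evaluated at the exogenous target:

```python
# Illustrative target-consistent pricing: the target E_target is set
# exogenously (e.g., from a temperature threshold), and the price is
# whatever makes the least-cost path hit it. Assumed MAC(E) = b - c*E.

def marginal_abatement_cost(E, b=100.0, c=2.0):
    return b - c * E

def target_consistent_price(E_target):
    # Cost-effectiveness: no damage function needed, only the target.
    return marginal_abatement_cost(E_target)

print(target_consistent_price(30.0))   # 40.0: moderate target
print(target_consistent_price(10.0))   # 80.0: ambitious target, higher price
```

Note what is absent: there is no damage function, and hence no valuation of climate harms. That is exactly the sense in which this is cost-effectiveness rather than cost-benefit analysis.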
These critiques of IAMs and the SCC by economists are similar to critiques by non-economists, who have argued that IAMs like Nordhaus's DICE model have yielded policy prescriptions that are not nearly as ambitious as they should be.
The recent article by Aldy, Kotchen, Stavins, and Stock defends the use of IAMs and the calculation of the SCC. The basic idea is that the SCC is fundamentally sound, and we (scientists and economists) should be working to improve its calculation and its use in climate policy design rather than throwing it out and settling for mere cost-effectiveness analysis. It was the Trump administration, after all, that effectively rewrote the rules to set the SCC to near-zero, while the Biden administration is bringing back legitimate policy analysis and science in its calculation.
I like the article and agree with most of the points it raises but am not sure that I endorse its conclusion. There are two ways of defending or attacking the SCC: on pragmatic grounds and on existential grounds. In either case, I don't know that we need or want an SCC, or that an SCC is preferable to something like Stern and Stiglitz's target-consistent price.
The pragmatic question is: which type of analysis is going to make the world a better place, which in this context means which is going to lead to more aggressive climate action? (Assuming here that we are all on the same page and want aggressive climate policy, relative to what's actually been enacted.) The implication in the recent Science piece is that relegating ourselves to setting exogenous policy goals and only using economics for cost-effectiveness analysis will put us at the mercy of political whims or administrations who can willy-nilly throw out any justification of aggressive policy. But that is true when using an SCC too! That's just what the Trump administration did. Furthermore, the longstanding critique of DICE and other IAMs is that they are not aggressive enough because they omit many difficult-to-quantify costs of climate change, like tipping points, and they tend to ignore or downplay distributional and equity concerns. This is the heart of Stern and Stiglitz's, as well as Pindyck's, critique of IAMs. Setting clearly-defined policy goals, like a maximum temperature increase, seems likely to increase the support and possibility of getting aggressive climate action done. Of course, that is an open question that will be hard to answer (which type of policy analysis will lead to more aggressive policy?), but it is by no means obvious that the SCC wins.
Beyond the pragmatic justification of the SCC, there is an existential issue with it. Namely, is the SCC in fact a real thing, a meaningful concept for which a concrete value exists in the world (say, $45 per ton), and we just need to find out what it is? Or, is the SCC itself a construct, which exists only within and because of the methodological framework of neoclassical welfare economics? The Science article takes for granted that the SCC is a real thing. But that claim assumes that a valuation of damages to the environment exists. The existence of that valuation is essential to neoclassical welfare economics, but a large literature in ecological economics and philosophy questions its existence.
I am especially interested in this issue since I'm currently in the middle of reading Elizabeth Anderson's 1995 book Value in Ethics and Economics. This is a must-read for economists interested in alternative ways of thinking about policy analysis. Like many ecological economists and some political philosophers like Michael Sandel, Anderson is critical or skeptical of the authority of economic valuation techniques across numerous domains. She and others defend value pluralism, or the incommensurability of values - it is not appropriate to simply put a dollar value on the damages caused by climate change (even if those dollar values come from revealed preference studies of people's willingness to pay) and toss that into a benefit-cost analysis. She writes:
... Goods can be plural, ... they can differ in kind or quality; they differ not only in how much we should value them, but in how we should value them
It could be that the SCC is bad, not because of the problems and limitations of IAMs that are raised by Stern and Stiglitz and other critics, but because the entire philosophical infrastructure of the valuation of environmental damages is unsound. It's hard for me to fully accept this view, since I've been trained for the past 20 years in neoclassical economic orthodoxy. But there are many compelling arguments in favor of value pluralism that are worth struggling with.
Or, it could be that the existential questions about the SCC aren't important and what matters is the pragmatic effects of using or not using an SCC or cost-benefit analysis in climate policy.
In either case, I think it's an open and important question to ask whether the SCC is worth pursuing or not.
I recently read an article in the journal Economics and Philosophy, written by Lisa Herzog, which has nothing whatsoever to do with environmental economics but nonetheless I think has interesting implications for it and for Pigouvian pricing in particular.
In case you are unfamiliar with it, the journal Economics and Philosophy is a scholarly journal publishing articles on topics related to (wait for it) economics and philosophy. And, often, the philosophy of economics. This is an area that I have become interested in recently, influenced no doubt by my years as an undergraduate philosophy major.
In the article, Herzog addresses and basically dismantles Hayek's model of the price mechanism in markets as being an efficient way of processing and distilling information necessary for buyers and sellers. Hayek's idea is basically that the world is a complicated place, and markets have lots of complicated information about costs, benefits, demands, etc. But, once markets determine a price for something, that price contains all of the information that buyers and sellers need to make their optimal decision. Herzog quotes Hayek: markets process "dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess."
Herzog's argument is that Hayek ignores important pieces of knowledge that buyers and sellers need to make morally informed choices and that are missing from the price signal. Without these "epistemic infrastructures," Herzog argues, "market participants do not act as morally responsible agents: when acting in such markets, it often seems fair to say that they do not know what they are doing." The motivating example is buying clothes made in sweatshops overseas. The price mechanism contains lots of information about production and transportation costs, etc., but it is missing important moral information, like the conditions of the workers and their outside opportunities. Without this information, buyers cannot make morally complete (and therefore optimal) decisions.
What does this have to do with environmental economics? Of course, when there are negative externalities (like pollution), the unregulated market price will not reflect those externalities and thus will not provide buyers and sellers the necessary information to make optimal choices. This is not new; this is Pigou, and even the most die-hard Hayekian would admit that market failures like externalities need to be incorporated somehow into the price signal to achieve efficiency. (Aside: maybe it's not actually Pigou, according to this working paper by my colleague Spencer Banzhaf.)
But I think there is a deeper and less obvious point than that. Suppose there is a Pigouvian price on pollution that accounts for its negative externalities. Does this price contain the necessary "epistemic infrastructure" for all buyers and sellers to make morally responsible decisions? If the Pigouvian price is "right," then it accounts for the spillover costs, but it's not obvious that these costs are all that is morally necessary to make informed choices. Just as with sweatshops, there potentially are moral considerations above and beyond any cost considerations (even spillover costs) that need to be incorporated. It is not clear that a Pigouvian price would incorporate any of this.
Anyways, some Pigouvian price on pollution is still no doubt better than no Pigouvian price on pollution, but there is potentially a strong argument to be made here based on Herzog's article that any Pigouvian price can't get us enough information to make morally optimal choices.
Why not?
Monetizing Bowser: A Contingent Valuation of the Statistical Value of Dog Life
Deven Carlson, Simon Haeder, Hank Jenkins-Smith, Joseph Ripberger
Abstract
Households in the USA spend about $70 billion annually on pets. Dogs, the most common pet, can be found in nearly half of American households. An important shadow price in the analysis of policies affecting human mortality is the value of statistical life (VSL), which is imputed from how people make decisions involving tradeoffs between small mortality risks and other goods. The value of statistical dog life (VSDL) is also an important, but until now unavailable, shadow price for use in regulation of such goods as pet foods and environmental toxins. Additionally, an estimate of the VSDL would have uses outside the regulatory process in valuing programs involving zooeyia, in setting tort awards for wrongful dog death, and in property divisions in divorce settlements where joint custody of dogs is not feasible. In order to estimate the VSDL, we conducted a contingent valuation of a national sample of dog owners that elicited willingness-to-pay for changes in mortality risk for pet dogs. Specifically, respondents were asked about willingness-to-pay for a vaccine that would reduce the risk of canine influenza. The design included both quantity (different magnitudes of risk reduction from the offered vaccine) and quality (differences in nature of death from the influenza) treatments as scope tests. It also included treatments involving spillover effects to other dogs and a priming question about disposable income. Based on the analysis and consideration of its assumptions, we recommend $10,000 as the VSDL.
Journal of Benefit-Cost Analysis, November 11, 2019
The most recent issue of the Journal of Economic Perspectives features a three-article symposium on the 50th anniversary of the Clean Air and Clean Water Acts. (Though that's an iffy anniversary: the CWA was passed in 1972, so it's only 50 with rounding, and the CAA was originally passed in 1963, though the 1970 amendments contained much of the most important stuff.)
In the first article, Janet Currie and Reed Walker provide "a reflection on the 50-year anniversary of the formation of the Environmental Protection Agency, describing what economic research says about the ways in which the Clean Air Act has shaped our society—in terms of costs, benefits, and important distributional concerns."
In the second article, Richard Schmalensee and Robert Stavins discuss the evolution of the CAA over time: "We trace and assess the historical evolution of the Environmental Protection Agency's policy instrument use, with particular focus on the increased use of market-based policy instruments, beginning in the 1970s and culminating in the 1990s."
In the third article, David Keiser and Joseph Shapiro study the CWA and the Safe Drinking Water Act. They summarize four main conclusions: "First, water pollution has fallen since these laws were passed, in part due to their interventions. Second, investments made under these laws could be more cost effective. Third, most recent studies estimate benefits of cleaning up pollution in rivers and lakes that are less than the costs ... Fourth, economic research and teaching on water pollution are relatively uncommon."
Well, when I drafted this, it was a recent publication I wanted to announce, but now that I have some free time, I can finish it...
The March issue of Environmental and Resource Economics is a special issue on "Benefit Transfer: Current Practice and Future Prospects." Most of the papers arose from an EPA-sponsored workshop on benefit transfer, whether through direct funding, as comments, or as the introduction to the issue. Kerry Smith put a ton of intellectual capital into identifying topics and authors for commissioned papers, and he wrote the introduction to the issue:
This paper introduces a special issue devoted to the benefits transfer methods used as part of benefit costs analysis for policy analysis. Benefits transfer methods, as they are applied for environmental policy analyses, use economic concepts together with existing empirical estimates to predict the incremental benefits from a change in some feature of an environmental resource. After giving two examples of the decisions that analysts confront in performing these analyses, I discuss the interconnections between the papers in this issue and the research challenges that emerged from discussions of them.
I helped organize the workshop and contributed to an EPA overview paper on benefit transfer challenges. I also discussed a paper by Laura Blow and Richard Blundell using nonparametric revealed preference to value environmental quality changes:
We develop an approach to valuing non-market goods using nonparametric revealed preference analysis. We show how nonparametric methods can also be used to bound the welfare effects of changes in the provision of a non-market good. Our main context is one in which the non-market good affects the marginal utility of consuming a related market good. This can also be framed as a shift in the taste for, or quality of, the market good. A systematic approach for incorporating quality/taste variation into a revealed preference framework for heterogeneous consumers is developed. This enables the recovery of the minimal variation in quality required to rationalise observed choices of related market goods. The variation in quality appears as an adjustment to the price for related market goods which then allows a revealed preference approach to bounding compensation measures of welfare effects to be applied.
This was actually one of the harder things I have had to do at EPA. I volunteered to be a discussant because I could recall being exposed to the technique in micro theory (Varian was one of its earlier developers, and it's in his textbook). The method has not been used much in environmental economics (I found only four examples), but it allows bounding of indifference curves--and therefore welfare measures--without functional form assumptions. Let me try to sketch the logic. In the figure below, there are two goods (superscripted) with one observed choice (subscripted). The shaded area RW is revealed worse, because any choice in that area was attainable but not chosen. The shaded area RP is revealed preferred, because any choice in that area would have strictly more of both goods. The indifference curve must therefore lie in the unshaded area.
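That classification logic can be sketched in a few lines (my own minimal illustration, not code from the Blow and Blundell paper): with linear prices, a bundle is revealed worse if it was affordable at the observed choice but not chosen, and revealed preferred if it contains strictly more of both goods.

```python
# Classify candidate bundles relative to one observed choice at given
# prices, following the revealed worse / revealed preferred logic above.

def classify(bundle, chosen, prices):
    budget = prices[0] * chosen[0] + prices[1] * chosen[1]
    cost = prices[0] * bundle[0] + prices[1] * bundle[1]
    if bundle[0] > chosen[0] and bundle[1] > chosen[1]:
        return "revealed preferred"   # strictly more of both goods
    if cost <= budget:
        return "revealed worse"       # was affordable but not chosen
    return "indeterminate"            # indifference curve may pass here

chosen, prices = (4.0, 3.0), (1.0, 2.0)      # implied budget = 10
print(classify((5.0, 4.0), chosen, prices))  # revealed preferred
print(classify((2.0, 2.0), chosen, prices))  # revealed worse
print(classify((8.0, 2.0), chosen, prices))  # indeterminate
```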
Now, if you have more choices, you can narrow the indifference curve.
There are further details in the paper on how to tighten the estimates of the indifference curves and then bound welfare estimates. The idea for benefit transfer (and let's be clear, this was Kerry's idea) is to use these kinds of estimates to help us benchmark parametric estimates of welfare. But it's a really cool paper!
Also in the issue are a paper by Cathy Kling and Dan Phaneuf on scope and adding up, and a paper by four current or former EPA authors (Steve Newbold, Patrick Walsh, Matt Massey, and Julie Hewitt) that improves a meta-regression framework by imposing theoretically-consistent assumptions on the functional form. And Kevin Boyle and Jeff Wooldridge have a great paper on how to handle error structures and panel data in meta-regressions. There are more, but I don't have a lot to say about them.
Important note: Whitehead (2016) gets a lot of love in this issue.
This work is not a product of the United States Government or the United States Environmental Protection Agency, and the author is not doing this work in any governmental capacity. The views expressed are those of the author only and do not necessarily represent those of the United States or the US EPA.
I recently got around to reading the 2010 book "Identity Economics: How our Identities Shape our Work, Wages, and Well-Being," by George Akerlof and Rachel Kranton. It is a lay summary of some of the work that Akerlof and Kranton have been doing to incorporate identity and social norms into economic modeling of decision-making. Basically, identity economics modifies a person's utility function - in addition to caring about the normal stuff like consumption, people also care about how their actions conform to the norms of the identity that they belong to. For example, if an employer wanted its workers to work harder or better, standard economic theory would suggest that the employer could modify the contract and include standard economic incentives like merit pay or output-based bonuses, since workers respond to these incentives. Identity economics suggests that the employer alternatively could attempt to create norms for the workplace and encourage workers to identify with those norms, such that the norms themselves and workers' identities would provide incentives for working hard. I might work hard or long hours because I get paid more to do it, or I might do so because I feel like I am an important part of a team, and I believe in the mission of the firm, etc.
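As a toy version of that utility modification (my own illustrative functional forms and parameters, not Akerlof and Kranton's), let a worker choose effort to trade off pay, effort cost, and deviation from a workplace norm:

```python
# Identity-augmented utility: wage payoff minus quadratic effort cost,
# minus a quadratic penalty for deviating from the group's effort norm.
# All functional forms and parameter values are illustrative.

def utility(effort, wage=1.0, cost=0.5, identity_weight=0.0, norm=0.0):
    return (wage * effort
            - cost * effort**2
            - identity_weight * (effort - norm)**2)

def best_effort(wage=1.0, cost=0.5, identity_weight=0.0, norm=0.0):
    # First-order condition: wage - 2*cost*e - 2*iw*(e - norm) = 0
    return (wage + 2 * identity_weight * norm) / (2 * (cost + identity_weight))

print(best_effort())                               # 1.0: pay incentives alone
print(best_effort(identity_weight=1.0, norm=2.0))  # ~1.67: the norm raises effort
```

The point of the sketch is that an employer (or a policymaker) can move behavior either by changing the wage or by shifting the norm and the strength of identification with it.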
After reading it, I was struck by the potential application to environmental policy. We want people to reduce their energy consumption, say, by purchasing fuel-efficient cars or appliances. Standard economic theory (Pigou) would suggest that we can price the externalities from energy consumption correctly so that everyone's incentives are such that they purchase the right amount of fuel-efficient cars and appliances based on financial considerations alone. Looking out my window, it seems pretty obvious that lots of people who buy fuel-efficient cars like hybrids and electrics (perhaps most of them) are doing so for other reasons - and perhaps the financial reasons aren't a part of it at all. People buy a hybrid because they are (or want to be seen as) "green" - this doesn't mean that they're image-obsessed jerks, but it does mean that identity economics seems to play a part in these decisions.
This is somewhat related to the idea of intrinsic motivation, and how it may be crowded out by government policy (some people have studied this in relation to environmental policy). And it's also kind of related to the huge "behavioral nudge" literature on environmental policy (maybe).
But I think there's something innovative about the approach of identity economics that can yield some insights into the optimal design of environmental policy. Someone should write a paper, I decided, incorporating identity economics into a model of optimal externality policy.
Then, a few days ago, I saw that someone basically did. "Environmental Policy When Consumers Value Conformity," by Alistair Ulph and David Ulph, was recently published online by JEEM. Their model is one in which consumers value conformity - sticking with the norms of a group. This is similar to the modification of the utility function in identity economics, though (I think) their model does not include the endogenous creation of norms for different identity groups (e.g., the greens and the browns). This yields some striking policy implications, including that the Pigouvian tax could be welfare-reducing.
I just finished reading Seeing Like a State by James C. Scott. I've been thinking quite a bit about its implications for environmental policy and in particular the standard neoclassical policy prescriptions for the environment (i.e. Pigouvian taxes and/or cap-and-trade).
The book has somewhat of a libertarian/conservative/small-government angle to it, though it's a captivating read and worth checking out, especially if you don't encounter arguments like that very often. The basic premise is a critique of certain types of government interventions into the market and into society. The type of interventionist policy that Scott criticizes he calls "high modernism," which involves a faith in science and technology to re-order and re-shape society and the economy in a way that improves people's lives. The motivating examples of high modernism include the Bolshevik Revolution and major planned cities like Brasilia. Scott argues that high modernism ignores local and individualized knowledge and as a result often leads to spectacular failures that can make people's lives substantially worse.
Rather than relying on high modernist technocrats, Scott puts more stock into what he calls "metis," the local, practical knowledge that high modernism ignores. It's been noted elsewhere that this argument is somewhat Hayekian, but it also strikes me as very Burkean: a fundamental reason for trusting traditional institutions is that they capture the accumulated metis of a society.
So, what are the implications for environmental policy, and what are neoclassical Pigouvians like myself supposed to get out of this? Pigouvian neoclassical welfare economics and its policy implications seem like textbook examples of high modernism - technocrats (like me!) are overruling the locals and telling them how the economy ought to be run, at least when it comes to environmental externalities like carbon pollution. However, it also seems like market-based policies like pollution taxes and cap-and-trade programs respect metis in an important way that command-and-control policies don't, since they still give freedom to people and businesses to more or less make decisions on their own (the only interference being a slight tweak of the market incentives from the policy instrument).
Would a metis-loving, high-modernist-hater like Scott approve of Pigouvian taxes? I would like to think that he would. The fact that there is a growing conservative push for market-based climate policy, including a GOP Representative recently sponsoring a carbon tax bill, makes me think so.
From the Miscellany section of Economic Inquiry, Andy Keeler:
Welfare economics must adapt to the growing consensus over the assignment of rights to animals. We extend nonmarket valuation techniques to the study and measurement of the preferences of Chinook salmon regarding their aquatic habitat and the value of their existence. We find that these techniques are as valid for fish as they are for humans. Our applied study indicates that opportunities exist for Pareto-improving trades between salmon and California agricultural and hydropower interests. (JEL Q510)
I feel moved to boast that I provided some feedback, not a formal review, on this paper to the MISC editor.
-----
And replicate:
We report on various aspects of replication research in economics. Our report includes (i) a brief history of data sharing and replication; (ii) the results of a survey administered to the editors of all 333 economics journals listed in Web of Science; (iii) an analysis of 162 replication studies that have been published in peer-reviewed economics journals from 1977–2014; (iv) a discussion of the future of replication research in economics; and (v) observations on how replications can be better integrated into research efforts to address problems associated with publication bias and other Type I error phenomena. This paper is part of an ongoing project which includes the website replicationnetwork.com, which provides additional, regularly updated information on replications in economics.
via econjwatch.org
The psychologists seem to be pushing back against replication but I don't understand how repeating an experiment to see if (or when) it really works (e.g., scope effects) can be a bad thing in social science.