The course is for senior economics majors, MBAs and graduate students from other programs. My reading list includes BCA examples, overviews, technical stuff (in mostly non-technical language) and other stuff.
Am I missing anything?
Allen, Bryon P., and John B. Loomis. "The Decision to Use Benefit Transfer or Conduct Original Valuation Research for Benefit-Cost and Policy Analysis." Contemporary Economic Policy 26, no. 1 (2008): 1-12.
Atkinson, Giles, and Susana Mourato. "Environmental cost-benefit analysis." Annual Review of Environment and Resources 33 (2008): 317-344.
Banzhaf, Spencer H. "Consumer surplus with apology: a historical perspective on nonmarket valuation and recreation demand." Annual Review of Resource Economics 2, no. 1 (2010): 183-207.
Barget, Eric, and Jean-Jacques Gouguet. "The total economic value of sporting events: theory and practice." Journal of Sports Economics 8, no. 2 (2007): 165-182.
Blomquist, Glenn C. "Self-protection and averting behavior, values of statistical lives, and benefit cost analysis of environmental policy." Review of Economics of the Household 2, no. 1 (2004): 89-110.
Blomquist, Glenn C., Paul A. Coomes, Christopher Jepsen, Brandon C. Koford, and Kenneth R. Troske. "Estimating the social value of higher education: willingness to pay for community and technical colleges." Journal of Benefit-Cost Analysis 5, no. 1 (2014): 3-41.
Cohen, Mark A., Roland T. Rust, Sara Steen, and Simon T. Tidd. "Willingness-to-Pay for Crime Control Programs." Criminology 42, no. 1 (2004): 89-110.
Farrow, Scott. "How (Not) to Lie with Benefit-Cost Analysis." The Economists’ Voice 10, no. 1 (2013): 45-50.
Graves, Philip E. "Benefit-Cost Analysis of Environmental Projects: A Plethora of Biases Understating Net Benefits." Journal of Benefit-Cost Analysis 3, no. 3 (2012).
Griffiths, Charles, Heather Klemick, Matt Massey, Chris Moore, Steve Newbold, David Simpson, Patrick Walsh, and William Wheeler. "US Environmental Protection Agency valuation of surface water quality improvements." Review of Environmental Economics and Policy (2012).
Loomis, John B. "Incorporating distributional issues into benefit cost analysis: why, how, and two empirical examples using non-market valuation." Journal of Benefit-Cost Analysis 2, no. 1 (2011).
Rhodes, Raymond J., John C. Whitehead, and T.I.J. Smith. "A Benefit-Cost Analysis of a Red Drum Stocking Program." Unpublished paper presented at the Southern Economic Association Meetings, Charleston, SC, November 2006.
Richardson, Leslie, Tatjana Rosen, Kerry Gunther, and Chuck Schwartz. "The economics of roadside bear viewing." Journal of Environmental Management 140 (2014): 102-110.
Robinson, Lisa A. "How US Government Agencies Value Mortality Risk Reductions." Review of Environmental Economics and Policy 1, no. 2 (2007): 283-299.
Rose, Adam, Keith Porter, Nicole Dash, Jawhar Bouabid, Charles Huyck, John Whitehead, Douglass Shaw et al. "Benefit-cost analysis of FEMA hazard mitigation grants." Natural Hazards Review 8, no. 4 (2007): 97-111. [see also: Congressional Budget Office, Potential Cost Savings from the Pre-Disaster Mitigation Program, September 28, 2007]
Sunstein, Cass R. "The Real World of Cost-Benefit Analysis: Thirty-Six Questions (and Almost as Many Answers)." Columbia Law Review (2014): 167-211.
Van Houtven, George, and Maureen L. Cropper. "When is a Life Too Costly to Save? The Evidence from US Environmental Regulations." Journal of Environmental Economics and Management 30, no. 3 (1996): 348-368.
Vitaliano, Donald F. "Repeal of Prohibition: A Benefit-Cost Analysis." Contemporary Economic Policy (2014).
I'm not sure why I'm posting this since I get them all the time (maybe because I've been gullible once?). From the inbox:
Economics, Commerce and Trade Management: An International Journal (ECTIJ)
Scope & Topics
Economics, Commerce and Trade Management: An International Journal (ECTIJ) is a peer-reviewed, open access journal that addresses the impacts and challenges of all areas of economics, Commerce and Trade Management. It is an interdisciplinary journal which brings together researchers, academics, policy makers and practitioners in business and non-profit organizations.
Original research papers, state-of-the-art reviews are invited for publication in all areas of Economics, Commerce and Trade Management
Authors are invited to submit papers for this journal through E-mail: firstname.lastname@example.org or email@example.com Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this Journal.
Submission Deadline : August 22, 2014
Notification : September 22, 2014
Final Manuscript Due : September 25, 2014
Publication Date : Determined by the Editor-in-Chief
For thirty years the official story of general equilibrium went like this: Kenneth Arrow and Gerard Debreu, working independently at first, then joining forces, proved that Adam Smith was right, and the rest is history....
It was some time in the 1970s that [E. Roy] Weintraub first became aware that [Lionel] McKenzie, by then of the University of Rochester, had in the early 1950s proved the same result as had Arrow and Debreu, and slightly earlier at that, but somehow had failed to share in the enormous credit assigned for their famous result. ...
... McKenzie had succeeded in rehabilitating the kernel of his derailed Princeton thesis, and, as something of an afterthought, setting out an existence proof in "On Equilibrium in Graham's Model of World Trade and Other Competitive Systems." The paper appeared in Econometrica in April 1954. ...
Debreu arrived from Paris in 1950 .... He and Arrow began working on the equilibrium proof separately; when learning of each other’s work, they threw their lots in together and presented their results at the 1952 meetings of the Econometric Society, in Chicago – a day after McKenzie had talked about his work. Their paper, “Existence of an Equilibrium for a Competitive Economy,” appeared in Econometrica eighteen months later, more general than that of McKenzie, but three months after his.
Weintraub learned all this in the late ’70s, in the course of retooling as a historian of economic thought. The result was a long review article, “The Existence of a Competitive General Equilibrium, 1930-1954,” in the Journal of Economic Literature, in 1983. McKenzie’s name was at last on its way to being firmly appended to the famous proof — small comfort, perhaps, considering the Nobel Prize that Debreu would receive in 1985 for his contributions to mathematical economics. ...
Weintraub kept after it. He noticed that the principals had been somewhat reluctant to discuss the details surrounding their respective proofs. He badgered them, gradually learned that Debreu had attended McKenzie’s session and hadn’t told Arrow about it. The matter of priority clearly bothered McKenzie, too, not a lot, but such that he returned to it in conversations with friends. In the only autobiographical account he gave, delivered orally at Keio University, in Japan, in June 1998, on the occasion of an honorary degree, he stuck to the stoic account of his originality he had given Weintraub for his 1983 article.
By the early ’00s, Weintraub was back in the hunt. Arrow’s account of Debreu’s omission was now in the record. Much archival material had become available. Weintraub took a second stab at assessing the record, in 2002, this time in collaboration with a mathematically sophisticated student, Ted Gayer. The behind-the-scenes background of the Arrow-Debreu paper was coming clearer all the time.
Enter a young German researcher, Till Duppe, with access to the Debreu papers, maintained at the University of California at Berkeley, where Debreu had taught for thirty years. The two met via an Internet conference and agreed to collaborate. Further details had emerged, including an astonishing fact: the anonymous referee, who bottled up McKenzie’s submission to Econometrica for a critical time, while Arrow and Debreu tidied up their proof, was none other than Debreu himself; and Debreu hadn’t disclosed his conflict of interest to the editor, Robert Solow. Debreu’s conduct was thus revealed as having been dishonorable....
Weintraub published a third article, this time in the Journal of Economic Perspectives, surveying the concealed flow and ebb of tensions between Debreu and McKenzie over the years. McKenzie, who died in 2010, lived long enough to read the last draft.
My next post will be a list of all the papers for which I feel slighted, with the names of the person(s) who my paranoid delusions tell me were the referees and held my paper up while their own got published and took prominence.
Many organisations rely on prosocial behaviours – choices that benefit others but have a personal cost – to achieve their objectives. For instance, foundations rely on charitable contributions for funding, governments partly rely on voluntary compliance for tax revenue, and employers rely on voluntary referrals for hiring. Because such prosocial behaviours have positive externalities by definition, increasing such behaviour can improve welfare. What are the most effective policies to encourage prosocial behaviour?
To find out, Chetty et al. (not sure why Al gets all the credit) ran an experiment on academic reviewers for the Journal of Public Economics. They find...drum roll...if you pay reviewers to meet deadlines, they meet deadlines. Here's how they explain their findings:
Shorter deadlines ‘nudged’ referees to submit reports earlier. Cash incentives also reduced turnaround times, suggesting that any ‘crowding out’ of intrinsic motivation is small. Social incentives – publication of turnaround times – were more effective for tenured referees than shorter deadlines or cash incentives.
And they make recommendations too...
Our findings offer three lessons for improving the peer review process.
1. Shorter deadlines are extremely effective in improving the speed of the review process. Moreover, shorter deadlines generate little adverse effect on referees’ agreement rates, the quality of referee reports, or performance at other journals. Indeed, based on the results of the experiment, the Journal of Public Economics now uses a four-week deadline for all referees.
2. Cash incentives can generate significant improvements in review times and also increase referees’ willingness to submit reviews. However, it is important to pair cash incentives with reminders shortly before the deadline. Some journals, such as the American Economic Review, have been offering cash incentives without providing referees reminders about the incentives. In this situation, sending reminders would improve referee performance at little additional cost.
3. Social incentives can also improve referee performance, especially among subgroups such as tenured professors who are less responsive to deadlines and cash payments. Light social incentives, such as the Journal of Financial Economics’ policy of posting referee times by referee name, have small effects on review times. Stronger forms of social pressure – such as active management by editors during the review process in the form of personalised letters and reminders – could potentially be highly effective in improving efficiency.
More generally, our results reject the view that the review process in economics is much slower than in other fields, such as the natural sciences, purely because economics papers are more complex or difficult to review. Instead, our findings show that small changes in journals’ policies can substantially improve the peer review process at little cost.
In other words, economists respond rationally to incentives and the reason we are so slow at reviewing is that there is little incentive to be faster.
While I'm on the subject, I've been reading a lot (a lot) about choice experiments this summer. One thing I've developed is a distaste for the term "discrete choice experiment" (DCE). I much prefer the term favored by the National Marine Fisheries Service, stated preference choice experiment (SPCE). It makes it clear that this is stated preference data, not that dissimilar from other stated preference methods (i.e., they are prone to hypothetical bias). SPCE is more similar to CVM than revealed preference (RP) methods and experiments (EXP ... field or lab).
The discrete choice experiment rhetoric seems designed to raise the valuation approach to the top of a valuation pyramid when, in my opinion, the visual should be more like this:
If there were a hierarchy, the top would be when two or more valuation methods overlap, as each is severely limited on its own (by the way, here is another book! and this is the best paper ever).
And, yes, this is mostly all just post-CVM angst.
And, yes, I'll try to avoid the insider post in the future.
The Journal Impact Factor is published each year by Thomson Reuters. It measures the number of times an average paper in a particular journal has been cited.
The Impact Factor of journal J in the calendar year X is the number of citations received by J in X to any item published in J in (X-1) or (X-2), divided by the number of source items published in J in (X-1) or (X-2).
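The definition above reduces to a simple ratio. Here is a minimal sketch of the calculation, using made-up citation counts for a hypothetical journal (the function name and numbers are illustrative, not Thomson Reuters' actual methodology or data):

```python
def impact_factor(citations_in_x, source_items):
    """Impact Factor of journal J in year X: citations received in X
    to items J published in years X-1 or X-2, divided by the number
    of source items J published in X-1 and X-2."""
    return citations_in_x / source_items

# Hypothetical journal: papers from the prior two years draw
# 90 + 60 citations in year X; those two years contained 40 items.
print(impact_factor(90 + 60, 40))  # → 3.75
```

So a journal whose recent papers average nearly four citations apiece in a year would land in the neighborhood of the top Elsevier journals mentioned below.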
Impact factors aren't the best way to rank journals due to publication lags but this ranking seems to make sense. What I find interesting is how high JEEM shows up compared to the other top field journals. Journal of Financial Economics was #1 among Elsevier journals with 3.769 and Energy Economics came in at #3.
Regarding the recent post on env-econ about time to publication, another factor affecting the difference could be the skill set of the authors attempting the methods.
In my experience as a referee, CE papers are written almost exclusively by economists or statisticians with a lot of technical training. The learning curve for CE involves design as well as estimation, both of which require specialized software. So, people don’t just jump into CE willy-nilly. CVM seems much more likely to be attempted by people who don’t really know what they’re doing. They think that all they have to do is ask a WTP question, collect demographic information and report the results. As a result, a lot of CVM papers are marginal, and may take a long time to get through the review process. The meta-analysis did not look at the background/discipline of the authors, which might shed light on this.
This is so true, but I don't know if it has a big influence on the data in this paper. The sample is from papers published in Resource and Energy Economics, Ecological Economics and Environmental and Resource Economics. My experience is that the CVM hobbyists aim for journals a bit lower either at first or after they are rejected from these three journals. But, including a Google Scholar score for the authors might take care of that omitted variable.
The paper is titled ''Contingent valuation versus choice experiments: a meta-analysis application exploring the determinants of the time for publication acceptance" [pdf] and here is the abstract*:
In this paper, we test whether the time it takes for a submitted paper to be accepted by the editor(s) is sensitive to the stated preference method used. Two methods are considered: the Contingent Valuation (CV) and the Choice Experiments (CE). A meta-analysis based on a sample of 129 papers published in Resource and Energy Economics, Ecological Economics and Environmental and Resource Economics between 2005 and 2011 is conducted. The dependent variable in the ordinary least squares regression model is the number of days between the submission of the paper and the acceptance of the paper, referred to as Time for Publication Acceptance, or TPA. The main results are that TPA is lower for CE papers than CV papers, especially for those that aim at improving the method which can be interpreted as a higher academic demand in the CE field. However, a convergence is observed over the years.
The time to publication is 68% lower for choice experiment papers. My theory is that choice experiments don't need to jump through the same narrow hoops that CVM papers must. Here is a previous rant on this subject.
Here is the first paragraph:
Adamowicz (2004) provided an overview of the future directions that the academic demand in the environmental valuation field may take by examining the number of publications between 1975 and 2003 for several valuation methods. According to the author, “the most significant advance in environmental valuation may be to move away from a focus on value and focus instead on choice behaviour and data that generate information on choices” (page 439). It implies that the Choice Experiments (CE) method may become more popular than the Contingent Valuation (CV). Whitehead (2011) confirmed such shift in the academic demand by examining the number of papers published between 1989 and 2010 for each method using the ISI database.
Crastes, Romain, and Pierre-Alexandre Mahieu. "Contingent valuation versus choice experiments: a meta-analysis application exploring the determinants of the time for publication acceptance." Economics Bulletin 34, no. 3 (2014): 1575-1599.
"This blog aims to look at more of the microeconomic ideas that can be used toward environmental ends. Bringing to bear a large quantity of external sources and articles, this blog presents a clear vision of what economic environmentalism can be."
... the Environmental Economics blog ... is now the default homepage on my browser (but then again, I guess I am a wonk -- a word I learned on the E.E. blog). That is a very nice service to the profession. -- Anonymous
"... I try and read the blog everyday and have pointed it out to other faculty who have their students read it for class. It is truly one of the best things in the blogosphere." -- Anonymous