That figure, said Rep. Rosa DeLauro of Connecticut, the top Democrat on the House Appropriations subcommittee in charge of education issues, is how much the National Institutes of Health generates in economic growth for every taxpayer dollar it receives.
"That is an over-100-percent return on the investment," Ms. DeLauro assured her legislative colleagues. Others, including the NIH’s director, Francis S. Collins, the hearing’s lead witness, have been citing the $2.21 figure for years.
Yet researchers who study the question largely agree on one point: the true economic value of the nation’s scientific investment, through the NIH and other agencies, is almost surely a lot higher than $2.21 per dollar. Putting real precision on that number, however, has proved a highly elusive goal.
But this week, after five years of trying, a team of analysts announced progress toward solving the puzzle. The group, led by Julia I. Lane, a former National Science Foundation official, published an article on Thursday in Science magazine offering a series of initial findings on the economics of federal spending on science. They include the fact that faculty researchers account for fewer than one in five workers supported by federal science spending, and that universities given federal research money spend about 70 percent of it outside their home states.
The most important value of such information, Ms. Lane said in an interview, is still several years away. That’s because this week’s data capture only the first step in the life of a federal research dollar, she said. Much more now needs to be done to keep tracing those dollars in a scientifically rigorous manner throughout the economy, she said, to get a firm idea of what benefit the money ultimately brings. ...
The calculation will be far different from the estimates that produced the $2.21 figure, which merely use a standard economic multiplier to measure the immediate stimulative effect of NIH spending on a local economy—in activities such as workers buying lunch at a nearby restaurant—that would apply to any kind of government spending, whether for medical research or road construction.
A major barrier to getting a better number for science has been that the full effects on society of research discoveries often can take decades to be realized. For too long, Ms. Lane said, the scientific community has taken that complexity as an excuse not to try.
The current attempt to establish economic value stems from the federal stimulus measure of 2009, which led to a project known as Star Metrics. Through it, more than 100 universities agreed to electronically collate data related to their federal grant spending, so that researchers can now automatically collect details such as new jobs, journal publications, and patents associated with each grant.
"It’s going beyond this mechanical magic-multiplier stuff," said Ms. Lane, currently a senior economist at the American Institutes for Research.
The logic behind the $2.21 number is sound as far as it goes, but it is misapplied. No multiplier should be attached to federal spending, because local and regional economic impacts are mostly beggar-thy-neighbor effects: a government spending policy that improves the well-being of one region tends to lower the well-being of competing regions. With federal policies, the additional spending in one place is completely offset by lost spending elsewhere.
The real benefits of research are very difficult to measure with market data because they are mostly spillover benefits; neither the funding agency nor the researcher is able to capture the returns to their efforts. This infuriates politicians who don't like funding research with payoffs they can't see, which leads to nonsensical economic impact studies.
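As an aside, the mechanics behind a number like $2.21 are nothing more than the textbook spending multiplier. A minimal sketch (the 0.55 marginal propensity to consume is an illustrative assumption chosen to land near $2.21, not the value behind the NIH estimates):

```python
# Textbook spending multiplier: each dollar spent is re-spent at rate mpc,
# so total activity per dollar is 1 + mpc + mpc^2 + ... = 1 / (1 - mpc).
# Nothing here is specific to research; road construction gets the same number.
mpc = 0.55  # illustrative marginal propensity to consume (assumed, not from the NIH studies)

multiplier = 1 / (1 - mpc)
print(f"Economic activity generated per federal dollar: ${multiplier:.2f}")  # about $2.22
```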
Daniel M. Haybron (an associate professor of philosophy at St. Louis University):
What does it mean to be happy?
The answer to this question once seemed obvious to me. To be happy is to be satisfied with your life. If you want to find out how happy someone is, you ask him a question like, “Taking all things together, how satisfied are you with your life as a whole?”
Over the past 30 years or so, as the field of happiness studies has emerged from social psychology, economics and other disciplines, many researchers have had the same thought. Indeed this “life satisfaction” view of happiness lies behind most of the happiness studies you’ve read about. Happiness embodies your judgment about your life, and what matters for your happiness is something for you to decide.
This is an appealing view. But I have come to believe that it is probably wrong. Or at least, it can’t do justice to our everyday concerns about happiness.
I've seen very little critical appraisal of happiness studies in the economics journals. Other than Kerry Smith in REEP, it seems that economists have swallowed the blue pill:
Happiness economics seems to have captivated both the editors and referees of the flagship journals in economics. Theorists are trying to reconcile existing economic models with the empirical results of happiness economics, and behavioral economists are using the empirical results to support calls for new approaches to consumer sovereignty. Serious responses to happiness economics from environmental economists are long overdue. This article examines how a happiness survey would fare if it had to face the same standards used to evaluate contingent valuation or stated choice questions.
I presented a paper at the SEA meetings a couple of years ago that compared contingent valuation and happiness measures of value for the same sample of respondents (maybe I'll submit this to a journal someday ... imagine). The willingness to pay measure from the happiness question did not exhibit the sort of validity and reliability that contingent valuation researchers must show with every study.
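For readers who haven't seen how a willingness to pay number falls out of a happiness question: one common approach regresses reported life satisfaction on log income and the (non)market good, then converts the ratio of the two coefficients into an income equivalent. A toy sketch with simulated data (the variable names and numbers are mine, not from the SEA paper):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated survey: log income, a 0/1 amenity indicator, and a life-satisfaction score.
log_income = rng.normal(10.5, 0.5, n)
amenity = rng.integers(0, 2, n)
life_sat = 0.8 * log_income + 0.15 * amenity + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([log_income, amenity]))
fit = sm.OLS(life_sat, X).fit()
b_income, b_amenity = fit.params[1], fit.params[2]

# WTP: the income one would give up to keep the amenity and hold life satisfaction
# constant, evaluated at (geometric) mean income.
mean_income = np.exp(log_income.mean())
wtp = mean_income * (1 - np.exp(-b_amenity / b_income))
print(f"Implied annual WTP for the amenity: ${wtp:,.0f}")
```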
I was in College Station, TX at the end of March to talk about hypothetical bias of stated preference data to the folks in agricultural economics (here is the PDF of the PPT). I covered three recent papers that have ex-ante/ex-post SP/RP data which, as a whole, show (I argue):
ex-ante SP data is positively correlated with ex-post RP data;
there is hypothetical bias in ex-ante SP data (i.e., survey respondents say they'll do more of activity X than they actually end up doing);
the hypothetical bias can be adjusted (ex-ante) to reflect future behavior fairly accurately.
Here is the abstract from a new working paper which is the second of the three papers (here is a post describing the third):
One of the major criticisms of stated preference data is hypothetical bias. Using a unique data set of both stated and actual behavior we test for hypothetical bias of stated preference survey responses. We consider whether respondents tend to overstate their participatory sporting event behavior ex ante when compared to their actual behavior at different registration fees. We find that behavioral intentions accurately predict actual behavior at a middle level of respondent certainty, over predict actual behavior at a lower level of certainty, and under predict behavior at a higher level of certainty. This suggests that respondent uncertainty corrections can be used to mitigate hypothetical bias. Stated preference data can be used to better understand actual behavior in situations where no data exist.
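The uncertainty correction in the abstract is easy to illustrate: only count a stated "yes" as a yes when the respondent's certainty rating clears a cutoff, and see which cutoff lines up with actual behavior. A toy version (the 10-point certainty scale, the cutoffs, and all the numbers are made up for illustration; they are not the paper's data or estimates):

```python
import numpy as np

# Toy data: stated intention to register (0/1), self-reported certainty (1-10),
# and actual ex-post registration (0/1). All values are made up.
stated = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 1])
certainty = np.array([9, 4, 7, 6, 2, 10, 5, 3, 8, 6])
actual = np.array([1, 0, 1, 0, 0, 1, 0, 0, 1, 1])

def calibrated_yes(stated, certainty, cutoff):
    """Recode a stated yes as a yes only when certainty >= cutoff."""
    return (stated == 1) & (certainty >= cutoff)

print(f"unadjusted stated participation: {stated.mean():.0%} vs actual {actual.mean():.0%}")
for cutoff in (5, 7, 9):
    adj_rate = calibrated_yes(stated, certainty, cutoff).mean()
    print(f"cutoff {cutoff}: adjusted participation {adj_rate:.0%} vs actual {actual.mean():.0%}")
```

In this made-up example the raw stated rate overshoots actual behavior, a low cutoff still overshoots, a high cutoff undershoots, and a middle cutoff comes closest, which is the pattern the abstract describes.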
The first paper is being readied for submission to a third journal. We first sent it to Economics Letters where we received this confusing review:
Reviewer #1: "Criterion and Predictive Validity of Revealed and Stated Preference Data: The Case of Music Concert Demand"
The manuscript compares RP and SP visitation data to music concerts (in the NC area). As is clearly indicated by the title, the purpose is to explore the criterion and predictive validity of RP and SP data. The authors find evidence for predictive validity. The authors also recommend a method and modeling strategy for accurately predicting ex post RP visitations. I believe the authors have a good application to test validity and recommend a correction (i.e., music concert demand, provided by a not-for-profit organization). Similar to market goods, it has the benefit of experience and as such seems to be a good application for exploring the appropriateness of the correction. A concern is how applicable it is in a nonmarket setting.
I enjoyed the topic and believe the research question is important. As I read the manuscript, however, I tried to keep in mind an aim of Economics Letters, "...submit...important preliminary results, where perhaps the threshold for robustness, thoroughness or completeness of the analysis is not as high as it would be for a complete paper," and determine whether greater explanations were necessary in order to determine if the research provided important preliminary results. In other words, was the completeness there without being explicitly described.
Other comments: 1. Intercept Sampling information and response rates. I was a little confused. A 70% response rate is 91 (70% of 13 x ten concerts = 130 total surveys). There were a total of 83 usable responses, so approximately only 8 of total responses were unusable. How was it possible to send a follow-up survey to 120 people when at most you had 91 original survey responses? 2. While I found the HB analysis interesting, what value does this add to the purpose of the study? 3. SP in the HB equation seems to describe something different than SP in LnQuantity equation. Confusing. 4. I was unable to confirm marginal effects Betax*Qbar with the results and means provided. This is also true for elasticities.
The first paragraph does a nice job of describing the paper, and the reviewer is correct that it may not be generalizable to a nonmarket setting. But the papers that Hausman cites on hypothetical bias are from the marketing literature ... i.e., not generalizable to a nonmarket setting. The second paragraph doesn't seem to finish the thought. Am I to infer that the paper's results are not important and/or the "completeness" is not there? The "other comments" seem to be minor details. We next sent it to the Journal of Cultural Economics. It was rejected there because the sample is small (n=38). I can live with that reason, but the Economics Letters review is mostly gibberish.
None of this should be a big deal, papers are rejected for all sorts of vague reasons. But the first time I had some data like this (ex-ante SP, ex-post RP) a referee tried to hold up publication because s/he thought hurricane evacuations were easy decisions to make (see the second half of this old post)! It would be great if I could get a referee who understands how difficult it is to collect this sort of data.
And none of this is a reaction to the reception I received at TAMU. They seemed to understand. It didn't hurt that one of the authors of this paper was in the audience. And thanks to everyone I hung out with for a great time!
Author: Kevin Atkinson; Department of Economics, Appalachian State University, Boone, NC 28608
Economists prefer revealed preference data, yet some situations lack sufficient revealed preference information for economic analysis. Stated preference data, acquired from surveys asking respondents about their behavior under hypothetical scenarios, may be useful in such situations. Stated preference data is often biased, but revealed preference data also has limitations. Combining both types of data may be especially useful in many situations because it grounds the results from stated preference surveys in the reality of revealed preferences while using data that extends beyond what can be observed from the past. The purpose of this research is to investigate the predictive validity of stated preference data, a current topic of debate among economists (Hausman 2012, Haab et al. 2013). This project will inform that debate by surveying mountain bike park recreation participants about proposed trail development scenarios and then collecting data to determine revealed preferences after the proposed scenarios become reality. The Rocky Knob Trails Survey was conducted during 2011 and 2012, garnering 302 nearly complete responses. During these years the average number of annual trips to Rocky Knob reported by survey respondents was 16. The trails were not yet completed, so we asked respondents how many trips they would take during a typical year after completion of the trails. The average numbers of annual trips to Rocky Knob during a typical year with 6 and 8 miles of trail reported by survey respondents are 24 and 60, respectively. One half of the survey respondents agreed to be interviewed after the trails were completed. To date, 99 have responded to a follow-up survey begun in November 2013. We asked respondents for the number of mountain bike trips they had taken to Rocky Knob during the past 12 months. No results are yet available since data collection is ongoing (data will be available in January). Regression models will be estimated to determine if stated preference trips accurately predict revealed preference trips. The dependent variable will be revealed preference trips taken as reported in the November 2013 survey. Independent variables will include stated preference trips reported in the previous survey, time between surveys, socioeconomic characteristics, and other variables. Recreation demand models will then be estimated using both the revealed and stated preference data to determine if the stated preference data can be calibrated to predict accurately. Consumer surplus will then be estimated from the demand models.
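The regression described at the end of the abstract is straightforward to set up once the follow-up data are in hand. A sketch of what it might look like, with simulated numbers and hypothetical variable names standing in for the Rocky Knob data that aren't available yet:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 99  # roughly the number of follow-up respondents reported above

# Simulated stand-ins for the survey variables (all values are made up).
sp_trips = rng.poisson(24, n)                  # stated trips under the 6-mile scenario
months_between = rng.integers(12, 30, n)       # time between the two surveys
rp_trips = np.maximum(0, (0.5 * sp_trips + rng.normal(0, 4, n)).round())  # ex-post trips

X = sm.add_constant(np.column_stack([sp_trips, months_between]))
fit = sm.OLS(rp_trips, X).fit()
print(fit.summary())

# If the coefficient on sp_trips is below one, stated trips overstate actual trips,
# and the fitted equation itself is the calibration that maps SP trips to RP trips.
```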
If you happen to be going (and why wouldn't you attend an undergraduate research conference on a Saturday morning?), be sure to tell him his presentation was awesome.
I really like the public shaming treatment of listing referee names with turnaround times (and the feigned disgust with public shaming [and low incentives ... however, it is Elsevier so they could afford more] as an excuse to decline a review!).
Another tip might be to expand the pool of referees. It seems like everyone has a constant stack of 3-4 papers on their desk. My impression is that there are a boatload of competent referees that simply refuse to referee papers. Since refereeing is a voluntary contribution to a public good, I have no idea how to increase those contributions.
I also like the $100 Amazon Gift Card incentive. That said, the accounting policy at Appalachian State University is that no survey incentive can be greater than or equal to $100. Because all hell breaks loose if someone gets a $100 incentive (e.g., it could be one of the signs of the apocalypse), know what I mean?
But what I really want to know is, is there a written paper? I've presented several "papers" in "slides" form that never made it to being a real (written) "paper."
I was thinking that you and your blog readers might be interested in “An Economist’s Guide to Visualizing Data” by Jonathan Schwabish, in the most recent Journal of Economic Perspectives (which is the American Economic Association’s main “outreach” journal in some ways).
Ooh, I hate this so much! This seems to represent a horrible example of economists not recognizing that outsiders can help them. We do much much better in political science.
To which Jenkins wrote:
Ha! I guessed as much — hence sent it. And I’ll now admit I was surprised that JEP took the piece without getting Schwabish to widen his reference points.
To elaborate a bit: I agree with Schwabish’s general advice (“show the data,” “reduce the clutter,” and “integrate the text and the graph”). But then he illustrates with 8 before-and-after stories in which he shows an existing graph and then gives his improvements. My problem is that I don’t like most of his “after” pictures!
In just about every case, Schwabish’s advice is reasonable and his graphs improve on the originals. But I just don’t think his versions represent best practice. And, in an influential journal, you’d like to demonstrate best practice. ...
My other problem with this paper is its lack of ambition. In each case, an existing graph is redrawn with only slight changes. But what is really needed in economics, I think, is a larger sense of the importance of graphical discovery. The excitement of visualization is not conveyed in this article at all. Rather it all seems like a boring application of certain principles of graphics design.
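For what it's worth, Gelman's three principles are easy to demonstrate in a few lines. A bare-bones sketch with made-up numbers (direct labels instead of a legend, no gridlines or box, and the point stated in the title):

```python
import matplotlib.pyplot as plt
import numpy as np

years = np.arange(2005, 2014)
# Made-up index series purely to illustrate the layout, not real data.
exports = np.linspace(100, 135, len(years))
imports = np.linspace(100, 112, len(years))

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.plot(years, exports, color="firebrick")
ax.plot(years, imports, color="steelblue")

# Integrate the text and the graph: label the lines directly instead of using a legend.
ax.text(years[-1] + 0.2, exports[-1], "Exports", color="firebrick", va="center")
ax.text(years[-1] + 0.2, imports[-1], "Imports", color="steelblue", va="center")

# Reduce the clutter: no gridlines, no box around the plot area.
for side in ("top", "right"):
    ax.spines[side].set_visible(False)

ax.set_title("Exports have grown faster than imports (index, 2005 = 100)")
ax.set_xlim(years[0], years[-1] + 2)
plt.tight_layout()
plt.show()
```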
Andrew Gelman is having an interesting discussion with himself about why he should continue publishing in journals (here and here; apparently Columbia University doesn't feel the need to make sure that you maintain "research active" status after tenure). Here is how he describes the publishing environment in economics and political science:
Economics seems much more well organized (for better and for worse) than statistics. They have a few journals that are almost universally agreed to be the best, and it seems that papers published in these places automatically get attention.
Parallel to this is a network of popular economics bloggers that has no real parallel in any other academic field. I mean, sure, if I happen to blog a new paper, people will read it, but I’m just one guy and I don’t try to cover the field or even all of Bayesian statistics. In contrast, economics has a bunch of blogs that are a lot more popular than ours and which regularly plug and argue about new published work.
Political science doesn’t have this density of blogging but it does have a recognized set of top journals.
The other thing that helps is that economists and political scientists typically publish less than statisticians, or at least it seems that way to me. In these social science fields, a paper gets workshopped for awhile before submission, then there can be a grueling review process. This is a pain in the ass but it does have the effect of reducing the rate at which papers get published.
Still, a lot of work in social science, especially in policy analysis, never gets published in journals. There’s lots of information in Gallup reports, Pew reports, various documents prepared by organizations doing studies in different countries, etc etc. This stuff gets emailed around but can be hard to find if you don’t know where to look.
That sounds accurate to me.
One reason I need to keep publishing (at a rate of 2+ every five years) is so my teaching load doesn't increase from 3/3 to 4/4. Is that how they do it in the Ivy League?
The other reason that I keep trying to publish is that I can't get these stupid papers out of my head until they are typeset in journal format. My stupid "workplan" has a long list of potential papers for which I have data and a neat little result. This is the list of papers that I'll probably never write, that don't have the potential to hit a second-tier environmental/resource economics journal, but that I can't seem to bring myself to delete from the workplan. Sometimes these stupid papers get magnified so much in my head that they take control and crowd out work that has more potential. This is what is commonly known as an inefficient research strategy: driving the marginal benefits down to zero while the marginal costs are rising. And this post is a cry for help. I need help.
Have I posted this one before? It seems like it but maybe I only fantasized about it. Anyway, I'm revising the paper now and the wound is still fresh:
I now have two referee reports on your submission .... One referee recommends acceptance as is (though that report has little content) while the other recommends a revise and resubmit, raising some concerns. I have read the paper carefully myself and, unfortunately, the concerns raised by the second referee (plus others of my own) loom larger for me than for either of the referees. Based on this, I have decided to reject the paper.
In other words, I sent the paper out for review to experts in this area and, after reading their reports, feel that was a waste of everyone's time. Based on my own non-expert read of the paper, I find that I don't like it. I just don't like it at all.
Also, in my experience an "accept as is" report typically does not contain much content.
I've been department chair for almost five years now, and I haven't much enjoyed a 40% cut to our department budget or trying to allocate a one-time 1.3% (or something like that) raise. News like this is disheartening (but I'll add a caveat at the bottom):
In a memo on Feb. 28, [Art Pope, the state budget director] took university leaders to task, saying they’re asking for far too much money at a time when the state has competing priorities such as Medicaid and raises for K-12 teachers and state employees. He said the university system had basically ignored his office’s instructions in December to come forward with budget expansion requests of no more than 2 percent. ...
This year, the UNC system received $2.5 billion in state money for operations and another $64 million for building repairs and construction.
Pope said the board has requested an increase of $288 million, or 11.3 percent over the current year’s state budget for UNC. Those figures do not include any raises for employees.
While the state’s economy is improving, an 11 percent increase is a fantasy, he said. Such a spending increase for UNC, Pope said, would require the governor and legislature “to make major reductions in other state agencies and programs, such as our courts, the ‘K-12’ public schools, and health care.” ...
From 2007-08 to 2012-13, appropriations per student have declined 7 percent while tuition receipts per student have jumped 47 percent, according to the university system’s budget proposal. Controlling for inflation, education spending per degree at UNC has declined by 18 percent, UNC said. ...
Pope, too, seems to be casting his eye toward the university’s ability to pay its own bills.
He pointed out that the system had a cash balance of nearly $269 million by the end of the 2012 fiscal year and collected $228 million in overhead payments accompanying grants and contracts, mostly from the federal government.
“How much of the overhead receipts are being used for the repairs and renovations for the facilities used to generate the overhead receipts, as opposed to requesting $163 million in General Fund appropriations for repairs and renovations?” Pope asked in his memo.
Appstate is looking for something like $90 million, I think, for a new nursing building. That is a lot of money in the current budget environment.
After almost 25 years in the UNC system, I've gotten way tired of the mission creep. ECU wanted to move up the Carnegie ladder, and when we did, the administration immediately announced a goal of moving up the next rung. Universities want new PhD programs, engineering schools, dental schools, et cetera. I've always wondered: why don't we just try to do the things we're already doing better?
"This blog aims to look at more of the microeconomic ideas that can be used toward environmental ends. Bringing to bear a large quantity of external sources and articles, this blog presents a clear vision of what economic environmentalism can be."
... the Environmental Economics blog ... is now the default homepage on my browser (but then again, I guess I am a wonk -- a word I learned on the E.E. blog). That is a very nice service to the profession. -- Anonymous
"... I try and read the blog everyday and have pointed it out to other faculty who have their students read it for class. It is truly one of the best things in the blogosphere." -- Anonymous