I was in College Station, TX at the end of March to talk about hypothetical bias of stated preference data with the folks in agricultural economics (here is the PDF of the PPT). I covered three recent papers with ex-ante SP and ex-post RP data which, taken as a whole, show (I argue):
- ex-ante SP data is positively correlated with ex-post RP data;
- there is hypothetical bias in ex-ante SP data (i.e., survey respondents say they'll do more of activity X than they actually end up doing);
- the hypothetical bias can be adjusted (ex-ante) to reflect future behavior fairly accurately.
Here is the abstract from a new working paper, which is the second of the three papers (here is a post describing the third):
One of the major criticisms of stated preference data is hypothetical bias. Using a unique data set of both stated and actual behavior, we test for hypothetical bias of stated preference survey responses. We consider whether respondents tend to overstate their participatory sporting event behavior ex ante when compared to their actual behavior at different registration fees. We find that behavioral intentions accurately predict actual behavior at a middle level of respondent certainty, overpredict actual behavior at a lower level of certainty, and underpredict behavior at a higher level of certainty. This suggests that respondent uncertainty corrections can be used to mitigate hypothetical bias. Stated preference data can be used to better understand actual behavior in situations where no data exist.
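The "respondent uncertainty corrections" in the abstract are, in spirit, the familiar certainty-recoding idea: count a stated "yes" as a predicted "yes" only when the respondent's self-reported certainty clears some cutoff. Here is a minimal sketch of that general idea; the function name, cutoffs, and all of the data below are made up for illustration and are not the paper's:

```python
# Certainty recoding: a stated "yes" counts as a predicted "yes" only when
# self-reported certainty meets a chosen threshold. Data are hypothetical.

def predicted_participation(stated_yes, certainty, threshold):
    """Share of respondents predicted to participate after recoding
    low-certainty 'yes' answers to 'no'."""
    recoded = [y and c >= threshold for y, c in zip(stated_yes, certainty)]
    return sum(recoded) / len(recoded)

# Hypothetical survey: stated intent to register plus 1-10 certainty scores.
stated_yes = [True, True, True, False, True, True, False, True]
certainty = [9, 4, 7, 2, 8, 5, 3, 10]

for t in (1, 5, 7, 9):
    share = predicted_participation(stated_yes, certainty, t)
    print(f"certainty cutoff {t}: predicted participation = {share:.2f}")
```

With no cutoff, raw stated intentions overstate participation; raising the cutoff shrinks the predicted share, so some middle cutoff tends to track actual ex-post behavior best, which is the pattern the abstract reports.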
The first paper is being readied for submission to a third journal. We first sent it to Economics Letters, where we received this confusing review:
Reviewer #1: "Criterion and Predictive Validity of Revealed and Stated Preference Data: The Case of Music Concert Demand"
The manuscript compares RP and SP visitation data for music concerts (in the NC area). As is clearly indicated by the title, the purpose is to explore the criterion and predictive validity of RP and SP data. The authors find evidence for predictive validity. The authors also recommend a method and modeling strategy for accurately predicting ex post RP visitations. I believe the authors have a good application to test validity and recommend a correction (i.e., music concert demand, provided by a not-for-profit organization). Similar to market goods, it has the benefit of experience and as such seems to be a good application for exploring the appropriateness of the correction. A concern is how applicable it is in a nonmarket setting.
I enjoyed the topic and believe the research question is important. As I read the manuscript, however, I tried to keep in mind an aim of Economics Letters, "...submit...important preliminary results, where perhaps the threshold for robustness, thoroughness or completeness of the analysis is not as high as it would be for a complete paper," and to determine whether greater explanation was necessary for the research to provide important preliminary results. In other words, was the completeness there without being explicitly described?
Other comments:
1. Intercept sampling information and response rates. I was a little confused. A 70% response rate is 91 (70% of 13 x ten concerts = 130 total surveys). There were a total of 83 usable responses, so approximately only 8 of the total responses were unusable. How was it possible to send a follow-up survey to 120 people when at most you had 91 original survey responses?
2. While I found the HB analysis interesting, what value does this add to the purpose of the study?
3. SP in the HB equation seems to describe something different from SP in the LnQuantity equation. Confusing.
4. I was unable to confirm marginal effects Betax*Qbar with the results and means provided. This is also true for elasticities.
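(An aside for anyone puzzled by comment 4: "Betax*Qbar" is presumably the standard marginal effect from a semi-log model. Assuming the LnQuantity equation takes the usual form, the calculations the reviewer is trying to reproduce would be:)

```latex
% Semi-log demand model (assumed form of the "LnQuantity" equation):
% marginal effect evaluated at the mean of Q, elasticity at the means.
\[
\ln Q = \mathbf{x}'\beta + \varepsilon
\quad\Longrightarrow\quad
\frac{\partial Q}{\partial x_k} = \beta_k Q \;\approx\; \beta_k \bar{Q},
\qquad
\epsilon_k = \frac{\partial Q}{\partial x_k}\,\frac{x_k}{Q} = \beta_k x_k \;\approx\; \beta_k \bar{x}_k .
\]
```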
The first paragraph does a nice job of describing the paper, and the reviewer is correct that the results may not be generalizable to a nonmarket setting. But the papers that Hausman cites on hypothetical bias are from the marketing literature ... i.e., also not generalizable to a nonmarket setting. The second paragraph doesn't seem to finish the thought. Am I to infer that the paper's results are not important and/or the "completeness" is not there? The "other comments" seem to be minor details.

We next sent the paper to the Journal of Cultural Economics. It was rejected there because the sample is small (n=38). I can live with that reason, but the Economics Letters review is mostly gibberish.
None of this should be a big deal; papers are rejected for all sorts of vague reasons. But the first time I had data like this (ex-ante SP, ex-post RP), a referee tried to hold up publication because s/he thought hurricane evacuations were easy decisions to make (see the second half of this old post)! It would be great to get a referee who understands how difficult it is to collect this sort of data.
And none of this reflects the reception I received at TAMU. They seemed to understand. It didn't hurt that one of the authors of this paper was in the audience. And thanks to everyone I hung out with for a great time!