From Hensher, Shore and Train (2014) [emphasis added]:
> Wang et al. [12] investigated the determinants of public willingness to accept tiered electricity pricing (TEP) and used the findings to identify the acceptable range of a rate premium. A questionnaire survey in four urban cities of China was undertaken to identify the drivers and barriers to the public's acceptance of TEP. ... The question on the household's willingness to accept TEP reform is "How much additional payment do you think is reasonable to put on the present electricity price in the second tier of TEP?" Respondents selected their willingness to pay in four increasing ranks of choices: (a) None; (b) within 0.05 RMB/kW h; (c) 0.05–0.1 RMB/kW h; (d) above 0.1 RMB/kW h. An ordered logit model was used to estimate the relationship between the TEP ordered response and the Likert-scaled influences. The main finding is that low income respondents and high income groups seem more willing to accept a higher premium. They suggest that this might be attributed to the fact that the electricity consumption of the residents with lowest income is small, which often does not reach the upper bound of the first tier. While this is an interesting study, the method is somewhat different to our approach in that they use a contingent valuation question, *which can be criticised in terms of a risk of strategic bias*; in contrast to stated choice experiments where attribute packages are being evaluated and the risk of voting through response to keep prices down is far less likely, in part because there is an offer of varying service levels associated with electricity rates.
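For readers who haven't worked with this model class: the "ordered logit" in the quoted study maps a latent willingness-to-accept index into the four ranked response categories. Here is a minimal sketch on simulated data using statsmodels; the variable names, coefficients, and cutpoints are all invented for illustration and are not Wang et al.'s specification.

```python
# Minimal ordered logit sketch on simulated data. Everything here
# (covariates, coefficients, cutpoints) is invented for illustration;
# this is NOT Wang et al.'s data or specification.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500

# Hypothetical covariates: standardized household income and a 5-point
# Likert-scaled attitude toward the pricing reform.
income = rng.normal(size=n)
attitude = rng.integers(1, 6, size=n)

# Latent willingness to accept a second-tier premium, cut into the four
# ordered ranks from the survey: none, within 0.05, 0.05-0.1, above 0.1
# RMB/kWh. The logistic error is what makes this an ordered *logit*.
latent = 0.5 * income + 0.3 * attitude + rng.logistic(size=n)
wta_rank = pd.Series(pd.cut(
    latent, bins=[-np.inf, 0.5, 1.5, 2.5, np.inf],
    labels=["none", "<=0.05", "0.05-0.1", ">0.1"]))

X = pd.DataFrame({"income": income, "attitude": attitude})

# distr="logit" gives the proportional-odds (ordered logit) model.
res = OrderedModel(wta_rank, X, distr="logit").fit(method="bfgs",
                                                   disp=False)
print(res.summary())
```

The fitted output reports a slope for each covariate plus the estimated thresholds between the four ranks; a positive income coefficient, for instance, would mean higher-income households tend to land in the higher willingness-to-accept categories.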
Now, I may be a simple unfrozen caveman, confused by your modern ways (i.e., choice experiments), but I do know this: a blanket statement without explanation or citation to any literature is an empty assertion that a referee should have flagged.
My introduction to strategic bias was in graduate school while reading Cummings, Brookshire and Schulze (1986) [which resists the Google]. On page 26 they conclude:
> Results from the experimental laboratory and CVM studies concerning efforts to test the strategic bias hypothesis reviewed above do not support the hypothesis. Of course, these results cannot be interpreted as definitive evidence that subjects will not behave strategically in applications of the CVM.
Even Hausman (2012) doesn't list strategic bias as one of the three major concerns with CVM. That may be because, these days, attention to the incentives embedded in the survey question will usually lead to a lack of strategic behavior (e.g., one-shot referendum questions may be incentive compatible; see Carson and Groves 2007).
I should note that I don't have any problem with choice experiments (I've even been known to work that way a bit), since I think one can learn a lot about preferences from both questioning approaches. But let me air one of my most frequent gripes. The assertion that choice experiments are free of strategic bias (i.e., "better" than CVM) is never explained in the paper. Like many choice experiment papers, this one treats issues that CVM researchers must always deal with (e.g., hypothetical bias, external [one-shot] split-sample scope tests) as somehow not a concern. The choice experiment questioning approach automatically removes the problems inherent in stated preference data. I would argue that this removal is simply by assertion.
Researchers who would like to become more familiar with the differences and similarities between CVM and discrete choice experiments (DCE) should, as a start, read these two papers:
- Carson, Richard T., and Jordan J. Louviere. "A common nomenclature for stated preference elicitation approaches." Environmental and Resource Economics 49, no. 4 (2011): 539-559.
- Hanley, Nick, Susana Mourato, and Robert E. Wright. "Choice Modelling Approaches: A Superior Alternative for Environmental Valuation?" Journal of Economic Surveys 15, no. 3 (2001): 435-462.
A few papers that compare CVM and DCE are:
- Adamowicz, Wiktor, Peter Boxall, Michael Williams, and Jordan Louviere. "Stated preference approaches for measuring passive use values: choice experiments and contingent valuation." American Journal of Agricultural Economics 80, no. 1 (1998): 64-75.
- Christie, Mike, and Christopher D. Azevedo. "Testing the consistency between standard contingent valuation, repeated contingent valuation and choice experiments." Journal of Agricultural Economics 60, no. 1 (2009): 154-170.
- Hanley, Nick, Douglas MacMillan, Robert E. Wright, Craig Bullock, Ian Simpson, Dave Parsisson, and Bob Crabtree. "Contingent valuation versus choice experiments: estimating the benefits of environmentally sensitive areas in Scotland." Journal of Agricultural Economics 49, no. 1 (1998): 1-15.
- Loomis, John, and Luis Santiago. "Economic Valuation of Beach Quality Improvements: Comparing Incremental Attribute Values Estimated from Two Stated Preference Valuation Methods." Coastal Management 41, no. 1 (2013): 75-86.
And then read this new one for a questioning of the incentive compatibility of repeated choice experiment questions:
- Petrolia, Daniel R., and Matthew G. Interis. "Should We Be Using Repeated-Choice Surveys to Value Public Goods?" AERE Newsletter 33, no. 2 (November 2013): 19-25.
What else should someone be reading? In other words, have I "selectively" reviewed the literature and missed something important?