A cool new paper* is the first (?) to use a discrete choice experiment in the AER:
Mas, Alexandre, and Amanda Pallais. 2017. "Valuing Alternative Work Arrangements." American Economic Review, 107(12): 3722-59.
Abstract: We employ a discrete choice experiment in the employment process for a national call center to estimate the willingness to pay distribution for alternative work arrangements relative to traditional office positions. Most workers are not willing to pay for scheduling flexibility, though a tail of workers with high valuations allows for sizable compensating differentials. The average worker is willing to give up 20 percent of wages to avoid a schedule set by an employer on short notice, and 8 percent for the option to work from home. We also document that many job-seekers are inattentive, and we account for this in estimation.
I might be in too deep (ya think?), but there are some questionable statements about the quality of stated preference data. Before I go on, let me emphasize that this is a cool paper. Stated preference research can push the frontier in many fields besides environmental economics. The problem is that environmental economics stated preference researchers must run the 110-meter high hurdles while this paper gets entered in the 100-meter dash (to use a track analogy).
On page 3724:
In this paper we report estimates of worker valuations over alternative work arrangements from a field experiment with national scope. The experiment elicits preferences on work arrangements by building a simple discrete choice experiment into the application process for a national call center. In this way we employ a method that can flexibly back out a willingness to pay (WTP) distribution from close to real market transactions.9
I don't consider something a "field experiment" unless there is a real monetary transaction.
Here is footnote 9:
Discrete choice experiments are an extension of the contingent valuation literature whereby rather than directly asking people for valuations over an attribute (the stated preference method), people are given the choice of two or more scenarios and are asked to choose their preferred option. These scenarios usually vary the attributes and the prices and WTP can be estimated using random utility models (McFadden 1973; Manski 1977). Choice experiments have been shown to have better properties relative to stated preference valuation methods (Hanley, Wright, and Adamowicz 1998). A question is whether these experiments, which are usually survey-based, correspond to actual market behavior. This is something we can overcome by embedding the choice in a real market setting. Diamond and Hausman (1994), who critique stated preference valuation methods, hypothesize that the problem with the approach is not methodological but due to “an absence of preferences” over the attributes they are being asked to value. This is far less of a concern here since we are asking people to make choices over realistic work arrangements.
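For readers outside the field, the random utility estimation the footnote references (McFadden 1973) is simple to sketch. Here is a minimal simulated example (not the authors' code; the "true" coefficients and choice setting are made up for illustration) showing how a logit fit to binary job choices backs out WTP as a ratio of coefficients:

```python
# A minimal sketch, not the authors' code: all "true" coefficients and the
# choice setting are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
beta_wage, beta_flex = 0.4, -2.0   # assumed preferences: wages good, irregular schedule bad
true_wtp = -beta_flex / beta_wage  # compensating differential in wage units

# Each applicant chooses between a fixed-schedule job and an irregular-schedule
# job that pays a randomized premium (the "X" in the survey question).
premium = rng.uniform(0.0, 10.0, n)            # wage premium offered for the irregular job
dv = beta_wage * premium + beta_flex           # utility difference, irregular minus fixed
choose_irregular = rng.random(n) < 1 / (1 + np.exp(-dv))  # logit choice rule

# Fit the binary logit by Newton-Raphson; the intercept captures the schedule
# disutility and the slope is the wage coefficient.
X = np.column_stack([np.ones(n), premium])
y = choose_irregular.astype(float)
b = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ b))
    w = mu * (1 - mu)
    b += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))

wtp_hat = -b[0] / b[1]  # estimated WTP to avoid the irregular schedule
print(f"true WTP: {true_wtp:.2f}, estimated: {wtp_hat:.2f}")
```

The ratio-of-coefficients trick is standard; the methodological fight is over whether the choices feeding the logit reflect real trade-offs, which is exactly the point at issue below.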
Is this the sort of lazy assertion that flies in labor economics papers at the AER? It wouldn't fly at JEEM, JAERE, Land Econ, Env and Res Econ, Res and Energy Econ, MRE, etc. (basically any environmental and resource economics journal).
- "Choice experiments have been shown to have better properties relative to stated preference methods" Huh? See "CVM surveys suck so we're using choice experiments, Q.E.D. (#174)".
- A "discrete choice experiment" is one type of "stated preference method" and choice experiments have lots of problems just like contingent valuation. See "What is Contingent Valuation?".
- Market experience does not eliminate hypothetical bias. See Morgan et al. (2016) for the case of oyster demand. The same problem has been found in the context of trip-taking decisions and voting.
Here is one of the hypothetical situations:
This question might be inconsequential (i.e., your choice will not affect anything whatsoever), and if so, there is little reason for respondents to take it very seriously. See Groothuis et al. (2017).
The authors collect more data in an online survey. Here is one scenario (parenthetical added):
Imagine that you are applying for a new job in your [current line of work, same line of work as your last job], and you have been offered two positions. Both positions are the same as your [current/last] job in all ways, and to each other, other than the work schedule and how much they pay. Please read the descriptions of the positions below.
Position 1) This position is 40 hours per week. The work schedule is Monday–Friday 9 am–5 pm. This position pays the same as your [current/last] job.
Position 2) This position is 40 hours per week. The work schedule in this position varies from week to week. You will be given your work schedule one week in advance by your employer. The hours can be morning through evening, weekdays and weekends, but not nights. This position pays “X” (e.g., "15% more than") your [current/last] job.
Which position would you choose?
Another hypothetical question for those having a "flexible job" is:
Suppose your primary employer gives you the option of working a fixed work schedule, Monday-Friday during the daytime. Under this arrangement you would continue to work your usual number of hours but once your schedule is set you may not change the times and days of work. In exchange for having this fixed rather than flexible schedule you would get [2/5/10/20/35]% higher pay. Would you agree to this arrangement if given the choice?
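The randomized [2/5/10/20/35]% premiums read like the bid design in a dichotomous choice contingent valuation survey, so the usual nonparametric machinery applies. Here is a minimal sketch of a Turnbull-style lower-bound mean willingness to accept (the acceptance shares below are made up, not the paper's data):

```python
# A minimal sketch with made-up acceptance shares -- not the paper's data.
import numpy as np

premiums = np.array([2.0, 5.0, 10.0, 20.0, 35.0])   # randomized pay bumps (%)
accept = np.array([0.30, 0.42, 0.40, 0.65, 0.80])   # hypothetical shares saying yes

# Acceptance should rise with the premium; pool adjacent violators
# (simple equal-weight pooling) to enforce monotonicity.
cdf = accept.copy()
for _ in range(len(cdf)):
    for j in range(1, len(cdf)):
        if cdf[j] < cdf[j - 1]:
            cdf[j] = cdf[j - 1] = (cdf[j] + cdf[j - 1]) / 2

# Turnbull lower-bound mean: value the probability mass in each bracket
# at the bracket's lower edge (mass above the top bid gets the top bid).
values = np.concatenate([[0.0], premiums])           # lower edges of brackets
mass = np.diff(np.concatenate([[0.0], cdf, [1.0]]))  # probability in each bracket
lower_bound = float(values @ mass)
print(f"lower-bound mean WTA: {lower_bound:.2f}% of pay")
```

The point of the lower bound is that it needs no distributional assumptions, only that a "yes" at X% means the respondent's true WTA is at most X%, which is precisely what hypothetical bias and strategic answering call into question.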
Labor markets are characterized by negotiation. It is difficult to imagine that every survey respondent did not see these questions as the opening round of a negotiation. In both questions, for example, the respondent might be saying, "I choose Position 2, but I will only take the job if I can negotiate higher pay." Market researchers face the same issue in a market setting: if you are asked whether you would purchase a new and improved product at a higher price, you might answer no, thinking that the product will be offered anyway and your signal will result in a lower price.
These are issues that stated preference folks get hassled about over and over again. In contrast, Ivy Leaguers write a dismissive footnote and get an AER**. OK, the whine is over.
*I've been enjoying my semester long sabbatical from blogging (and teaching and dressing like a grown-up) ... until this.
**And they compare their "field experiment" surveys to the online survey results to deal with a referee who thinks that Diamond's adding-up test is a paper killer. Here is footnote 43:
Diamond (1996) recommends testing for internal consistency in contingent valuation surveys. We go further in Section IV by comparing WTP estimates in the market setting to estimates from a nationally representative survey.
Do you think BP is going to let you get past the adding-up test by asking some more hypothetical questions? I think not!