"Albemarle–Pamlico Sounds revealed and stated preference data" is now available at the new journal Data in Brief (DIB) under a Creative Commons license:
In this article we describe the contingent valuation and contingent behavior method scenarios developed in the 1995 Albemarle–Pamlico Sounds Survey. The survey elicits revealed and stated preference recreation behavior data, which are used to estimate the value of water quality improvements. The survey also elicits willingness to pay data, which are used to conduct a split-sample scope test. The data are used to jointly estimate revealed and stated preference recreation data and willingness to pay data. The data have been, and can continue to be, used to investigate econometric specification, bid design, and other nonmarket valuation issues. The data have been used as illustrations and examples in three books that develop nonmarket valuation methods. Data are supplied with this article.
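For readers unfamiliar with the jargon, a split-sample scope test asks whether stated willingness to pay (WTP) is significantly larger for a larger improvement. Here is a minimal sketch in Python of the basic idea, using simulated dichotomous-choice data and a linear-in-bid probit (where Pr(yes) = Φ(a + b·bid) and mean WTP = −a/b); the variable names, bid amounts, and numbers are all hypothetical and this is not the survey's actual coding or the estimation routines used in the published papers.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulate(true_mean_wtp, n=500, sd=30.0, bids=(5, 25, 50, 100)):
    """Simulate yes/no responses to a randomly assigned bid amount."""
    bid = rng.choice(bids, size=n).astype(float)
    wtp = rng.normal(true_mean_wtp, sd, size=n)
    yes = (wtp > bid).astype(int)  # respond "yes" if WTP exceeds the bid
    return bid, yes

def estimate_mean_wtp(bid, yes):
    """Linear-in-bid probit: Pr(yes) = Phi(a + b*bid), so mean WTP = -a/b."""
    X = sm.add_constant(bid)
    res = sm.Probit(yes, X).fit(disp=False)
    a, b = res.params
    return -a / b

# Two split samples: a "small" vs. a "large" water quality improvement
bid_small, yes_small = simulate(true_mean_wtp=40.0)
bid_large, yes_large = simulate(true_mean_wtp=60.0)

print("small-scope mean WTP:", round(estimate_mean_wtp(bid_small, yes_small), 2))
print("large-scope mean WTP:", round(estimate_mean_wtp(bid_large, yes_large), 2))
# The scope test asks whether the large-scope WTP estimate is
# significantly greater than the small-scope estimate; a formal test
# would bootstrap the difference or use the delta method.
```

The published tests are, of course, more involved than this toy version, but the logic is the same: if respondents are sensitive to scope, the larger improvement should command a statistically larger WTP.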
These are the data that Haab, Huang and Whitehead, and others (since we like to share), used for several papers. The data were online for years; then I received an email inviting submission to DIB (an Elsevier journal, not a predatory one), so I submitted them. The article was reviewed, revised, and accepted for publication.
Here is a side note that only I may find interesting. Desvousges, Mathews and Train (2012) classified the scope effects paper that used these data as a "pass" (Whitehead, Haab and Huang 1998). In other words, it passed all of the scope tests that were published. That is true, but we did our best to show that it didn't. Here is the working paper that we submitted to the Southern Economic Journal (Download submitted paper). It shows "mixed" evidence, and we did a little meta-analysis of our own results to try to determine when the data passed the test and when they didn't. One referee thought the sensitivity analysis was "nonsense," the editor agreed, and so we dropped it for the sake of publication (we also dropped the data appendix, which is the raw material for the DIB article). In contrast, Whitehead and Cherry (2007) is classified as "mixed" because the sensitivity analysis was actually published. We could have chosen to try to publish only the best (or worst, I suppose) results, but that isn't very interesting (or honest). This illustrates that the pass/fail/mixed classification in Desvousges, Mathews and Train (2012) is a bit superficial and/or subject to a sort of publication bias.