It still costs less than all of the higher quality substitutes, right?
This week Amazon changed the terms for a service that has become a standard tool in social-science research, and many scholars are complaining that it will mean higher costs to conduct surveys.
The service is called Mechanical Turk, and it is a marketplace that connects people on the Internet looking for paid piecework with anyone who has a small task and is willing to pay someone to do it. The concept is known as crowd-work, and many researchers have used it to pay strangers small amounts to take part in social-science surveys.
Amazon announced on Monday that it would take a larger commission on each gig, raising its cut to 20 percent from 10 percent next month. That means researchers will have to pay Amazon 20 percent of the roughly $7 or $8 per hour that respondents earn for completing a survey. Researchers have said the change will impose a significant extra cost on younger scholars who have relied on the service to gather large numbers of survey responses quickly and cheaply.
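For a rough sense of what the fee change means per response, here is a minimal back-of-the-envelope sketch in Python. The $1.00 reward per completed survey is a hypothetical figure chosen for illustration, not one taken from the article.

```python
# Illustrative sketch (not Amazon's official pricing calculator): compare the
# researcher's total cost per response under the old 10% and new 20% commission,
# assuming a hypothetical reward of $1.00 per completed survey.

def total_cost(reward: float, fee_rate: float) -> float:
    """Researcher's total outlay per response: worker reward plus Amazon's fee."""
    return reward * (1 + fee_rate)

reward = 1.00                   # assumed payment per response, in dollars
old = total_cost(reward, 0.10)  # $1.10 under the old 10% commission
new = total_cost(reward, 0.20)  # $1.20 under the new 20% commission

print(f"old: ${old:.2f}  new: ${new:.2f}  "
      f"increase in total cost: {100 * (new - old) / old:.1f}%")
# old: $1.10  new: $1.20  increase in total cost: 9.1%
```

Under these assumptions the worker's pay is unchanged; the researcher's total outlay per response rises by roughly nine percent.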
"Mechanical Turk is probably the most popular research pool in the social sciences right now," said Carey Morewedge, an associate professor of marketing at Boston University. While graduate students conducting research once had to spend a day or two surveying undergraduates, they can now gather at least double the number of responses in two hours using Mechanical Turk, he said.
via chronicle.com
Here is a recent working paper that used MTurk data: http://econpapers.repec.org/paper/aplwpaper/15-03.htm. Here is what we say about it:
MTurk is a crowdsourcing internet marketplace for work that enables researchers to access a representative sample of individuals willing to participate as survey respondents, and it is growing in popularity for online experiments and surveys. In terms of developing nationally representative samples, recent research has compared the demographic characteristics of MTurk users with those of samples drawn by other techniques and found that MTurk users are more representative than samples derived from experimental lab studies and in-person convenience samples.
MTurk seems to work as well as other convenience samples (link):
Abstract. Recent and emerging technology permits psychologists today to recruit and test participants in more ways than ever before. But to what extent can behavioral scientists trust these varied methods to yield reasonably equivalent results? Here, we took a behavioral, face-to-face task and converted it to an online test. We compared the online responses of participants recruited via Amazon’s Mechanical Turk (MTurk) and via social media postings on Twitter, Facebook, and Reddit. We also recruited a standard sample of students on a college campus and tested them in person, not via computer interface. The demographics of the three samples differed, with MTurk participants being significantly more socio-economically and ethnically diverse, yet the test results across the three samples were almost indistinguishable. We conclude that for some behavioral tests, online recruitment and testing can be a valid—and sometimes even superior—partner to in-person data collection.
Casler, Krista, Lydia Bickel, and Elizabeth Hackett. "Separate but equal? A comparison of participants and data gathered via Amazon’s MTurk, social media, and face-to-face behavioral testing." Computers in Human Behavior 29, no. 6 (2013): 2156-2160.
I'd like to see it compared with the inexpensive panels at Survey Sampling, Inc. or SurveyMonkey to see how it performs. Has anyone tried that?