We will use this space each week in the news blast for Justin to share new ideas he thinks may be of interest to our behavioral & experimental community. If you ever have ideas for topics, please share them with
This Week's BRITE Idea:
For this week's BRITE ideas we want to highlight the issue of data quality on the Mechanical Turk (MTurk) platform.
MTurk is a common platform for behavioral research and has a lot of advantages: it is fast, fairly low-cost, and provides a more diverse respondent panel than traditional university labs. The big concern, though, is data quality. The evidence on the general quality of MTurk respondents is mixed, with some studies finding quality comparable to traditional samples and others finding it falls short.
My own (limited) MTurk experience has been quite positive, and in particular I like using MTurk for pilot testing of Qualtrics experiments. In most cases, my pilot tests on MTurk have been reliable guides to what I later see in the BRITE lab.
However, this summer there was a "mini-crisis" among academics who use MTurk heavily, because many were starting to see a rise in low-quality responses. The initial fear was that people were creating automated bots to take MTurk studies. It now appears, however, that the real issue is a number of foreign "workers" who use survey farms and proxy servers to take multiple studies, often passing themselves off as being in the U.S. so that they can qualify for studies that set tighter screens.
There was some really interesting research on this that I'd like to highlight. There was also a good piece this week on a technique for screening respondents who use server farms out of any study you might run.
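The screening technique itself isn't reproduced here, but as a rough illustration of the general idea, one common first pass is to flag responses that share an IP address, since survey-farm workers taking a study multiple times often appear under duplicate IPs. This is a minimal sketch only; the field names (`response_id`, `ip_address`) are hypothetical stand-ins for whatever your survey export (e.g., from Qualtrics) actually contains.

```python
# Minimal sketch of IP-based screening: flag any response whose IP
# address appears more than once in the exported data. Field names
# are assumptions, not taken from the post being described.
from collections import Counter

def flag_duplicate_ips(responses):
    """Return the ids of responses whose IP appears more than once."""
    counts = Counter(r["ip_address"] for r in responses)
    return [r["response_id"] for r in responses if counts[r["ip_address"]] > 1]

sample = [
    {"response_id": "R1", "ip_address": "203.0.113.5"},
    {"response_id": "R2", "ip_address": "203.0.113.5"},  # same IP as R1
    {"response_id": "R3", "ip_address": "198.51.100.7"},
]
print(flag_duplicate_ips(sample))  # R1 and R2 share an IP
```

In practice you would combine a check like this with geolocation or data-center IP lookups before excluding anyone, since legitimate respondents can share an IP (e.g., a household or campus network).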