Researchers work to ensure data validity amid rise in online studies

May 28, 2018, 11:43 p.m.

Amid the growing use of online survey platforms to conduct research, Stanford labs are working both to increase the number of participants in their experiments and to reduce inadvertent skewing of the data those experiments produce.

The Daily spoke solely with non-profit labs on campus for this article, yet much of the research those labs conduct has wide-ranging, real-world impact in areas from business practices to organizational behavior to politics.

According to Olivia Foster-Gimbel, a social science research coordinator at the Graduate School of Business (GSB) Behavioral Lab, such research is not intended to make money or improve advertising but rather to advance the discipline in which it is conducted.

“At the end of the day, while we are a lab in the business school, we are a psychology lab,” Foster-Gimbel said. “We might be using Doritos and Fritos [chips] as study material, but what we’re really looking for isn’t ‘do people like Doritos better,’ it’s ‘how do you pursue your goals when tempted by sugary snacks’ or ‘how do you understand interactions with other people.’”

“It is more about the psychology and getting in people’s heads than [about] market research,” she added.

To achieve that goal, however, researchers must first produce statistically significant and replicable data, and to do that, many are increasingly turning to online studies.

Some of those researchers, such as Aastha Chadha, a social science research coordinator for the GSB Behavioral Lab, laud online platforms like the crowdsourcing platform Amazon Mechanical Turk for helping to increase sampling diversity.

While the paid incentive might seem to attract a non-random or even skewed demographic of participants, Chadha said that the participant pools on online platforms are actually fairly economically diverse.

“People automatically assume [that] because the entire sample is all people who would take a survey for 20 cents [which is the 2-minute rate for Amazon Mechanical Turk], they must be very, very poor or very in need,” Chadha said. “But the survey [population] on Mechanical Turk is a lot more diverse than you think.”

Some academics worry that participants can exploit the online interface of Mechanical Turk by, for example, taking the same study multiple times or rushing through a study without actually reading the questions.

However, according to Associate Director of the Laboratory for Social Research Chrystal Redekopp, there are significant measures in place to prevent cheating. For instance, studies often include questions that explicitly check whether the participant is paying attention; if a participant answers them incorrectly, their data is discarded.
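In practice, that screening step amounts to filtering out any response that fails the check before analysis begins. As a minimal illustrative sketch in Python (the column name, correct answer and file name below are hypothetical, not drawn from any specific lab's pipeline), the filtering might look like this:

    import csv

    # Hypothetical correct answer to an embedded attention-check item,
    # e.g., "Please select 'strongly agree' for this question."
    EXPECTED_ANSWER = "strongly agree"

    def screen_responses(path):
        """Keep only responses whose attention check was answered correctly."""
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        kept = [r for r in rows
                if r["attention_check"].strip().lower() == EXPECTED_ANSWER]
        print(f"Discarded {len(rows) - len(kept)} of {len(rows)} responses.")
        return kept

    # Usage: screen a hypothetical CSV export before analysis.
    valid_responses = screen_responses("survey_export.csv")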

Additionally, because participants have to connect Mechanical Turk to a personal bank account in order to receive payment, it is difficult to create multiple accounts.

Redekopp added that meta-experiments, which use replication techniques to gauge the validity of online research, have generally found online experimentation to be accurate.

“Most of those papers have come out on the positive side of using online platforms,” Redekopp said. “The nice thing for us is that the research community is constantly interested in the quality and efficacy of the research.”

Moreover, all of the researchers that The Daily spoke with agreed that online studies allow for cheaper access to a large pool of subjects, from which they can select for certain characteristics based on a demographic survey.

In contrast, in-person lab studies require a huge commitment of time and resources, according to Foster-Gimbel.

“To run a study online, you can get 200 people for $200,” she said. “Conversely, if you wanted to get 200 people down here [in person], at a minimum it would cost $1000, and I’ve [even] run studies that have cost $20,000.”

Despite the significant resource requirement, Foster-Gimbel said that academic journals prefer that data for peer-reviewed studies be collected in person rather than online.

“[Journals would] be more likely to publish [a paper] if it were four online studies and one study in the lab compared to five online studies, because it’s taken more seriously,” Foster-Gimbel said.

Foster-Gimbel added, however, that in-lab studies suffer from similar issues.

Those issues largely stem from the much smaller pool of participants that in-lab, on-campus studies draw from relative to online studies.

With a smaller pool of participants, researchers worry that subjects self-select, yielding non-representative or statistically insignificant data.

“A lot of the times, things we think are facts in psychology, when cross-cultural psychologists will test [them in] different populations, they will … realize [they’re] not applicable in other contexts,” said Nicole Abi-Esber, another social science research coordinator at the GSB Behavioral Lab. “So it’s inherently problematic that we’re testing this specific sample [of Stanford students], and [students] being paid is just one of the problematic things.”

While Stanford Economics Research Lab (SERL) Director Muriel Niederle echoed some of the researchers’ qualms, she pointed out that in-lab studies are more suited to examining interaction and negotiation than online alternatives.

Niederle explained that SERL performs many game theory experiments, in which participants must think about what other participants are thinking; such experiments can be more difficult to run online.

In-person SERL experiments help ensure data validity by paying participants based on how they perform within the experiment itself. These incentives are added to the base payment and encourage students to take the experiment seriously.
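As an illustration of that payment structure, consider a guess-two-thirds-of-the-average game, a textbook example of the kind of experiment Niederle describes, where each participant must anticipate what the others will guess. The game choice and dollar amounts below are hypothetical, not SERL's actual design: everyone receives a flat show-up fee, and whoever guesses closest to two-thirds of the group average earns a bonus.

    # Hypothetical payment amounts; actual lab rates vary.
    BASE_FEE = 10.00  # flat show-up payment, in dollars
    BONUS = 5.00      # extra payment for the best performance

    def payouts(guesses):
        """Compute each participant's payment for one round of a
        guess-2/3-of-the-average game: base fee for everyone, plus a
        bonus for the guess closest to the target."""
        target = (2 / 3) * (sum(guesses.values()) / len(guesses))
        winner = min(guesses, key=lambda p: abs(guesses[p] - target))
        return {p: BASE_FEE + (BONUS if p == winner else 0.0) for p in guesses}

    # Usage: three hypothetical participants; "C" is closest to the
    # target of about 23.3, so C earns the bonus on top of the base fee.
    print(payouts({"A": 50, "B": 33, "C": 22}))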

Students who participate in research often prefer these performance-based experiments, as they offer the opportunity to earn more money.

“I think if it’s a fixed [pay] I’m less invested in whether I make rushed decisions, because, I mean, at the end of the day, who really cares?” said Alessandra Marcone ’20, a frequent study participant.

Overall, Niederle said that SERL mostly uses Stanford students for experiments that don’t require too many subjects.

Other labs, such as the Laboratory for Social Research, run almost all of their studies online because they require larger subject pools, according to Redekopp.

“We just finished a big, in-person [study] that we ran over the course of two years,” Redekopp said. “But I’d say that the majority of our studies, we run online.”

Despite the popularity of online studies, some researchers think that the digital platforms they use to conduct their research — such as the scheduling system Sona or the survey-building service Qualtrics — are due for an upgrade.

“We have a lot of online systems that aren’t necessarily great, but they’re the best we have, so we deal,” Chadha said.

The GSB Behavioral Lab researchers who spoke with The Daily also explained that some problems with research are unrelated to methodology and are simply inherent to the self-selecting nature of population sampling, such as the fact that women generally take far more studies than men.

Foster-Gimbel hypothesized that this might be because there are more women in the social sciences overall, or perhaps due to greater altruistic tendencies among women.

“I think there is something about women and helping,” Foster-Gimbel said. “There’s some sort of [phenomenon] of, ‘Oh, you need me to take a survey, sure.’”

Other researchers, such as Chadha, speculated that it has more to do with either increased female industriousness or the way that women’s social networks are structured (allowing for awareness of paid experiments to spread more quickly via word of mouth).

Regardless, this trend can sometimes make finding male subjects for experiments difficult, and it is just one example of the challenges researchers face when trying to compile a diverse pool for their experiments.

But despite the various challenges of representation and integrity, Redekopp emphasized that undergraduate participation in research — on any scale — is ultimately rewarding and enjoyable.

“It was a really enriching part of my undergraduate experience, and obviously I liked it enough to keep doing it,” Redekopp said. “It can be really fun.”


Contact Ellie Bowen at ebowen ‘at’ stanford.edu.

Ellie Bowen is a junior from Grand Rapids, Michigan, studying Symbolic Systems and English Lit. She works as managing editor of news for Vol. 255. When she’s not spending inordinate amounts of time at the Daily building, Ellie loves to read National Geographic, play the piano, and defiantly use oxford commas.
