By Tim Rubbelke
In mid-2014, researchers at Cornell University released a study titled “Experimental evidence of massive-scale emotional contagion through social networks.” The study manipulated the Facebook feeds of users to display either primarily positive or primarily negative posts, and found that subjects’ moods shifted accordingly. The experiment involved over 600,000 individuals.
Soon after the study was published, tech websites took notice. One of the obvious questions was whether the participants had consented to the study, or even knew they were part of it.
Facebook pointed to a broad clause in its “Data Use Policy” (DUP) as implying consent (although it seems no one in the study was aware of their participation). Some sites questioned whether the clause in question was even in the DUP when the experiment began.
In light of the criticism, the Proceedings of the National Academy of Sciences (PNAS), the journal that published the research, issued an Expression of Concern about the study. In it, the editors noted that Cornell’s IRB chose not to weigh in on the research because the data its researchers received was anonymized, and the work therefore did not qualify as human subjects research under the regulations.
While Cornell’s decision likely fits the technical description of the human subjects research regulations, it serves to highlight a very interesting question: when researchers (and institutions) receive data, should they be concerned about how the data was collected?
One hopes that Facebook carried out the experiment in an ethical manner, but without oversight we cannot be sure. At the very least, many participants appear to have been subjected to a form of harm: having their moods unknowingly altered to be sadder. More disturbingly, the study gives no indication of what would happen if a participant’s emotions swung far beyond the expected range. Additionally, there is no information about the ages of the subjects.
The rules and regulations surrounding academic research are designed to hold it to a high ethical standard, especially regarding the treatment of human subjects. When data is shared between academic institutions, the research is subject to the oversight of at least one institution. In the case of data from private sources such as Facebook, things are substantially murkier. As the PNAS Expression of Concern noted, privately funded organizations like Facebook do not have to follow the Common Rule. This is compounded by the fact that, as the saying on the internet goes, “If you’re not paying for it, you are the product.” Still, as one expert interviewed by CNET pointed out, Facebook’s experiment went beyond the routine platform and advertising enhancements most websites carry out. If academic researchers are going to accept data from privately funded entities and wish to maintain high ethical standards, they should be cognizant of how that data was collected.
As Kahn et al. note, it is inadvisable to try to shoehorn a 20th-century research regulation apparatus onto a 21st-century world. And yet, as we bring research oversight into the modern world, we should be careful not to discard bedrock principles. After all, public trust in academia is what fosters participation in research.
Tim Rubbelke is a PhD Candidate at the Saint Louis University Albert Gnaegi Center for Health Care Ethics. He contributes regular pieces on research ethics.