Can we take something positive from the Facebook furore?

The discussion about the ethics of the Facebook study is nuanced, but something went wrong. Photograph: Thomas White/Reuters


Between the high-profile cases of fraud and bitter arguments about replication, psychology has had a bit of a rough ride over the past few years. What the discipline didn't really need was another public fiasco highlighting questionable research practices. Unfortunately, a study published in PNAS last week has resulted in just that.


In a nutshell, Facebook conducted some research in 2012 on nearly 700,000 users, in which it manipulated the type of content that would appear in their news feeds. In one condition, posts from friends that had positive wording were selectively removed; in another, posts with negative content were omitted from feeds. The researchers then looked at whether this had an effect on the emotional content of the users' own posts. It did: those whose feeds had positive content selectively removed went on to produce fewer positive posts themselves. The effect was small, but it was there.


The reason so much outrage has been directed towards the study is due to the nature in which it was conducted. A central ethical tenet of psychological research is the requirement for informed consent - people should be able to make a decision about whether they want to take part in a study, based on an awareness of what the research actually involves. In some cases it's acceptable to mask the true purpose of the study, but nevertheless people should be (at the very least) aware that they are being tested.


This didn't happen with the Facebook experiment. By all accounts, the researchers involved took advantage of a clause in Facebook's data use policy which states that '...we may use the information we receive about you... for internal operations, including troubleshooting, data analysis, testing, research and service improvement.' So their reasoning was that they didn't need to obtain fully informed consent in the usual experimental sense, because Facebook users had already agreed that their data (i.e. what they post on their news feeds) could be used for research purposes.


As Chris Chambers noted in an earlier blog post this week, the specifics around the ethics protocol for the study are a bit murky. Cornell University issued a statement on Monday claiming that they didn't need to give the usual ethical approval for the study because the data had already been collected by Facebook, not by their own researchers. This is despite the fact that the researchers in question were involved in designing the study. The journal editor responsible for the paper's publication at PNAS has simultaneously expressed concerns about the study and stated that she didn't want to 'second-guess the relevant IRB [ethical approval board].'


All this has led to a huge backlash against Facebook, PNAS and the researchers in question. One thing that is clear is that if the intention all along was to treat this as a scientific study, with the aim of publishing it, then it should have been cleared by an ethics board (in this case, Cornell's). The fact that the paper's authors were involved in the design process means they should have flagged this issue early on, and it's bizarre that no attempt to do so appears to have been made.


But the problem is nuanced. What if the original intention wasn't to publish the study as scientific research? Companies run private marketing research of a similar kind all the time using a technique called A/B testing, which isn't subject to ethical approval. Should it be? If so, which ethical approval board should be used? Perhaps, in the aftermath of this work, we need to look at reforms to ethical approval processes so that we can deal more adequately with an issue like this in the future. The BPS guidelines on ethical internet research are a good place to start, but there need to be clearer international guidelines on how experimental work involving public data sets is obtained and used, so that research isn't conducted at the cost of public health, wellbeing or trust. To my mind, the onus here is on scientists and research institutes to lead the charge.
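For readers unfamiliar with the term, A/B testing simply means randomly splitting users into groups, showing each group a different version of a product, and comparing some outcome metric between them. The snippet below is a minimal, hypothetical sketch of that logic in Python; the user ids, the "positivity" metric and the effect size are all invented for illustration, and this is not a description of Facebook's own system.

```python
import random
import statistics

def assign_group(user_id: int) -> str:
    """Deterministically bucket a user into 'A' (control) or 'B' (treatment).

    Seeding a private RNG with the user id keeps the assignment stable
    across sessions, which is typical of A/B test bucketing.
    """
    return "B" if random.Random(user_id).random() < 0.5 else "A"

# Hypothetical outcome data: the fraction of positive words in each user's
# posts. In a real test these values would come from logged behaviour,
# not from a simulation like this one.
random.seed(42)
outcomes = {"A": [], "B": []}
for user_id in range(10_000):
    group = assign_group(user_id)
    base_rate = random.gauss(0.47, 0.05)       # baseline positivity (made up)
    effect = -0.001 if group == "B" else 0.0   # tiny simulated treatment effect
    outcomes[group].append(base_rate + effect)

mean_a = statistics.mean(outcomes["A"])
mean_b = statistics.mean(outcomes["B"])
print(f"control mean:   {mean_a:.4f}")
print(f"treatment mean: {mean_b:.4f}")
print(f"difference:     {mean_b - mean_a:+.4f}")
```

The point of the sketch is only that such a test is technically trivial to run at any scale; whether the people being bucketed should be told about it first is exactly the ethical question the code doesn't answer.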


The absolute worst thing that could happen as a result of this furore is that (a) Facebook keeps any and all data it collects private and runs these sorts of studies behind closed doors, and (b) scientists refuse to touch the data anyway, for fear of public outrage. That would be disastrous, because social media hosts a wealth of information about human behaviour, and there's absolutely no reason why ethical, robust science can't be undertaken with that data.


So amid the vitriol, conspiracy theories and general 'FACEBOOK IS EVIL' outrage, I can't help but feel that we risk missing a huge opportunity to improve the way digital research is conducted in the future. We've been offered a glimpse of the potential insights we can get from social media research. We've also been warned about what needs to change for that work to be ethical and responsible.

