On that controversial Facebook emotion study
The best controversies are those in which the headlines make you think one thing, but the full article pushes you another way. Eventually, you say, “I have no idea what to think on this one.” That happened to me last week while I was digging into Facebook’s social experiment on happiness.
Here’s the quick summary: a group of researchers published a study that finds “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.” Say what? Basically, if Facebook shows you more happy stuff, you feel happier. If they show sad stuff, you feel sad. The effect is slight, but it means big bucks for Facebook. Presumably, the happier you are when you leave Facebook, the more likely you are to come back.
So what’s the big deal? Just this little thing called informed consent. See, in academic research with human subjects, the gold standard (established after some horrifying experiments over the years) is that a researcher receives express permission from the people being experimented upon before conducting the experiment. Such experiments include disclosures and are governed by institutional review boards (IRBs). This is well and good.
But Facebook, being Facebook, didn’t take their experiment to an IRB. They also seem not to have a company ethics board charged with overseeing published research. Thus, outrage, and outrage, and more outrage after Sheryl Sandberg’s piddly apology. People—especially academics—are piling on Facebook. And, sure, that’s easy to understand. After all, it’s not a good idea to screw with people’s emotions and then publish the findings. Heck, I may have been included in the study. You too. We don’t know; they didn’t ask our permission.
There’s more going on, however, than anger at experimentation. As danah boyd reminds us in a post, “What does the Facebook experiment teach us?” Facebook experiments with this sort of thing every day. Building the right sharing algorithm is essential to the site. They fiddle with it every day. Facebook selects which friends’ updates you see, every day. Facebook makes money off ads crafted specifically for you, every day. Facebook works with social scientists to perfect their practices, every day. All the hubbub over the one time Facebook decided to actually publish its discoveries only discourages the company from making its research public in the future.
boyd writes that basically we’ve decided it’s
acceptable to manipulate people for advertising because that’s just business. But when researchers admit that they’re trying to learn if they can manipulate people’s emotions, they’re shunned. What this suggests is that the practice is acceptable, but admitting the intention and being transparent about the process is not.
boyd goes further, saying that the IRB system in academia is flawed: “We’ve trained an entire generation of scholars that ethics equals ‘that which gets past the IRB’ which is a travesty.” boyd sees many IRBs as overly concerned with protecting universities from lawsuits and too little concerned with actual ethics.
In the Times, Farhad Manjoo takes a similarly nuanced approach, accepting that Facebook’s experimentation can give us insight into human behavior. “It is only by understanding the power of social media that we can begin to defend against its worst potential abuses,” he points out.
My thoughts are still muddled, but I’ve come to a few quick proto-conclusions.
First, a general lack of appreciation for how the internet works contributes to the public’s anxiety and anger towards internet companies (this point is largely stolen from boyd). When we teach technology, and often we don’t, we need to move past how-to instructions to why and how the technology works. Your average user of Facebook or Netflix doesn’t need to know how to code, but users should know what an algorithm is. Such knowledge is essential to informed citizenship today, and we currently don’t teach it nearly well enough.
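To make “what an algorithm is” concrete, here is a toy sketch, not Facebook’s actual code and purely my own illustration, of how a feed-ranking algorithm might decide which posts you see. The scoring weights and the sentiment “knob” are invented for the example:

```python
# Toy illustration of a feed-ranking "algorithm": score each post by
# recency, engagement, and mood, then show only the top few.
# All weights here are made up for demonstration purposes.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    hours_old: float
    likes: int
    sentiment: float  # -1.0 (sad) to 1.0 (happy)

def score(post: Post) -> float:
    recency = 1.0 / (1.0 + post.hours_old)   # newer posts score higher
    engagement = post.likes / 100.0           # popular posts score higher
    mood_bonus = 0.5 * post.sentiment         # a knob that favors happier posts
    return recency + engagement + mood_bonus

def build_feed(posts: list[Post], limit: int = 2) -> list[Post]:
    # Rank every candidate post and keep only the highest-scoring ones.
    return sorted(posts, key=score, reverse=True)[:limit]

posts = [
    Post("Alice", "Best vacation ever!", hours_old=2, likes=40, sentiment=0.9),
    Post("Bob", "Rough week at work.", hours_old=1, likes=10, sentiment=-0.6),
    Post("Cara", "New puppy!", hours_old=30, likes=200, sentiment=0.8),
]

for post in build_feed(posts):
    print(post.author, "-", post.text)
```

Tweak the mood_bonus weight and a different set of posts rises to the top; that, in miniature, is the sort of lever the study pulled.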
Second, putting on my professor hat even tighter: humanists must lead the way in, or at least join, our society’s conversation about internet communication technology. This means humanists have to be willing to engage, to move beyond personal use of social media and embark on meaningful research. We have to join teams with computer scientists and sociologists and bring our humanist edge. The worst thing we could do is write off such questions and simply sit out the ethical debate.
We need to take up the call of scholars like Fiona Barnett, who calls for taking on new research questions, melding them with old ones, and not getting stuck when uninformed naysayers yammer on. In “The Brave Side of Digital Humanities,” Barnett considers whether this might be what the digital humanities are about:
At the heart of [digital humanities] is a kind of misrecognition, a merging of attention to technologies that have been deemed extraneous to the humanities with tools (and objects of study) that have been more familiar to the disciplinary conventions in the humanities.
Finally, and less directly related to the Facebook research question but still important: we need to get away from the knee-jerk, black-and-white view of the world that paints those with whom we disagree as wholly bad, deems graduation speakers from differing political parties wholly unworthy, and declares companies with thousands of employees wholly broken. We are great at writing shocking headlines that build outrage and petitions. Click-bait is cheap. Critical analysis is hard work, and it always takes longer than a news cycle.
Update: PNAS, the journal that published the article, has issued this statement.
Originally posted at A Wee Blether