Facebook’s Explanation: We Wanted to Make Sure You Weren’t Turned Off By Facebook

Adam D. I. Kramer, an author of the research, writes on an incredibly ugly Facebook page:

The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product.

Ok, I can get behind that, you care about users. That’s nice.

We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.

Oh, you were so close, and then you had to go and be all honest. So let’s boil down the ‘why Facebook did this’ to just this: we were worried people might stop visiting Facebook because of what they see in their feed. Or, alternatively: we need to know whether we should show users more or fewer positive feed postings so that they keep coming back more.

Yeah, that sounds about right. Not really about the user, so much as about how much the user drives page views.

Nobody’s posts were “hidden,” they just didn’t show up on some loads of Feed.

Ummm… Let’s try that again:

Nobody’s posts were “hidden,” they just didn’t show up on some loads of Feed.

Hmm, pretty sure not showing up on some loads of the Feed is the definition of “hidden”, but I’m not an expert here.

And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.

Hold the fuck up now. You found a correlation between seeing negative posts and posting negative things. The research, if I understand it correctly, shows that the more negative stuff people see the more negative they become.

Two parallel experiments were conducted for positive and negative emotion: One in which exposure to friends’ positive emotional content in their News Feed was reduced, and one in which exposure to negative emotional content in their News Feed was reduced.

Meaning Facebook caused users to feel better or worse at random, but on purpose. So instead of allowing for natural balance (seeing both good and bad posts), this “experiment” limited some people’s feeds to showing more good, or more bad. That actually does have a fucking impact on people.

The goal of all of our research at Facebook is to learn how to provide a better service.

Wait, that contradicts what you opened with when you said:

The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product.

I guess goal and reason are different at Facebook?

The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.

Translation: We are still at it, but now we believe we are on moral high ground.

Facebook, taking UX design to a whole new level of fuckery.


Published by Ben Brooks
