Everything in Moderation

Season 4: Episode 4

What, if anything, should be banned from online media? And who should review violent and explicit content, in order to decide if it’s okay for the public? Thousands of people around the world are working long, difficult hours as content moderators in support of sites like Facebook, Twitter, and YouTube. They are guided by complex and shifting guidelines, and their work can sometimes lead to psychological trauma. But the practice of content moderation also raises questions about censorship and free expression online.

In this IRL episode, host Manoush Zomorodi talks with a forensic investigator who compares the work she does solving disturbing crimes with the work done by content moderators. We hear the stories of content moderators working in the Philippines, as told by the directors of a new documentary called The Cleaners. Ellen Silver from Facebook joins us to outline Facebook’s content moderation policies. Kalev Leetaru flags the risks that come from relying on artificial intelligence to clean the web. And Kat Lo explains why this work is impossible to get exactly right.

Some of the content in this episode is sensitive, and may be difficult to hear for some listeners.



Published: January 21, 2019

Show Notes

Read the New York Times article on Facebook’s content moderation policies and also Facebook’s response.

Want more? Mozilla has teamed up with 826 Valencia to bring you perspectives written by students on IRL topics this season. Nicole M. from De Marillac Academy wrote this piece on inappropriate content online.

And, check out this article from Common Sense Media, on disturbing YouTube videos that are supposed to be for kids.

And finally, this IRL episode’s content underscores the importance of supporting companies committed to ethical tech and humane practices. Thank you for supporting Mozilla by choosing Firefox.

Transcript

Marla Carroll: I have been trained to do the work. Have I been trained to deal with the work? I don’t know that.

Manoush Z.: Marla Carroll is a forensic analyst in Florida. That means that she’s constantly reviewing video, audio, and digital evidence of crimes.

Marla Carroll: I would say that over 50% of the work that I do is related to difficult content, meaning something that may be difficult to hear or see.

Manoush Z.: And therefore, difficult to forget. Crime investigators are faced all the time with images that show the worst of humanity committing the most atrocious acts.

Marla Carroll: It never really leaves you. It definitely has an effect on how you feel.

Manoush Z.: Marla’s job though is necessary. It helps punish the guilty and exonerate the innocent.

Marla Carroll: I can say for myself that it’s also a matter of speaking for those who cannot speak for themselves.

Manoush Z.: That’s why analysts like Marla do this work. It’s why they’ll review brutal video footage of a triple homicide, analyze audio recordings of domestic abuse, and examine awful photographs down to the pixel.

Online, there’s another kind of job that exists with parallels to the work that Marla does. It’s called content moderation. Like Marla, content moderators review video, audio, photos, texts, and tweets, and much of that content is just as challenging to take in.

Marla Carroll: I think the average person would have no clue that content moderation for the internet even exists. I would think everyone goes, “Oh, it’s algorithms.” I don’t know that the average user has ever thought that it’s an actual live person that has to view, sanitize, moderate whatever content.

Manoush Z.: But, unlike Marla and other forensic investigators, content moderators don’t work for justice. They work for the social platforms, to keep the internet clean.

Marla Carroll: It seems like the content moderator is the judge in the internet world, what is considered right or wrong, and who determines that before that button is clicked to delete or ignore.

Manoush Z.: Huge swathes of the web are curated by people we will never meet. They make judgment calls all day, every day, and it can scar them deeply.

Marla Carroll: I have no doubt that content moderators are affected by the images and sounds that they deal with every day. The question is, is it worth it?

Manoush Z.: If you know where to look, and you want to look, you’ll find plenty of disturbing content online. But, much of that is filtered out of the more mainstream platforms like YouTube or Facebook or Twitter, and so on, and that’s no coincidence. Those companies work hard to keep that stuff off their services, as best they can.

Thousands of people around the planet work as content moderators. It’s a giant, mostly invisible field of labor. It can involve long hours and low pay. There can be little or no psychological support. Yet, their work determines what we can say, what we can see, and what we can share online.

Today, we discover who some of these moderators are. We learn how this work affects their offline lives, and we explore how machine learning can help, and how it can’t. And I speak to Facebook to better understand how they approach content moderation.

I’m Manoush Zomorodi, and this is IRL: Online Life Is Real Life, an original podcast from Mozilla, which also makes Firefox, a browser dedicated to keeping the Web open, accessible, and safe.

Just a heads up, some of the stuff described in this episode is disturbing.

When Tumblr decided to ban all pornography from its site this December, it was likely motivated by increased calls for oversight. Major platforms are under major pressure to rebuild trust. A crucial component of that work is the way that they can scan, review, and curate their content, and that means, all around the world, the business of content moderation is booming.

Content Mod 1: Delete. Delete. Delete. Ignore.

Manoush Z.: The person you’re hearing is a content-

Content Mod 1: Delete.

Manoush Z.: … moderator featured in a new-

Content Mod 1: Delete.

Manoush Z.: … documentary called The Cleaners. This person is clicking through a series of images deciding what should go, and what should stay.

Content Mod 1: Ignore. Ignore. Delete.

Manoush Z.: The workers featured are in Manila in the Philippines. You’ll hear more of their voices from the documentary in this story.

Content Mod 2: You are not allowed to commit one mistake. It could trigger war. It could trigger bullying.

Content Mod 1: I have seen hundreds of beheadings. But, the worst scenario would be the little knife, similar to a kitchen knife.

Content Mod 3: I am different from what I am before. It’s just like a virus in me. It’s slowly penetrating in my brain. I need to stop. There’s something wrong happening.

Hans Block: The content moderators in our film, The Cleaners, we focused on young Filipinos sitting in front of a desk and reviewing the worst you can imagine, and they have to decide if we are supposed to see that or not.

Manoush Z.: Hans Block is one of the directors of the documentary.

Hans Block: They go to work, they work there eight to 10 hours reviewing 25,000 pictures, then they go back home, and they are, in a way, the breadwinners for the family, so they need to do the job.

Manoush Z.: The other director is Moritz Riesewieck.

Moritz R.: Most of the cases, it’s their first job after college, and the way employees are hired by these companies, it’s literally in the streets. There are just recruiters, and they tell you, “Oh, are you looking for a job? It’s in a clean environment. It’s nicely looking inside the offices. It’s for a big U.S. major brand, and you will be able to earn at least between one to three dollars an hour.”

Manoush Z.: But, once you start the work, you realize just how challenging it can be. After being walked through the guidelines specific to whichever platform you’re assigned to, you start clicking: delete, delete, ignore, delete.

Moritz R.: They mostly watch the videos completely. Sometimes, they fast-forward. But, if they miss a part of the video, this is a quality issue. If you don’t want to cause a problem, if you don’t want to cause a mistake, you’re only allowed to make three mistakes in a month.

Manoush Z.: What happens if you make a mistake?

Moritz R.: Yeah, if you make more than three mistakes, you are just fired. So it seems the case that there are quite strict rules for guaranteeing a certain quality without providing the workers with the training to do this job properly.

Manoush Z.: Hans, what kind of trauma have you learned about from these employees? Is there a story that sticks out for you?

Hans Block: So someone told us he’s afraid to go into public, into public places, because he was reviewing terror attack videos every day. He lost his trust in human beings. Others told us that they have eating disorders, or that they have problems having a relationship with a girlfriend or boyfriend, because they are watching hardcore pornography and abuse videos every day.

There’s one case in our film: there was someone working on a special force team reviewing suicide videos and self-harm videos all day long. He asked the team manager to be transferred, because he couldn’t handle it any longer, and the team manager did nothing, so he remained in that position, and, after a while, he committed suicide himself.

Content Mod 4: This guy who committed suicide has been in the company since the very start. I saw in his eyes at the time that I was talking to him that he is very sad. Three times, he already informed the boss, the operations manager to please transfer him. Maybe this is a cry for help already.

He hanged himself in his house with a rope and with a laptop at his front.

Manoush Z.: What sort of support do these moderators get in terms of counseling, or if they decide to leave their jobs, Moritz?

Moritz R.: There are companies, outsourcing partners for all these big social media sites, that have a psychologist on board. What that means is that he or she, the psychologist, just comes by, all the staff is gathered in a room, and then this psychologist asks them, “How do you guys feel?” And that’s it.

Then, of course, nobody dares to share his or her problems, mental health problems, sleeping disorders, eating disorders, sexual disorders, in front of all the colleagues. They’re quite dependent on this job, so they will do everything to be able to handle it for longer.

Manoush Z.: It’s crazy to me, because it felt like I was learning about the morality police. On the one hand, when one young woman was describing in extremely graphic detail the different kinds of beheadings that she has seen, I was like, “Wow, I am glad these people are doing this job, because no one should ever see that.”

On the other hand, though, some would call what they’re doing censorship when it comes to an art piece, a painting depicting Donald Trump in the nude. I really didn’t quite know what to think in many cases.

Moritz R.: Whenever Facebook, YouTube, Twitter states that all the content moderation process is somehow objective, because it’s based on guidelines, and the content moderators just follow these guidelines, that’s not the whole picture. Because, in so many cases, there are so many grey areas remaining in which the content moderators told us they just need to decide by their gut feelings.

Manoush Z.: It’s easy to empathize with moderators like the ones featured in the movie, The Cleaners. Harder to understand though is the fact that despite abiding by guidelines produced by the platforms they moderate, in the end, they’re using their gut to make choices on our behalf. Moritz points out that cultural context is part of what makes this messy.

Moritz R.: They’re executed by also very specifically ideologized people. I mean the Philippines are now run by a President who won the election by claiming, “I will clean up society.” A lot of content moderators, they agree with this kind of politics. They agree with the idea that we can all get rid of all the problems, so the rhetoric of cleaning up can also be a very bad ideology. If we just outsource the responsibility of deciding what should be acceptable in the digital public sphere to companies and then to their outsourcing partners or to young college graduates completely being ideologized by a specific fanaticism, this is just dangerous.

Manoush Z.: Is there an alternative? We do hear Mark Zuckerberg say that they’re developing artificial intelligence to be able to go through millions more pictures faster and weed out the “bad stuff,” but is this the best we have for now?

Moritz R.: The problem is that whenever this topic is tackled, it’s always tackled in this way of, “We can fix it. Me and my team we will follow up on that. We have everything under control.”

This is missing the main point. I mean we really need to question if it’s right to outsource big parts of our digital public sphere to private companies. Why is that the case?

Content Mod 2: I suddenly asked myself, “Why am I doing this?” Just for the people to think that it’s safe to go online. When in fact, in your everyday job, it’s not safe for you.

We are slaving ourselves. We should wake up to the reality.

Manoush Z.: Hans Block and Moritz Riesewieck are the co-directors of The Cleaners.

Like I said, thank goodness someone is out there trying to screen out some of the more disturbing, sensitive, or illegal stuff in my social media feeds. We have a responsibility to make sure that these workers aren’t being harmed by the work they do.

But, what does protecting them look like, really, for a platform that’s trying to review content at such a massive scale? Take Google’s YouTube, for example: hours and hours and hours of video are uploaded every minute of every day. Over on Facebook, they get two million reports from their users every day. That’s two million flags coming at them from countries around the globe, two million times when someone’s saying, “Hey, this thing over here, this is not okay. I want it taken down.” So, any way you slice it, this is tough work.

Ellen Silver: Yeah, thank you for saying it, this work is hard.

Manoush Z.: Ellen Silver is the Vice President of Operations at Facebook. She runs Facebook’s content review workforce, which works 24/7. Facebook has 30,000 people working on safety and security issues. Half of those people are content moderators. Some are full-time, others are contracted through other companies. Ellen says they put a lot of effort into making sure their moderators are looked after.

Ellen Silver: What we do with our resiliency program is ensure that they’re aware of the resources that are available to them. Specifically, and this depends on the locale where they are, they could have access to in-person counseling, over-the-phone counseling, and trauma and stress management. Those are some of the elements of what we have as part of our wellness program.

Manoush Z.: And so these content moderators that are working for you, how rigorously are they trained on your guidelines?

Ellen Silver: Yeah, so, we have training that’s broken out into three different phases. The first is our pre-training: what is Facebook, what is a content moderator, what are the types of content they’re going to see.

Next, we go into hands-on classroom training where after we’ve communicated and trained on our community standards, they have the opportunity to apply those in simulated use cases, so that, one, they can get used to the tools. They can understand how to apply the policy. We are able to give them feedback. They can sit next to a more tenured and senior content moderator for feedback and discussion.

And then, the third step is that we have ongoing training. Our policies do evolve. We want to be able to ensure that we are up-to-date and providing that context and resources to our content moderators. So it really comes to those three elements.

Manoush Z.: And how are you measuring whether your third-party operators are actually enforcing these trainings or standards?

Ellen Silver: Yeah, we take this very seriously. We do what we call weekly audits. We look at a sample of decisions that content moderators make to understand whether they are applying the policy consistently. Based on that, we then give feedback to the content reviewer and/or the vendor partner sites.

Manoush Z.: I guess when I feel the most conflicted, or when I see both sides of the coin, it’s when it’s a piece of content, something somebody has said on Facebook that, on the one hand, is totally controversial. Some people might find it extremely horrible. It might even be hate speech in some countries. Then, in other places, it’s just free expression. It’s free speech.

Ellen Silver: Yeah, we’re constantly ensuring that we have feedback loops with what our content reviewers are seeing. We have a policy team that works with third-party advocates, specialists, and other groups to ensure that we’re hearing about any new trends or behaviors that are emerging.

Manoush Z.: Can you just sort of spell out where artificial intelligence fits into content moderation when it comes to Facebook?

Ellen Silver: Yeah, so we do have automation that helps supplement a lot of parts for how we think about content review. There are things that our technology is really sharp and strong in detecting. An example would be spam where there may be commercial links inside messages that are getting sent or things that are posted.

Then, there are some areas of content where contextual knowledge is going to be helpful, that humans provide that judgment.
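
To make the split Ellen describes a little more concrete, here is a minimal, hypothetical sketch of that kind of triage, written in Python: an automated classifier handles only high-confidence, clear-cut cases such as spam, and anything that needs contextual judgment is routed to a human review queue. The classifier, labels, and threshold below are invented for illustration; they do not describe Facebook’s actual system.

```python
# A minimal, hypothetical sketch of hybrid moderation triage:
# automation handles only clear-cut, high-confidence cases (like spam),
# and anything needing contextual judgment goes to a human reviewer.
# The classifier, labels, and threshold are invented for illustration.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def classify(post: Post) -> tuple[str, float]:
    """Stand-in for an ML classifier returning a (label, confidence) pair.
    A real system would call a trained model here."""
    if "http://" in post.text and "buy now" in post.text.lower():
        return "spam", 0.98
    return "uncertain", 0.40


def triage(post: Post, auto_threshold: float = 0.95) -> str:
    """Auto-remove only high-confidence spam; route everything else to people."""
    label, confidence = classify(post)
    if label == "spam" and confidence >= auto_threshold:
        return "auto_remove"   # clear-cut case handled by automation
    return "human_review"      # contextual judgment left to a person


if __name__ == "__main__":
    print(triage(Post("1", "Buy now!!! http://example.com")))    # auto_remove
    print(triage(Post("2", "A news photo that needs context")))  # human_review
```

The hard design choice in any pipeline like this is the confidence threshold: set it too low and the machine removes content that needed human judgment; set it too high and reviewers face far more material than they can handle.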

Manoush Z.: I guess this conversation is taking place at a time where it feels like more people are starting to question the ethical choices that tech companies decide to make or not make. How is Facebook responding to that sort of fraught relationship that we’ve seen emerge over the last year or so?

Ellen Silver: I think you’ve seen that over the past year with our ability to be transparent about our comprehensive community standards. We’ve published two of our Community Standards Enforcement Reports. We really do very much care about hearing feedback from our community and those around us, and how we could improve. I think it’s a dialogue.

Manoush Z.: Ellen Silver and Facebook say that dialogue with users is a necessary part of getting moderation right. Okay, I get that. But, when I hear the people in The Cleaners documentary, I am still left wondering if their voices are being heard clearly enough in this dialogue, and not just at Facebook.

After we recorded our interview with Ellen Silver, the New York Times published a story about Facebook’s moderation guidelines. It suggested some of those rules were being drawn up ad hoc, that the organization had a disorganized approach to deciding what is and isn’t allowed on the platform, and that, in some cases, moderators were confused about what they needed to do.

Facebook responded by further clarifying who makes these decisions, and how. They say a global forum of staff and outside experts meets every two weeks to review their policies, and those policies are continuously updated based on emerging trends. They also say content moderators are not required to meet quotas.

This kind of back and forth illustrates the pressure companies like Facebook are under to show they are getting it right, or at least trying to get it right. They may be struggling, but I guess at least they’re being transparent about it.

Ellen alluded to how AI helps screen content automatically. That means less for humans to review, which has to help with the mental health of these workers. Yet, as artificial intelligence grows more, well, intelligent, there’s a temptation to offload more and more of this work onto the machine’s plate, and that raises other questions.

Kalev Leetaru, a senior fellow at George Washington University, puts it this way.

Kalev Leetaru: When you start using automated filtering, you start relying on the judgment calls of that system. Think about even the cases where you have a machine doing the initial filtering and then handing off to a human moderator. Having run many large content review projects in the past, I can tell you that one of the challenges you face over time is a learned helplessness, where, essentially, the human moderator begins to really trust the machine. Every time the machine says, “Hey, here’s an image that is terrorism,” they’re just going to glance at it and rubber-stamp it.

At first, they’re going to really check things. But, over time, they’re like, “It’s been months since the machine made a mistake. I’m not going to second-guess that machine.”

Manoush Z.: YouTube removed videos of atrocities in Syria. Facebook deleted a famous photograph of a naked child survivor of a napalm attack in Vietnam. In both cases, humans at the companies intervened and restored the content.

You can’t effectively screen all this stuff without AI, but you can’t truly get it right without humans. Humans trained in cultural distinctions, subtleties of language, local norms, history, and customs. For these platforms, it’s a never-ending balancing act, even as they work to avoid that learned helplessness that Kalev worries about.

The logistics of these issues can quickly become a Gordian knot. Kat Lo is a researcher at the University of California, Irvine, and she consults with social media companies on these issues.

Kat Lo: I used to be very antagonistic towards platforms. But, the thing that I learned over a period of time is that at least people who work in trust and safety, they’re good people who are often very, very well-informed about what the issues are. It’s more that they don’t have resources, or the problems are very, very complex.

Manoush Z.: How much does this all come down to the fact that most of these big tech companies are based here in the United States where the First Amendment right to free speech is the main underlying principle that they do not want to question?

Kat Lo: It’s not just a matter of free speech but being worried about being sued that is stopping companies from sometimes making really significant policy decisions. This is a tough thing to make a statement about, but I do think that regulation in the EU is the direction that the US should be going in.

Manoush Z.: Is it a tough statement? Why is it so tough?

Kat Lo: Because, I worry sometimes about state actors in the US having too much control over companies, because given, say, that politicians are worried about anti-conservative bias, and they want to regulate content in terms of that, I think it’s actually preventing companies from pushing more significant measures in trust and safety issues and policy around hate speech.

Manoush Z.: I feel like we are left with a very frustrating situation. There’s no answer. Regulation is problematic, leaving it up to the companies is problematic, not moderating is obviously problematic. How are we going to fix this?

Kat Lo: I think that a lot of it is not finding the solution but making things better. We realistically have to work with systems that have already shown that they’re not effective in a sense at scale. I think making that progress and actually seeing changes as a result of that work is really encouraging.

Manoush Z.: Platforms want to protect us from content we may not want to see, and they want to protect their own brands too, naturally. Beyond that, we’re talking about which parts of conversation and creation get cut out of public discourse.

In The Cleaners documentary, the filmmakers also speak to David Kaye. He’s with the United Nations, and he works on freedom of expression, and he warns against a world where moderation edits too much out of our conversation space.

David Kaye: I think over time, it will interfere with our ability to have critical thinking. It interferes potentially with our ability to be challenged. People shouldn’t be surprised if in the future, there’s less information available to them, less edgy, less provocative information available online. We’ll be poorer societies for it.

Manoush Z.: There are some games that have a clear path to victory. Tic-tac-toe: you just need an X-X-X or an O-O-O in a row. A game of checkers: just get your pieces across the board. But, then there’s chess, with so many variables and moving parts, more factors than can ever be accounted for. Online moderation is kind of like that. In this space, content is the king, and moderators are the pawns. They’re the ones on the front lines making all the sacrifices, and you may or may not win.

Content moderators are just trying to do their best when they’re filtering what is and isn’t okay to be online. And when it works, it helps build an internet that promotes civil discourse, human dignity, and individual expression. That’s the kind of internet worth fighting for, and Mozilla is committed to that fight.

You can support Mozilla’s mission by downloading and using the Firefox browser. It’s built by people who believe online life can be a healthy, open resource that benefits everyone. Learn more at Mozilla.org and at Firefox.com.

A quick thank you to Jonas in Denmark. He’s an IRL listener who sent an email last summer asking if we’d do an episode about content moderation. Jonas, this episode’s for you.

IRL is an original podcast from Mozilla. I’m Manoush Zomorodi, and I’ll see you back here in a couple of weeks.