Over*Flow: Digital Humanity: Social Media Content Moderation and the Global Tech Workforce in the COVID-19 Era
Sarah T. Roberts / University of California, Los Angeles

Author’s Note: Over the past days, I have fielded many questions about commercial content moderation work during the global coronavirus (COVID-19) crisis. There are many aspects to consider, including location, logistics and infrastructure, legal worker protections, and state-level actions. As I have written and rewritten this piece, I have repeatedly had to come back and update it based on changing circumstances. At this point, the evening of March 17, I will not fundamentally change it but will continue to update it until it goes to press. My heart goes out to all of those around the world who are touched by this disease: all of us.

At a small gathering at UCLA last week, in what we could not have known at the time was likely to be the last of its kind for most of us for the foreseeable future, a group of scholars at all levels of career and life gathered with community activists, artists, and others to respond to a conversation curated by Professor Patrik Svensson under the aegis of Humane Infrastructures, an apt context for what we were about to experience collectively, though one assuredly not on the horizon during the event’s planning.

For the purposes of this event I was asked, in what I have now come to regard as an uncanny bit of timing, to discuss technology labor forces dedicated to social media content moderation and/as infrastructure, prompting me to open my remarks with a nod to “human infrastructure” more generally. It is an exercise I find useful in my work, but it is a metaphor or description with serious limitations. And so I use it while also applying various caveats, the first of which is simply that humans are humans. They are not pipe. They are not fiber. They are not, despite all attempts of management theorists of the early 20th century and gig-work proponents of the 21st, cogs to be replaced when one becomes worn, reducible to their motion-study-documented singular movements, or blips on a delivery map.

Yet because the approach to provisioning labor for large-scale technology operations often takes on these overtones, it bears discussing labor forces as infrastructure, if for no other reason than to accurately account for them in the production chain of things like, in my case, social media, or manufactured goods, or textiles, or whatever the product or output may be. I also believe that gaining insight into corporate orientations toward such labor forces helps us develop a more thorough and sound critique of those orientations and of the concomitant practices that emerge from characterizing a workforce as infrastructure in the first place. In other words, we need to see how the firms see in order to make the most salient and effective critiques of their practices and credos.

I will cut to the chase of what many readers want to know: how is the COVID-19 pandemic, the coronavirus sweeping around the globe, impacting the moderation of social media? More to the point, your question may be, “Why is corona having an impact on moderation at all?” Let me give the briefest overview I can: the practice of social media moderation happens at industrial scale, with many of the transnational service outsourcing firms now involved and countless other players of lesser size at the table. It is a global system that involves labor pools at great geographic and cultural distance, as well as jurisdictional and legal remove, from where we might imagine the center of social media action to be: Menlo Park, or Mountain View, or Cupertino, or another Silicon Valley enclave.

The second thing to bear in mind is that there is a vast human workforce doing an incredible amount of high-impact content moderation for firms; my typical estimate (which I consider extremely conservative) is that the global moderation workforce numbers in the six figures at any given time, and I likely need to revise that number significantly. Yes, there are AI and computational tools that also conduct this work, but it is important to keep in mind that it is exceedingly difficult for those systems to operate without human oversight, or in the absence of humans vetting content and doing manual removals, too.


Facebook's Announcement on March 16, 2020
Facebook’s announcement on March 16th indicated to many that a new experiment in content moderation was forthcoming.

This particular fragility has been seen most acutely today at Facebook, which announced yesterday evening that it would shut down as much of its operations as it could and have workers work from home when possible. In the case of its commercial content moderators, Facebook has explained that there are many cases in which workers cannot do their work effectively from home, and the company is therefore moving to a much greater reliance on its AI tools and automated moderation systems. The switch in reliance upon automated removal appears to have occurred today, when vast numbers of users began reporting the deletion of benign and sometimes even newsworthy content (in many cases, about COVID-19). Representatives from Facebook have confirmed that there was a “bug” in some of the automated content removal systems, which has now been corrected.[ ((It bears mentioning that there was some debate on Twitter about whether or not this bug was related to the letting go of human content moderators, with Guy Rosen of Facebook stating that it was not and former Facebook CSO Alex Stamos expressing skepticism. My guess is that the new widespread reliance on AI tools has already revealed and will continue to reveal a variety of removal calls a human would not make.))]


Professor Vaidhyanathan's Tweet
Professor Siva Vaidhyanathan of UVA expresses frustration with Facebook’s moderation under all-AI, March 17, 2020.

To understand this better, I will describe the general status quo for many of the top-tier American social media firms and their content moderation ecosystem.[ ((The operative phrase here is “top-tier”; many smaller firms have considerably fewer resources to put toward moderation and may have devised other systems entirely to manage the bulk of their moderation needs. Two important examples of alternative systems are Reddit and Wikipedia, both of which rely on a huge network of volunteer community moderators whose interventions are user-facing and who are typically themselves close to the communities they moderate.))] The ecosystem is characterized by contract labor arranged through third-party companies and by a global footprint. The firms have created their own network of call center-like facilities that form a web across the globe and cover a massive array of linguistic, cultural, regional, and other competencies and knowledge (although there are inevitable gaps and gaffes).

The distributed nature of the contract commercial content moderation system indeed allows for some degree of redundancy when it comes to issues of natural disaster or other catastrophic events that could take a center, a city or even a region offline. That said, most firms are at capacity when it comes to their screening needs, and the loss of a major site could very well impact quality. That appears to have happened in the last 72 hours, when Metro Manila and, indeed, much of the island upon which it is located, Luzon—a part of the Philippine archipelago that is home to 57 million people—went into quarantine. Reports the Singaporean Straits Times, “Police began closing off access to the Philippines’ sprawling and densely populated capital Manila, a city of some 12 million people, imposing a month-long quarantine that officials hope will curb the nation’s rising number of coronavirus cases.”

The Philippines is also the call center capital of the world and competes with India for the vast outsourced business of commercial content moderation for the so-called Global North. In short, the Philippines is where social media content goes to be screened.

Eleven days ago, I communicated with a reporter colleague to give my sense of how a virus-related shutdown in the Philippines could affect American social media giants. I told him that while a lot of the most irritating and highest-volume unwanted content (as deemed by the platforms) can be found and removed by automated tools (here I refer to spam, pornographic content, copyright violations, and other already known-bad material), those tools tend to be imperfect and blunt instruments whose interventions can be calibrated to be more sophisticated or to cast a wider net.[ ((See the work of Safiya U. Noble, Ruha Benjamin, Cathy O’Neil, Frank Pasquale, Joan Donovan and others who demonstrate that algorithmic interventions are deeply imbued with and shaped by a host of values, manipulation and bias, following key critiques of the politics of software by Wendy H. K. Chun and of computation by David Golumbia, after the fundamental question posed and answered by Langdon Winner: that artifacts, indeed, have politics.))] But the loss of a major moderation site, which would mean a switchover to reliance on these tools, would invariably cause disruption in social media’s production chain and could even lead to quality issues perceived by users.
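To make the calibration point concrete, here is a minimal sketch of how threshold-based automated removal works in the abstract. It is not any platform’s actual system; the function names, scores, and thresholds are hypothetical.

```python
# Minimal sketch of threshold-based automated removal (hypothetical; not any
# platform's actual system). An upstream classifier assigns each post a
# policy-violation score in [0, 1]; the thresholds control how wide a net
# the automated system casts.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Post:
    post_id: str
    violation_score: float  # produced by a hypothetical upstream ML classifier


def moderate(
    posts: List[Post],
    removal_threshold: float = 0.9,
    review_threshold: float = 0.6,
) -> Tuple[List[Post], List[Post], List[Post]]:
    """Route posts into remove / human-review / keep buckets.

    Lowering removal_threshold casts a wider net (more removals, more false
    positives); raising it pushes more posts into the human-review queue,
    which is exactly the capacity that disappears when moderation sites go
    offline.
    """
    removed, needs_review, kept = [], [], []
    for post in posts:
        if post.violation_score >= removal_threshold:
            removed.append(post)
        elif post.violation_score >= review_threshold:
            needs_review.append(post)
        else:
            kept.append(post)
    return removed, needs_review, kept


# Example: with no human reviewers available, a platform might lower the
# removal threshold so the review queue is emptied automatically, at the cost
# of deleting benign posts (false positives).
posts = [Post("a", 0.95), Post("b", 0.72), Post("c", 0.10)]
print([p.post_id for p in moderate(posts, removal_threshold=0.7)[0]])  # ['a', 'b']
```

The design trade-off in this toy example is the one at stake in practice: without humans to work the review queue, the only levers left are to remove more aggressively or to leave more up.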

That appears to be precisely what we saw today, as users became frustrated by false positives: cases in which overzealous and undersophisticated AI tools aggressively remove reasonable content because their judgment is too rudimentary. The alternative is also no alternative at all, for if the AI tools were turned off altogether, the result would be an unusable social media platform flooded with unbearable garbage, spam, and irrelevant or disturbing content. One moderator interviewed in my book described the internet without workers like him as “a cesspool.”

Which, then, is the lesser of two evils: an overpoliced, automated, AI-moderated internet, or a “hole of filth” (as another Silicon Valley-based worker described it) of unbearable human self-expression? Ultimately, the firms will decide for the former, as brand protection and legal mandates (most from outside the United States) will powerfully drive their choice in this matter. I suspect that this will be much of the public’s first contact with both the contours of content moderation on these platforms and the marked, immediate decline in quality that follows the virtually overnight disappearance of the legions of humans doing this work.

I return to the most important question, perhaps, that has been asked about this issue: why can the work not simply be done by the workers from home? The answer, like everything about this issue, is complex. In many cases, such work can be and is done at home. In the case of AAA social media firms, however, constraints like privacy agreements and data protection policies in various jurisdictions may preclude this. There is also a nontrivial infrastructure that goes into setting up a computing center with the requisite hardware, software (internally developed and maintained systems), and routing of data. The call center locations themselves are often highly secure, with nothing allowed on the floor where workers are logged in. Working from home eliminates the oversight and surveillance of workers and their practices, of both what they are doing and what they are not doing, to the extent that can be achieved on-site. This alone is possibly a deal-breaker for moving the work home. In a moment of dark humor, one rightly cynical colleague pointed out that this event, while likely wholly unimagined and unplanned, is allowing for a certain amount of stress testing of these tools at scale.

Bringing this work, which consists of the rapid review of thousands of images and videos, many of them psychologically difficult and taxing, into the home may also be too much to ask of workers in a time of crisis. Workers in call centers rely on one another and on their teams for support while doing commercial content moderation, and may have access to an on-site or on-call therapist, counselor, or other mental health professional.[ ((Even when counselors are available, it is not always the panacea it may seem. Some workers contracted by Accenture discovered that workplace therapists were reporting on what the workers had presumed were private sessions to Accenture’s management, according to The Intercept.))] But it is also worth mentioning that many people already do this kind of work at home, whether as contractors or on microtask sites, from anywhere in the world.[ ((See this report released just yesterday on the state of microwork in Canada, from the Toronto Workforce Innovation Group (TWIG), or an interview with sociologist Antonio Casilli on microwork in France.))]

Further, infrastructure differences will play into the picture locally. For example, European tech hub the Republic of Ireland has widespread penetration of at-home fixed broadband, whereas in the Philippines the story looks different. Here is where we return to the way the firms themselves view the matter of outsourced labor in what we can consider the production chain of social media: as a component in a production cycle characterized by the East-to-West flow of supply-chain logistics for manufactured goods. The model is one of just-in-time, in which all aspects of the process, from putting up a site to hiring in workers to the actual moderation itself, take place as quickly and as “leanly” as possible, particularly for functions such as content moderation that are seen as a “cost center” rather than a “value-add” site of revenue generation.

Just-in-time supply-chain logistics may also be tested in other parts of the tech industry and in industries reliant on other types of manufactured products, when we consider the goods’ origin point (frequently East Asia in general, and China specifically, particularly for textile, tech, and other material goods). Consider the recent shuttering of numerous retail chains (e.g., Apple Stores, Lululemon, Victoria’s Secret) not only as a matter of lost clientele or employee safety, but as one that may reflect a significant gap in the availability of goods making their way out of factories and across oceans: “Just how extensive the crisis is can be seen in data released by Resilinc, a supply-chain-mapping and risk-monitoring company, which shows the number of sites of industries located in the quarantined areas of China, South Korea, and Italy, and the number of items sourced from the quarantined regions of China,” reports the Harvard Business Review.

When we consider a social media production chain that is less material, perhaps, in terms of the product (user-facing content on a social media site) than an H&M fast fashion jacket or a pair of Apple AirPods Pro, the essential nature of the human presence in that chain is just as apparent as when a production line goes down for a month and no goods leave the factory. Here, where content moderators are both the product (in the form of the cultural and linguistic sense-making ability upon which their labor is frequently valued and sold) and the producer (in the form of the work they undertake), the impact of their loss in the production chain must be considered profound.


Microsourcing, a Manila-based commercial content moderation outsourcing firm
Microsourcing, a Manila-based commercial content moderation outsourcing firm, advertised its labor force as having specialized linguistic and cultural “skills.” In this way, these “skills” were the commodity on offer.

In essence, what is supposed to be a resilient just-in-time chain of goods and services making their way from production to retail may, in fact, be a much more fragile ecosystem in which some aspects of manufacture, parts provision, and/or labor are reliant upon a single supplier, factory, or location. Just as in manufacturing, where a firm may discover that a part is made in only one factory and that the factory going offline affects everything downstream, so it is, decidedly, for the fragile ecosystem of outsourced commercial content moderation and its concentration in areas of the world such as the Philippines. The reliance on global networks of human labor is revealing cracks and fissures in a host of supply-chain ecosystems. In the case of the human moderators who screen social media, their absence is likely to give many users a glimpse, quite possibly for the first time, of the digital humanity that goes into crafting a usable and relatively hospitable online place for them to be. Perhaps just when we need these workers the most, to combat the flood of misinformation, hate speech, and racism inspired by the COVID-19 pandemic now circulating online, they are gone. Will we learn to finally collectively value this aspect of the human infrastructure just a little bit more than not at all?



Image Credits:

  1. Facebook’s announcement on March 16th indicated to many that a new experiment in content moderation was forthcoming.
  2. Professor Siva Vaidhyanathan of UVA expresses frustration with Facebook’s moderation under all-AI, March 17, 2020.
  3. Microsourcing, a Manila-based commercial content moderation outsourcing firm, advertised its labor force as having specialized linguistic and cultural “skills.” In this way, these “skills” were the commodity on offer. Source: Behind the Screen: Content Moderation in the Shadows of Social Media (Yale University Press, 2019)



Community Guidelines and the Language of Eating Disorders on Social Media
Ysabel Gerrard / The University of Sheffield

Content Warning: This post contains an in-depth discussion of eating disorders and includes difficult imagery.

All social media platforms have a set of community guidelines which lay out, in ‘plainspoken’ terms, how they want their users to behave and what kinds of content they think are (and are not) acceptable. [ (( Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press, p.76. ))] They have rules against supporting terrorism, crime and hate groups; sharing sexual content involving minors; and malicious speech, amongst other acts that aim to threaten or damage certain parties, to use Tumblr’s words in the quote above. But some of these rules are harder to justify and enforce than others.

For example, in 2012, and in response to a Huffington Post exposé about the ‘secret world of teenage “thinspiration”’ on social media, Instagram, Pinterest and Tumblr released new guidelines about content related to eating disorders, like anorexia and bulimia. They said they would draw lines between accounts and posts that ‘promote’ eating disorders and those aiming to ‘build community’ or facilitate ‘supportive conversation’ about the issue.

Yet this promotion/support dialectic indicates a misunderstanding of online eating disorder communities, and in what follows I present a series of examples to provoke a discussion about the language used in social media’s community guidelines.

Locating the ‘Pro’ in ‘Pro-Eating Disorder’

The language of promotion used in community guidelines was likely influenced by the online pro-eating disorder (pro-ED) movement, formerly found in the homepages, forums and chat rooms of a pre-social media Web. The term ‘pro’ is commonly (and insufficiently) understood to denote the promotion of eating disorders, but internet users have always varied in how they operationalise this term. For example, some adopt it as an identity to break away from the medicalisation of eating disorders; some use it to embrace eating disorders and break away from stigma; some use it to create spaces of support for others; some want to find likeminded people; and yet others – though these people are said to be in a minority – use it to promote and encourage harmful behaviours in others.

While some posts do straightforwardly promote eating disorders – like ‘meanspo’ agreements, short for ‘mean inspiration’, in which users agree to post cruel comments to one another to encourage starvation and weight loss – much of this content blurs the line.

The ‘What Ifs’ of Reading Images

Several internet researchers, myself included, have shown how social media users savvily work around platforms’ rules. For example, after the Huffington Post exposé, Instagram stopped returning results for ED-related hashtag searches like #proana, but users coined lexical variants to evade moderation (e.g., #proana became #proanaa). In a recent paper I showed how users now avoid using hashtags or other textual clues to align their content with pro-ED discourses, meaning the work of deciding whether a post promotes eating disorders has become even harder.
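To illustrate why exact-match hashtag blocking is so easy to evade, and why simple fixes remain brittle, here is a minimal sketch using an entirely hypothetical blocklist; this is not how Instagram or any other platform actually implements its filters.

```python
import re

# Hypothetical blocklist for illustration only; not any platform's actual list.
BLOCKED_HASHTAGS = {"#proana", "#thinspo"}


def exact_match_blocked(hashtag: str) -> bool:
    """Naive filtering: block only exact matches against the list."""
    return hashtag.lower() in BLOCKED_HASHTAGS


def normalized(hashtag: str) -> str:
    """Collapse repeated letters so simple variants map back to a base form."""
    return re.sub(r"(.)\1+", r"\1", hashtag.lower())


def normalized_blocked(hashtag: str) -> bool:
    """Slightly smarter filtering: compare normalized forms."""
    return normalized(hashtag) in {normalized(t) for t in BLOCKED_HASHTAGS}


print(exact_match_blocked("#proanaa"))  # False: the lexical variant slips through
print(normalized_blocked("#proanaa"))   # True: caught once repeats are collapsed
print(normalized_blocked("#pro_ana"))   # False: other variants still evade the filter
```

Even the ‘smarter’ version fails as soon as users drop hashtags and other textual clues altogether, which is precisely the shift described above; the judgment then falls back on human moderators or on much blunter image and text classifiers.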

For example, in its community guidelines, Pinterest gives users an example of an image that does not, in its view, promote eating disorders. They claim this image is acceptable because ‘the focus is on nutrition and fitness’:

But what if Pinterest removed the text overlay – ‘it’s not a diet, it’s a way of life. FIT meals’ – and simply depicted a slender female body, perhaps in black and white, a common visual aesthetic in online eating disorder communities? Why is this level of thinness acceptable? And how do we decide if it’s ok to promote certain diets and meal plans and ‘way[s] of life’ above others?

Here are some more examples, taken from Instagram: [ (( These images are taken from the same dataset used in my latest paper: Gerrard, Y. (2018). Beyond the hashtag: circumventing content moderation on social media. New Media & Society. 1-20.))]

Would you say the above images promote eating disorders? Yes, the people’s bones are outlined and emphasised in the framing of the images, but when do they become too bony, to the point where these images are read as the promotion of anorexia or similar? Does the act of posting these images alone constitute promotion? And what might happen if these were male bodies? These are just some of the many questions that could be asked about the challenges of drawing the line between harmlessness and promotion.

‘Things You Might Love’: The Gender Politics of Recommendation Systems

Another way content circulates on social media is through algorithmic recommendation systems. In short, platforms show you what they think you want to see. This is especially true of Pinterest, which arguably functions as more of a search engine than a place to make deep connections with other users. But what we don’t know is how Pinterest and other platforms decide which posts have similarities to others.
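We cannot know how Pinterest or any other platform actually computes similarity, but a generic sketch of one common approach, scoring posts by the overlap of their tags or labels (all posts and tags below are hypothetical), shows how thematically adjacent content can end up recommended together regardless of intent.

```python
# Generic tag-overlap recommendation sketch (one common approach; we do not
# know how Pinterest or any other platform actually computes similarity).
# All posts and tags are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Similarity as tag-set overlap: 0 = nothing shared, 1 = identical."""
    return len(a & b) / len(a | b) if a | b else 0.0


posts = {
    "fitness_motivation": {"fitness", "workout", "skinny", "summer"},
    "diet_plan":          {"diet", "weightloss", "skinny"},
    "thinspo_quote":      {"thinspo", "skinny", "weightloss"},
    "anorexia_awareness": {"anorexia", "recovery", "awareness"},
}


def recommend(selected: str, k: int = 3):
    """Rank every other post by tag overlap with the selected one."""
    target = posts[selected]
    scored = sorted(
        ((jaccard(target, tags), name) for name, tags in posts.items() if name != selected),
        reverse=True,
    )
    return scored[:k]


# Selecting a "thinspo" post surfaces diet and fitness content alongside it,
# simply because they share surface-level tags like "skinny" and "weightloss".
print(recommend("thinspo_quote"))
```

The conflation described below falls straight out of this kind of surface-level similarity: a system like this has no concept of promotion, support, or recovery, only of shared features.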

Take the below screenshot of Pinterest recommendations as an example. I found these images by searching for ‘thinspo’ on Pinterest (a term that has long been linked to eating disorders) and selected the first image, which had a black background and read ‘need to be skinnier for summer’ in large white letters. Pinterest listed other ‘ideas’ I ‘might love’ underneath the post:

Different kinds of content are being conflated here, as images about athleticism and getting ‘healthy’ sit alongside suggestions for ‘skinny bones disorders’. Some of these images have ‘no specific connection’ to eating disorders and yet they have been re-contextualised within a new environment that makes them seem problematic. [ (( Vellar, A. (2018). #anawarrior identities and the stigmatization process: an ethnography in Italian networked publics. First Monday. 23(6), n.p.))] So which of these posts would you say promote eating disorders, which don’t, and why?

I then selected a different image – the ‘nothing tastes as good as skinny feels’ quote shown in the post above – and these were my recommendations:

Again, which of these posts do you think promote eating disorders? Are any of them bad enough to be removed from Pinterest?

What’s interesting to me is that Pinterest’s suggestions for ‘fitness motivation’, ‘get[ting] healthy’ and spotting the signs of anorexia are mixed in with posts urging readers not to eat, and meanspo quotes like ‘not skinny enough’ and ‘you’re a slut’. Pinterest is thus conflating content related to eating disorders with posts about thinness (health, fitness, nutrition, diet plans, weight loss, and so on), reinforcing a longstanding and narrow view of what an eating disorder is (hint: anorexia isn’t the only one, and not everyone wants to lose lots of weight).

Algorithmic personalisation is making it even more challenging to draw the line between posts that promote EDs and those which promote other aspects of female body control, potentially having material effects on how people find content related to eating disorders and learn what eating disorders are.

Getting It Right

Only a minority of users in pro-ED spaces actually promote eating disorders, yet platforms borrow this language and use it to justify their decisions about content moderation. This is precisely why we need more insight into platforms’ decision-making processes: how do rule-makers define ‘promotion’, and how is this kind of language operationalised by those whose job it is to scrub objectionable content from social media (the commercial content moderators (CCMs))?

Sometimes social media content moderation is necessary and I respect the difficulties companies face as they grapple with their desire to provide spaces for self-expression while needing to set some limits. But if platforms are going to take on the moral work of deciding what content should stay or go, especially when it comes to users’ health, they need to make sure they get it right.

Image Credits
1. Tumblr’s Community Guidelines
2. Pinterest’s Community Guidelines
3. Author’s screenshot of an anonymised user’s Instagram post
4. Author’s screenshot of an anonymised user’s Instagram post
5. Author’s screenshot of an anonymised user’s Instagram post
6. Author’s screenshot of Pinterest recommendations
7. Author’s screenshot of Pinterest recommendations

Please feel free to comment.