Over*Flow: Digital Humanity: Social Media Content Moderation and the Global Tech Workforce in the COVID-19 Era
Sarah T. Roberts / University of California, Los Angeles

Author’s Note: Over the past days, I have fielded many questions about commercial content moderation work during the global coronavirus (COVID-19) crisis. There are many aspects to consider, including location, logistics and infrastructure, legal worker protections, and state-level actions. As I have written and rewritten this piece, I have had to come back repeatedly to revise it based on changing circumstances. At this point, the evening of March 17, I will not fundamentally change it but will continue to update it until it goes to press. My heart goes out to all of those around the world who are touched by this disease: all of us.

At a small gathering at UCLA last week, in what we could not know at the time was likely to be the last of its kind for most of us for the foreseeable future, a group of scholars at all levels of career and life gathered with community activists, artists and others for a conversation curated by Professor Patrik Svensson under the aegis of Humane Infrastructures. It was an appropriate context for what we were about to collectively experience, though that could hardly have been on the horizon during the event’s planning.

For the purposes of this event I was asked, in what I have now come to regard as an uncanny bit of timing, to discuss technology labor forces dedicated to social media content moderation and/as infrastructure, prompting me to open my remarks with a nod to “human infrastructure” more generally. It is an exercise I find useful to my work but a metaphor or description that has serious limitations. And so I use it, while also applying to it various caveats, the first of which is simply that humans are humans. They are not pipe. They are not fiber. They are not, despite all attempts of management theorists of the early 20th century and gigwork proponents of the 21st, cogs to be replaced when one becomes worn, reducible to their motion study-documented singular movements, or blips on a delivery map.

Yet because the approach to provisioning labor for large-scale technology operations often takes on these overtones, it bears discussing labor forces as infrastructure, if for no other reason than to accurately account for them in the production chain of things like, in my case, social media, or manufactured goods, or textiles, or whatever the product or output may be. I also believe that gaining insight into corporate orientations toward such labor forces is helpful to develop a more thorough and sound critique of said orientations and the concomitant practices that emerge from characterizing workforce as infrastructure in the first place. In other words, we need to see how the firms see to make the most salient and effective critiques of their practices and credos.

I will cut to the chase of what many readers want to know: how is the COVID-19 pandemic, the coronavirus that is sweeping around the globe, impacting the moderation of social media? More to the point, your question may be, “Why is corona having an impact on moderation at all?” Let me give the briefest overview I can. The practice of social media moderation happens at industrial scale, with many of the transnational service outsourcing firms now involved and countless other players of lesser size at the table. It is a global system that involves labor pools at great geographic and cultural distance, as well as jurisdictional and legal remove, from where we might imagine the center of social media action to be: Menlo Park, or Mountain View, or Cupertino, or another Silicon Valley enclave.

The second thing to bear in mind is that there is a vast human workforce doing an incredible amount of high-impact content moderation for firms; my typical estimate (that I consider to be extremely conservative) is that the global moderation workforce is in the six figures at any given time, and I likely need to try to revise this number significantly. Yes, there are AI and computational tools that also conduct this work, but it is important to keep in mind that it is exceedingly difficult for those systems to work without human oversight or in the absence of humans vetting and doing manual removals, too.


Facebook's Announcement on March 16, 2020
Facebook’s announcement on March 16th indicated to many that a new experiment in content moderation was forthcoming.

This particular fragility has been seen most acutely today at Facebook, which announced yesterday evening that it would shut down as much of its operations as it could and have workers work from home when possible. In the case of its commercial content moderators, Facebook has explained that there are many cases in which workers cannot do their work effectively from home, and the company is therefore moving to a much greater reliance on its AI tools and automated moderation systems. The switch in reliance upon automated removal appears to have occurred today, when vast numbers of users began reporting the deletion of benign and sometimes even newsworthy content (in many cases, about COVID-19). Some Facebook representatives have confirmed that there was a “bug” with some of the automated content removal systems, which has now been corrected.[ ((It bears mentioning that there was some debate on Twitter about whether or not this bug was related to the letting go of human content moderators, with Guy Rosen of Facebook stating that it was not and former Facebook CSO Alex Stamos expressing skepticism. My guess is that the new widespread reliance on AI tools has already revealed, and will continue to reveal, a variety of removal decisions a human would not make.))]


Professor Vaidhyanathan's Tweet
Professor Siva Vaidhyanathan of UVA expresses frustration with Facebook’s moderation under all-AI, March 17, 2020.

To understand this better, I will describe the general status quo for much of the top-tier American social media firms and their content moderation ecosystem.[ ((The operative phrase here is “top-tier”; many smaller firms have considerably fewer resources to put on moderation and may have devised other systems entirely to manage the bulk of their moderation needs. Two important examples of alternative systems are Reddit and Wikipedia, both of which rely on a huge network of volunteer community moderators whose interventions are user-facing and who are typically themselves close to the communities they moderate.))] The ecosystem tends to be arranged around contract labor through third-party companies and to have a global footprint. The firms have created their own network of call center-like facilities that form a web across the globe and cover a massive array of linguistic, cultural, regional and other competencies and knowledge (although there are inevitable gaps and gaffes).

The distributed nature of the contract commercial content moderation system indeed allows for some degree of redundancy when it comes to issues of natural disaster or other catastrophic events that could take a center, a city or even a region offline. That said, most firms are at capacity when it comes to their screening needs, and the loss of a major site could very well impact quality. That appears to have happened in the last 72 hours, when Metro Manila and, indeed, much of the island upon which it is located, Luzon—a part of the Philippine archipelago that is home to 57 million people—went into quarantine. Reports the Singaporean Straits Times, “Police began closing off access to the Philippines’ sprawling and densely populated capital Manila, a city of some 12 million people, imposing a month-long quarantine that officials hope will curb the nation’s rising number of coronavirus cases.”

The Philippines is also the call center capital of the
world, and competes with India for the vast outsourced business of commercial
content moderation for the so-called Global North. In short, the Philippines is
where social media content goes to be screened.

Eleven days ago, I communicated with a reporter colleague to give my sense of how a virus-related shutdown in the Philippines could affect American social media giants. I told him that while a lot of the most irritating and highest-volume unwanted content (as deemed by the platforms) can be found and removed by automated tools—here I refer to spam, pornographic content, copyright violations, and other already known-bad material—they tend to be imperfect and blunt instruments whose interventions can be calibrated to be more sophisticated or to cast a wider net.[ ((See the work of Safiya U. Noble, Ruha Benjamin, Cathy O’Neill, Frank Pasquale, Joan Donovan and others who demonstrate that algorithmic interventions are deeply imbued with and shaped by a host of values, manipulation and bias, following key critiques of the politics of software by Wendy HK Chun, of computation by David Golumbia, after the fundamental question posed and answered by Langdon Winner that artifacts, indeed, have politics.))] But the loss of a major moderation site that would mean a switchover to reliance on these tools would invariably cause disruption in social media’s production chain, and could even potentially lead to quality issues perceived by users.
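The calibration the paragraph above describes can be made concrete with a toy sketch of a removal threshold. Everything here, the posts, the scores, the threshold values, is hypothetical and is not any platform's actual system; it simply shows how "casting a wider net" trades more correct removals for more false positives on benign content:

```python
# Toy illustration of moderation-threshold calibration. All posts and
# scores are hypothetical; this is not any platform's actual system.
# Each post has a model "badness" score in [0, 1]; posts scoring at or
# above the threshold are automatically removed.
posts = [
    # (content, model_score, actually_violating)
    ("spam link farm",        0.95, True),
    ("graphic violence",      0.85, True),
    ("COVID-19 news article", 0.60, False),  # benign but superficially suspect
    ("vacation photos",       0.10, False),
]

def removals(threshold):
    """Count (correct removals, false positives) at a given threshold."""
    correct = sum(1 for _, score, bad in posts if score >= threshold and bad)
    false_pos = sum(1 for _, score, bad in posts if score >= threshold and not bad)
    return correct, false_pos

# A conservative threshold removes only clearly bad content...
assert removals(0.8) == (2, 0)
# ...while a wider net also sweeps up a benign, even newsworthy, post.
assert removals(0.5) == (2, 1)
```

Lowering the threshold catches more genuinely bad material at scale, but it also removes reasonable content that a human reviewer would have let stand, which is precisely the kind of disruption users notice.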

That appears to be what we saw today: users frustrated by false positives, cases where overzealous and undersophisticated AI tools aggressively remove reasonable content because their judgment is too rudimentary. The alternative is no alternative at all, for if the AI tools were turned off altogether, the result would be an unusable social media platform flooded with unbearable garbage, spam and irrelevant or disturbing content. One moderator interviewed in my book described the internet without workers like him as “a cesspool.”

Which, then, is the lesser of two evils: an overpoliced, automated, AI-moderated internet, or a “hole of filth” (as another Silicon Valley-based worker described it) of unbearable human self-expression? Ultimately, the firms will decide for the former, as it is a powerful matter of brand protection and legal mandates (most from outside the United States) that will drive their choice. I suspect this will be much of the public’s first contact with the contours of content moderation on its platforms, as well as with the marked and immediate quality decline that follows the virtually overnight disappearance of legions of humans doing this work.

I return to the most important question, perhaps, that has been asked about this issue: why can the work not simply be done by the workers from home? The answer, like everything about this issue, is complex. In many cases, such work can be, and is, done at home. In the case of AAA social media firms, however, constraints like privacy agreements and data protection policies in various jurisdictions may preclude this. There is also a nontrivial infrastructure that goes into setting up a computing center with requisite hardware, software (internally developed and maintained systems) and routing of data. The call center locations themselves are often highly secure, with nothing allowed on the floor where workers are logged in. Working from home eliminates the degree of oversight and surveillance of workers and their practices, both what they are doing and what they are not, that can be achieved on-site. This alone is possibly a deal-breaker for moving the work home. In a moment of dark humor, one rightly cynical colleague pointed out that this is an event that, while likely wholly unimagined and unplanned, is allowing for a certain amount of stress testing of these tools at scale.

Bringing this work, the rapid review of thousands of images and videos, many of them psychologically difficult and taxing, into the home may also be considered too much to ask of workers in a time of crisis. Workers in call centers rely on each other and their teams for support doing commercial content moderation, and may have access to an on-site or on-call therapist, counselor or other mental health professionals.[ ((Even when counselors are available, it is not always the panacea it may seem. Some workers contracted by Accenture discovered that the workplace therapists they saw in what they presumed were private sessions were actually reporting on those sessions to Accenture’s management, according to The Intercept.))] But it is also worth mentioning that many people already do this kind of work at home, whether as contractors or on microtask sites from anywhere in the world.[ ((See this report released just yesterday on the state of microwork in Canada, from the Toronto Workforce Innovation Group (TWIG), or an interview with sociologist Antonio Casilli on microwork in France.))]

Further, infrastructure differences will play into the picture locally. For example, European tech hub the Republic of Ireland has widespread penetration of at-home fixed broadband, whereas in the Philippines the story looks different. Here is where we return to the way the firms themselves view the matter of outsourced labor in what we can consider the production chain of social media: as a component in a production cycle characterized by the East to West flow of supply-chain logistics for manufactured goods. The model is one of just-in-time, in which all aspects of the process, from putting up a site to hiring in workers to the actual moderation itself, takes place as quickly and as “leanly” as possible, particularly for functions such as content moderation that are seen as a “cost center” rather than a “value-add” site of revenue generation.

Just-in-time supply-chain logistics is likely being tested in other parts of the tech industry, too, and in industries reliant on other types of manufactured products, when we consider the goods’ origin point (frequently East Asia in general and China specifically, particularly for textile, tech and other material goods). Consider the recent shuttering of numerous retail chains (e.g., Apple Stores, Lululemon, Victoria’s Secret) not only as an issue of a lack of clientele or the safety of employees, but as one that may reflect a significant gap in the availability of goods making their way out of factories and across oceans: “Just how extensive the crisis is can be seen in data released by Resilinc, a supply-chain-mapping and risk-monitoring company, which shows the number of sites of industries located in the quarantined areas of China, South Korea, and Italy, and the number of items sourced from the quarantined regions of China,” reports the Harvard Business Review.

When we consider a social media production chain that is less material, perhaps, in terms of the product (user-facing content on a social media site) than an H&M fast-fashion jacket or a pair of Apple AirPods Pro, the essential nature of the presence of humans in that chain is just as apparent as when a production line goes down for a month and no goods leave the factory. Here, where content moderators are both the product (in the form of the cultural and linguistic sense-making ability upon which their labor is frequently valued and sold) and the producer (in the form of the work they undertake), the impact of their loss in the production chain must be considered profound.


Microsourcing, a Manila-based commercial content moderation outsourcing firm
Microsourcing, a Manila-based commercial content moderation outsourcing firm, advertised its labor force as having specialized linguistic and cultural “skills.” In this way, these “skills” were the commodity on offer.

In essence, what is supposed to be a resilient just-in-time chain of goods and services making their way from production to retail may, in fact, be a much more fragile ecosystem in which some aspects of manufacture, parts provision, and/or labor are reliant upon a single supplier, factory, or location. Just as in manufacturing, where a firm may discover that a part is made in only one factory and that the factory’s going offline affects everything downstream, so it is decidedly the case for the fragile ecosystem of outsourced commercial content moderation and its concentration in areas of the world such as the Philippines. The reliance on global networks of human labor is revealing cracks and fissures in a host of supply-chain ecosystems. In the case of the human moderators who screen social media, their absence is likely to give many users a glimpse, quite possibly for the first time, of the digital humanity that goes into crafting a usable and relatively hospitable online place for them to be. In the face of their loss, perhaps just when we need them the most, to combat the flood of misinformation, hate speech, and racism inspired by the global COVID-19 pandemic now circulating online, they are gone. Will we learn to finally collectively value this aspect of the human infrastructure just a little bit more than not at all?



Image Credits:

  1. Facebook’s announcement on March 16th indicated to many that a new experiment in content moderation was forthcoming.
  2. Professor Siva Vaidhyanathan of UVA expresses frustration with Facebook’s moderation under all-AI, March 17, 2020.
  3. Microsourcing, a Manila-based commercial content moderation outsourcing firm, advertised its labor force as having specialized linguistic and cultural “skills.” In this way, these “skills” were the commodity on offer. Source: Behind the Screen: Content Moderation in the Shadows of Social Media (Yale University Press, 2019)



Rose McGowan and the “Neutrality” of Social Media Platforms
Adrienne Massanari / University of Illinois at Chicago


TRIGGER WARNING: This article or section, or pages it links to, contains information about sexual assault and/or violence which may be triggering to survivors.

Rose McGowan Tweet

On October 11, Rose McGowan, best known for her roles in the movie Scream and the TV show Charmed, was temporarily banned from Twitter. Ostensibly the ban was for McGowan’s inclusion of a private phone number in a tweet, thus running afoul of the platform’s rules prohibiting the sharing of private information. But the timing couldn’t have been more suspicious: McGowan had very publicly come out to denounce Harvey Weinstein as having allegedly raped her in 1997. McGowan was also in the process of publicly condemning a number of other male celebrities who she alleges knew about Weinstein’s long history of sexual assault and harassment. After McGowan reported the suspension through her Instagram feed, her followers and the news media became suspicious, as Twitter offered no reason for the action. Many speculated that the suspension was actually a result of McGowan’s criticism of the well-connected, high-profile Weinstein, rather than any specific infraction. After Twitter notified McGowan of the actual problem, she was given the opportunity to delete the tweet in question, which she did. However, it took Twitter more than 12 hours to actually reinstate her account.

Scene from Mad Men

At even the best of times, Twitter’s application of its own content rules is inconsistent. For those of us who watched the ongoing harassment and doxxing of women game developers and their allies for months during #Gamergate while Twitter seemed to do little to protect them, the actions against McGowan were both frustrating and entirely predictable. Twitter has a long history of not supporting marginalized communities, especially when it comes to racist, sexist, transphobic, homophobic, etc. speech. But it’s not just Twitter. Facebook has suspended people of color for reporting on the racist, virulent harassment they’ve received – a twisted application of the platform’s rules against violent speech.

Prince

And on Reddit, which is the focus of much of my research, the problems are even more pronounced. [ (( See also, Massanari, Adrienne L. (2015). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society. ))] Site administrators have long resisted enforcing even the most basic rules that might protect and support marginalized communities, instead applying a misguided approach to “free speech” that implicitly serves to amplify the speech of certain communities (such as the “alt-right”) and effectively silence others. Complicating matters is that Reddit’s upvoting/downvoting system creates the illusion of a democratic system at work, even though it can be gamed and often works instead to create a kind of power-law dynamic where the most popular content becomes even more popular because it has already been upvoted (and is thus more visible).
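The power-law dynamic described above, where visibility begets votes and votes beget visibility, can be sketched as a simple rich-get-richer simulation. This is a standard preferential-attachment toy model, not Reddit's actual ranking code:

```python
import random

# Toy rich-get-richer model of upvoting (hypothetical; not Reddit's actual
# ranking algorithm). Each round, a voter upvotes one post chosen with
# probability proportional to its current score, so already-visible posts
# attract still more votes.
random.seed(1)
scores = [1] * 10  # ten posts, each seeded with a single upvote

for _ in range(1000):
    # pick a post weighted by its existing score ("more visible" posts
    # attract more votes), then add the upvote
    post = random.choices(range(len(scores)), weights=scores)[0]
    scores[post] += 1

scores.sort(reverse=True)
print(scores)  # a few posts end up with most of the votes
```

Even though every post starts out equal, small early advantages compound, so the apparently democratic vote tends to concentrate visibility rather than distribute it.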

Moments like this remind us that social media platforms are not politically neutral, despite what the Mark Zuckerbergs of the world might have us believe.

Awkberg

They are actually a complex assemblage of policies, algorithms, user practices, and economics; what constitutes appropriate “use” of these spaces is an ongoing negotiation between platform designers, users, and technologies. As Tarleton Gillespie argues, we tend to regard platforms as if they are simply neutral spaces for reading the news or watching cat videos. [ (( Gillespie, Tarleton. (2010). The politics of “platforms”. New Media & Society, 12(3), 347-364. ))] But because design itself is a rhetorical act, shaping what and how we are to do something with a given object, platforms reflect particular values and political-economic realities. For example, the imperative of “sharing” has become normed as a primary way we experience social media, in no small part due to the advertising revenue model most of these spaces employ. [ (( For more on sharing as a cultural value in social media, see van Dijck, José. (2013). The culture of connectivity: a critical history of social media. Oxford: Oxford University Press. ))]

The culture of Silicon Valley also plays a large role in the ways in which platforms are designed and thus experienced by users.

Aziz Ansari

In his book about Stewart Brand and his Whole Earth network, Fred Turner traces how the counterculture of the 1960s and 70s played an integral role in the development of Silicon Valley. [ (( Turner, Fred. (2006). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press. ))] These early technologists built alternative communities, championed meritocratic organizing, and viewed technology as a way to both ensure personal liberation and solve social problems. And, we can see these forces still at play today – just listen to the CEOs of Apple or Google or Facebook talk – and it becomes apparent that they view their mission as nothing less than changing the world through technology.

The problem, of course, is that Silicon Valley is the domain primarily of cisgender, educated, straight white men, whose privileged experiences in the world remain foregrounded in the kinds of products and experiences they create.

Silicon Valley

This extends, in many cases, to the policies and technologies platforms use to regulate content. Universal, one-size-fits-all content rules become normed around the experiences of the most privileged in these spaces. So, for example, Twitter creates a “no sharing of personal information” rule (which, on the surface, is sensible), but then does not apply it with regard to context or in any nuanced way. Is Rose McGowan’s inadvertent sharing of a phone number during a discussion about the problem of sexual harassment and assault the same as a Gamergater sharing the address of a woman game developer with the intention of encouraging others to harass her until she flees her home in fear? Should it result in the same response from Twitter? The answer is clearly no, but the tools and policies that Twitter employs do not account for the complexities and power differentials at play in these cases. This kind of disregard for context becomes especially problematic in light of Twitter’s reluctance to do anything about the myriad bots spreading propaganda and misinformation and the large numbers of “alt-right” figures who have successfully staked out the platform as being welcoming to their invective. In 2015, Twitter’s CEO acknowledged that their team “suck[s] at dealing with abuse,” but the company has subsequently done very little to fix it.

Scene from Workaholics

Underlying all of this is the assumption that platforms should provide a space for all kinds of speech, no matter how violent, racist, sexist, transphobic or homophobic. On the surface such an approach might appear appealing – isn’t this what Supreme Court Justice Louis Brandeis was talking about when he advocated that the “remedy” for problematic speech was more speech in Whitney v. California? Maybe. But social media platforms themselves are not the US government and thus are not constrained in the same way governmental bodies are when it comes to the First Amendment; they are private corporations that can pretty much do whatever they want (for good or for ill). Unfortunately, many platforms have encouraged certain populations to feel emboldened to such a degree that they seem to think “free speech” equals consequence-free speech. Further complicating matters is that these same platforms are loath to suspend the accounts of some of their most troubling users because they are advertising cash cows. Take, for example, Richard Spencer, the white nationalist whose account remains on Twitter despite a history of violent speech and troubling harassment.

Lucille

Such an approach ignores the ways in which algorithms, site policies, and design decisions actually do more to support and amplify the speech of particular individuals and groups. Like most social media platforms, Twitter’s design reflects the values of Silicon Valley – emphasizing personal liberty over community safety. Instead of creating effective tools and policies that ensure everyone can engage in important democratic debates (or just share their favorite reaction GIFs), this strategy actually serves to suggest some users (particularly those whose social location already provides significant power and privilege) are more important than others. So, the question remains: how and for whom are these platforms actually changing the world?

Image Credits:
1. Tweet from Rose McGowan
2. Scene from Mad Men
3. Prince
4. Scene from Saturday Night Live
5. Scene from Parks and Recreation
6. Scene from Silicon Valley
7. Scene from Workaholics
8. Scene from Arrested Development

Please feel free to comment.




I Lost my Wife to Facebook, and Other Myths that Might be True

by: Michele Byers / Saint Mary’s University

Collective Facebook

I first heard about Facebook a few months ago. My cousin, an undergraduate student, was having dinner at my house and mocking my husband mercilessly about his MySpace page. MySpace, she intoned, was over; Facebook was “it.” As a scholar of media, I was, she insisted, guilty of not keeping up with the times. The Internet isn’t really one of my main research areas, I reasoned; I barely have time to keep up with my email. But I was intrigued. The only way to get into the Facebook system is to sign up and, in so doing, get a page of one’s own. I did so. A bare-bones page… but four hours later I was still “on” Facebook, complaining to my husband about how few old friends I could find.

Facebook is a social networking site or social utility. It has one of those dizzying pedigrees we associate with the Internet age. In 2004, its creator, Mark Zuckerberg, began with the idea of developing an online version of the paper facebook produced for incoming students at Harvard, where he was a student. In less than a year he had dropped out of school and moved to California. Within the next six months, Facebook was made available to students at most universities in the US, soon high school students were invited to join, and networks were expanded to include Britain, Canada, and Australia. Less than two years after it began, Facebook opened its doors to the general public and was estimated to have 12 million users. Today it is said to have as many as 20 million users as rumours circulate that a buyout for as much as 2 billion dollars is in the works.

Mark Zuckerberg

Facebook is a strange place. You can provide an almost unlimited amount of personal information on your page. You can post pictures. You can engage in an ongoing commentary about your life, minute by minute. The system also creates a running mini-feed that lets you know every minute change any one of your “friends” effects on their pages. You can invite people you know and strangers to be your friends, to come to social events, or to join Facebook groups that you start or are merely a member of. A lot of people use Facebook like email. It’s a place to chat and to keep people informed about your life. But for some people — and this is how I, and I suspect many people my age (the over-35s, who are even older than the 25–34-year-old “oldies” discussed in one recent Globe & Mail article), get hooked on Facebook — it’s a place to find old friends that we weren’t likely to track down anywhere else. And this is where, I suspect, we differ from younger cohorts of Facebookers, who probably aren’t as nostalgic as we are… yet.

Nostalgia is a definite key for many of the Facebookers I know — and I can’t quite believe how words like Facebooker and Facebooking flow out of me after just a few short weeks of interface. While quite a few of my “friends” are people I work with, live near, or am related to, the ones I really seek out, and the ones I, in a sense, have thus far derived the greatest satisfaction from being connected to, are the old friends I lost touch with along the way. Fredric Jameson says that “an addiction to the photographic image is itself a tangible symptom of an omnipresent, omnivorous, and well-nigh libidinal historicism” (18), one that “cannibalizes” the past into a cacophony of “overstimulating ensembles” (19). This is an apt description of Facebook for many users (and perhaps of many users as well).

An April 23 editorial in the Toronto Star makes the libidinal quality of the Facebook experience explicit by suggesting that social networking sites are now rivaling Internet pornography for sheer number of users. Jen Gerson writes, “though the website du jour may change, the desire to be connected all the time isn’t going to go away among the youth cohort.” I agree, although I think the focus on youth misses the fact that people in their 30s, 40s, 50s (and so on) are increasingly using social networking sites to stay connected with their pasts as well as their presents. Here’s one example: I recently invited one of my best friends from high school, a busy working mom with four kids, to join Facebook. She so rarely responds to her email, I thought she’d just delete it. But, just a week later she sent me a message (on Facebook) saying she just couldn’t keep herself from trolling the system looking for old friends.

Linda Hutcheon makes the important point that a predilection for seeing postmodern culture and its artifacts as inherently nostalgic is something to be cautious of. She’s right of course. Nostalgia often implies the longing for a mythic past, whose doors, always barred to us, make what is behind them infinitely desirable and whose completedness masquerades as simplicity, as authenticity, as a time that was really “real.” We can go home, but we can’t go home. In her recent book, Giving An Account of Oneself, Judith Butler argues that one of the problems of doing so — of giving an account of oneself — is that there is always an originary part that we cannot know. A piece that is beyond language and memory but that is nonetheless foundational to who we are. We are, on some level, aware of this, and search for ways of contacting or connecting with this originary part for which we have no vocabulary of enunciation. This may be part of Facebook’s appeal. Many people are casting around in its multitudes looking for connections to a past they feel cut off from; from parts of themselves they think might be lost; for threads of a narrative that will allow them to give a fuller and more complete account of themselves.

And yet, is there an ironic aspect of Facebook? Is this type of postmodern cultural production/immersion (as we are both in it and, in a sense, co-producers of it) simply a reification of a mythic past, a giving in to longing for something unrecoverable, or do we engage with Facebook via an “ironic distance,” or both (Hutcheon)? Hutcheon argues that irony and nostalgia are both responses to things, rather than things in and of themselves. Facebook can be read as a space where irony and nostalgia co-exist, especially for those who use it to seek out the past. The pages and groups people create are genuine but evidence a type of ironic distancing that comes from a recognition that the people reading are, in some sense, at a temporal distance. These groups play a bit like the newest ad in the Diet Pepsi “Forever Young” campaign. Called “Make-Out,” the ad features a couple in their thirties who wish they could make-out like they did when they were teens… flash to them making out in the grocery store and at parent-teacher interviews to the 1983 Bonnie Tyler power-ballad “Total Eclipse of the Heart.” Happily, they return to their present, but with the recognition that Diet Pepsi keeps them young, connected to their youth… like Facebook.

Facebook T-shirt

Ironic, nostalgic, Facebook does have a high school quality. One friend told me she had resisted joining because of an experience on an earlier social utility: she kept looking for ex-boyfriends and their new girlfriends, comparing the “coolness” of their sites and hers, comparing the numbers of friends each had. It was, she finally explained, like being back in high school. Alyssa Schwartz, in the Globe & Mail, discovered similar invocations of a “high school mentality,” finding Facebook guilty of “bring[ing] back behavior that went out with plaid shirts, Tuff boots and Nirvana.” Note how Schwartz’s invocation of high school, here coded through fashion and music, marks the ironic nostalgia of Facebook participation, but also locates “old” Facebookers as people who were teens in the 1990s, not the 1980s (shoulder pads, Doc Martens, and Madonna), 1970s (hot pants, platform shoes, and Led Zeppelin), or 1960s (mini skirts, saddle shoes, and The Beatles).

There is clearly much to be studied in the worlds of Facebook. The desire to network the past in the present, the availability and massive usage of this technology, certainly warrants closer examination.

Works Cited
Jen Gerson. “Social Networking rivals porn on Web.” The Toronto Star. April 23, 2007.
Linda Hutcheon. “Irony, Nostalgia, and the Postmodern.”
Fredric Jameson. Postmodernism, or, The Cultural Logic of Late Capitalism. Durham: Duke UP, 1991.
Alyssa Schwartz. “Grownups get their Facebook fix.” Globe & Mail. March 31, 2007.

Image Credits:
1. Collective Facebook
2. Mark Zuckerberg
3. Facebook T-shirt

Please feel free to comment.