Rose McGowan and the “Neutrality” of Social Media Platforms
Adrienne Massanari / University of Illinois at Chicago
TRIGGER WARNING: This article or section, or pages it links to, contains information about sexual assault and/or violence which may be triggering to survivors.
On October 11, Rose McGowan, best known for her roles in the movie Scream and the TV show Charmed, was temporarily banned from Twitter. Ostensibly, the ban was for McGowan’s inclusion of a private phone number in a tweet, which ran afoul of the platform’s rules prohibiting the sharing of private information. But the timing couldn’t have been more suspicious – McGowan had very publicly come forward to accuse Harvey Weinstein of raping her in 1997, and she was also in the process of publicly condemning a number of other male celebrities who she alleges knew about Weinstein’s long history of sexual assault and harassment. After McGowan reported the suspension through her Instagram feed, her followers and the news media became suspicious, as Twitter offered no reason for the action. Many speculated that the suspension was actually a result of McGowan’s criticism of the well-connected, high-profile Weinstein rather than of any specific infraction. After Twitter notified McGowan of the actual problem, she was given the opportunity to delete the tweet in question, which she did. Even so, it took Twitter more than 12 hours to reinstate her account.
Even at the best of times, Twitter’s application of its own content rules is inconsistent. For those of us who watched the ongoing harassment and doxxing of women game developers and their allies for months during #Gamergate, while Twitter seemed to do little to protect them, the actions against McGowan were both frustrating and entirely predictable. Twitter has a long history of failing to support marginalized communities, especially when it comes to racist, sexist, transphobic, and homophobic speech. But it’s not just Twitter. Facebook has suspended people of color for reporting on the racist, virulent harassment they’ve received – a twisted application of the platform’s rules against violent speech.
And on Reddit, which is the focus of much of my research, the problems are even more pronounced. Site administrators have long resisted enforcing even the most basic rules that might protect and support marginalized communities, instead applying a misguided approach to “free speech” that implicitly serves to amplify the speech of certain communities (such as the “alt-right”) and effectively silence others. Complicating matters is that Reddit’s upvoting/downvoting system creates the illusion of a democratic system at work, even though it can be gamed and often works instead to create a kind of power-law dynamic where the most popular content becomes even more popular because it has already been upvoted (and is thus more visible).
Moments like this remind us that social media platforms are not politically neutral, despite what the Mark Zuckerbergs of the world might have us believe.
They are actually complex assemblages of policies, algorithms, user practices, and economics; what constitutes appropriate “use” of these spaces is an ongoing negotiation between platform designers, users, and technologies. As Tarleton Gillespie argues, we tend to regard platforms as if they were simply neutral spaces for reading the news or watching cat videos. But because design itself is a rhetorical act – it shapes what we do with a given object and how we do it – platforms reflect particular values and political-economic realities. For example, the imperative of “sharing” has become normed as a primary way we experience social media, in no small part because of the advertising revenue model most of these spaces employ.
The culture of Silicon Valley also plays a large role in the ways in which platforms are designed and thus experienced by users.
In his book about Stewart Brand and his Whole Earth network, Fred Turner traces how the counterculture of the 1960s and 70s played an integral role in the development of Silicon Valley. These early technologists built alternative communities, championed meritocratic organizing, and viewed technology as a way to both ensure personal liberation and solve social problems. We can see these forces still at play today – just listen to the CEOs of Apple, Google, or Facebook talk, and it becomes apparent that they view their mission as nothing less than changing the world through technology.
The problem, of course, is that Silicon Valley is the domain primarily of cisgender, educated, straight white men, whose privileged experiences in the world remain foregrounded in the kinds of products and experiences they create.
This extends, in many cases, to the policies and technologies platforms use to regulate content. Universal, one-size-fits-all content rules become normed around the experiences of the most privileged in these spaces. So, for example, Twitter creates a “no sharing of personal information” rule (which, on the surface, is sensible), but then applies it without regard to context or nuance. Is Rose McGowan’s inadvertent sharing of a phone number during a discussion about the problem of sexual harassment and assault the same as a Gamergater sharing the address of a woman game developer with the intention of encouraging others to harass her until she flees her home in fear? Should it result in the same response from Twitter? The answer is clearly no, but the tools and policies that Twitter employs do not account for the complexities and power differentials at play in these cases. This kind of disregard for context becomes especially problematic in light of Twitter’s reluctance to do anything about the myriad bots spreading propaganda and misinformation, or about the large numbers of “alt-right” figures who have successfully staked out the platform as welcoming to their invective. In 2015, Twitter’s CEO acknowledged that their team “suck[s] at dealing with abuse,” but the company has subsequently done very little to fix the problem.
Underlying all of this is the assumption that platforms should provide a space for all kinds of speech, no matter how violent, racist, sexist, transphobic, or homophobic. On the surface such an approach might appear appealing – isn’t this what Supreme Court Justice Louis Brandeis was talking about when he argued in Whitney v. California that the “remedy” for problematic speech was more speech? Maybe. But social media platforms are not the US government and thus are not constrained in the way governmental bodies are when it comes to the First Amendment; they are private corporations that can do pretty much whatever they want (for good or for ill). Unfortunately, many platforms have emboldened certain populations to such a degree that they seem to think “free speech” equals consequence-free speech. Further complicating matters is that these same platforms are loath to suspend the accounts of some of their most troubling users because those users are advertising cash cows. Take, for example, Richard Spencer, the white nationalist whose account remains on Twitter despite a history of violent speech and troubling harassment.
Such an approach ignores the ways in which algorithms, site policies, and design decisions actually work to support and amplify the speech of particular individuals and groups. Like most social media platforms, Twitter’s design reflects the values of Silicon Valley – emphasizing personal liberty over community safety. Instead of creating effective tools and policies that ensure everyone can engage in important democratic debates (or just share their favorite reaction GIFs), this strategy serves to suggest that some users (particularly those whose social location already provides significant power and privilege) are more important than others. So, the question remains: how, and for whom, are these platforms actually changing the world?
Image Credits:
1. Tweet from Rose McGowan
2. Scene from Mad Men
4. Scene from Saturday Night Live
5. Scene from Parks and Recreation
6. Scene from Silicon Valley
7. Scene from Workaholics
8. Scene from Arrested Development
NOTES
- See also, Massanari, Adrienne L. (2015). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society. [↩]
- Gillespie, Tarleton. (2010). The politics of “platforms”. New Media & Society, 12(3), 347-364. [↩]
- For more on sharing as a cultural value in social media, see van Dijck, José. (2013). The culture of connectivity: a critical history of social media. Oxford: Oxford University Press. [↩]
- Turner, Fred. (2006). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press. [↩]