TikTok’s Digital Eugenics: Challenging Ableism and Algorithmic Erasure through Disability Activism
Jessica Sage Rauchberg / McMaster University

Photograph of a hand holding a phone with the TikTok logo on a black screen background
Recent #challenge videos on TikTok exhibit ableist and eugenicist attitudes.

This past summer, a viral back-to-school TikTok prank amassed millions of views, along with accusations of fostering ableist attitudes online. In the #NewTeacher challenge, parents staged a fake FaceTime call with their child’s “new teacher” using TikTok’s Green Screen feature and captured their child’s reaction. While some videos featured a model or celebrity as the “new teacher,” many parents chose photos of people with facial difference and other apparent disabilities. The parents laugh as their child screams, cries, and vocalizes disgust that their “new teacher” is disabled and ugly. Though disability activists with facial difference used their platforms to collectively speak out against the #NewTeacher challenge’s ableism, those who reported the videos were told that there were no violations of TikTok’s guidelines. My choice not to include #AutismChallenge and #NewTeacher challenge videos in this column is deliberate: embedding sample videos would center, and thereby naturalize, the spectacle of ableist and eugenicist attitudes on the app. I instead chose to feature videos by disability activists who use TikTok as a platform for community building and engagement in spite of algorithmic suppression.

Created by the Chinese company ByteDance, TikTok has been downloaded nearly 2 billion times since its 2016 inception and hosts over 100 million users in the United States. The micro-vlogging app is no stranger to controversial, viral challenges. In May 2020, the #AutismChallenge surfaced, in which users uploaded videos of themselves making facial expressions and hand gestures that “appeared to parody people with [autism and other] disabilities,” all to the tune of the Black Eyed Peas’ “Let’s Get R—.” The videos collectively received millions of views, and criticism from autistic TikTok users, influencers, and media outlets alike, before videos tagged under #AutismChallenge were removed from the app.[1] These ableist, dehumanizing TikTok trends surfaced months after the platform came under fire for hiding disabled users’ content while simultaneously failing to remove videos that mock disabled people. It is clear that the app’s governance imagines a social media site where disabled people do not exist: a type of digital eugenics. My coinage of “digital eugenics” refers to how TikTok and other social media platforms rely on algorithmic code that effectively erases the possibility of full participation and visibility for disabled, queer, trans, and fat users on a social media network.

Social media algorithms are not neutral technologies: through meticulous coding practices, social media networks use algorithms as ideological tools to convey the guidelines for citizenship and belonging on the internet.[2] For instance, if disability is represented as something negative offline, such beliefs, representations, and tropes will influence the development and design of digital platforms. My belief that algorithms are an ideological tool used to support dominant offline cultural practices is not novel: scholars have shown that algorithms reproduce anti-Blackness and misogynoir on Google and other search engines.[3] These coding practices replicate racial biases online while claiming to fix them, and are used to track and police poor folks’ use of the internet.[4] Likewise, these same algorithms promote content and hashtag networks that further online ableism, as seen with TikTok’s #NewTeacher challenge and #AutismChallenge.

One way TikTok and other social media platforms regularly enforce digital eugenics is through shadowbanning (also referred to as a “shadowban” or “soft block”). Shadowbanning is an increasingly popular practice used to hide and suppress the visibility of users’ content without formal notice, even if the account is open to a digital public. Popular users on a social networking app may suddenly discover that likes, comments, shares, and overall engagement with their content no longer reflect a larger follower count. Once a platform flags an account and places it under a shadowban, users may be unable to use hashtags to promote their posts. Shadowbans are not random, isolated, or coincidental; rather, they are intentional and deliberate. Shadowbanning strategies are intended to surveil and control marginalized communities. They are reflections of our own offline cultural beliefs about who “naturally” belongs, and who does not.

Such practices are embedded into the core of TikTok’s governance. For example, the network’s “Auto R” moderation guidelines sort certain vulnerable or minoritized user populations into “risk groups” as a means to prevent cyberbullying. Disabled people are placed into a “Risk 4” category, meaning that any videos the account posts are available only to other accounts based in the same country. Instead of reaching a global audience of over 1 billion users, a disabled TikToker whose account is flagged under “Risk 4” may see extremely low engagement with their content, even if they host several thousand followers on their page. As of September 2019, TikTok also flagged any account that discussed disability (specifically accounts of autistic users and users with facial difference) into an Auto R risk group.

For disabled users who are also fat, queer, trans, and/or poor, the Auto R shadowban measures may subject their accounts to further “risk” protections that hide, censor, or exclude them from being featured on the app’s #ForYou home page, where many accounts gather viral acclaim. An August 2020 video from Imani Barbarin (@crutches_and_spice), a Black queer disability writer and digital activist, acknowledged the ways that TikTok’s various risk labels engage in intersectional suppression to censor multiply minoritized users: “I saw TikTok as a way to reach gen z, millennials, and older millennials about ableism and issues in the disability community… I knew going into [using TikTok] that TikTok suppresses the videos of Black, disabled, fat, and queer people, and I am all of those things.” Though the company claimed that the Auto R measures were a temporary fix to counter cyberbullying as TikTok rose to global prominence, the feature remains, demarcating which bodies are exceptional and worthy of viral attention, and which should be erased.

Imani Barbarin (@crutches_and_spice) addresses nearly 30,000 followers about being shadowbanned.

My criticism of TikTok’s digital eugenics and algorithmic infrastructure should not be read as total skepticism toward the app as a site of community building and disability activism. Top videos tagged under #disability feature disabled accounts using their platforms to educate thousands of followers about operating a mobility device or scripts for talking about disability and ableism. For example, Lizzie Velasquez (@littlelizziev), whose image was frequently used in the #NewTeacher challenge, has used her platform to ignite conversations about disability, facial difference, and stigmatization. In one video, captioned “#duet with @dr.allison.rodgers Teaching a child about kindness & empathy starts with a conversation as simple as this! #newteacherchallenge #bekind”, Velasquez is shown on the left half of the screen as she watches a parent and her child talking positively about Velasquez’s photo and the cruelty of the #NewTeacher challenge.

Lizzie Velasquez (@littlelizziev)’s duet with a follower who spoke out against the #NewTeacher challenge: “It’s as simple as this!”

Disabled TikTokers also spoke out against the platform’s “Risk 4” shadowbanning practices that purposefully target and hide disabled users’ content. In one video, model Jess Quinn (@jessicaemily.quinn) dances as she removes her prosthetic leg and shows off her rotationplasty. The video is captioned: “Hey TikTok, what’s that you said about people with disabilities being shadowbanned to ‘protect them?’ We’re fine, thanks. #allbodieswelcomehere.” Quinn’s video went viral, garnering over 375,000 likes.

Jess Quinn (@jessicaemily.quinn) dances as she removes her prosthetic limb in response to TikTok claiming that Auto R algorithms are used to “protect” disabled users from cyberbullying.

TikTok’s disability activism is uniquely resilient: on a platform that simultaneously suppresses disabled voices while permitting viral challenges that are predicated on eugenicist logics, disabled users continue to find ways to thrive, educate, and build community. For many disabled people, social media platforms like TikTok facilitate possibilities for survival.

Image Credits:
  1. Recent #challenge videos on TikTok exhibit ableist and eugenicist attitudes.
  2. Imani Barbarin (@crutches_and_spice) addresses nearly 30,000 followers about being shadowbanned.
  3. Lizzie Velasquez (@littlelizziev)’s duet with a follower who spoke out against the #NewTeacher challenge: “It’s as simple as this!”
  4. Jess Quinn dances as she removes her prosthetic limb in response to TikTok claiming that Auto R algorithms are used to “protect” disabled users from cyberbullying.

  1. At the time of my writing, ableist #NewTeacher challenge videos were still available on TikTok’s platform. For more information on the Autism Challenge: “Autism Challenge.” Know Your Meme, last updated May 17, 2020, https://knowyourmeme.com/memes/autism-challenge.
  2. Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press, 2018.
  3. Ibid.
  4. Ruha Benjamin. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press, 2019; Virginia Eubanks. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press, 2018.
