Can Television Diversity Overcome the Rise of Algorithmic Recommendations?
Mark D. Pepper / Utah Valley University
From their inception, streaming platforms have offered recommendations based on complex intersections of user data. Netflix doesn’t even hide the basics of the process. First, the streaming giant puts everyone who watches similar shows/movies into a “taste community.” All of this media carries content-descriptive meta-tags affixed by freelance staff (the people who ultimately create categories like “Psychic Murder Mysteries with a Strong Female Lead”). Your community peers and these descriptive tags then combine with account data: how fast you watched a show, how many times you pressed pause, even how late you stayed up binging. Yes, Netflix is studying you, and the results seemingly answer the ultimate question: “What do I like?”
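The first step of that pipeline can be sketched in miniature. The toy grouping below clusters viewers into “taste communities” by overlapping watch histories; all usernames, titles, and the 0.5 threshold are invented for illustration, and Netflix’s actual clustering is, of course, far more sophisticated:

```python
# Toy "taste community" grouping: users whose watch histories overlap
# above a threshold land in the same community. All names, titles,
# and the 0.5 threshold are hypothetical.

def overlap(a, b):
    """Fraction of the smaller history shared with the other."""
    return len(a & b) / min(len(a), len(b))

def taste_communities(histories, threshold=0.5):
    """Greedily cluster users whose histories overlap >= threshold."""
    communities = []
    for user, shows in histories.items():
        for group in communities:
            rep = histories[group[0]]  # compare against the group's first member
            if overlap(shows, rep) >= threshold:
                group.append(user)
                break
        else:
            communities.append([user])
    return communities

histories = {
    "ana": {"Charmed", "The Magicians", "Sabrina"},
    "ben": {"Charmed", "Sabrina", "Vampire Diaries"},
    "cal": {"Fuller House", "Last Man Standing"},
}
print(taste_communities(histories))  # -> [['ana', 'ben'], ['cal']]
```

Even this crude sketch shows the essay’s central point in embryo: the grouping can only ever place you with people who already watch what you already watch.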
Admittedly, these suggestions prove useful when faced with a deluge of streaming content. Anything helps narrow the options, especially when Netflix itself streams over 700 original series. However, these algorithms are not neutral, consequence-free suggestions. Sociologist Pierre Bourdieu offered perhaps the best-known take on how taste is a complicated manifestation of social positioning, economic class, and cultural capital. Taste always reveals much about individual upbringing and social influence. Therefore, as our cultural taste is increasingly guided by market research, it’s worth asking how these streaming algorithms are affecting television consumption, especially at a time when television is more diverse than ever. Do algorithms lead us towards the industry’s hard-won diversity (motivating it towards the work that still needs to be done)? Or does diversity get buried and cancelled under an avalanche of normalized preferences and choices?
Every TV viewer maintains a mental category of “TV Shows I Like,” and thinking about categories is a useful way to start answering these questions. Some of the most exhaustive (and, frankly, dry) thought on categorization comes from Aristotle. Aristotle argues that categories are logical tools that name reality, organize entities, and form wholes. These clearly defined and well-delineated categorical types (so much so that Aristotle claims there are only ten) exist independently of our observation. Simply look at a person or thing in the world, list its traits, and compare these to the qualifications for membership in a pre-existing categorical distinction. Does eating those mushrooms kill people? Place them in the “poisonous mushrooms” category. This category already has members (for comparison) and would exist even without human knowledge (i.e., the mushrooms would be poisonous regardless of whether their properties were ever discovered). Simple and tidy.
The category “TV Shows Someone Likes” doesn’t match this conception perfectly; such a category cannot exist without a subjectivity to experience or confirm the liking. However, the notion of simply cross-referencing a user’s preferences/traits with already-known members of a category is Aristotelian to its core. Does a subscriber seem to like The Vampire Diaries (The CW, 2009-17), Charmed (The WB, 1998-2006), and The Magicians (Syfy, 2015-present)? Then Chilling Adventures of Sabrina (Netflix, 2018-present) is a safe categorical bet based on shared traits of impossibly attractive people with mystical powers fighting supernatural threats. This approach to categorical taste also carries many implicit assumptions about how taste works. Taste is fairly consistent and rarely broadens. There’s comfort and pleasure in the familiar, with surprising deviations deemed too risky (both for corporate profit and personal time). Finally, knowing your tastes (and therefore, to some extent, knowing yourself) is relatively easy: just tick all the boxes of something you’ve enjoyed in the past and your relatively assured path is set.
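That trait cross-referencing can itself be sketched as a toy recommender. The tag sets and the tag-overlap (Jaccard) scoring below are invented for illustration, not Netflix’s actual method:

```python
# Toy tag-based recommender: rank unwatched shows by how much their
# descriptive tags overlap with the viewer's watch history.
# All titles and tags are hypothetical stand-ins for meta-tags.

def jaccard(tags_a, tags_b):
    """Overlap between two shows' tag sets, from 0.0 to 1.0."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(watched, catalog, top_n=1):
    """Rank unwatched shows by average tag overlap with the history."""
    scores = {}
    for title, tags in catalog.items():
        if title in watched:
            continue
        scores[title] = sum(jaccard(tags, catalog[w]) for w in watched) / len(watched)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

catalog = {
    "Vampire Diaries": {"supernatural", "teen", "drama"},
    "Charmed": {"supernatural", "witches", "drama"},
    "Sabrina": {"supernatural", "witches", "teen"},
    "Fuller House": {"family", "sitcom", "nostalgia"},
}
print(recommend({"Vampire Diaries", "Charmed"}, catalog))  # -> ['Sabrina']
```

Note what the sketch can and cannot do: it reliably surfaces the next supernatural drama, and it structurally cannot surface anything whose tags don’t already resemble the history.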
But what if categories don’t reflect objective reality? What if, instead, they’re on-the-fly, problem-solving heuristics? David Berreby suggests this interpretation in his book Us and Them: The Science of Identity. Berreby writes, “Each category you can think of . . . is a solution to some particular person’s problem. You could think of any category, in fact, as the answer to a person’s question. They’re thoughts; mental actions that you take to cope with your current circumstances” (68). After all, there’s really no imperative to categorize poisonous mushrooms until you find yourself faced with the dilemma of wanting to eat a mysterious fungus. Likewise, why have a category of TV shows you like if it doesn’t help you in ways grander than making lists for lists’ sake?
So, if the category “TV Shows I Like” represents a solution, as Berreby suggests, what problem(s) does its existence help alleviate? What question(s) does it answer? I suggest categorizing shows by enjoyment primarily answers the questions: what kinds of narrative matter, and whose stories are worth telling? Put differently, the content that qualifies for “TV Shows I Like” is one measure of someone’s attunement to narrative diversity and identity representation. It turns out, though, that algorithms aren’t designed with those qualities in mind, a human design decision that perfectly reflects Bourdieu’s suggestion of cultural influence on taste.
To illustrate, consider the actual process of picking favorites. When we note a preference, we’re gleaning information from the overwhelming pool of options (as opposed to scrolling Netflix’s options but never picking anything). Gregory Bateson (and other scholars who study complex systems) describes such a distinction as noting a “difference that makes a difference” (453). This is a key distinction about, well, distinctions. I once watched an episode of Netflix’s Fuller House (2016-20), more out of curiosity than an actual recommendation. I quickly noted how different it was from my usual tastes. I also never watched another episode, because difference is not enough. The difference must make a difference, and that requires noting that the difference matters to me.
Algorithms assume shows make a difference by having the right kind of similarities. Sure, there are people who seek out novelty, but, far more likely, we’ll deem something as mattering if it reflects what we’re accustomed to. Suppose some users really enjoy Tim Allen’s sitcom Last Man Standing (ABC, 2011-17; FOX, 2018-present) and its reflection on a Caucasian, 60-something man’s struggle to maintain hyper-masculinity in the suburbs of Denver. Would an algorithm recommend to them Nahnatchka Khan’s Fresh Off the Boat (ABC, 2015-20), which follows a Taiwanese-American family’s culture shock as they move from D.C.’s Chinatown to Orlando, Florida (spotlighting issues of immigration, citizenship, and assimilation along the way)? Would they even enjoy such a show?
I don’t know; the answer obviously depends on the individual in question. It’s certainly not impossible to like both shows, but I struggle to imagine the trait list they would share in a database beyond “shows about family.” If categories answer value questions instead of tallying traits, though, the real question becomes: if a fan thinks Allen’s narrative is worth telling, will they also think Khan’s deserves attention? There’s no need to choose which narrative matters more, but it’s worth asking whether algorithms would ever encourage someone to experience the narrative less familiar to their actual life circumstances. After all, streaming recommendations function as if the user has already figured out everything that personally matters; the algorithm is just trying to learn and reflect. The notion of value as a continuing journey of self-discovery is abandoned for the promise of convenience. The algorithm’s suggestions effectively shape our sense of what matters into a self-gratifying mirror of previously validated ideas, tropes, and identities.
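Running the same kind of trait-overlap logic on the two sitcoms makes the gap concrete. The tag lists here are hypothetical, but any plausible tagging leaves only a thin overlap:

```python
# Hypothetical trait lists for the two sitcoms discussed above.
# The tags are invented for illustration, not drawn from any real database.
last_man_standing = {"family", "sitcom", "suburban", "masculinity", "conservative-lead"}
fresh_off_the_boat = {"family", "sitcom", "immigration", "assimilation", "taiwanese-american"}

shared = last_man_standing & fresh_off_the_boat
match = len(shared) / len(last_man_standing | fresh_off_the_boat)

print(sorted(shared))   # ['family', 'sitcom']
print(round(match, 2))  # 0.25
```

A trait-matching system would report something like a 25% “match” here and move on to safer bets; the value question of whether both narratives deserve attention never enters the calculation.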
Recommendations are seductive. Streaming services offer the allure of accurate qualitative judgments produced through the unambiguous, cold calculations of computational logic. Plus, every time someone likes a suggested (and likely very familiar) show, the algorithm appears that much more trustworthy. The most disturbing implication here is that, given the television industry’s history of purposeful and blatant discrimination, something as seemingly innocuous and inconsequential as a recommendation system can now unknowingly build discrimination into our preferences. The sheen of scientific validity and hard numbers shapes what kinds of narratives matter, even as it gets dismissed as merely reflecting reality.
I won’t argue (at least here) there’s some moral imperative to consume diverse media. However, television is still the place we spend the most time with long-developed characters. Television still most often represents the family homes and workplaces so familiar to our daily lives. Despite modern, on-the-go viewing, television still retains the intimacy of a medium that originally beamed directly (and only) into our living rooms. Maybe there’s no moral imperative, but there is certainly an abundance of opportunity. Who ends up represented on our television screens matters and plays a powerful role in shaping real-world attitudes and behaviors towards people and cultures, especially ones historically marginalized on our culture’s validating screens. However, the importance of diversity and representation in media will never be reflected in a recommended percentage “match.” And, under the current computational models, every followed recommendation puts that truth at risk.
- Infographic showing Netflix’s data collection types.
- A man organizes shapes by shared traits.
- Fresh Off the Boat cast promo shot (ABC/Andrew Eccles/Getty Images).
- Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. Routledge, 2013.
- Aristotle. The Organon. Translated by Harold Percy Cooke and Hugh Tredennick, Harvard UP, 1938.
- Berreby, David. Us and Them: The Science of Identity. U of Chicago P, 2005.
- Bateson, Gregory. Mind and Nature: A Necessary Unity. E.P. Dutton, 1979.