The Worst Happened

American Soldier

This here is a ’76 Chevy and somehow or other, it got stuck over here in Iraq, like the rest of us.
— Sergeant Ronald Jackson, Off to War

I’m giving up my whole life by being here.
— Sergeant Joe Betts, Off to War

On 26 October, U.S. military fatalities in Iraq reached the much-remarked “grim milestone” of 2,000. Television and other media commentaries noted the number’s historical significance and tried to situate it in a larger context, some noting the U.S. administration’s tone-deaf resolve concerning the country President Bush continues to call “the central front in our war on terror.” Just about a week later, FX decided “not to renew” the series Over There, citing its poor ratings performance, and, according to FX general manager John Landgraf, “our belief that the numbers were reflective of what the show is about, rather than its quality or entertainment value.”

If only the war could be cancelled for the same reasons.

But if the war goes on in Iraq, and Afghanistan, and any number of “secret” prisons around the globe, its tv representations remain underwhelming. Typically, the war appears as “numbers” — casualties from car bombs or IEDs, comparisons between months’ accumulations (October 2005 was the “fourth deadliest month” since the war’s start), troops moved from one location to another — graphics that skirt experience that might actually be horrifying for viewers or mobilize resistance to the war. As much as Cindy Sheehan or high school students or even Senate Democrats calling for closed door sessions find ways to make their protests visible on tv, the war itself remains strangely unseen.

This despite the fact that the Iraq war, even more than the Vietnam war or the Gulf war, is the tv war. The fundamental capacity for representing the “personal” and “intimate” experiences of this war is frankly remarkable, given the technologies available to troops and journalists. From blogs to cell phones to videophones, the war in Iraq is represented repeatedly, but for a relatively small, heavily invested audience: families keep in touch through email and internet discussion groups, reporters publish their work online or on non-U.S. outlets — Australian tv, the BBC, Al-Jazeera — so that those seeking data or images might find them.

Given all the potential venues, the lack of imagery — mainstream, incessant, harrowing — might seem surprising. Yes, the war is represented in any number of metaphorical and/or fictional ways: Afghanistan ground battles in E-Ring, the trauma of surviving veterans in Medium, the difficulties of “executive decisions” touched on in Commander in Chief, the fear of a relentlessly threatening environment and incompetent or malevolent authorities in Invasion, Lost, or Surface, and utterly brutal violence relegated to forensics series, from Bones and the CSIs to Killer Instinct and, in its way, Threshold. Meantime, the war as news, the war as tv in any explicit and so-named form, is rendered in brief, unspectacular, “daily installments.”

If few viewers will feel moved to mourning or outrage over the passing of Over There (whose soapy excesses were, as Landgraf suggests, “entertaining,” if nothing else), even fewer have noticed that the war appears in a more or less coherent format in Off to War, on the cable network Discovery Times. Following the experiences of the Arkansas National Guard’s 39th Brigade, from Clarksville (pop. 7,719) to Iraq, the series, shot and directed by the brothers Craig and Brent Renaud, first emerged as what seemed a one-off documentary in April 2004. Released theatrically at the Tribeca Film Festival in May 2005, this first part — following only the brigade’s six months of training and first month in Iraq — earned generally positive reviews, specifically regarding its “raw” and “painful” images of troops in dangerous limbo, unsure of their mission and anxious about the disarray in which they leave their families (who are, in turn, unsupported by the military, the federal government, or their private sector employers).

The next installments, airing under the mainstream radar on Discovery Times beginning this October, are at once extraordinary and banal. The lives revealed are recognizable, sympathetic, even unspectacular. Except, of course, when the troops are fired on in Baghdad, or engage in their own shooting, shot in night vision and muted black and white. Watching the presidential debates of 2004, the exceedingly young-seeming Specialist Tommy Erp, home on leave, can’t help himself. When President Bush insists that “the world is safer without Saddam Hussein,” Erp speaks to the documentary camera, as his mother sits quietly, her eyes resolutely trained on the television, not her son or the camera. “We’ve got no army that we’re fighting, we’ve got no causes that we’re fighting, we’re just getting killed for no reason.”

Off to War presents such exclamations of frustration as a matter of course, without sensational framing and alongside the declaration that everyone’s better off that Kerry has not been elected. But the expressions of irritation increase as the series goes on — from October 2003, when the Renaud brothers arrived to film the unit’s training at Fort Hood in Texas (where the Army employs Iraqi and Kurdish American actors to simulate “actual war conditions”), through November 2004, when the bulk of the 39th remains in Iraq for Thanksgiving — as the mission becomes more unknowable, as the terms of deployment are extended, and as the local resentment of the U.S. occupation becomes unavoidably clear. Initially assigned to Camp Taji, renamed Camp Cook, the troops are at first both philosophical and understandably worried. “If the tables were turned and they invaded Arkansas,” goes the prevailing logic, the guys would be just as resentful as the Iraqis appear. And so, the Guardsmen make valiant efforts to show patience, as the local kids offer to sell them “sex movies” and cds and the camera moves in close on the troops’ exasperated faces.

Soldier in Iraq

The “characters” of the 39th are as compelling as any reality game show participant. This has mostly to do with their diurnal routines, that is, staying alive: “We’re getting mortared twice a day,” says Sergeant Joe Betts, gazing out into a dusty nowhere. Sergeant First Class David Short, framed in tight closeup even as his uniform and helmet make him nearly unrecognizable, instructs his men to wear their body armor, “24 hours a day, seven days a week. Whatever it takes to keep you guys alive, I’m willing to do. All hell has broken loose and it’s right outside that gate.”

Some subjects perform themselves knowingly, as wise guy observers. Specialist Matt Hertlein observes Iraqis at work: “Today is a good day because the National Guard doesn’t have to fill sandbags today.” Tommy Erp takes his digital camcorder around to make a tape to send home, pointing out “our home,” a trailer park that makes everyone feel like they’re back in Arkansas. Here we are, he narrates, engaging in typical activities to make them feel at home, “cigar-smoking and joking.”

Hertlein sits down to eat in the mess hall, assuring his presumed audience, “Mom, I know these potatoes aren’t gonna be as good as yours because they’re instant.” He instructs his little sister Megan never to join the Army: “Stay in school,” he says as the image cuts back to the family watching the tape back home, Megan laughing when Matt calls her name, “We’ll pay for you to stay in school.” And here, says Matt, holding up a carton of cigarettes for his camera, are his “Haji smokes.” He explains that a Haji is “a person who makes a pilgrimage to Mecca, but we kinda use it as a racist term.”

Such self-awareness and cynicism seem almost odd voiced by this sunburned kid, but this is the sort of conflicted, resilient self-understanding that Off to War reveals repeatedly. While the soldiers note their injunction not to speak about “anything political” in public (say, during an appearance at the local high school), in country they tend to say what’s on their minds. “What we’re supposed to be doing here is Security and Stability Operations,” says one sergeant. “But we basically have just tossed that right out the window. There’s no security here and no stability here. It’s basically a full-fledged very hot combat zone.” They argue over Abu Ghraib (“This is exactly why the Iraqi people stay pissed off at us”), they complain about being spit at. When they learn another unit has opened fire on a boy who pulled out a cigarette lighter shaped like a gun, leaving three Iraqis dead, the response is practical and weary. “People are gonna be pissed off,” notes a soldier. “It’s unfortunate, but they gotta know that we’re serious about this.” That is, you can’t be pulling out lighters at troops and thinking you’ll survive the encounter.

Sergeant David Short explains his own attitude to the camera, hunkered down in his helmet and goggles:

“I’m not into going out and meeting the people, you know, pressing the flesh, you know, trying to help ’em out and see what their needs are. They haven’t given me time to see what their needs are, because they won’t quit attacking me long enough for me to find out what their needs are. I really wish we’d a trained more for combat operations: train for the worst, expect the worst. Well, we trained for the best and the worst happened.”

Back home, Off to War’s subjects seem more constrained to appear “upbeat,” despite the fact that their troubles are profound — bills unpaid, kids depressed, women lonely and fretful. As Amy Betts puts it, rather perfectly, “The hardest part was when he actually got in Iraq. I quit watching the news completely because I just can’t handle it.” An injured vet is hospitalized for repeated surgeries, his wife noting his “new jaw” and the rods needed to hold his legs together. When Betts — an ordained pastor — is sent home for three months owing to a back injury, he observes that it probably saved his marriage, which has indeed appeared rocky during phone calls with Amy. The series shows Hertlein’s sister Megan playing softball, his grandmother in attendance. She smiles gently, sits deep in her lawn chair. “We can’t help but be proud of Matt,” she says slowly. “We don’t want him over there, we don’t exactly understand why they’re over there, but we’re proud of him.”

What’s most striking, perhaps, is the way the troops make connections between home and the war, make sense of their new lives because they must. Hertlein jokes, sort of, that he’s about to go out and “interact wit the locals. Hopefully, they won’t throw any rocks at me because if they do, I brought plenty of ammo,” at which point the camera tilts down to show his handful of stones. As a truckload of Iraqis drives by, all yelling and gesturing, Hertlein sighs, “It’s like being in Clarksville where all the Mexicans live at… You can’t understand a word they’re saying.” And with that, he resituates himself and his fellows: “Invading a foreign country and threatening its people: what could be better?”

Unheralded, largely unseen, Off to War shows the war in and against Iraq as a function of images and preconceptions, as the troops bring their prejudices, desires, and hopes. But it also shows, in a more nuanced and disquieting way — for those paying attention — the ways that war is waged by repression and reframing as much as it is by aggression. Engaging and disturbing, Off to War is hardly a whole story, for anyone. But it is one of the many ways that the war is represented and contained.

Image Credits:

1. American Soldier

2. Soldier in Iraq

Please feel free to comment.

Editorial: Mommy, Where do Presidents Come From?

Commander in Chief

ABC promised that this fall, a woman would be President — if, that is, we would be so kind as to tune in on Tuesday nights. Commander in Chief has been alternately praised for and accused of being a dry-run for Clinton: Part Deux, but the series pointedly takes several stabs at Hillary with as much force as its self-congratulatory feminist framework can sustain. Sure, C in C‘s concept allows for the pleasurable mobilization of a lot of “what ifs?” (What if an Independent were to take office? What if the First Lady were a he? What if the President’s children happen to be unusually good looking?), but few of these map convincingly onto Hillary ’08.

All of which is not to say that Commander in Chief suffers from any lack of self-awareness or self-importance; the first few episodes are so giddy with the heady intoxication of “making History” that they should not be allowed to operate heavy machinery — the first Independent President! the first female President! maybe the first black Vice President! (whoa, too much history, back up!). But the show isn’t as interesting for the questions it answers as for the questions it poses — intentionally or not. Scrape off the generous slathering of Velveeta and Commander in Chief reveals itself to be less about who we want the president to be than about what we want them to be.

In The Paradoxes of the American Presidency, Thomas Cronin and Michael Genovese identify numerous competing and conflicting demands and expectations that are placed on the office of the president. The book questions what kind of Commander-in-Chief Americans want: “strong and innovative leader or someone who primarily listens to the will of the people? A programmatic party leader or a pragmatic bipartisan coalition-builder? A president who exercises power forcefully or someone who establishes consensus before doing anything?” Not surprisingly, the answer is, rather problematically, C) All of the above.

The navigation of such binary demands structures both Commander in Chief and The West Wing, that other “I wish this was my President” show, but the relative nuance of the latter often obscures these tensions at work. For TWW‘s President Bartlet, this conflict is embodied and resolved in a continuous internal moral struggle. Where Bartlet simply is his Presidency, C in C‘s Mackenzie Allen must craft one from scratch, and the show doesn’t hide (or is less capable of hiding) the scaffolding of this construction from us.

Allen’s bumpy presidential journey finds no mirrored internal existence, but rather is externally grafted directly onto her navigation of the travails of motherhood. It is in the show’s collision of west wing and east wing that the two jobs are brought into mutual relief; the conflicting demands on the presidency are positioned as comparable to those on mothers. Both require the unending oscillation between soothing and stern, lenient and restrictive, active and reactive, and most importantly, the instinctive knowledge of when to be which. While surveying hurricane damage in Florida, President Allen is interrupted with news of a major national threat while reading a book to a group of children; she ends story time immediately, her transition decisive and innate (hmmm, and the non-partisan gloves come off…NOW).

In a veiled summation of Commander in Chief‘s “Rules for Parenting/Presidenting,” a Secret Service agent is reprimanded for allowing the Allens’ eldest daughter to sneak off with a boy: “Do you have kids? They’re always asking for things you can’t give them. Not because you can’t or you don’t want to, but because you know better!” According to C in C‘s internal logic, the president, like your mother, should be that person who just knows better — when the country should be allowed to stay up past its bedtime, and when we should be sent to our room without dessert.

Comments welcome!

Image Credits:

1. Commander in Chief

TV in the Season of Compassion Fatigue

The Astrodome

On the fateful Monday that Hurricane Katrina was passing through New Orleans (before the levees broke, when the biggest question seemed to be whether the Superdome’s roof would blow off), some friends and I were in a Marriott Hotel in the Florida panhandle. Like thousands of other evacuees, we were tracking Katrina’s progress via television through the city we had left behind. The storm was so large that even in Florida it was very rainy and windy, and groups of people spent the whole day more or less watching the large-screen tv in the hotel lounge. In mid-afternoon a cable television meteorologist reporting live from a semi-sheltered Canal Street doorway dramatically announced that he was going to make his way to a mailbox out on the street. His announcement drew mixed cries of “No!” and “Yes!” from my viewing cohort as people set aside their drinks to devote their full attention to the screen. The meteorologist-stuntman combat-crawled his way out to the edge of the sidewalk and gripped the mailbox, bits of which were blowing away, and it looked as if he might join them at any moment. He made a few observations to the camera, then attempted to regain the safety of the doorway — nearly there, a fierce gust suddenly blew him off his feet, sending him into an impromptu somersault into a wall, and with that it was back to the studio. As conversation resumed and a collectively held breath was released in front of the tv, a teenage boy stood up to leave, but as he did so he momentarily blocked the screen, turning to face our assembled group. “That,” he informed us, “was awesome!”

I had intended to cite this anecdote of spectacle and spectatorship as a reminder of how we used to watch cable news and weather broadcasts before the terrible aftermath of Katrina, supposing that things may be a little different now. Then again, perhaps they are not as different as we would think. In a year that began with the Asian tsunami and marked its midpoint with the London bombings, mainstream television’s coverage of disaster has intensified this season with Hurricanes Katrina and Rita and the earthquake in Pakistan. As I write, Wilma, another major storm, is massing in the gulf, a wooden dam in Taunton, Massachusetts is threatening to give way after record-setting rainfall and all the networks are hyping avian flu as an imminent pandemic. I suspect I am in the majority when I turn on the news in the morning, wondering what new disaster I will learn about.

Miami County, KS Emergency Management

Television’s narratives of spectacular environmental disaster this season invite attention to climate change, car culture, overdevelopment, and the perils of neglecting an underfunded and aging public infrastructure. They also provide a particular opportunity to examine our own emotional relationship to the medium and to reflect on the ways in which (non-fiction) television disaster narratives constitute epistemological evidence for a wide variety of social constituencies. Fringe groups interpreted the satellite image of Hurricane Katrina’s vortex as resembling a giant fetus, a swirling reproach to a post-Roe v. Wade America. Others with a residual investment in the Cold War saw significance in the names Hurricane Ivan and Hurricane Katrina, while anti-Semitic groups claimed that Israel’s designs on the port of New Orleans for weapons smuggling had instigated a divine retribution. Many of these interpretations build an ideological barrier between the storm’s victims and other Americans, imagining a punishment of various causes but always with the same real-world consequence, delivered against the predominantly black, urban underclass of a singular American city.

Watching television this autumn has made me wonder if it might be the right time to revisit the notion of “compassion fatigue,” a term explored by Susan Moeller in her eponymous 1999 book. Moeller largely focuses upon crises and catastrophes outside the U.S. and the factors in play that work to mute American public response, particularly as wars, famine, disease, etc. are represented in terms suggesting these problems are intractable and inevitable in societies other than our own. Yet her claims retain much of their currency in a season when one disaster has displaced another in the public imagination with a rapidity that threatens to overextend our attention span and emotional limits. Moeller’s arguments might also be re-cast for a time when the “elsewhere” of foreign disaster coverage is situated domestically — Katrina put terms previously associated with foreign disaster (“refugee” and “evacuee”) into the vocabulary of American experience.

In the first decade of the twenty-first century, domestic disasters are emerging as staples of U.S. media coverage, and some of the factors cited by Moeller are inapplicable to these news stories. However, it is clear that sustained and systemic coverage of post-spectacle catastrophe is still deemed “difficult” within the broadcast media, and the chicken-and-egg problem of whether audiences reject such reporting or news organizations reject it on our behalf remains largely unaddressed. Many of the neighborhoods in post-hurricane New Orleans are quiet places with vast areas of destroyed and damaged residential property, closed stores, and no electricity. In a sensationalist media culture they are perhaps particularly unrepresentable. It is significant, no doubt, that the most high-profile New Orleans story in October involved the on-camera beating of a black man by police in the French Quarter — not only did this story have clear precedents tracing back to Rodney King, it also complied with the representational codes of sensation and violence that drive the news media. The case affectively substituted anger for despair and also matched our affinity for blunt problems of law and order rather than the more composite concerns of resource management and reconstruction.

Of course, it might be pointed out that the issue is less one of compassion fatigue than of simple compassion and there would be various elements in the reporting of Katrina to support that view. One might think of the desperate attempts of rooftop-bound hurricane victims using the U.S. flag to signal for attention from passing helicopters (thus effectively claiming their own citizenship status and symbolic integration with a nation that has reinforced its connections between citizenship and patriotic iconography since 9/11). At those moments, it seemed, the victims acted from the belief that a demonstration of their ideological worth would enhance their chance of rescue. One might also recall Barbara Bush’s comments at the Astrodome suggesting that many of those being sheltered there were probably content to be housed in a sports stadium since they lived impoverished lives anyway.

The regularization of catastrophe this autumn challenges us to sustain a compassionate relation to disaster even when television maintains an exploitative relationship to it. While several cable news outlets have slightly expanded their follow-up coverage of Hurricane Katrina and a few have produced hard-hitting investigative pieces, the focus this season remains on the terrible thrill of disasters in progress.

See Also:
Tara McPherson — “Feeling Blue: Katrina, The South and The Nation”
Douglas Kellner — “Hurricane Spectacles and the Crisis of the Bush Presidency”

Image Credits:

1. The Astrodome

2. Miami County, KS Emergency Management

Please feel free to comment.

TV Revisiting TV: Why TV Does the “Remake” Better than Movies Do

Bewitched Remake

This summer I had the great misfortune of paying money to see the remake of Bewitched on the big screen. Normally I don’t take to filmic versions of TV shows, or even continuations of TV shows in my local theater (a la The X-Files, Serenity). My primary reason has been that since Hollywood remakes of films rarely work, why should I trust the industry to remake a TV series? Still, I had hopes for Bewitched (Will Ferrell, Nicole Kidman, Nora Ephron — and a robust original source). I was sorely disappointed (an assessment clearly shared by Amy Sherman-Palladino of Gilmore Girls, who has been skewering Ephron’s film this season every chance she gets).

“Remakes” are nothing new in the world of TV, either — but the “re-making” we see tends to be of a different sort than what occurs in film. Occasionally we do indeed get the literal remake (Night Stalker), but more often we get one of three versions of the remake: 1) the quasi-rip-off (Invasion=Lost, kind-of-sort-of with aliens; Commander In Chief=West Wing, kind-of-sort-of with a woman), 2) the second cousin (all versions of CSI, Law & Order), and 3) the re-visitation (The X-Files as a spin on and re-examination of the original Night Stalker — which raises the interesting conundrum of whether the current Night Stalker is a remake of the original, or a re-visitation of The X-Files‘ re-visitation).

I lay out these trains of thought because I would like to examine how it is that TV is able to do just about any of the three kinds of remakes above more successfully than Hollywood films do TV remakes (in general — I did like The Brady Bunch movie and the first Charlie’s Angels, and all things Muppet-related tend to rock) — and to ask why Hollywood even tries. To begin with, let us put aside the “crasser” reason of Hollywood going after old TV shows for profit, since television certainly engages in this logic as well. Good reasons still exist for remaking a TV series (or an old film, for that matter): the original sucked but the premise showed promise; the original left something significant unsaid; or the driving force behind the original premise remains or has returned. These reasons often converge historically, making room for the successful and purposeful remake. Thus, I politely disagree with one of Owen Gleiberman’s explanations for why the movie Bewitched failed: “When you strip it [a TV show story] away from its era…what you’re left with is the premise without purpose” (“The Big Screen Gets Small,” Entertainment Weekly #844/845, October 14, 2005: 27-29). While Bewitched the TV show certainly was a distinctively 1960s show, the central force driving its premise remains with us today: women still live with a cultural pressure that encourages them to “hide their true powers” — especially when attempting to become romantically involved with a man. While this pressure is nowhere near what it was in the 1960s, nor is it of the same kind, it is a pressure many women can relate to, one that might have been successfully translated into the movie remake.

Or perhaps the problem is that many in Hollywood still don’t “get” TV… More precisely, perhaps it is that films, with their roughly two-hour limitation, cannot recapture what it is that any TV show does best: capitalize on the advantages of the series aesthetic. Gleiberman offers this explanation when he points out that viewers don’t seem to be clamoring for big-screen versions of those shows that have been most firmly wed to the series aesthetic of slowly building character and story (he mentions shows such as Mary Tyler Moore — and, ironically, Dallas, which is being talked about as a movie now). These aesthetic elements exist at the very core of TV as a primarily series-driven medium, and they allow for the time and room necessary for a successful remake of an earlier show. It appears that, still today for some, TV is the “bastard child” of film — seen as simply “smaller” and therefore less sophisticated in what it might achieve. Certainly a film can do it better! This, as Gleiberman astutely observes, misses the point of TV for many viewers: “Our whole relationship to a dramatically rich and vivid television series, the way that we live with the characters for 5 to 10 years, their quirks and wrinkles deepening week to week, isn’t really translatable to a movie” (29).

While I could speak for ages about how often cultural critics miss this key element of TV (we all, I am sure, have heard the two famous lines “well, I haven’t seen it, but…” and “all I needed to see was one episode to know that…”), I would like to instead look briefly at two current TV shows’ versions of the remake, inviting discussion about whether or not you agree that these are successful — and perhaps about other remakes out there we have not caught onto just yet. I will start with Bewitched. Yes, that’s correct — Bewitched. Beginning last season with the episode “Anything You Can Do,” in which Lynette (played by Felicity Huffman) attempts to help her husband with his ad campaign, I do believe we have been watching Desperate Housewives revisit Bewitched — at least with this one character (and with the nosy neighbor). Lynette gave up her powers (at the ad agency) for the sake of home and husband, but clearly she is itching to wiggle her nose and get back to work (paid work, that is). And indeed, this season to date has focused on what happens when she returns to the work force and her husband Darrin (sorry, Tom — equally bland) has to keep an eye on the kids. Such plotlines could not have been fully broached in the 1960s — we could only occasionally see Samantha try her hand at an advertising gig in lieu of her husband — and it appears that viewers today find the questions such a trajectory raises compelling. The slow unraveling of plot and characterization at work in a series such as Desperate Housewives has the potential to add nuance and detail to the issues of being a talented woman who is married with children in a way that two hours of campy film cannot.

The second remake I see at work is nicely doubled. While Malcolm in the Middle is still on the air, there is a slight remake/quasi-rip-off going on with Everybody Hates Chris. Compare the two pilots if you don’t buy this, but even thematically the core is similar: one child of three enters a new school situation because of a mother’s desire for his education to improve; he becomes friends with an outcast and is an outcast himself. The more intriguing remake at work in this show, however, is the manner in which it is a re-visitation of The Cosby Show. Now, The Cosby Show hardly sucked the first time, but the other two reasons for a remake I listed earlier are at work in Everybody Hates Chris. One of the central forces driving The Cosby Show was the general issue of the status of race relations in the U.S. during the 1980s, which included White people’s perceptions of Black people. However, The Cosby Show also left many things unsaid about being Black in the U.S. (and especially in the New York City area) in the 1980s. This is not to say that The Cosby Show failed or that it “should have” said everything. TV has generally been good (sometimes to a fault) at giving viewers as much as they can handle. I am simply observing that Everybody Hates Chris — set in the New York City neighborhood of Bedford-Stuyvesant during the 1980s — is revisiting the socio-cultural landscape of The Cosby Show, saying some of what was left unsaid. In this story we see the glaring discrepancies in school funding that existed across racially segregated school districts (an issue that resonates as well today); we see some of the realities of racial tensions in the New York City area that caused havoc for people living in that region (and eventually beyond) in that decade; in short, we see much of the racism and correlated class issues that existed right outside the Huxtables’ brownstone. It’s a story worth revisiting, in my opinion — and worth revisiting in a series specifically.

(l) The Cosby Show and (r) Everybody Hates Chris

Critics of TV are often quick to complain that “all” TV does is recycle what it has already accomplished. And I am the first to whine and moan when a new show does so unsuccessfully (or a film, for that matter). But it is worth considering the possibility that some ideas and situations need to be revisited in order for TV to continue to operate as a cultural forum. We humans often prefer that stories which make us uncomfortable remain untold; we often need time to have passed in order to examine our shared situations (witness M*A*S*H). The successful remake can help us in this regard — and can keep us from ignoring when difficult socio-cultural currents have re-emerged. The series aesthetic is especially well-suited to such “examinations through entertainment.” While film can do much in this regard as well (watch Stand By Me followed by Boyz n the Hood), some things just might be better left to TV.

Bewitched review
Everybody Hates Chris
Entries on Bewitched and The Cosby Show

Image Credits:

1. Bewitched Remake

2. (l) The Cosby Show and (r) Everybody Hates Chris

Please feel free to comment.

Television and the Work of Mourning

Nate Fisher Jr.

Peter Krause as Nate Fisher Jr.

“You gotta go through all of the necessary stages of grief. That’s how you honor what a person actually meant to you.” — Nate Fisher, Jr. in “In Case of Rapture”, Episode 2, Season 4 of Six Feet Under.

I had not planned on writing this column. Of course, I had planned on writing a column, just not one about missing a television series, particularly one whose episodes I can easily access on DVD and online. But you can’t always plan where life is going to take you, even when you know that a significant portion of it is coming to an end. And throughout the past five years Six Feet Under held a significant place in my life. It was that one show that I tried to never miss, the one that truly compelled me. When Alan Ball announced in 2004 that he would end his HBO series, fans prepared with predictions and all of the other chatter typical of fan boards. Much to my chagrin, I would have to accept the loss of what one friend of mine called its “own special brand of fucked up melodrama.”

So as the fifth and final season ended with the Fisher family in mourning, we were somewhat prepared. To loyal audiences, Nate Fisher Jr.’s death wasn’t terribly shocking. After all, Nate had had a near-death experience that began at the end of season 2 and carried over into the beginning of season 3. And it was never clear whether or not his arteriovenous malformation had been sufficiently corrected. Because the Fishers specialized in dose after dose of denial, anger, bargaining and depression, the three episodes of televised grief that followed Nate’s passing weren’t even that shocking. If anything, it was appropriate that in the final episode of the final season we see the Fishers accept the loss of Nate through an impromptu commemoration of his life. The gesture provided as much closure as one might expect from a series that specialized in providing audiences with that occasional unresolved death, the kind that reminds us that narrative is the necessary frustration for those who remain.

Six Feet Under: Nate’s Burial

Yet, after the series ended, what surprised me was how much I missed the show. And it wasn’t just me: friends and acquaintances acted as if they, too, had lost a significant portion of their lives. As a media scholar, I had never underestimated television’s significance. What pages of research and speculation had not explained to me was why I would begin to compulsively review old episodes and settle into a familiar position on the couch in search of something to take Six Feet Under‘s place. Indeed, I was reminded that television, when it is important, brings to our lives a sort of mystical combination of everyday relevance and predictability that reveals the prosaic as simultaneously ordinary and illuminated. When a program is at its most noteworthy, even the most pedestrian elements of life shine through. Since I moved to Ohio, Sunday night has meant that I would watch an HBO drama after my preps for Monday’s classes were finished. But Six Feet Under, more so than any other HBO “Sunday Night Program”, became a part of my life. Yet I am not here to praise Six Feet Under; I am here to mourn it.

Mourning, grief, that part of the human experience that most of us must endure in order to heal after a significant loss, contains complexities of memory that resist language and conventional “understanding”. Reviewing Derrida’s The Work of Mourning, Sorcha Fogerty notes that Derrida accepts:

“the challenge of making the impossible a possibility in mourning; i.e. (i) invoking the possibility of an interiorization of what can never be interiorized (in that the dead are both ‘within us’ but ‘not ours’); and (ii) establishing a language for the unspeakable work of mourning, of how to mourn and how to speak in mourning, how to contend with the intolerable choice between what appear to be the two betrayals of silence and speech. This leads to the central paradox of the work of mourning: that success fails and failure succeeds. This typically Derridean contradiction indicates that if we achieve in some way the successful interiorization (but this is impossible) of the other, we in fact fail, because then the other is no longer other, we are no longer respecting the other’s ‘otherness’ if we somehow draw the other into ourselves. And conversely, if we fail (which we are bound to), we succeed, because we have retained respect for the other as other.”

If this paradox helps explain anything with regard to television production, it may help us understand why so few spin-offs of “dead programs” ever measure up to the programs to whose legacy they are attempting to add. Perhaps the failure of a show like AfterMASH is not so much a confirmation of poor writing and less-than-compelling characters as it is a confirmation of the strength of its parental text. Indeed, successful spin-offs effectively distance themselves from their origins: Frasier succeeds in its failure to become yet another Cheers. Frasier invoked its past; however, its legacy depended on sufficiently exteriorizing its efficient cause so that it could become something sufficiently other than its textual universe.

I don’t want to spend much time debating the above proposition. Frankly, that would be the work of a lengthier paper, and fortunately, this column is not that. I am much more interested in exploring how media institutions address the many acts of memory that are fundamental to the experience of significant loss. To be sure, television contains multiple lessons about how we deal with the past. Yet, to invoke Raymond Williams, perhaps we should continue to look at the practices of social communication to understand what the development of televisual communication systems provide us:

“The true basis of this system had preceded the developments in technology. Then as now there was a major, indeed dominant, area of social communication, by word of mouth, within every kind of social group. In addition, then as now, there were specific institutions of that kind of communication which involves or is predicated on social teaching and control: churches, schools, assemblies and proclamations, direction in places of work. All these interacted with forms of communication within the family” (1974, 14-15).

If the work of the family and the church has been the primary site where the consideration of passing and loss took place, perhaps we need to think through what our many reactions to media change can teach us about our status as social animals of the late 20th and early 21st centuries.

It is no secret that by studying television we can learn how we have invested in the past. Derek Kompare points out in his book Rerun Nation that numerous historical and institutional issues have been fervently negotiated so that past television programming can be presented as a fundamental and valued commodity in American television culture. Yet the manner in which media and memory are processed goes far beyond the printing of DVD box sets and the process of off-network syndication. Online spaces such as Television Heaven claim “to preserve the memory of television programmes both past and present that the writers/reviewers either consider to be true classics, or have a lasting influence on what we watch or how we view the world around us.” And TV Land’s Caught on Camera web page promises us that we can “Hear the juiciest stories, relive the most touching moments, and find out some of the quirkiest facts — all straight from the source.”

TV Land’s ‘Caught on Camera’

If “celebrity reunions” provide audiences a chance to celebrate a past televisual memory, we should not forget that the verb “mourn” has an etymological connection to the Ancient Greek term mermēra, a term that means “to care or cherish”. Such reverence is evident not only in the letter-writing campaigns and online petitions circulated by fans in attempts to save their favorite programs, but also on the many chat boards. These online testaments are often maintained by producers with a vested interest in preserving a space for memorial. And if all this investment in “cyber cemeteries” feels just a bit uncanny, perhaps it is because there is a fine line between nostalgia and commemoration. Yet while the former longs to return home, the latter is mindful that there is no possible return as it calls the past into the present and ritualistically moves forward.

Allow for the possibility that a fan board could act as a sort of ritual space where tribute and longing intermingle, where numerous memorial acts may be composed. Take, for example, the following post about the 1980s television program Frank’s Place, from a JumpTheShark.com board regarding the program:

“I’ve been in mourning for the past 14 years for the best show ever aired. I petitioned my cable company to have BET placed into the line-up because it was the only place where I could see Frank’s Place. Unfortunately, by the time they complied, it was no longer being shown on BET. It was intelligent and wildly funny, unlike most series with predominantly black casts on today. Wish this show would be available on DVD because a visit to the Chez would be like a trip home.”

While the quote reveals a wish to go home, we should not simply conflate it with nostalgic longing. The post also reveals that the viewer of Frank’s Place, like the mourner who lacks photographs and letters of a loved one, lacks the convenient mnemonic devices that many mourners utilize to move through the process of loss. Of course, I am not claiming that by observing how we react to the loss of a television show we necessarily gain a finer understanding of what it means to mourn our brothers and sisters. Rather, I do believe that an honest observation of the way we react to the expiration of a television show offers us another chance to understand the complexities involved in the institution of television as a part of our social fabric.

All of which brings me back to the question of what it means to “miss television”. When I informed one of my colleagues about the possibility of writing this column, she reacted by saying, “I think you should. I mean, I miss Buffy even though the final season kind of sucked and I have every available DVD.” Indeed, that was the very sentiment that confused me: even though I have every episode of Six Feet Under, I still miss the show. More specifically, I miss the show’s particular rhythmic presence and its ability to predictably surprise me with questions about death that I simply would never have asked.

For my money, the character I will miss the most is Nate Fisher, Jr. As the heart of the show, Nate grew from resentful to accepting over his five-season span and moved through more melodrama than anyone short of Job. Indeed, as the following testimonial posted on an HBO-maintained Six Feet Under fan board indicates, I am not alone:

“Our family will really miss the Fishers this next year. For the past three years, my teenage son (while in the ninth through twelfth grades) has said that his role model for being a man is a combination of Nate and David–and I am pleased with his choice.

“Nate was a stand-up guy. He didn’t want to be a funeral director; yet, when his family’s finances were in peril, he decided to do what he didn’t wish to do — but to do the ‘right thing’ and pitch in. And he found that he was empathetic and advocated for those grieving (in his profession and in his life as well) in a unique, skilled way. He was horrified at Brenda’s betrayals but tried to understand. He ‘did the right thing’ despite not loving Lisa and married her and TRIED to love her, tried to make their marriage work, despite her distance, her obsessiveness.

“Yes, he acted like a grieving person. A theme of SFU is about how we grieve. And we behave in ‘an unusual manner’ during grief.”

The writer ends the post, after substantially rehashing the past narratives, with, “Thank you for the best show we have ever seen. Thank you all.” One senses that our writer has finally achieved some sense of closure, no matter how awkward and forced it may seem. But then again, so is grieving.

Work Cited:
Williams, R. (1974). Television: Technology and Cultural Form. Hanover, NH: Wesleyan University Press.

Image Credits:

1. Peter Krause as Nate Fisher Jr.

2. Six Feet Under: Nate’s Burial

3. TV Land’s ‘Caught on Camera’

Please feel free to comment.

Laughs and Legends, or the Furniture that Glows?: Television as History

1956 Melbourne Olympic Games

1956 Melbourne Olympic Games

2006 marks the fiftieth anniversary of broadcast television in Australia. It was launched just in time for the 1956 Melbourne Olympic Games.

The anniversary has provoked a flurry of events in this country. Among them is a national conference to be held in Sydney on the history of TV in Australia.

With colleagues Joshua Green and Jean Burgess I’ve been preparing a paper for this event. There will be plenty of contributions on the development of the industry, programming and audiences, so the idea we’re working on is not to trace the history of something on TV, but instead to look at television as history in Australia.

No origin; no “it”
One trouble with “television as history” is that it’s not a coherent object of study. TV is one of those things that isn’t really an “it” at all. It doesn’t have an essence, either technically or as a broadcast system, so “it” was improvised, emerging as the work of many hands, individual, corporate and governmental, over a lengthy period.

TV history was and remains strongly national. There’s even a whiff of competitiveness that plays itself out through the public record. For instance, Wikipedia plays up the US contribution. There is no doubt that the most influential and widespread forms of broadcast programming and formats, from news to sitcom, originated in the USA in the 1950s. But TV was up and running as a scalable broadcasting system in Europe well before then. Key inventions came from Germany and Britain, while TV as we know it today (that is, a corporately owned variety medium playing for leisure consumption in the early evenings to families at home) was launched in Britain by the BBC on November 2, 1936. The US system launched in 1941 (when Europe was at war but the USA wasn’t).

Such national differences mean that any anniversary is pretty arbitrary, even if you concentrate on the launch of broadcast systems as opposed to technical inventions. Thus, 2006 is the 70th anniversary of broadcasting for the Brits; 69th for the Germans, 65th for the USA; 54th for Canada; and so on up to Bhutan, where TV is six years old.

Each of the pioneer countries developed different standards, including internally competing ones. Television was invented twice in various countries, like the USSR, which established electromechanical TV as early as 1931, but then re-started with imported cathode ray tubes in 1938-9.

The context of viewing was also not uniform. The BBC targeted a domestic audience in order to boost receiver sales, which meant in effect that the very first broadcast TV audience was confined pretty much to electrical retailers. The BBC scheduled programming specifically for them during the afternoons, so that they might demonstrate the sets. Meanwhile television was launched in Nazi Germany as a public medium, projected in TV viewing halls.

Australia sat this history out, importing existing technology, system and product. TV was launched in New South Wales and Victoria in 1956. But it didn’t reach the other mainland states until 1959. Tasmania and Canberra waited until the early 1960s and the Northern Territory did without it until 1971.

Academic history
Academic histories of television are less common than you might think, especially histories of programming as opposed to broadcasting systems (Alan McKee has made this point). With few exceptions the academic study of television is stuck in the endless present tense of scientific or policy discourse, pondering questions of effect, behaviour, technology, power and profit.

There are histories, of course, and excellent scholarship, but such work is rarely at the cutting edge of the discipline. Indeed, that is why media scholar Liz Jacka organised the upcoming conference in the first place, because the neglect of television history is especially pronounced in Australia.

Cultural Institutions
Academia is not alone in this regard. Given that watching TV is the most popular pastime in the world and in all history, it is surprising how little the major institutions of cultural memory have taken any notice of it. Museums, galleries and archives that pretend to national status have almost completely ignored it. Television as cultural history is strangely elusive.

On the whole, where they’ve noticed it at all, cultural institutions have not been kind to television. After all these years there is still too much of what Roland Barthes once called “either/or-ism”: Either Cultural Institutions, or the dreaded Tube, viz.:

Cultural Institutions | TV
institutions of collection | medium of diffusion
public | commercial
memorialise art and culture | memorialise schlock, dreck, kitsch
city and civic experience | suburban and domestic experience
extraordinary | ordinary
art | behaviour

You know the script.

While the national institutions are a cultural wasteland if you’re interested in popular media, there are specialist museums, archives and cultural institutions. In Australia the National Film and Sound Archive (ScreenSound) has a permanent collection of “representative” TV programming. Its premises in Canberra also feature walk-through exhibitions which include sections on the history of TV.

The Powerhouse Museum in Sydney, host of the conference we’re attending in December, is planning a major exhibition in 2006 called On the Box. They’re billing it as “a spectacular exhibition examining the impact of television on the lives of Australians.” We’ll see “The largest collection of television costumes, props and memorabilia ever displayed in Australia!” and “Landmark programs and key personalities, as well as studio technology and behind-the-scenes production!” “Thought-provoking displays will explore the role of television in the community. Classic Australian clips will show how TV has kept us entertained for five decades.”

Even though such exhibitions are quite rare, they already conform to what Raymond Williams once called “the culture of the selective tradition.” Some aspects of a cultural form are selected over others, such that “the history of television” — where it is noticed at all — is so standardized that it has itself become a genre.

In the process, television usually becomes a symptom of something else. Part criminal, part fool, it stands for our collective fears, desires and follies. If you’re in a serious mood, it’s the history of social and cultural impact (read: negative) or cultural imperialism (read: Americanisation). But meanwhile let’s wallow in nostalgia and see the ads, comedy shows, kids’ TV and sport from, well, yesteryear. Let’s laugh at those hairstyles, cringe at those clothes, wince at how our favourite celebs used to look (pretty bloody awful if the truth be known — why did we put up with them at the time?).

Such topics also correspond to various target demographics: nostalgia and “the history of me” for the oldies; arch critique and knowing kitsch for the urban sophisticates; celebrities and games for the kids.

The Australian Centre for the Moving Image (ACMI) in Melbourne is also planning to mark the anniversary. I’ve been working with a group of researchers from QUT to assist ACMI with their plans for this exhibition. It has been fascinating to be involved in the very practical problems associated with trying to make television into history.

Not the least of the issues is a familiar conundrum for any curator or artistic director interested in popular culture — what will persuade people to switch off the TV and come in here to watch TV? It all seems counterintuitive. Immersed as everyone is in popular culture, why would anyone bother to invest time in visiting more of it?

It is really hard — so much so that I haven’t discovered an instance of it yet — to find an exhibition on television that takes the medium and its practitioners just as seriously as artists, photographers and filmmakers are taken in galleries. What would television history look like if it were curated for the Tate Modern or MoMA? (If anyone knows an example please tell me.)

1956 Television

The closest thing I’ve seen was the inaugural exhibition at Sydney’s Museum of Contemporary Art in 1991, to celebrate 35 years of Australian TV. TV Times was curated by David Watson and Denise Corrigan, and one of its exhibits was a large black box with peepholes through which visitors could spy — as if through an open fridge door and other vantage points — on a suburban couple (played by actors) who sat there watching television (and looking bored, leafing through magazines, etc.). Very Foucauldian, and an artwork in its own right.

But the MCA collapsed financially soon afterwards and had to be re-launched with a different business plan. Memorialising the popular arts in a serious way seems not to be part of it.

Television on television
There’s an odd but equally standardised genre of TV show that celebrates the history of television.

The very first broadcast in Australia (September 16, 1956) started with announcer Bruce Gyngell (who went on to head up TVam in the UK) saying “Good evening and welcome to television.” He actually did do this. However, the familiar footage that is endlessly re-shown was recorded a year later, to celebrate the first anniversary of Sydney TV station TCN9.

I’m sure Derrida would have something to say about that, but in any case the die was cast. This was how you did television history on television: by faking it. It was simply a matter of promoting the station in question, and if you didn’t have the appropriate materials you just “recreated” them. And on no account did you celebrate the stars, shows or scoops of the opposition.

TV marked its 20th and 21st birthdays with back-slapping gala events in ballrooms packed with personalities. As TV matured and budgets for junketing fell somewhat, TV history shows moved out of the ballroom and into the archive. The 30th and 40th anniversaries were studio-based affairs, less about the live experience of making television and more about the content screened and the magical moments that television has provided for the delighted viewer. The emphasis was on genre divisions and viewer nostalgia, leavened by celebrity presenters making painful scripted jokes.

In 1991 Channel Nine’s 35 Years of Television made history of its own. It claimed to be the first show that covered commercial TV as a whole, not just one channel. It was presented by stars and personalities from the three commercial networks (although it complained that “the other networks” were reluctant to share their material).

Celebrations for the 50th are already well under way. For instance Kerry Packer’s Nine Network has recently aired a “special” called Five Decades of Laughs and Legends, on the curious grounds that we are now inside the year of the anniversary (tell that to someone who’s 49 and one month!).

As Graeme Blundell (a.k.a. “Alvin Purple”) commented in The Australian, “Five Decades smacks of a grab for ratings desperately — and cheaply — fashioned from the junk pile and the banal hysteria of TV’s supermarket. But despite the less than lofty motives of the networks, the history of TV can’t help being compelling viewing.”

Junk? Banal? Hysteria? Supermarket? Hey — that’s my life! Blundell conceded, however, that “it does illustrate just how far we’ve come since 1956.” Well, yes and no.

To fill the void left by “official culture” and television itself there are the amateurs, fans, and the retired technicians and announcers from the heyday of broadcasting. They maintain museums in barns and sheds. They have migrated enthusiastically to the net. They are the “pro-am” consumer co-creators of television history (e.g. the Australian Museum of Modern Media).

The pro-ams tend to fall into two broad groups, organized around technologies on the one hand and programming on the other.

Those interested in programming tend to be the fans and cult followers (to sample, see facts and trivia about iconic Aussie soapie Neighbours).

The techies divide between “pros” and “ams.” Professionals are those who have worked in the industry and can discuss details down to the question of whether the electron beam in early cathode ray tubes swept right-to-left or left-to-right. Amateurs are those who love the furniture that glows (Television History: The First 75 Years).

The pro-ams are proving to be much more interesting and useful to the cause of television as history than the great cultural institutions of memory that soak up the tax dollar. Like eBay, their websites make accessible curios that would have been impossible to find before. And unlike “official” curators they’re really interested in TV history, in which many of them have played an active role, on both sides of the screen.

Some of them even seem to be working for broadcasters now. The BBC especially seems drawn to the possibilities.

The future of history
It’s clear that television history is not the work of one agency or even one “discursive regime” (as we used to say). The work of producing it is shared among academics, cultural institutions, pro-ams (including fans and TV professionals), and the history that emerges is different in each case, and in each country.

TV history overall still seems to be mostly “folklore” or “ideology” rather than “discipline” or “science.” Legends are spun that serve the interests of the teller, and these stories tell us more about the source of the narrative — whether a national, cultural, academic, commercial or consumerist speaking position — than they do about television as such.

But as we’ve investigated the cultural memorialisation of television it has also become clear that something new is afoot. The internet offers entirely new possibilities for TV as history, and the number of potential participants in the work of piecing it together has dramatically increased with the inclusion of the “pro-ams.” At the moment the various parties to this work have little in common and less mutual contact. But the future of television history looks a lot more interesting than its past. As they used to say: we have the technology.

Links to more “pro-am” sites:
Early television (treasure island)
Radio history (check out the recommended reading)
House of broadcasting (weird enough for you?)
Museum Victoria (an official site, but nerdily it boasts possessing the “first cathode ray tube television in the southern hemisphere”!)
Vintage Electronics Museum (a guy from Hove with a lot of TV sets)
Birth-Of-TV (a European project)

Image Credits:

1. 1956 Melbourne Olympic Games

2. 1956 Television

Please feel free to comment.

When Mullahs Ride the Airwaves: Muslim Televangelists and the Saudi Connection

Dishes and Mosque

Dishes and Mosque


“Soccer is not an illicit form of entertainment, but when practiced in violation of shariah, then it is as abhorrent as any other sin…. When we fanatically love non-Muslim players who perform the sign of the cross upon entering or leaving the field…or when Muslim players imitate the pagan dance of famous infidel players when they score, or put forbidden things on their chests, that’s not acceptable.” The author of this soccer fatwa is Saudi cleric Muhammad Al-Munajid, speaking on the set of Islam’s powerful spokeschannel, Iqra’ TV.

Until recently, sheikhs like Al-Munajid were able to reach their audience only through audio and video recordings sold on Arab street black markets. Those who preached a rigorous interpretation of Islam had a minimal impact beyond fringe groups of Arab populations, but as satellite technology becomes increasingly appealing to the religious and the secular alike, television channels with a strict religious message such as Iqra’ are quickly setting up shop. Inaugurated in 1998, Iqra’ is Saudi Arabia’s most recent and probably most effective campaign for spreading its Wahhabi doctrine, which the channel’s producers temper by saying on their website that their mission is to bring “the teachings of Islam into the homes and hearts of Arabs worldwide.” The Saudis take issue with the Wahhabi label because it makes them look less like the real Islam and more like a sect that is highly disputed in some respectable religious circles. But the systematic indoctrination of imams and financing of religious schools and mosques around the world reveal a rigid reading of Islam which forbids close interaction with non-Muslims and calls for the literal application of shariah laws across the region, including hand amputation for theft, beheading by sword for capital crimes, and the denial of any role for women in public life.

For years, Saudi Arabia has flaunted its generosity towards poor Muslim countries by building hospitals, schools, universities and mosques, even in Western Europe and the United States. According to Saudi officials, between 1975 and 2002, the Riyadh government spent more than $70 billion on Islamic projects around the world, excluding the millions of dollars volunteered by Saudi charity foundations and unidentified philanthropists. An estimated 80 percent of mosques in the U.S. are funded by Saudi Arabia, according to Sheik Muhammad Hisham Kabbani, head of the Islamic Supreme Council of America. While the funding of mosques and the ideological direction of those who frequent them do not necessarily correlate, the influence of the Saudis over the content of the sermons, the training of imams, and the substance of Islamic schools’ curricula is undeniable.

Religious spending per se is not the problem here; it is the extremist ideology promoted thanks to this cash availability that is disturbing. The voices of intransigent Islam are featured frequently on the airwaves of Iqra’, and their edicts are often consistent with the Wahhabi attempt to purge Islam of what is perceived as a foreign threat disguised as societal change. In fact, some of the messages on the channel can be extreme, like Saudi cleric Aed Al-Qarni’s recent on-air endorsement of suicide bombing. “Houses and young men must be sacrificed,” he says. “Throats must be slit and skulls must be shattered. This is the road to victory and to shahada (sacrifice). Oh brothers, the idolatrous Vietnamese, Cambodians, and South Africans….Nations with no calling or divine law make sacrifices–sacrificing people, blood, and souls. All the more so should we, the nation of Islam.” And some show moderators often appear enlightened by their guests’ revelations, as when Egyptian historian Zaynab Abdel Aziz tells a show host that the “Vatican delegated the US to carry out 9/11.”

While religious platforms such as Iqra’ do not call for jihad bluntly, they contribute to an increasingly radicalized religious culture in the Arab world, making every facet of social, cultural, and economic life a religious issue in need of a fatwa. Fatwas range from whether Muslim women must comply with their husbands’ desire in bed even if they don’t want to, to why thieves’ hands should be chopped off, to whether Muslims should shake hands with Jews. Iqra’ (literally: “recite” or “read in an intelligent way”) has found fertile ground in a region still lacking basic political reforms and jaded by repetitious autocratic and corrupt regimes. For years, religious groups in the Arab world, mostly underground, have become the only viable alternative: when the health system customarily fails in these countries, Islamic groups with disposable cash can intervene with their own doctors for free; when schools educate poorly, the same groups offer their own teachers for free. In the wake of natural disasters like floods and earthquakes, religious groups often respond more quickly and efficiently than governments to help the victims and alleviate their losses, as was the case in the earthquakes of Algeria and last year’s floods in northeastern Morocco. The failure of secular regimes to provide minimum social welfare and secure political freedom in the region has steadily nurtured a new perception whereby the state benefits the elite while religion benefits the masses.

This is why the world of Arab media seems swamped with religious messages. But Arabs have by now moved beyond the state-owned, everything-is-fine, dull television channels of the past. So, in order to appeal to a more media-saturated audience, the producers of Iqra’ are flaunting their skills by making religious preaching less shabby and threatening. The on-screen graphics and studio sets are comparable to entertainment television, but nothing is more alluring than the new look of Islamic scholars and sheikhs, who do not always conform to the conventional image of a preacher in a mosque. In fact, many of these preachers and scholars wear suits and use softer tones than usual. Some are young and do not claim religious authority, like the channel’s superstar preacher, Amr Khaled, a 38-year-old who hosts one of the most popular programs on Arab television, Sunaa al Hayat (Life Makers).

Khaled, who has become a household name across the Arab world, is seemingly an anomaly in the Saudi quest to popularize Wahhabism: he is young, a business accountant rather than a religious scholar, with a somewhat liberal and tolerant approach to Islamic preaching. His fame at Iqra’ was preceded by a long showdown with Egyptian authorities, who expelled him from Egypt after his religious lectures became spiritual revelations for thousands of well-to-do women and youth in the country. His age, modern look (jeans or a suit, clean-shaven), and use of colloquial Arabic make him accessible to a young Arab audience extremely tired of the staid, disconnected sheikhs of Islam. But what made Khaled’s message appealing to the Saudi channel Iqra’ is that it is liberal only in style and quite conservative in substance. During his lectures and discussions on the hijab, Khaled is rarely original in citing the reasons why Muslim women should be veiled. Women are the pillars of Islamic education, and wearing the veil, he says, is a selfless gesture to protect the sanctity of the faith itself: “I think that the primary purpose of legislating hijab, other than preservation of virtue, is…to remind people in the street about Islam; there will be no way better than hijab.” Islam’s integrity, he says on his show, depends on the virtue of its women, and since their responsibility in the temptation of men is inevitable, veiling is a must, even if one does not understand why. While Khaled’s message lacks originality and critical quality, his highly emotional, talk-show style delivers an innovative and soothing assurance that one can be pious and still remain modern and cool. And the Amr Khaled phenomenon has only just begun, despite already unprecedented television ratings: five million viewers tune in to his weekly show, and his web site records millions of hits daily.

By putting Khaled next to the old and conventional sheikhs, Iqra’s producers are hoping to change the moral path of young Arabs who are still deeply influenced by Western popular culture. Major Internet chat rooms in the region are teeming with testimonies, particularly from young women thanking Khaled for convincing them to put on the veil. Programming this year included not only talk shows and lectures but also dramas and cartoons. It is hard to quantify the impact of Khaled’s hip preaching and Iqra’s religious broadcasting, but religion has never been this popular from Cairo to Casablanca. At a time when political regimes in the region continuously fail their constituencies and Islam is the subject of humiliating headlines, Khaled and a wave of young preachers seem not only innovative but also vengeful, in a let’s-go-back-to-the-roots fashion. It is therefore no surprise to find Saudi Arabia at the helm of this religious revival in disguise. Though Wahhabism may never become the preferred doctrine of Muslim Arabs, its signature of uncritical, exclusionary spirituality is quickly infiltrating Arab living rooms and delaying badly needed reforms in both religious interpretation and political rule.

The 30-year-old executive manager of Iqra’, Mohammad Hammam, likes to think of his channel as serving a double mission: to counter the post-September 11 image of Islam and to guide Muslims to a better understanding of their own religion. Many of the ideas propagated from the channel’s sets, however, belie the core of this mission. If there is one, it seems to be to flood the airwaves with a fatigued interpretation of religion, simply refurbished with funky jingles and beardless preachers.

Iqra’ TV

Image Credits

1. Dishes and Mosque

Please feel free to comment.

Awkward Conversations About Uncomfortable Laughter

Mary Douglas’ “Implicit Meanings”

In her book Implicit Meanings, the anthropologist Mary Douglas explores the roles jokes play in mapping points of tension or transition within a culture. Only a thin line separates jokes from insults. The joke gives expressive form to an emergent perspective within a culture — something which is widely felt but rarely said. When a joke expresses a view already widely accepted, it becomes banal and unfunny. When a joke says something the culture is not ready to hear, it gets read as an insult or an obscenity. The job of the clown is thus to continually map the borders between what can and cannot be said. This is why a good comedy routine is accompanied as often by gasps as by laughter.

I was reminded of Douglas’s perspective on jokes when I recently participated in a screening and discussion of Sarah Silverman’s new film, Jesus is Magic. For those of you who have not heard of her yet, Silverman is a former Saturday Night Live writer who sparked national controversy in 2001 when she told a joke about “chinks” on Conan and when she defended the joke on Bill Maher’s Politically Incorrect. The Silverman controversy has resurfaced in recent months both because of a rather memorable appearance in The Aristocrats and because of the release of a film documenting her standup comedy show. She has recently been profiled in The New Yorker and Entertainment Weekly and is currently shooting a pilot for her own series on Comedy Central.

To understand the controversy, we have to return to the now-infamous joke she told on Conan in 2001. She was explaining her various efforts to escape jury duty, including her friend’s suggestion that she could try to come across as prejudiced on the questionnaire by writing “I hate chinks.” Silverman pauses, suggesting that she would be embarrassed to make such a comment, even in jest, and so instead she wrote, “I LOOOVE Chinks — and who wouldn’t.”

Guy Aoki, the president of the Media Action Network for Asian Americans, argued that the network showed a double standard in allowing the word “chink” to air when it would almost certainly have bleeped “nigger.” The network and host later apologized for the decision to air the joke, but Silverman refused to apologize, contending “it’s not a racist joke. It’s a joke about racism.” The controversy looks different depending on whether our focus is on the words used (Aoki rightly sees “chink” as a word deeply entwined in the history of racism in America) or the meaning behind them (Silverman is right that her comedy ultimately raises uncomfortable questions about how white people “play the race card”).

Writing in Asian Week, columnist Emil Guillermo argues that rather than treating Silverman’s joke as “fighting words,” we should use it as “talking words,” the starting point for discussing the current state of American racism. This is not what Aoki experienced when he tried to challenge the appropriateness of Silverman’s joke during their mutual appearance on Politically Incorrect, where the host and guests questioned his sincerity, made fun of his name, called him names, and cut him off when he tried to link the jokes to recent incidents of racial violence. And it is not what Silverman experienced when her critics simply labeled her a “racist” without exploring what she was trying to say.

How can we distinguish between racist jokes and jokes about racism, especially with the deadpan irony that is Silverman’s hallmark? Most of us have no trouble thinking of cases where jokes have been directed against minorities as a racist exercise of power. Yet we should also keep in mind the many different ways that comedy has been used to challenge racism — think about the first generation of African-American comics who went into black, white, and multiracial clubs and confronted their audiences with words and concepts that were designed to create discomfort; think about the ways that underground comics like R. Crumb sought to “exorcise” the history of racial stereotypes in his medium by pushing them to their outer limits; think about shows like All in the Family which exposed the ways that previous generations of sitcoms had remained silent about the bigotry which was often at the heart of American domestic life. And then there are jokes which are funny simply because they are “politically incorrect,” that is, because they thumb their noses at anyone who would set any limits on speech whatsoever. Perhaps most strikingly, there are jokes which deny the reality of both race and racism simply by refusing to talk about them at all. When was the last time that you heard a joke on a late-night talk show (okay — outside The Daily Show) that you remembered the next morning, let alone one which provoked debate four years later?

Critics have read Silverman’s comedy as simply “politically incorrect.” There are plenty of times when Silverman’s jokes are, to use Douglas’s definition of obscenity, “gratuitous intrusions.” Yet, at its best, her comedy reflects on the problems of living in a culture where old racial logics are breaking down, new relationships have not yet taken any kind of definitive shape, and there seems to be no established language for speaking to each other across racial lines. Her most consistent target is a white America which is so busy trying to watch its step that it falls on its own face. Several of her routines deal with the challenges of negotiating mixed-race or multi-ethnic relationships. For example, she gets upset when her half-black boyfriend objects to her “innocent compliment” that he would have made “an expensive slave” because he has “self-esteem issues,” smugly insisting, “He has to learn to love himself before I can stop hating his people.” This is after she has suggested it would be more “optimistic” to say that he was “half white” rather than “half black.” At another point, she describes a particular audience as “black,” then corrects herself to say that it was “African-American,” then decides it was “half and half.” Or again, she talks about how she and her Christian boyfriend will explain their religious beliefs to any future offspring: “Mother is one of the chosen people and Dad believes Jesus is magic.”

Sarah Silverman

Sarah Silverman

Silverman’s jokes do not in any simple or direct way represent her personal views; rather, she has adopted a comic persona (perhaps multiple personas) through which she reflects confusions and contradictions in the ways that white America thinks about race and racism, much the way some hip hop performers have argued that the views about race, criminality, and sexual violence they express through their songs are attempts to make visible some of the issues confronting their community. In both cases, critics have tended to read such personas literally. There are no words to describe whiteness which have the same sting as “chink” or “nigger” and so she has to perform whiteness, against a backdrop of other racial identities, so that it can recognize itself in all of its insensitivity and self-centeredness.

Consider, for example, a Silverman routine about her lust for a jewel which is formed by de-boning and grinding down the spines of starving Ethiopian babies. There is a level to the joke which is simply funny because of the cruel and insensitive way she is speaking about human suffering; there is another level, however, which works not unlike Jonathan Swift’s similarly themed “A Modest Proposal,” exposing the infinite flexibility with which we can rationalize and justify the exploitation of the third world. Silverman delivers the joke with what New Yorker writer Dana Goodyear calls “quiet depravity”: “The expression that lingers on her face is usually one of tentative confusion or chipper self-satisfaction, as if she had finished her homework and cleaned up her room, and were waiting for a gold star.” She doesn’t smirk; she honestly thinks she has no real prejudice or animosity even as she bases her everyday decisions on gross stereotypes. Hers is the face of what cultural critics have called “enlightened racism,” the smug satisfaction with which white Americans excuse ourselves for our own lapses in taste and judgment as long as they do not become too overt or openly confrontational. As she describes this jewel, she hits a moment of conscience, realizing that they probably exploit the “unions” which mine the babies’ spines, but then concedes, “you have to pick your battles.”

Early in the jewel routine, she describes her acquisitiveness as “so JAP,” then pausing to explain that she doesn’t mean “Jewish American Princess” (a stereotype which she has self-consciously embodied throughout the routine) but rather “Japanese.” Instantly, she moves from a stereotype which is more socially acceptable (if only because she would be making fun of her own group) and into one which is totally unacceptable (and the joke only works if we recognize the offensiveness of the word). Indeed, she plays often on the ambiguities of her own status as white and Jewish — sometimes speaking as a member of an oppressed minority, other times blending into a white majority, and often making this desire of Jews to escape their minority status a central theme in her work. It crops up for example when she makes bitter comments about contemporary Jews who drive German-made cars or when she tells a joke about Jews who want to escape racist charges of having killed Christ by blaming the Romans (and then pushing this historical scapegoating one step further by suggesting that personally she blames the blacks.)

Silverman’s comedy depends upon the instability created as we move from thinking of race in black and white terms towards a multi-racial and multi-cultural society. A previous generation of comics would not have made jokes about Asian-Americans or Hispanics because those groups simply were not part of the way they envisioned America. Much contemporary race theory has sought ways to move us beyond simple black/white binaries in the ways we think about racial diversity. As recent demographic trends suggest, America is rapidly moving towards a time when Caucasians will be in the minority, but they are not being replaced by a new majority culture: rather, America will be more ethnically diverse — some would say “fragmented,” “balkanized,” or “disunified” — than ever before, and there have been few successful attempts to build coalitions across those diverse populations.

A musical number in Jesus is Magic self-consciously maps the fault lines in this new cultural diversity: dressed like a refugee from an Up With People concert, strumming a guitar, looking her most wide-eyed and innocent, she wanders from space to space, gleefully singing about how much Jews love money, how little blacks like to tip, how well Asians do at math, and ends with a particularly choice lyric about blacks calling each other “niggers.” Then, the little white woman looks over and sees two angry looking black men who glare at her for a long period of silence; then they start to laugh and she tries laughing with them; then they stop laughing and glare at her even more intensely and for an agonizingly long period of time. It is hard to imagine a comedian who is more reflexive about the nature of their own comic practices or more insistent that the audience stop laughing and think about the politics of their own laughter.

Much of the Silverman controversy centers around what anthropologists often call joking relations: in any given culture, there are rules, sometimes implicit, often explicit, about which people can joke with each other and about what content is appropriate for joking in specific contexts. During times of social anxiety, these rules are closely policed and transgressions of these boundaries are severely punished. Yet, in times of greater security, cultures may suspend or extend the rules to broaden the community which is allowed inside a particular set of joking relationships. But who determines which jokes are safe and permissible? She openly courts such questions by appearing on Jimmy Kimmel Live, doing verbatim versions of Dave Chappelle skits. Can a white woman make the same jokes as a black man, or does changing the race of the performer change everything?

Comedy in the 1990s often seemed to be about securing boundaries, as comedians emerged who could articulate the self-perceptions and frustrations of different identity-politics groups: Asians made Asian jokes, blacks made black jokes (and sometimes jokes about white people), Jews made Jewish jokes, and white comedians mostly avoided the topic of race altogether. This places an enormous burden on minority performers not simply to speak on behalf of their race but to bear the weight of any discussion about racism. And of course, when black comedians made jokes about black people, they often did so in front of white or mixed audiences. Just as white comedians were uncertain whether they could joke about race and under what circumstances, white audiences were uncertain whether they could laugh about race and under what circumstances. Silverman has thrust herself out there, saying it is time for white comics to joke about race, and has faced the inevitable push-back for trying to change the rules of discourse.

Contemporary cultural theorists have been urging a move away from identity politics towards a politics of coalition building: race will not go away simply because we refuse to talk about it, and we cannot meaningfully change how we think about race as a society by remaining within our own enclaves. Consider, for example, Frank H. Wu’s Yellow: Race in America Beyond Black and White. Wu is an Asian-American professor who has chosen to teach at Howard University Law School, a historically black institution, because he wanted to create a context where Asian-Americans and African-Americans can learn to communicate across their racial and ethnic differences. Wu argues that for such coalitions to work, one has to put everything on the table, confront past stereotypes, examine historic misunderstandings, and give expression to fears and anxieties. We can’t work through the things that separate us until we feel comfortable discussing them together. This isn’t simply something that has to take place between different minority groups: there has to be a way for whites to express their own uncertainties about the future without being prejudged.

Jokes may fuel such social transformations because they force us to confront the contradictions in our own thinking. They are valuable precisely because the same joke will be heard differently in different contexts and thus can help us to talk through our different experiences of being raced. As Wu writes, “Race is meaningless in the abstract; it acquires its meanings as it operates on its surroundings. With race, the truism is all the more apt that the same words can take on different meanings depending on the speaker, the audience, the tone, the intention and the usage.” Mary Douglas similarly suggests that the reason our culture has such trouble drawing a fixed line between jokes and obscenity is that unlike traditional cultures, we do not occupy “a single moral order” and there are no agreed-upon boundaries.

And that brings us back to Guillermo’s appeal that Silverman’s “chink” joke might be used as “talking words.” From my perspective as a white southern-born male, Silverman is raising important questions about race and racism which white audiences need to hear if they are going to come to grips with a multicultural society. From Aoki’s perspective, the same joke evokes a painful history, using words that many Asian-Americans hear too often. At the risk of sounding naive and idealistic, maybe that’s something we should be talking about, however awkward the conversation is apt to be.

Rotten Tomatoes
The New Yorker on Sarah Silverman

Image Credits:

1. Mary Douglas’ “Implicit Meanings”

2. Sarah Silverman

Please feel free to comment.