Living Life in TiVo Time

TiVo

Like most people, it usually bugs me when I am wrong. However, this time I draw some comfort from what I now think may have been an erroneous conclusion. You see, I was afraid that the world was slipping mindlessly into boorishness. Perhaps because I have now lived in the South for a quarter of a century, I set significant store by manners. You really do open doors for others, male or female. You say “please” and “thank you” always. Someone may set every nerve in your body on edge, bless their heart, but you smile and ask how they are. I am not, however, foolish enough to believe that my adopted region of the country is any less intolerant than the rest of America. But here in North Carolina, when regrettable human inclinations do rear their ugly heads, they are usually expressed far more gently and with greater grace than I was accustomed to in my native Midwest, the brusque environs of the Northeast, or the rustic West. The New South gilds the rank lily of social discord.

So I was distressed to note, over the last few years, what seemed to be a decline in that tradition of gentility. I teach a large undergraduate class in Communication and Technology, about two hundred students. At the beginning of each semester we talk about the fact that we don’t have much time together, and that disruptive behavior deprives their classmates of the opportunity to absorb content that is, a) important to their education and will, b) in their eyes, more importantly, be on the test. I tell them if they cannot resist the urge to chat among themselves to just not come to class. I don’t want them there. It is a strategy that drops attendance, but increases the quality of my interaction with the students who show up ready to shut up and pay attention. This semester there seems to be a heightened disconnect between those instructions and class behavior. They come, but still chat among themselves with no semblance of restraint, let alone shame or remorse. They do not see their behavior as aberrant.

“Rude, foolish undergraduates,” I thought. And then I went to my graduate class. Seventeen students, most over thirty years old, most employed, adults, you know what I mean? Even in that group there are several who see nothing wrong with striking up “parallel conversations” during class. “Very weird,” I thought. And then I went to a faculty meeting — twenty or so PhDs, all of whom are deeply invested in the business being conducted. But they, too, feel entitled to address issues of concern with the colleague sitting next to them, regardless of who actually “has the floor.” “What the hell is going on!?” I thought, “Is civility dead?”

Then I realized it may have nothing to do with manners; it is all about TiVo, technology, and the fracturing of interpersonal time and space. Think about it. TiVo is not about the digital recording of video. That is only part of it. TiVo commercials tell us that TiVo is all about being able to “pause live TV.” We can be watching something unfolding “in real life” — a hurricane striking the coast of Mississippi and Louisiana, or the Hurricanes playing hockey — and then a parallel “real life” intrudes. Your spouse needs help, a child cries, the dog scratches at the door, the phone rings, whatever. No problem, you hit a button and the “live event” on the TV screen freezes. You then tend to the more immediate reality. Afterwards you return to the screen, hit a button, and resume the frozen reality.

It is an increasingly common scenario with very uncommon implications. The notion of the “here and now,” which usually seems so solid, just got a bit strange. The question of “Which ‘real life’ do you mean?” is no longer the sole property of philosophers or absurdist playwrights; it has wiggled its way into our living rooms and our classrooms, into the coffee shop and the faculty meeting.

Here is what I think is happening. Reality now flows around us in a variety of different streams. There is the physical reality of my location and the events unfolding in that location, but there are also the parallel realities outside that location that are now accessible electronically, digitally. My computer, my cell phone, my PDA, my BlackBerry, my iPod, my Bluetooth prosthesis, all let me select a preferred experience from among those intertwining realities. And TiVo goes one step further, letting me choose which time to designate as “live.”

The power to select from a rack of potential realities makes the designation of “here and now” an idiosyncratic option. I choose my reality on the fly, and utilize the communication protocols appropriate to that choice. The results are not always polite. When varying individual realities share the same physical space there is inevitable friction.

iPod Guy

Consider the person standing next to you at the metro stop who has chosen the reality of his hands-free, ear-bud cell phone. He cradles his face in his hands, moaning, “Baby, how can you say that? She means nothing to me.” You sidle down the platform a bit and sit beside a suit enmeshed in her BlackBerry. Her fingers flicker over tiny keys while she mutters phrases that sound, at the very least, confrontational — in a language you do not understand. You move again, and find yourself the unwilling partner of an iPodded youngster, moving in what you can only hope is sympathetic rhythm to the music in his head. And, as Sonny and Cher asserted decades ago, the beat goes on.

It is, I believe, this phenomenon of the unthinking selection of incompatible social realities that results in what I initially interpreted as rude and boorish behavior — in my classes and among my peers. The problem, of course, is that rude and boorish behavior is always a matter of perception. If your behavior is perceived by those in your immediate physical environment as being rude and boorish, then it is — no matter what your intention — still rude and boorish.

Social norms and mores, of which manners are an irrefutable part, have one primary function in human society — to smooth the inevitable conflict between personal inclinations and the comfort of the group. The current 21st century technology-enabled environment gives us unparalleled personal power to pick and choose the reality of the moment. It advantages the unique reality of the individual. It inclines me to “suit myself.” That invites conflict with the more social, group-centered norms of the 20th century — norms that emphasize social cohesion and personal restraint, norms with which most folks over 30 were socialized. The resultant friction is both uncomfortable and unnecessary.

What we need is a conscious reconfiguration of communicative etiquette for the 21st century. Increasingly we focus on the mechanical efficiency of digital communication systems, but at the expense of human sensibilities. We need a set of guidelines for respectful interactive behavior in an increasingly complex — from both an existential and a technological perspective — world. We need new social conventions that will simultaneously acknowledge and employ the increasing communicative power of our interactive environment, while retaining the grace of softer times. I do not know what that should look like, but I strongly advocate one guideline: courtesy. Acceptable communication in the 21st century, mode notwithstanding, should attend to the comfort of the other, every bit as much as it champions the choices and expressions of the individual.

Thank you for your kind attention.

Image Credits:

1. TiVo

2. iPod Guy

Please feel free to comment.




Digital: The Dark Side

by: Robert Schrag / North Carolina State University

Lord Voldemort

Prestidigitation: 1. Sleight of hand; the performance of tricks which by nimble action deceive the eye; jugglery; conjuring tricks.

It is a 19th century word that defines a very 21st century concern. Maybe it is because I have been feuding with my broadband provider. Or maybe it is because I have been reading reviews of Last Child in the Woods: Saving Our Children from Nature-Deficit Disorder by Richard Louv, just published by our friends down the road at Algonquin Books in Chapel Hill. I don’t know. But for some reason, for the last few days, I have been drawn to reflections on the dark side, the deceptive side of the digital world — a world that ordinarily delights me. My response to most new high-tech toys or services or sites is usually “Is that cool, or what?” And that is the attitude I most often take into the classroom. But, like Dumbledore, I realize — deep down inside — that not all wizards are good wizards. Some choose to follow He-Who-Must-Not-Be-Named. OK. I won’t say Lord V——-t out loud. But I really must assert that there is more than a little prestidigitation going on in our garden of digital delights. If we are to enjoy the legitimate fruits of the garden, we must also be willing to call a weed a weed.

The weeds in the garden are the conjuring tricks made possible by the blurring of the line between experiential reality — the touch, feel, sight and taste of the physical world — and digitized reality — the seeming touch, feel and taste of the sights and sounds of the electronic pulses transmitted to the screens and speakers of our digital world. In that world illusion seems no different from reality. That is a problem. Consider the following illusions:

The illusion of writing v. the reality of just cutting and pasting.

The end product of many academic assignments is words on a page, or perhaps words on a screen. The process for getting the words on the page has changed radically over the past decade. Even in the early days of word-processing a professor could be reasonably sure that the student’s fingers had actually pressed the keys that formed the words that ended up on the page. One clung to certain assumptions about the thought process involved in that exercise. For example, the belief that the effort necessary to physically construct the messages was, at least in part, reflective of the intellectual effort applied to formulating the content.

The ability to simply “cut and paste” from the thousands of sources presented by Google, or Scholar.google, shatters those perhaps always foolish assumptions. Today, when I sit down to read a paper, I no longer assume that students realize the difference between forgetting the footnotes and intellectual theft. The distinction between a paper merely constructed of many pastings and a work of original thought that draws upon the works of others is, for most, shrouded in confusion; and the actual difference between scholarship and plagiarism seems, to them, more an issue of “search strategy” than intellectual ethics.

The illusion of “knowing” v. the reality of knowing “where to find.”

“I do not post my PowerPoint lecture slides on the course website.” You would think I had just confessed to devil worship. Yet, there is method in my contrariness. It has become clear to me that there is a direct relationship between students’ realization that “the slides are on the web” and their behavior in class. If you post the slides three things happen:

1. Attendance goes down because “everything they need to know” is on the web.
2. Note taking goes down because “everything they need to know” is on the web, and
3. Grades go down because “everything they need to know” wasn’t in the PowerPoint slides.

Don’t post your slides and all three trends are reversed. I don’t have hard data on this, but could probably Google it for you. . .

Ubiquitous access to information causes us to confuse what we can find with what we know. The problem with this particular illusion is that we really don’t need to know much of what we can find. I don’t need to know the directions to every place in town. Mapquest can handle those mysteries. But when I am confronted with a phone other than my own and realize that Speed Dial has co-opted any knowledge I ever had of my daughter’s phone number — I worry.

The illusion of experience v. living in the physical world.

Seeing is believing — or at least it was until Photoshop and Final Cut Pro. Time was when you could look at the picture or watch the film clip and have a chance of spotting any technical manipulation of the “visual truth.” The fact that we no longer can reveals the part of “seeing is believing” that we started “editing out” back in the 50s: You are supposed to be there to see it. That’s what sideshows at the county fair were all about — you paid your quarter and went inside the tent. You decided if that was really a 5000-lb. Man-Eating Crocodile from the Nile, whether he really was The Fattest Man in The World, if her beard was real. You were there, it was “right before your eyes,” “seeing was believing.”

That has all been usurped by the voyeurism of the webcam, the wonders of streaming video and the downloading of DVDs. The problem is not simply the exponential increase in the potential for deception and fraud. Perhaps even more problematic is the relocation of experience — from the world around us to the screen in front of us, and the resultant inability to gather any physical verification of the experience that constitutes our life.

The illusion of activity v. actually using major muscle groups.

It’s not just the “First-Person Shooter” games. I wish we could find a way to confine the “skills” developed in those frightfully realistic simulations to images flickering benignly on a screen. The larger issue is that we can mimic anything on a screen. Racing — auto, motorcycle, skis, skateboard, whatever. Football, basketball, baseball, streetball, wrestling, boxing, hockey. You name a human activity and I will lay you better than even money there is a video game out there that reduces the complexity of the activity to the twitching of eyes across a screen and the ceaseless movement of thumbs and joysticks. Body held in suspended animation.

While Louv was researching Last Child in the Woods, a fourth grader told him, “I like to play indoors better ‘cause that’s where all the electrical outlets are.” The first graders probably all know about cutting edge battery technology and wi-fi. Still, it is an unsettling quote. It is, I suppose, inevitable that in an information-based economy, work becomes sedentary to the extent that we are tied to screen and keyboard. But there is something terribly depressing about the idea that play has become passive. It is like losing Neverland.

So, is this a Luddite rant? Do I lose the laptop? Burn the Blackberry? Knock out Bluetooth? Mangle my Mac? No. Not at all. But we need to realize that digital space is both powerful and different. It enables a magical world peopled by wonderful opportunities to enrich our lives, and by an unprecedented power to deceive others and ourselves. Digital experiences and technologies are not the same as tactile, “in the physical world” interactions. Whether events, interactions and experiences occur in real or digital space profoundly affects both the nature of those thoughts and experiences and our perception of them. Furthermore, the real world/digital world dichotomy affects the nature and dynamics of the relationships that grow out of, or depend upon, those intersections.

Both environments provide and advantage tools through which we experience the thoughts and experiences of others, as well as tools that enable and enhance the understanding and structuring of our own experiences. The challenge is to discover which tools in which environment are best suited to specific realms — and which realms lend themselves to blended toolsets. It would be foolish to advocate for a blanket preference of one venue or the other. They are — what is the current parlance? — “differently-abled.” Vive la différence!

Image Credits:
1. Lord Voldemort

Please feel free to comment.




Hegemony on a Hard Drive

by: Robert Schrag / North Carolina State University

Apple Logo

The big sucking sound I had just heard was my Canon i9900 printer swallowing a 19×13 inch piece of photo paper. It then proceeded to dedicate its eight ink cartridges to printing only half of the image down the right hand side of the sheet. Damn, damn, damn! Apple + . Apple + . Cancel, Cancel! Abort! Abort! Aoougah! Aoougah! Dive, dive, dive!

I hate it when that happens.

I had printed images this size at the office, no problem. Well, OK, slight problem. The printer wouldn’t accept a “print landscape” orientation, so you had to get Photoshop to rotate the image 90 degrees and print in portrait aspect. Other than that – piece of cake. So I glared balefully at the printer, blinking and burping away there on the side table in my home workspace. I began to run through the variables that might be dorking around with my image. One of these solved the problem: moving the image from a remote hard drive to the laptop hard drive and printing from there, or switching the printer from the USB port to the Firewire port that was now free since I didn’t need the remote hard drive, or using a standard paper size setting instead of a custom size. I don’t know which did the job because I did them all simultaneously and the image printed. Maybe I needed all three.

“It shouldn’t be this hard,” I thought. “Art and my computer should be better friends.” And it’s not just visual material; life doesn’t get any easier when we consider audio. Singing, poetry, anything better heard than read; they are all part of that “digital trunk in the attic” I wrote about. Trying to create those messages in the digital environment runs us into more tool concerns:

I have a pretty small, pretty awful, USB microphone. I play no instrument – assuming we do not count the kazoo. Garageband sits gathering nanodust on my hard drive. Yes, I have a friend who is an excellent keyboardist and vocalist who has offered to share her skills, her keyboard, and her high-end USB microphone if I teach her how to use Garageband. But I don’t know how to use Garageband, yet. And my office tech guru tells me that “If you spend another 80 bucks in software, and get a mumbo-jumbo yadda yadda 100 dollar interface, you will have really excellent sound. Plus processor speed isn’t an issue because the CPU will either choke or it won’t. Probably won’t. And the interface is clean – just like Garageband!” Whew, I feel a lot better now!

It shouldn’t be this hard.

But there is a bigger issue than my personal frustration. Before the expressive digital genie has even wriggled her way out of the bottle, we are lopping off appendages, willy-nilly. The intricacies of hardware and software are selectively marginalizing various communicative modalities, and particular voices. In my classes, I call it communicative hegemony. We tend to think of hegemonic inclinations as advantaging a specific worldview. I’d take that particular paranoia a step further. The communicative technologies that come to dominate any point in history advantage some modes of expression over others, and those advantaged expressive modes are uniquely inclined to favor a construction of reality that carries embedded assertions about the nature of existence and expression.

There are two primary areas of ware dominance – the communicative hegemony made possible by a convergence of software and hardware – that concern me. The concern can be framed thusly: What expressive ware enables and advantages particular constructions of messages, particular groups of message makers, and hence specific perceptions of reality/truth/value?

Garage Band

The tradition in expressive message software – visual processing packages such as Photoshop and Illustrator and audio packages like ProTools and Cakewalk Sonar – is to create powerful, full-featured applications for “media” professionals. I have two significant objections to that tradition. First, it unduly influences the whole area of what is the “allowable” structure of an expression. And second, it nudges the creative impulse toward the slippery slope of commodification.

Let’s address the “allowable structure” notion first. I have two friends who are “real artists.” He is primarily a sculptor, she a painter. Both refuse to use Photoshop any longer. They quit early in the version 2.0 years. She originally used it to do a variety of “color treatments,” experimenting with various color schemes on a preliminary sketch without using reams of paper or pots of paint. She quit because the software became too complex; it got in the way of her painting. He used it for similar reasons, to look at various glaze ranges and do some manipulation of digital images of “pieces in progress.” But he stopped using it for a very different reason. A computer-science professor in his previous professional life, he walked away because Photoshop got “Way too cool. I was afraid I’d never go back to the studio.” Those are two sides of the same coin – the software began to insert its own agenda into the creative process. By foregrounding certain processes – sometimes literally in the tools palette, sometimes figuratively as in the abundance of filters and effects available in drop down menus – the software advocates certain expressions more than others.

The software designers would be quick to point out that they use “feedback from their customers” to decide which tools to foreground and which features to provide. Which takes us directly to the issue of commodification. Expressive software packages – graphics, music, sound – that sell for $500.00 to $10,000.00 are not designed for the personal expression budget. They are designed for professionals. Folks who do work for profit or for hire. And those are the customers the software designers ask what features to foreground or include. Hence, the software packages advantage techniques and tools designed for commercial products, and in doing so, further establish the artistic language of the commercial artist as the accepted language for any artist wishing to employ that particular medium. And, if that weren’t enough, the software advantages output in forms that are particularly salient to the marketplace. Jpegs for websites and online stores, “save as html” to provide the “copy,” .ram files for your PC Real One Player – click here to upgrade! “It’s easier to build an online business than you ever thought!” Again, product for profit, not process for expression.

Now, my friends over at IT tell me that there are plenty of freeware, shareware, cheapware, options I can use. A few even work on my Mac, a few I can get up and running in less than 12 or 15 hours, and some will actually output sound or video or images to a format I can print, play or display. Some I might be able to figure out myself. That is significant progress. I can still remember when they didn’t want to talk to me if I wasn’t using a UNIX box and couldn’t program in C++. Still –

It shouldn’t be this hard.

Apple is making an honest effort – I think. Their iLife suite tries to walk the thin line between commerce and creativity. But it is a very difficult razor on which to balance. Look at GarageBand, for example, which I have played with more since starting this essay. Version 1.0 leaves you at the mercy of your own skills with an instrument or the loops and samples provided with the software. Version 2.0 – just out – seems to move further along the road toward enabling the consumer; but the price is a significant leap in the complexity of the software. And it still exports to iTunes, which shows an uncomfortable inclination to shuffle me off to the iTunes Store.

Jef Raskin, who died about a month ago, was largely responsible for the original Macintosh user-friendly interface/mouse tandem. He wouldn’t like that. He always asserted that computers should serve people – not the other way around. He ALWAYS thought it shouldn’t be this hard.

And it is our fault. When I say “our,” I mean those of us in universities. Our love affair with technology has led to tools of awesome power, wonderful capabilities. Our research, our fascination with what might be possible, has created the electronic phantasm that is the 21st century. But in acquiescing to the “off the shelf” ware solutions provided by our graduates in the industry, we have unwittingly added a new deep trench to the digital divide. We have allowed our genie to build walls instead of bridges between the creative impulse and the digital environment. The tool now dominates both the process and the nature of the product. It is time to wrap our academic robes more firmly around us and figure out how to reverse that paradigm, because — all together now – It shouldn’t be this hard!

Image Credits:
1. Apple Logo
2. Garage Band

Link
Apple

Please feel free to comment.




The Trunk in the Attic, or, Designing a Digital Legacy

by: Robert Schrag / North Carolina State University

Communication is, and always has been, a negotiation; technology and society parrying and thrusting, demand and counter, proposition and accommodation. Folks feel a communicative urge and hunt around for a communication container capable of holding the symbols necessary to ease said urge. Speech, text, painting, sculpting, music and math all met communicative, conceptual, needs and claimed specific amenable space on paper, canvas, stone, metal, or in the melodious air. It is, in part, those past successes that articulate the next expressive opportunity; the evolving expressive capabilities of technology are themselves hints at how our communicative tools might be best employed.

The gradual, expressive, maturing of the digital environment makes me hopeful that an old communicative fantasy of mine may be edging toward reality. I have always been delighted by the creativity of others. Nothing gives me more pleasure than to be in the presence of another’s insight or expression and find myself reduced to a state of delighted confusion: How did they do that? And, how did they even think of that? The desire for real answers to those questions often drives me to Google to find an author’s or artist’s or musician’s or scientist’s email address and ask them. You would be amazed at how often they respond. The problem, of course, is that on occasion they have been so rude as to die before answering my questions – sometimes decades ago. The frustration of their ultimate inaccessibility always reignites my desire for a “virtual biography.” I want to know what Einstein ate, what the streets he walked along looked like. I want to share the music to which Georgia O’Keeffe listened; I want to hear the sounds of London that Shakespeare heard. I want to be able to participate in some way in the experiential reality that must have shaped the creative flame within those souls. And I want to feel the firelight and hear the wind that whipped around the farmhouse winter nights in South Dakota when my father was a boy. I want to see the pages of the books that entranced my mother as a young girl in rural Pennsylvania. I want interactive, real time biographies that move beyond words on a page or flickering images sprung from the imagining of filmmakers and TV producers.

Such living histories would be incredibly difficult and expensive to create. To reassemble the past from fragments of mostly discarded data, to attempt to reconstruct from them a facsimile of the creative, reflective, experiential reality of one long dead is a daunting, if not impossible, task. However, assembling such works to chronicle lives in the present, using digital technology, has become surprisingly feasible.

Think about it. All you really need is a “capture device” – something that can record the visual, auditory and textual experiences of a life, a “structuring device” – something that allows one to edit, order and organize those collected experiences, a “storage device” – someplace to store both the collected data and the constructed representations, and a “publishing-distribution device” – something that allows for the sharing of the constructed representations with others. A simple hardware configuration meeting all those requirements would be a video cell phone, a laptop computer with a broadband connection, and a large external hard drive. Apple’s iLife and Microsoft’s Office would take care of the software. You could, naturally, beef up each portion of that configuration as need and desire dictated, but those simple pieces could get “the job” done.

The next question is “What does ‘the job’ look like?” To which I respond, with great certainty, “I’m not sure.” I see two major divisions in “the job.” One is really a database. I like to think of it as a huge digital trunk in the attic. You know, that trunk that had all those funky things from when your parents were young, or better yet when your grandparents were young. You could dig through it and actually touch a bit of that time. Chronology and use of the items wasn’t always obvious, but many essential components of the past were there in that trunk. Our digital trunk, stored on the huge hard drive and backed up on the appropriate back-up medium de jour, would contain the digital components of our life: images, sounds, text, whatever is eventually available to record and store.

The next question is “What does ‘the job’ look like?” To which I respond, with great certainty, “I’m not sure.” I see two major divisions in “the job.” One is really a database. I like to think of it as a huge digital trunk in the attic. You know, that trunk that had all those funky things from when your parents were young, or better yet when your grandparents were young. You could dig through it and actually touch a bit of that time. Chronology and use of the items weren’t always obvious, but many essential components of the past were there in that trunk. Our digital trunk, stored on the huge hard drive and backed up on the appropriate back-up medium du jour, would contain the digital components of our life: images, sounds, text, whatever is eventually available to record and store.

The second part I think of as a journal. Again consider the parallel to the trunk in the attic: In the trunk you find a journal that tells the story of a life, and in doing so refers to some of the items in the trunk – the data is structured in a way that would allow an observer some insight into those two questions with which I am a bit obsessed: How did they do that? And how did they even think of that?

First, the digital journal of our lives must aim for experiential veracity. Isaac Asimov in his 1953 work, The Second Foundation, introduced millions of readers to the idea of the Prime Radiant – a virtual reality projector that enabled social scientists to actually walk around inside an incredibly complex equation describing the past and future of all human/galactic society. They could reach out and shift elements of the equations and see the impact on the whole, in real time. They reached back in time and actually shared the creative experiences of the other Second Foundationers who had preceded them. I have never met anyone over 35, involved in new technology environments, who is either unfamiliar with, or uninfluenced by, those five or six pages.

Second, understand and share, to the best of your ability, your own formative moments. Louis L’Amour points me to this particular guideline. Yeah, yeah, I can see you looking down your nose. But I would suggest reserving judgment until folks buy more than two hundred million copies of your books. The man was an incredibly gifted storyteller. And his dependable structure was part of his gift to us. A Louis L’Amour story often starts in the protagonist’s childhood. The incident that solidifies the protagonist’s core characteristics is recounted and the rest of the work is an unfolding of how those characteristics guide a usually admirable life. Remember we are trying to explain to the folks who come rummaging around in our attic why we do the things we do, why we think the things we think. Hence, we need to share with them the “what and why” of our own core characteristics.

I draw the final characteristic – multitextuality – from Simon and Garfunkel’s Greatest Hits, “Scarborough Fair/Canticle”, to be precise. In this particular work several textual, vocal and instrumental themes flow around one another, over-lapping and intermingling to create a synthesis that is not only greater than, but also different from, its various component parts. Most narratives about life and creativity are cast in one medium. They begin at point A and proceed to the Z of our lives or creative endeavors. The reality is that our lives and creations are entities and events of complexity, serendipity, planning and surprise. We have the best chance of discerning and representing that process with accuracy. So the constructions we pass along – the journals we leave in the trunk – should reflect as much as possible all the experiential, cognitive, and creative streams that combine in the expressions we seek to preserve.

What I am talking about is the conscious creation of a personal digital legacy, compiling a personal history of unparalleled richness, accuracy and complexity. If such legacies were to become a common cultural practice, how much more profound would be our insight into ourselves and our world. Certainly I would like to know the intricacies of the lives of the giants of our times, great artists and thinkers whose works I so admire. But at least as precious would be the legacy of my family. My father is 91; my mother and older brother have already died. How wonderful it would be to know that they had left me a trunk in the attic, a digital legacy of inexhaustible memories, moments and perceptions to comfort and to guide me. Sadly, my father cannot construct such a legacy, and my mother and brother took their trunks with them. I will leave mine for my daughters.

Links
Robert Schrag’s Online Journal
Prime Radiant
Images from the Prime Radiant

Please feel free to comment.




Sculpting a Digital Language

by: Robert Schrag / North Carolina State University

A number of responses to my last Flow column wondered what form the “digital language” I advocated might take. The question took me back to a very non-digital experience. It was a singular moment — unexpected on two levels. First, it was surprising that the show, featuring more works by Auguste Rodin than had ever been gathered in one place, was at the North Carolina Museum of Art. Second, as a lifelong Rodin-groupie, I didn’t expect to see a “new-to-me” work. But I turned the corner and there it was, Fallen Angels. It was love in an instant. Totally blind-sided, I stood and stared. I wanted to laugh and cry. Breathing was difficult, but what little air I could inhale seemed like Spring. I put out my hand and a museum guard quickly materialized, fixing me with a restraining glare. I returned to the show many times, spending hours just gazing at the Fallen Angels. It seems paradoxical that that ecstatic experience has come to define for me what we must avoid as we seek a new language for the digital environment. But, let us begin at the beginning.

I believe in Louis Sullivan’s assertion that form follows function — in skyscrapers, scissors and language. Language should be con-formed to its essential function: manifesting the perceptual-conceptual moment. And what, you ask, does that mean? Good question.

I often ask my students to consider the most powerful moments in their lives: when they fell in love, or realized that love had left; the birth of a child, the death of a parent; the moment they sensed a divine presence, or came to believe they were alone in the universe. Then I ask them to define what kind of a moment it was. A text moment? A picture moment? Tactile or olfactory? Musical? Eventually we agree that it was all of those at once. It was a multimodal moment.

Next I ask them where this moment occurred. Not the physical location that stimulated the perception, but where the perception bloomed. After a seemingly mandatory detour through the idea that a person with an artificial heart can fall in love, we fix this multimodal perceptual-conceptual moment [MPCM] in the brain; locked within us. Yet, we cannot leave it there. Often these peak MPCMs are communicative crystallizations, internal personal epiphanies that we are driven to share. That is the function of language. But what kind of language?

In the previous column I asserted that the evolution of communication technology is a bartered negotiation between cultural needs and technological capacity. Language, too, grows from a negotiation between society’s communicative needs and the capabilities of the media that hold language. It is most often a negotiation in which the medium — the language container — dominates; “form follows function” turned upside down. The functional ability of the container determines the form of the language. Paper holds words and numbers and images, stone and wood hold carving, instruments hold music. Thus, we began a millennia-long drift away from the ideal of a holistic representation of the MPCM. Instead we inclined towards language containers that held powerful unimodal expressions of the MPCM. The innate inflexibility of the container drove the drift; but there were other important factors at work. Among them were the tyranny of task and the hegemony of the marketplace.

Tyranny of task is the temporal pressure that accompanies every communicative need. If my sudden need is to communicate to my hunting partners that there is a mastodon the size of Montana around the bend – and I don’t want to alert the critter – sign language gains immediate primacy. If I need contracts and trade records to maintain the viability of my commercial interests, writing swiftly ascends. Painting and sculpture are effective in conveying the teachings of mystics to an illiterate populace. From the beginning of human time to Tuesday’s faculty meeting, we have always needed tomorrow’s communication tools yesterday. The driving need to get the task done puts the buggy beta version of the language swiftly into our hands. The crafting of language has never been a leisurely, reflective undertaking.

The hegemony of the marketplace becomes apparent when we realize that language systems and media do not merely facilitate commerce — they are themselves commodities. The wealthiest man in America is not Sandburg’s “Player with Railroads and the Nation’s Freight Handler.” He is a communication broker — a trader in computer hardware and software. Dominant corporations no longer fabricate steel, they stretch fibers of pure glass and fill them with messages designed to amuse and beguile us. Communication — the tools that facilitate it; and the words, sounds and images that define and construct our truth — has become the primary commodity of the 21st century. And the languages that dominate in that marketplace are not those that best express the MPCM; they are the ones — from computer operating systems to blockbuster films — that generate the most revenue.

And finally, there is the intimidation of genius — which takes us back to Rodin’s Fallen Angels. Genius uses a single mode of expression to instigate a multimodal perceptual cascade in the mind of the audience member. Rodin’s sculpture, O’Keeffe’s painting, Mozart’s music, Balanchine’s choreography — all communicative acts from previous centuries that pour such power and perception into a uni- or bi-modal communication container that, in a kind of holographic transformation, we respond as if we were suspended in the totality of a multimodal perceptual-conceptual moment. These are acts of expressive genius that recreate the holistic MPCM from a fragment of its parts. They leave us with the notion that such communication is “normal,” when, in truth, it is rare beyond imagining.

These, then, are the barriers that stand between the languages we have inherited, and the language we should create to fully express the multimodal perceptual-conceptual moment in the digital environment:

• A history of unimodal languages developed to conform to the capabilities of existing communication containers.

• The tyranny of task prompting a “crisis-management” approach to language development, which favored quick and dirty language solutions over elegant expressive tools.

• The hegemony of the marketplace that currently fosters the development of technologies, languages and content that gain primacy based on profit.

• The heritage of genius that implies that we already have the expressive tools we need, if only we had the necessary “gift.”

Those are daunting obstacles indeed. Which is why I advocate simply walking away and starting all over. Seriously. I look around my campus and talk with colleagues near and far, and see little chance that we will succeed in “evolving” a new language for the digital age. The old barriers are simply too high. The tyranny of task confronts most academic endeavors: Use technology to solve the pedagogical challenges we cannot fix with bricks and mortar, right now! The purely expressive endeavors — art, music and animation (even in the rarefied atmospheres of Annenberg and MIT) — presume levels of funding that only government or industry can provide. Not surprisingly those efforts often result in products that primarily profit the military, the government, or the media cartel.

So here is how I would start over — if I had Bill Gates’ money. I would build a Digital Language and Expression Development Center in the mountains above Santa Fe, New Mexico. Why there? Because I like it there. This is my fantasy. Initially, there would be two populations at the Center. Since the function of digital language is to manifest the multimodal perceptual-conceptual moment, I would find the most creative traditional artists I could — in all the arts — and bring them to the Center. They are already manifesting the MPCM with damaged languages. They bring function. Then I would bring the best programmers in the world to the Center. They would be responsible for creating the digital form to contain the expressive function of the artists. But the artists would lead — form follows function, remember?

The artists would spend their days doing art, and the programmers would watch. At breakfast and lunch the artists and the programmers would negotiate the digital form to contain the expressive function of the artist’s medium. The programmers would be responsible for making sure that the various expressive digital palettes would be integrated: Musicware works with Artware with Filmware with Textware with Sculptware, etc. Eventually we get Expressionware — an open-source digital language that can contain all the elements of a multimodal perceptual-conceptual moment. Over dinner we would do “show and tell.”

Next we would conduct workshops for people from all walks of life: painters, politicians, pursers and publicans – and jobs that start with other letters too. Each workshop would explore how Expressionware could be used in that arena, expanding it to include new or unique concerns and requirements. And thus, over the years, we would sculpt a new digital language, thoughtfully and reflectively.

I, naturally, would live at the Center, wandering, wondering, watching, and learning — because it is my fantasy.

Links
Auguste Rodin biography
North Carolina Museum of Art
Slacker HTML





The Invasion of the Screen People

by: Robert Schrag / North Carolina State University

It was late summer in the Heartland. A simpler time, with only vague fears of Y2K troubling my anticipation of brisk breezes and the deepening color of autumn. Thunderstorms decorated Iowa’s western horizon. I had pulled into a mega-gas station at the intersection of I-80 and I-29. Scores of semis towered above the SUVs and sedans, all swilling diesel, ethanol and high-test before easing out to follow the blacktop’s broken white line through the gathering dusk and into the night. I faced the sleek screen embedded in a wall-sized pump; touched the credit payment icon, swiped my card, tapped “no receipt,” lifted the hose and jammed the nozzle into the side of my pickup. Gasoline fumes opened my nostrils and hit the roof of my mouth, mingling with the sweet perfume of distant rain. My eyes slid across the ranks of pumps to the unbroken cornfields that surrounded the incongruous concrete intrusion. And Peter Jennings spoke to me: “Tensions heightened in the Middle East today . . . .”

I spun around to locate the celebrity anchor, stunned that he would join me out here on the road. He was nestled – as serene and composed as ever – on the touch screen perched above three grades of Texaco. I stared in disbelief as he inserted the news of the world between the bass rumble of Kenworths and the soprano squeal of travel-tired children. It was a macabre moment, like encountering a chimpanzee in top hat and tails, dining in a posh Manhattan tearoom. But my disorientation was swiftly banished by an unbidden thought: “I wonder if you can change the channel?”

That was when I realized that The Screen People had successfully infiltrated Earth. The last six years have only affirmed that realization. Screens have become the primary communication interface in the industrialized world. As I write these words, several screens assist me. The iBook’s screen reflects the words of the essay, and allows me to toggle to internet maps that refresh the memory of my Iowa trip. The TV screen gleams off to my left, enabling me to keep an eye on both the Olympics and a line of thunderstorms moving through the area. My cell phone screen identifies callers, making it possible to accept vital calls while relegating others to voice mail. Last night I watched a film projected on a large screen overlooking the lawn of the North Carolina Museum of Art. Earlier today I shot photographs, composing the images on the LCD screen of my digital camera. “They” are everywhere.

This ascendancy of the screens raises a number of questions for those of us who study the intersection of technology and communication. Consider, for example, the notion that people maintain four essential communicative guises in relationship to mediated messages: creator, consumer, assessor, and facilitator.

Creator is the active guise: participating in the making of a message. The process can be an individual crafting a personal expression for another individual, a group, or an audience of millions. It can be a group effort. It can range from purely presentational to dialogic; a transactional negotiated process between creator and audience.

Consumer is predominantly an individual, passive guise; one person chooses to consume a message or experience created by another – reading, listening, or observing. Consumption can be interactive. Interactive consumption ranges from performing works authored by another, to participating in virtual space constructed by, and dependent upon, another.

Assessor embodies the observational, analytical, reflective guise. Assessment is the individual’s reasoned, supported evaluation of the impacts, effects, implications and relative merit of messages structured by others. Assessments are often delineated by objective, medium, area of social or political influence, academic heritage, inclination or method.

Facilitators provide the interventionist guise: an individual or team utilizing specialized knowledge, skills and/or tools to aid other individuals in the realization of perceived communication objectives. Intervention ranges from interpersonal through organizational to international. It encompasses both technical training and conceptual exploration.

The predominance of screens in contemporary culture will significantly redefine each of those relationships. While their influence is still unfolding, clearly two paths diverge in this technological wood. We can either accept a traditional passive evolution, or bestir ourselves to – perhaps for the first time in history – plan the course of our own social evolution. Let me explain.

The evolution of communication technology has been more serendipitous syncopation than measured march. From speech to mime to music to writing to printing to painting to film to telegraph to telephone to radio to television to computer to Internet, the relationship between society and the tools we use to communicate has been a bartered negotiation. We, the members of continuously evolving cultures, are faced with similarly evolving communicative, expressive needs. Technology morphs to meet those needs. We fill the technologies with content, and in the process discover new needs, which in turn beget new technologies, and so on and so on. It is a negotiation because neither side of the equation determines the final path of evolution. It is a bartered negotiation because each side demands value from the process; society demands better communicative, expressive tools, while the industries that provide the technologies demand profit.

Negotiation implies compromise, and compromise rarely yields the exquisite. More often the result has been merely the mutually acceptable. And so it has been in the bartered negotiation of media evolution. Papyrus wasn’t perfect, but it was better than clay tablets. The printing press had flaws but also advantages over the scribe; the telegraph bartered speed over linguistic complexity; cell phones offered mobility at the cost of fidelity – and all yielded profit to industry and power to government.

Bartered negotiation in the world of the screen people has been the same – only different. When we examine the tools that drive the converged environment of the screen people – whether the special effects in Lord of the Rings or three-way calling on our cell phones – we find ourselves confronting computers, networks and software. And in that world we find a strange confrontation between complexity and elegance. The post-modern world often seems immersed in a love affair with complexity, a celebration of fragmentation. And nowhere is that worldview more manifest than in the design of software intended to facilitate expression. New versions of Photoshop, Dreamweaver, and Office proliferate features that regularly relegate former experts to the status of “newbie.” The irony is apparently unintentional, lost in the marketing realization that “more features sell new releases.”

Any experience with an audience – singular or mass – reveals that rampant complexity confuses, while precise elegance empowers the depiction of the most intricate message. As we face the 21st century, the inclination is to allow the marketplace to drive the development of the communicative palette. “It has always ‘worked’ before.” But “before” was never in the hands of so few. Bertelsmann, Newscorp, Disney, Time-Warner, Viacom, Sony, Vivendi and Reed-Elsevier control most of the content distributed in the world today – from print to the internet. Microsoft, Adobe, and Macromedia decide the nature of the tools we use to express ourselves. The clout of huge profits in a concentrated marketplace makes quality secondary to popularity for all those companies.

Before was never like now, and the stakes have never been so high. We are not talking about cornering the market on widgets. The issue concerns a few colossal companies that control the communicative content of our world, and who also shape the very languages we use to express the truth and beauty of that world. To date the palette they have provided is flawed in three dimensions: Intricacy – the excessive inclusion of features in software that excludes all but the specialist from fluency; Discreteness – the inclination to provide tools and messages devoted to, and hence restricted to, a single medium; and Commercialism – the hegemonic power of the marketplace that decrees that, whatever the other characteristics of medium or message, significant profit must be among them.

In the film Dead Poets Society, John Keating exhorts his students, “Carpe diem. Seize the day.” He challenges them to “do something extraordinary.” It is time for the academy to do something extraordinary. We must reclaim the expressive imperative; we must define the palette. Certainly, the expressive tools provided by the media cartels are fatally flawed. But so are some cherished models from the “teach and publish” world of the academy. We linger in the solid predictability of prose upon the printed page. We are comfortable with formulae unfolding neatly across the board. We treasure heads bent over bluebooks as sunbeams dance with dust motes, reminiscent of chalk dust from bygone years. That world is gone. Yet many of our forays into “courseware” seek to recreate it.

Screens encompass a new world. It is our responsibility to create, to use, and to teach new, powerful, transparent languages and tools for elegant expression in the converged digital environment of that reality. Carpe Diem.

Links of Interest:

1. Roger Chartier on the role of on-screen texts

2. United Nations Information and Communication Technologies Task Force

3. MIT’s web magazine on information technologies
