Justine Cassell, MIT Media Lab
In this fascinating and evocative essay, Keating and Mirus look at the effects
on American Sign Language of video-mediated communication technology. Hearing
populations around the world today take for granted that they can communicate
in their own language with those at a distance. The telephone is pretty much
available everywhere, and operators are no longer necessary to place a call,
so simply picking up the phone and dialing a number puts a speaker in contact
with the chosen interlocutor. Until the last couple of years, the same was
not true for the Deaf. Not until the TTY, introduced widely only in the 1970s,
was communication possible among non-co-located Deaf, and even then only through
the use of a foreign language. In the case of the TTY it is fairly clear that
technology is mediating communication in a significant way. Because there is no written form for signed languages, the TTY requires communication not only to be printed, but also to be translated into English. Of course, independent
of technology, the Deaf community is already more than familiar with mediated
communication. Since the vast majority of American Deaf live and work in hearing
contexts, much day-to-day communication takes place through the mediation of
interpreters.
However, when video is brought into the equation, and the Deaf can communicate
with one another directly, over long distances, are we witnessing a return
to face-to-face communication, a different kind of mediation, or transformation
and creation of a new communicative reality? This question can really only be asked now, when videoconferencing has become affordable due to software such as Microsoft's NetMeeting. The authors bring up all of these options when they describe their interests as "the symbolic properties of language as it is used together with new potentials for communication afforded by the computer, particularly computer mediated images of signed language (visual language) as well as the recreation of face-to-face communication in virtual space or the invention of face-to-computer communication." The authors' data are drawn
from a two-year study of four Deaf families, and it is clear that fascinating
adaptations to the technology are being made by the participants. However,
in order to untangle what is going on – mediation, transformation, or a return to "real" face-to-face communication – I think we need to compare ASL VMC
to other modes of mediated communication: (a) spoken language VMC, or (b) typed
English chat by Deaf interactants, or (c) communication between a signer and
an English speaker by way of an interpreter. Otherwise, we won't be able to
tell which of the findings relate to the unique properties of signed languages,
which to mediated communication, and which to the role that technology plays
in communication.
First, a clarification for those readers who may not be familiar with web cam technology. For the most part, the quality of the web cam image is limited to a resolution of 640x480 pixels, which affects the size of the window in which the interlocutor is visible, and the crispness of the image. The video frame rate stays under 30 frames per second even under the best of conditions, which means that signers need to slow down their signing in order not to appear jerky. Finally, the speed of the computer processor and the speed of the Internet connection also affect how much lag time exists between when one participant signs and the other receives the signs, which in turn affects whether back channel information can be read as such. All of this means that, much like any other kind of computer-mediated communication, only some aspects of face-to-face conversation are reproduced in CMC. From the point of view of the designer of technology, this leads to the question of what notion of face-to-face communication has been mapped onto the technological context – what aspects of F2F communication were maintained at the expense of others? From the point of view of the user, we ask what kinds of disruptions the technology provokes, and how users overcome them.
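To make these constraints concrete, a rough back-of-the-envelope calculation (my own illustrative figures, not measurements from Keating and Mirus' study) shows why low frame rates and lag are unavoidable over consumer connections: an uncompressed 640x480 stream at 30 frames per second and 24 bits per pixel would require roughly 221 megabits per second, about three orders of magnitude more than a dial-up or ISDN line can carry, so the video must be heavily compressed and frames must be dropped or delayed. The short Python sketch below makes the arithmetic explicit; the compression ratio and link speeds are assumptions chosen purely for illustration.

# Illustrative sketch: why web cam video over consumer links is slow and laggy.
# All figures (window size, color depth, compression ratio, link speeds) are
# assumptions for the sake of the example, not data from Keating and Mirus.

def raw_bitrate(width=640, height=480, bits_per_pixel=24, fps=30):
    """Bandwidth, in bits per second, that an uncompressed stream would need."""
    return width * height * bits_per_pixel * fps

def achievable_fps(link_bps, compression_ratio=100,
                   width=640, height=480, bits_per_pixel=24):
    """Rough frame rate a link can sustain at an assumed compression ratio."""
    bits_per_frame = width * height * bits_per_pixel / compression_ratio
    return link_bps / bits_per_frame

if __name__ == "__main__":
    print(f"Uncompressed 640x480 at 30 fps: {raw_bitrate() / 1e6:.0f} Mbit/s")
    for name, bps in [("56k modem", 56_000), ("ISDN", 128_000), ("DSL", 768_000)]:
        print(f"{name}: about {achievable_fps(bps):.1f} fps at 100:1 compression")

Even at a generous 100:1 compression ratio, a dial-up or ISDN connection sustains only a few frames per second, which is one reason signers must slow down and why the lag that disrupts back channel cues appears.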
A significant literature on video-mediated communication (Aiello, 1972; Sellen, 1995; Bracco, 1996; Finn and Sellen, 1997) has shown in particular that processes of grounding and turn-taking tend to be disrupted. For example, comparing VMC to F2F conversation, Cohen (cited in O'Conaill and Whittaker, 1997) found more speaker turns, and more interruptions, in F2F. Likewise, O'Conaill, Whittaker et al. (1993), comparing LiveNet, ISDN (with longer lag times than LiveNet), and F2F, found that participants in the lowest fidelity system produced fewer backchannels and interrupted less often. They engaged in longer turns, with fewer turn exchanges, and floor management was carried out in an explicit manner. O'Conaill, Whittaker and Wilbur attribute these results, in part, to the fact that VMC systems do not allow mutual gaze – having it be clear that one's gaze has met that of the other participant. Of course, eye gaze is also an essential cue in American Sign Language, and photos of the VMC set-up (e.g. Figure 4) show the extent to which it's difficult to gauge eye cues on VMC. But Whittaker (to appear) also points out how disruptive the lag between audio and video on VMC is for spoken languages. It is possible that VMC for signed languages will not suffer from this problem. In fact, an early theory of technology and communication posited that the closer the set of modes supported by a technology approximates those of face-to-face communication, the greater the efficiency of the communication using that technology. Thus, on this view, a technology that supports both visual and linguistic modes should always outperform one supporting only the linguistic mode. ASL is a neat test case, since the visual mode is the linguistic mode.
Literature on textual CMC is also relevant here. Much as Keating and Mirus describe in the case of Ben and Ned negotiating the technological context of their interaction, text-based computer-mediated communication also demonstrates a fair amount of metalinguistic stage-setting about the nature of the medium and the adaptations it consequently requires (Cherny, 1995; Herring, in press). The authors' lovely example of an interactant picking up the web cam to disambiguate a word is not unique to ASL participants using VMC. As Streeck (1996) reminds us, physical objects and their representations using iconic and deictic gestures (including, in this instance, with a web cam) are essential to meaning-making – to symbolicization – in conversation.
The modification of words for the purposes of CMC is a topic that has been studied by Cherny (1995) and others – there exists a whole lexicon of forms shortened to deal with the comparatively slow speed of typing relative to speech, and also to mark habitués. Linguistic variation in CMC can have more permanent effects, as described by Paolillo (1999) in his study of channel #India on EFNet IRC (see also Paolillo, 1996).
One topic, however, seems uniquely relevant to American Sign Language, and as such uniquely revealing of the effects of VMC on any language. The use of fingerspelling between friends who know one another to prefer ASL, and the use of signed-English-like constructions, is a puzzle that doesn't seem to be adequately explained by strategies of adjustment to the properties of the new communication technology. However, Keating and Mirus' earlier examples of explicit negotiation about, and uncertainty concerning, the properties of the medium make it clear that participants are not entirely comfortable with their own model of the interlocutor's reception. In this sense, the use of formal forms such as fingerspelling could indicate difficulty in constructing such a model of the interlocutor's understanding. Disambiguating among the different possible explanations might involve comparing pairs who have used VMC for longer or shorter periods of time with pairs who have been communicating with one another for longer or shorter periods of time.
Other analyses of distinctly sign-like phenomena such as this one would be welcome. For example, what happens to body-shift for frame of reference, empathy, and narrative perspective in VMC (Rose, 1992)? The fixed orientation of the web cam prevents signers from shifting their bodies and eye gaze while still remaining properly framed by the camera. How do signers accommodate? These sorts of more discourse-level phenomena are particularly interesting because of the cinematic effects associated with them in face-to-face signing (Poulin and Miller, 1995; Krentz, forthcoming). How do signers negotiate cinematographic techniques when faced with cinematic inflexibility?
Of course, this is the danger with forays into new areas of research: your readers want to know about all of the possible topics associated with the topic that you have actually chosen to research. In that vein, I would add a few more requests. Keating and Mirus' manuscript leaves me wanting to know whether VMC changes the nature of social capital (Bourdieu, 1986; Putnam, 1993) in the Deaf community – a topic much discussed among sociologists of technology (see, for example, Pelto and Müller-Wille, 1983). Are more Deaf organizations formed of non-co-located members? Do families feel that bonds are stronger? In sociolinguistic terms, do fewer ASL dialectal variations arise, as Deaf populations converse across regional boundaries? Does ASL devolve into one standard form? Or, perhaps more likely, do ASL dialects become income-based rather than regional, as a "digital divide" allows only well-off families to use VMC? Finally, do users of VMC not adapt the technology itself? In the current analysis, technology appears to be assumed – a locus for change, but not changeable itself.
The fascinating data presented here raise more questions than they answer. And the answers are relevant to practitioners hoping to serve the Deaf population (Craft, 1996), as well as to linguistic anthropologists. Luckily, the authors have clearly done a very careful job in their fieldwork: filming, transcribing and analyzing an extremely valuable data set. Interestingly, as is the case with much research on ASL, the current analysis reinforces our belief that ASL patterns like spoken languages. However, in order to examine the language practices of particular communities, we need to attend to more particularities. Are the changes described here particular to VMC? To computer technology? To technology? To signed languages? To ASL?
References
Aiello, J. R. (1972). "Visual Interaction at Extended Distances." Personality
and Social Psychology Bulletin 3: 83-86.
Bourdieu, P. (1986). The Forms of Capital. Handbook of Theory and Research
for the Sociology of Education. J. Richardson. New York, Greenwood Press: 241-258.
Bracco, D. M. (1996). Profile of the Virtual Employee and Their Office. IEMC
'96, Vancouver, B. C.
Cherny, L. (1995). The MUD register: Conversational modes of action in a text-based
virtual reality. Doctoral dissertation, Department of Linguistics, Stanford University, Stanford, CA.
Craft, S. F. (1996). "Telemedicine services: Serving Deaf clients through
cutting-edge technology." Focus on Mental Health Issues March.
Finn, K. E. and A. J. Sellen, Eds. (1997). Video-mediated communication: Computers,
cognition, and work. Mahwah, N.J., Lawrence Erlbaum Associates, Inc.
Herring, S. (in press). Computer-mediated discourse analysis: An approach to
researching online behavior. Designing for Virtual Communities in the Service
of Learning. S. Barab, R. Kling and J. Gray. Cambridge, UK, Cambridge University
Press.
Krentz, C. (forthcoming). The Camera as Printing Press: How Film has Impacted
ASL Literature. Signing the Body Poetic: Essays on American Sign Language Literature.
H. Rose. University of California Press.
O'Conaill, B., S. Whittaker, et al. (1993). "Conversation over video-conferences:
An evaluation of the spoken aspects of video-mediated communication." Human-Computer
Interaction 8: 389-428.
O'Conaill, B. and S. Whittaker (1997). Characterizing, predicting, and measuring
video-mediated communication: A conversational approach. Video-mediated communication:
Computers, cognition, and work. K. E. Finn and A. J. Sellen. Mahwah, N.J.,
Lawrence Erlbaum Associates, Inc.: 107-131.
Paolillo, J. C. (1996). "Language Choice on soc.culture.punjab." Electronic
Journal of Communication, special issue on Computer-Mediated Discourse Analysis
6(3).
Paolillo, J. C. (1999). "The Virtual Speech Community: Social Network
and Language Variation on IRC." Journal of Computer-Mediated Communication
4(4).
Pelto, P. J. and L. Müller-Wille (1983). Snowmobiles: Technological Revolution
in the Arctic. Technology and Social Change. H. R. Bernard and P. J. Pelto,
The Macmillan Company: 166-199.
Poulin, C. and C. Miller (1995). On narrative discourse and point of view in
Quebec Sign Language. Language, Gesture and Space. K. Emmorey and J. Reilly.
San Diego, CA, Lawrence Erlbaum Associates: 117-131.
Putnam, R. D. (1993). "The Prosperous Community: Social Capital and Public
Life." The American Prospect 13(Spring): 35-42.
Rose, H. (1992). "A Semiotic Analysis of Artistic American Sign Language
and a Performance of Poetry." Text and Performance Quarterly 12: 146-159.
Sellen, A. J. (1995). "Remote conversation: The effects of mediating talk
with technology." Human-Computer Interaction 10: 401-444.
Streeck, J. (1996). "How to do things with things." Human Studies 19: 365-384.
Whittaker, S. (to appear). Computer mediated communication: A review. The
Handbook of Discourse Processes. A. Graesser. Cambridge, MA, MIT Press.