American Sign Language in Virtual Space:
Interactions Between Deaf Users of Computer-Mediated Video Communication and the Impact of Technology on Language Practices


Elizabeth Keating and Gene Mirus [1]
University of Texas at Austin
ekeating@mail.utexas.edu

Table of Contents

  1. Introduction
  2. Background
  3. The Deaf Community and Telephone Communication
  4. Transformative Technology
  5. Developing New Skills for a New Communicative Environment
  6. Trials in Collaboratively Developing Memories and Plans
  7. Explanatory routines and understanding the constraints and possibilities of new artifacts
  8. Trials in Modifying Sign Location and Orientation
  9. Trials with sequence based on spatially based meaning making
  10. Learning Communicative Competence: Shaping the next generation
  11. New Participation Frameworks and Organization of Communicative Activities
  12. Conclusion

Introduction

According to some discussions concerning new information technologies and technologically enhanced communication, we are now in a revolution as profound as that of the printing press (Poster 1984). The internet is creating new kinds of meeting places and work areas and the possibilities of new types of relationships across time and space. New sociotechnical relations involve linking the local and the non-local in intimate, relational and reciprocal connections, new forms of access to others with new space-transcending capacities, and new technocultural visions (Robins and Webster 1999:221). Relationships can even involve a 'tactile' dimension (Mitchell 1995). The new arenas for social interaction are described by some as liberating; to others they have elements of the pathological or colonizing. The ability to enhance and expand capacities is celebrated, the ability for increasing surveillance or voyeurism is deplored. Some describe the ability to return to direct and immediate face-to-face engagement (Levy 1997) in a way that transcends spatial domains, while others emphasize the ability to abandon the surrounding reality for another (Robins and Webster 1999:224). Increased access to knowledge is welcomed by some, while others lament the devaluation and displacement of embodied and situated knowledges. It seems clear that there are new forms of participation open to some of us, at least those members of what has been called a new global virtual class (Reich 1992). How practices develop around new technologies and the social impacts of technology are important questions for social scientists (Escobar 1994:214). What conventions do new technologies engender, and what aspects of social interactions are transformed?

In this paper we discuss some consequences of the introduction of technologically enhanced communication in a particular community, the Deaf community. We are interested in how new tools mediate and influence human behavior (Vygotsky 1978, Leont'ev 1978, Wertsch 1991), including language and the organization of interaction, and how that development occurs through participation in particular activities embedded in a sociocultural context (Rogoff 1990). Members of human societies have a history of inventing cultural tools, both material and symbolic, which have influenced human societies in important ways (Tomasello 1999), for example in the organization of collective activity (Leont'ev 1978). We are interested in the symbolic properties of language as it is used together with new potentials for communication afforded by the computer, particularly computer-mediated images of signed language (visual language), as well as the recreation of face-to-face communication in virtual space or the invention of face-to-computer communication. This includes the development and manipulation of a computer-mediated self and other, participation in joint activity with a computer image, creativity and problem solving in new communicative spaces, creating reciprocal perspectives, new participation frameworks, and specifics of language change, such as new code switching practices, registers, and modality shifting. Closely examining particular social interactions can reveal some of the important details whereby tools mediate and influence behavior and cognitive strategies.

Studying language as social action (Austin 1962, Searle 1969, Gumperz and Hymes 1972) and examining specific ethnographic contexts has revealed how talk enables social actors to accomplish goals and create mutual understanding. Examining interactions can also show how new knowledge of technologically mediated environments is shared and how shared knowledge is managed through talk. Language does not simply symbolize events or objects, but makes possible the existence or the appearance of a situation or object, as language is a part of the mechanism whereby a situation or object is created (Mead 1934:78). Computer-mediated sign language communication involves new ways to manipulate language structure and performance, including the experimentation with and invention of new language forms. This invention and experimentation are motivated by what Scribner (1997) has referred to as "interrelated goal-directed actions." These are constrained by convention, but also involve transcending bounded knowledge domains and creating new knowledge and practices. Participants acquire the mental and manual skills needed to master particular knowledge and technology and apply these skills to accomplishing an old set of goals in a new way.

The data presented here is part of a two-year study of four deaf families in Austin, Texas (families with deaf parents and deaf children).

Background

Signed language communities are unique language communities, with membership organized according to properties of an individual's perceptive system and not necessarily according to the speech community into which a person is born. The majority of deaf [2] people are born into hearing families who do not use sign language, and depending on the ideologies of the particular local setting, Deaf people may or may not learn a signed language during their childhood. However, most Deaf individuals consider sign language to be a crucial aspect of their identities as members of the Deaf community, and many who do not acquire sign language from their parents develop signing skills later in life. Most Deaf people are bilingual, fluent in a signed language and literate in the written form of a spoken language. Although deaf individuals cannot hear speech, many spend long hours acquiring the skill to produce it to some degree. In the American Deaf community deafness is defined not only in physiological terms but in cultural terms (Baker and Padden 1978), since this group includes the hearing children of deaf parents, i.e. hearing native signers. Among signers there is great variety in terms of hearing perception, language backgrounds, socialization to Deaf culture, native/late learners, styles, etc. Also unique in deaf communities is the daily reliance on interpreters, who can be informal language policy makers.

Signed languages were only recently recognized as full-fledged languages as complex as spoken languages. Research is now growing, and there is great interest not only in the diversity of the world's signed languages (see e.g. Edmondson and Karlsson 1990), but also in comparisons between signed and spoken languages that contribute to understandings about universal properties of language. Ethnologue lists 103 different signed languages throughout the world, and there are certainly more which have not been documented. American Sign Language (ASL) is closely related to French Sign Language, and quite different from British Sign Language, due to historical factors and early influences on the formalized teaching of sign language in the U.S. by the French. Because of their linguistic minority status, deaf people have often been isolated not only from hearing people (and from the status and special privileges accorded by the state to hearing minority language community members) but also from other Deaf people.

Signed languages are much more than manual systems. Signers communicate important grammatical, affective, and other information through facial expressions. Shape of the hands, orientation, location and movement are all important components of sign language communication. ASL also uses a system of classifier handshapes to refer to objects, surfaces, dimensions and shape. There are important functions served by non-manual expressions such as head movement, eye movement and specific facial expressions. A question, for example, can be signaled by a raising of the eyebrows, widened eyes, and a slight leaning forward of the head. Eyes are powerful turn-taking regulators. Fingerspelling is used for names or new terms, and also some borrowed terms, e.g. 'well' or 'cool,' and sometimes for emphasis. Involvement can be shown through affective displays, role playing and direct quotation (Mather 1991:239). Signed languages can communicate several things simultaneously that spoken languages do sequentially. The majority of signs are made in the neck or head area (this has changed over time). [3]

Since minority Deaf communities exist within larger hearing societies and most Deaf people are bilingual, this creates a language contact situation and supports forms of code mixing. English-like or contact signing is characterized by such features as English word order, fingerspelled English words, and inclusion of articles or modals. Language contact and language policies created by hearing professionals teaching literacy skills in educational institutions have in some countries resulted in a kind of signed language that orders signs according to the syntactic rules of a spoken language. Examples of this kind of language include Signing Exact English or Signed Swedish. Such 'languages' have been described as difficult for Deaf children to understand because they fail to take advantage of spatial resources (this point will be taken up further below), and the Swedish National Association of the Deaf (SDR) has stopped advocating the use of Signed Swedish (Bergman 1990) and now advocates only Swedish Sign Language.

The Deaf Community and Telephone Communication

Historically, long-distance communication technology has excluded deaf people. Although Alexander Graham Bell was initially interested in creating technology which would help deaf people learn to speak, the invention of the telephone left deaf people out of one of the most important communication changes of the last century. The new communicative power of the internet, however, has a strong visual component and relies primarily on the visual mode, both text and images rather than spoken sounds (many hearing people do not use the audio capabilities of their computers, for example). For the linguistic minority Deaf community, the internet is increasing connections across Deaf members who are geographically dispersed throughout the majority hearing community, as well as resulting in the development of new linguistic and sociolinguistic practices and increasing communication across the deaf and hearing communities.

The Deaf community did not have telephone technology until the 1960s, nearly one hundred years after the hearing community. Before the teletypewriter (TTY) became available to them, deaf people scheduled particular times to meet or they got in the car and drove around to see their friends. Deaf people joke that since everyone was out driving around, no one was at home to visit with. Deaf people had to request that hearing neighbors relay calls and messages for them. In 1965 a deaf physicist in Southern California, Robert Weitbrecht, developed an acoustic coupler which made telephonic communication possible for deaf people. This meant that a teletype machine could be connected to a telephone handset. Deaf people scrounged surplus teletype machines, repaired them, and new TTY communication practices developed. In the 1970s a number of companies began marketing new electronic devices which functioned like teletype machines, the kind Deaf people mostly use today (see Figure 1). The TTY, however, relies on typed spoken language, for example English (there is no agreed-upon orthographic representation for the visual elements of sign language), and allows for only one-way transmission at a time, which means that the interaction is unlike face-to-face communication or hearing telephone calls; no information about the recipient's response is available until the message has already been sent. TTY telephone calls studied by Mather (1991) were often characterized by multitopic turns, with as many as six topics in a single turn. Example 1 shows a TTY conversation between project members.

(1)  A TTY "conversation"


01     G:     HI   GA
02     E:     HI THIS IS ELIZABETH, IS ALL FINE WITH GENE THER4 AND THE COMPUTER GA
03     G:     UV THIS IS GENE  AND I JUST DOWNLEOADED NETMEETING  AND AM GETTING READY 
04            TO HAVE IT INSTALLED  GA
(....)
05     G:     SURE  AND DI00  DID YOU FIND THE D CDS  CD  GA
06     E:     NO NOT YET IM STILL AT HOME GA
07     G:     K CATCH YOU LATER  THANKS SK 
08     E:     SK

Particular abbreviations are conventionalized in TTY interactions. Turn taking mechanisms such as GA ('go ahead') and conversation ending signals such as SK ('stop key' or signing off) are used.

Figure 1

The TTY changed communication habits and other social habits in the Deaf community. People no longer had to drive around to communicate face-to-face, but could communicate visually through typed English messages. The socially transformative properties of communication technology are nothing new. Such transformations have been resisted successfully by some groups; for example, the Amish in Pennsylvania have banned telephones in the home since 1909 (Umble 1992) because telephones were considered conduits for negative influences from outside the community, thought to contribute to pride and individualism and to encourage informal women's information exchange networks or "gossip." Some deaf people have been worried about the intrusiveness of the new computer-mediated visual telephone technology. In Deaf town meetings in Austin in 1997 about the introduction of computer-mediated video interpreting service or video telephony, some Deaf people said they preferred communicating via TTY without a video image because of privacy issues. Being visually available transforms aspects of what you do before you say 'hello' or accept an incoming call. Now it may matter whether you had time to comb your hair. This has been a topic among deaf people in discussions characterizing the experience of the new video telephone technology. One videophone interactant said to another: "ALL DEAF USE THIS TECHNOLOGY SIGN WE CALL 'THREE-O'CLOCK MORNING HAIR-STICKING-UP CHAT'" [4] (his fingers are splayed upwards from the top of his head). Disembodied language productions such as TTY texts used by Deaf people enable certain freedoms from a set of interpretable symbolic resources (e.g. age, appearance). As interactions are re-embodied through video connections, the videophone mediates between interactants, and between interactants and other objects and ideas in a cultural environment, in new ways.

Transformative Technology

Using a small webcam (a simplified video camera for web interfaces) with a desktop computer and linking through the internet, Deaf individuals can now communicate visually with each other over everyday phone lines. This is revolutionary because it means they can use sign language to communicate across large distances in space. Computer-mediated video telephone technology was first introduced into some areas of the American Deaf community in a pilot video interpreting service for deaf callers to hearing individuals (Video Relay Interpreting or VRI). This service was piloted in Austin, Texas by Sprint in 1996 (see Keating 2000). It has been highly successful. Subsequently, webcams and software products have become available for consumer use, meaning that Deaf individuals now have access to a simultaneous, two-way telephone technology that is visual and supports visual communication. With video transmission over the phone line, a context more like face-to-face conversation can be achieved, and much more complex participation frameworks are possible. Figure 2 shows two deaf teenagers in a conversation with a friend. Notice the webcam on top of the CPU (to the right of the computer screen).

Figure 2 Figure 3

With the new technology, interactants must arrange themselves in order to see and be seen (to be participants as both audience and communicator). Figure 3 shows the screen view of a different interaction [5]. Each sees both the other and her/himself.

Developing New Skills for a New Communicative Environment

New communicative tools mediate and influence human behaviors and the innovation of new activities. This involves interesting trials and experimentation. Solutions are tried and amended with experience as the problem-solving process is restructured by the knowledge and strategy repertoire available. The process involves assimilation of specific knowledge about the objects and symbols the setting affords and the actions the work tasks require, a process that changes novices into experts (Scribner 1997). When first using the videophone, interactants have to learn how to align themselves and to figure out the best way to establish mutually coherent contexts for communication. They must work to accommodate existing practices to the new communicative space and develop procedures for achieving successful computer-mediated sign language communication. They must understand and orient to the web camera and computer's gaze parameters, and adapt their language to video transmission. They cannot immediately assume a shared perspective, since perspectives are technologically mediated. All conversational interactions are complex, highly structured events requiring moment-by-moment cooperation among participants. All conversationalists must produce a message that can be understood, and recipients must actively respond and provide feedback at key stages in the interaction; this cooperation cannot be taken for granted (Gumperz 1982). Participants in interactions have reciprocal expectations and rely on mutually shared visions of the social world (Schutz 1962). With computer-mediated video communication these reciprocities of viewpoint must be constructed anew and shared with others.

Trials in Collaboratively Developing Memories and Plans

Some aspects of the activity of communication re-mediated by the new technology we discuss include: communicative space, the body, virtual images of self and other, production of signs, clarity of signs, and construction of sign language messages. Some important skills for a virtual or technologically mediated environment include: manipulation of desktop real estate, manipulation of language features, manipulation of image transmission and body relations, creation of a radically different sign space, slowing signing speed, using more repetitions, using features of contact sign, code switching, and adjustment of deictic references.

Explanatory routines and understanding the constraints and possibilities of new artifacts

In Example 2 below Ben and Ned actively work to establish a coherent computer-mediated space for producing and interpreting signs. They use several techniques to negotiate the best technosocial environment for interaction over multiple trials. This entails providing information on reciprocal perspectives, expert-to-novice descriptions of how to manipulate the technology, trials at manipulating the technology, and creating new metalinguistic terms and explanatory routines. Sign language communication is used to create a coherent space for sign language use and understanding, for example, breaking a simultaneously viewed picture of "what's wrong" into a sequence of parts or moves to make it "right". Participants collaborate on describing what they see -- shaping the other's view and understanding of particular technological results of particular actions.

The focal range of the webcam restricts communicative space significantly compared to face-to-face signed communication, and the camera's visual field is far more limited than an actual interactant's would be. In example 2, Ben is teaching Ned how to understand the consequences of particular actions and settings and how to internalize the eye of the camera as the operative "reciprocal field," a new virtual reciprocity of perspectives -- a sense that the camera's visual field is different from, and supersedes, his own in terms of its importance in virtual sign language communication. Ben projects back the camera's gaze to Ned and tells him how to manipulate the on-screen production of his own image. Not only the relation between the hands and the signer's body is important, but also the relationship between the signer and the camera.


(2) Collaboratively building reciprocity of perspectives 
01 Ben: With his arms Ben represents the computer's desktop windows' horizontal 
        parameters. What he shows is the relation of the viewing frame to his 
        body, the body-to-camera relation is not optimal for communication. There
        is too much space above Ned's head and not enough of his torso is showing.
        (With his arms set in horizontal position, his dominant arm is above
        his head and across his face and his non-dominant arm is in front of
        his upper torso area) 
02      Ben moves both his arms downward.
03      CAMERA (classifier of camera, wrist moves downward) 
04      A-LITTLE-BIT
05 Ned: WHICH? YOURS OR MINE?  
06      Ned leans toward the computer 
        Ned tilts the camera downward
07 Ben: ASK ((unclear fingerspelling))

08 Ned: Ned readjusts the camera position
09 Ben: OK FINE STAY
      

Figure 4

Ben is trying to get Ned to move his camera because only his head is showing and sign space must include both head and torso (see Figure 4, top window). Ned, however, asks which camera must be moved: WHICH YOURS OR MINE? (line 05). Complex shifts between interpretive frames can occur within even the shortest utterances (Haviland 1996). Here we see some confusion over interpretive frame when Ned asks which perspective is engaged.

Although the webcam has a more restricted gaze than a person, there are ways that the webcam's properties can enable understanding. In one instance two interactants have trouble over the meaning of the word 'rollercoaster.' One is a deaf person from Germany who knows ASL, but is not familiar with this particular ASL term. To solve the problem in understanding, his conversational partner finds a picture of a rollercoaster on his computer, picks up the webcam and moves it in front of the computer screen so that the picture of the rollercoaster is shown to the other. He then puts the webcam back on top of the computer and they continue the sign conversation. Another way computer-mediated video enhances communication is in the way the webcam provides conversational partners with a unique resource--a mirror image of themselves--which is available throughout the conversation. The sender of the message can simultaneously serve as an audience for the message, and the ability to take the perspective of the other is considerably enhanced. One has a good replica of what the other is receiving (only in terms of the relation to the camera, however, not in terms of speed). Conversationalists show that they utilize the "virtual" self when they modify their signing, although they seem to depend far more on feedback from their co-participant.

In cases to be discussed below, conversationalists alter their production of signs and not just the relationship between their bodies, the camera and the computer environment.

Trials in Modifying Sign Location and Orientation

Meaning is a product of collaboration, and there is ongoing collaboration in the re-distribution of linguistic knowledge and practice in this new environment. Conversationalists must adapt to technical properties of computer-mediated image transmission. For signers one of the key accommodations is from three-dimensional space to two-dimensional space. Signers adjust their sign space and modify their signs. The new communication tool influences language behavior and new properties of language are created. When Bob signs a particular name sign, he turns his head to the side to show how the hand position is performed in relation to the nose from the side view. Later, when he signs THREE, using his fingers, he turns his hand so the thumb is clearly seen. Before he turns it, the thumb is hidden and the sign looks like TWO. Another person signed MEXICO with an upward movement, rather than a movement toward the camera, so that the movement or change in spatial relationship could be clearly seen in two-dimensional space.

Other examples of signs being altered for computer mediated communication are the sign 'baby' (usually produced slightly above or at waist level) produced with the hands almost at chin level, the sign NOW, usually signed at chest level, signed at shoulder height, and the sign for VALENTINE moved upwards to the shoulder from the chest area. The sign SON, usually signed with contact on the opposite hand at waist level, is signed with contact on the biceps and shoulder raised. Figure 5 shows the sign PROBLEM (usually signed at chest level directly in front of the speaker's body) being signed far outside the usual sign space, in front of the webcam, in fact in another person's sign space, in order to position the sign for optimal transmission by the webcam.

Figure 5

Signers show multiple ways to adjust their sign production in order to maximize the communicative potential of the computer-mediated signing space. They experiment and learn that the image sent can be manipulated in various ways. Some lean back to make a larger area of the body available, leaning away from the screen, for example, to sign SORRY and DIE, both made in the chest area. One of the study participants signed PAGER by standing up so that his waistline (where PAGER is signed) was visible, not his head and torso. Participants also utilize the properties of the technology to create larger, and therefore clearer, signs. They move their signing hands closer to the camera for emphasis. This means, for example, in the case of a YES sign made near the camera, that the YES is made much bigger and more forcefully. Fingerspelling is frequently produced with the hand very close to the webcam. Three participants signed GOOD with the two-handed citation form (instead of one hand), which resulted in a larger sign. As in face-to-face sign conversation, conversationalists can produce 'continuers,' which are a conventional way to show interest and understanding in a conversation. There are many examples of this, e.g. using YES, OH-I-SEE, etc.

Not all modifications work. In one case, a participant signed QUESTION using the first knuckle movement, but turned it 90 degrees to the side (presumably to make the sign clear in two dimensional space). However, the new orientation made the sign so different that his co-participant did not understand his meaning. He then signed the larger, more iconic form (tracing a question mark path) and she understood.

Participants slow down their signing, and are asked by their interlocutors to slow down, and their signs are distinct and fully articulated. Differences can be seen between the same signers' off-camera signs and their productions for the webcam or "on-screen" signing. Off camera, each sign is less emphasized, produced faster, and sign space is less restricted. On camera, signers sometimes hold their final signs, and do not return their hands to resting position. This may be a result of an uncertainty over whether the transmission has arrived at the interlocutor's location undistorted.

There is frequent repetition of signs, phrases, and concepts. In example 3 KNOW is repeated 14 times, and ME is repeated 11 times by a novice participant.


(3) repetition

01   Terri:  (to Frank) WOW GET BABY
02          ((looks at her mother)) I SHOW BABY ((beckoning gesture))
03   Frank: (to webcam) ((waves to get attention))  YOU KNOW YOU KNOW YOU KNOW 
04          YOU KNOW YOU KNOW ME ME ME ME ME?  KNOW KNOW KNOW KNOW ME ME?  
05          YOU KNOW KNOW KNOW KNOW KNOW ME ME ME ME?   
06   Terri: (to Frank)  YOU FUNNY.  SHE NOT KNOW YOU.  
07          SHE KNOW YOUR UNCLE J-R AND AUNT 'J-on-palm'[6]

Repetition often involves reformulation. The same idea is repeated in different forms, or signs are enlarged. In one case, a signer repeatedly asks her conversational partner to move back, and with each repetition she expands the space used for the sign. In example 4 we see two participants repeat "who's that?" to their on-screen interlocutor in several different ways.


(4) Two teenagers in Austin are talking to a friend in Indiana, and want to 
know who is with her.

01   Jeff:   WHO WHO WHO WHO THERE WHO WHO WHO THERE?
02   Karen: WHO OTHER PERSON WITH YOU QUESTION?
             _________wh[7]
03   Jeff:   WHO THAT? 
04   Karen: QUESTION?
05   Karen: WHO WHO WHO THERE? 
     ((here 'who' is made with a variant sign for 'who'))
             [
06   Jeff:   WHO WHO WHO THERE?

Trials with sequence based on spatially based meaning making

Making oneself understood through a computer-mediated environment involves not only repetition and alteration of sign space relationships, but, in the families we studied, also the use of different varieties of language. As mentioned previously, there is a wide range of language styles and forms in the American Deaf community. Members of the Deaf community are used to adjusting their language to a wide range of addressees, from those whose signing is very English-like in structure to those whose signing is very ASL-like. Although we found the families in our study adjusting their sign language to more English-like grammatical features, this was not because they were conversing with those whose language skills were English-like, since all the study participants were fluent ASL signers (one of the authors of this paper has Deaf parents and is a native signer of ASL). We attribute an unexpectedly high use of English-like sign in the computer-mediated interactions in our study to strategies of adjustment to properties of the new communication technology, including problems with the clarity of transmission of images and altered aspects of space.

English-like or contact signing is characterized by such features as English word order, fingerspelled English words, and inclusion of articles or modals (see Example 5). In example 5, Rose signs to her friend Teri (a person who highly favors ASL) a question containing a fingerspelled English modal ('did') and the signed form of the English preposition 'to' in her formulation of the question "D-I-D YOU GO TO P-T-A" [8] (line 02). In fingerspelling 'D-I-D' Rose puts her hand close to the webcam. Teri is busy with her infant (Rose says 'I didn't know you were nursing the baby'), and Rose repeats her question about the PTA meeting, this time using a more ASL-like construction: YOU GO P-T-A (with eyebrow grammar to signal a yes/no question), line 03. This shows the range of linguistic competence and also the flexibility of study participants in experimenting with language forms and structure.


(5) underline indicates English-like grammatical constructions

01 Rose: SORRY INTERRUPT YOU I NOT-KNOW THAT YOU NURSE BABY D-  
02       (laughs) YES. D-I-D YOU GO TO P-T-A UNDERSTAND ME?
03       YOU GO P-T-A? D- NOT YET?
      

Lucas and Valli (1992) were surprised to see the use of English-like sign in conversations between ASL signers in an experimental interview situation they conducted (1992:63), and they attributed this to an association in the Deaf community between English-like signing and formality [9] and accommodation (in our case we suggest that the accommodation is not to another speaker but to another medium). Choice of language features regularly constructs differences in context, just as context can shape the choice of language features (Duranti and Goodwin 1992). Using a more English-like sign is a common way to signal a register or context shift in the ASL community (see e.g. Stokoe 1969), and is a resource for all Deaf signers, even those who commonly use only ASL (see also Mather 1991:138).

In Signed English, signs are ordered according to the syntactic rules of a spoken language, and this ordering fails to take advantage of spatial resources in the same way as ASL, where movement is "highly productive," conveying many aspects of meaning including speed and quantity (Valli and Lucas 1998:86) as well as subject-object agreement. In ASL, facial grammar and head position also convey important grammatical information, such as the topic or object of a sentence, question type, and negation; such non-manual signals organize sentences into different types. However, a forward head tilt can be difficult to perceive in two-dimensional space, and eyebrow raises, lip position, or squinting of the eyes can be difficult to perceive if images are slightly distorted or unclear. Certain types of movement can disrupt the quality of image transmission in computer-mediated communication, particularly over regular household telephone lines.

New adjustments and trials can be understood as preliminary attempts to formulate agreed upon practices for communication in the new medium. These are based on conventional strategies of sign language communication, including altering communication means in order to signal a specific purpose or context. Social activities shape local understandings and conceptions about space (see for example, Hanks 1990; Choi and Bowerman 1991; Brown and Levinson 1993; Duranti 1994; Senft 1997).

Confusion of non-shared space: Trials with Referential Pointing

An important area where participants in our study innovated was the use of deictics. The term deixis is borrowed from the Greek word for pointing or indicating, since these linguistic items call upon the hearer to use his powers of observation and establish a real connection between his mind and the object (Peirce 1940:110). Common types of deictics are person markers such as I and you, place markers such as here and there, and time indicators such as now and then. These forms have no meaning apart from the context in which they are uttered, since I or you can refer to different people at different points in a stretch of talk. The meanings of here, there, or now must be ascertained from contextual cues, i.e. the physical position in space of the speaker or the time of the utterance. The interpretation of these forms is "intrinsically bound up with the cultural distinctions and practices" (Hanks 1996:228), and the rules of use and interpretation become reflected in the structure of the linguistic code. In ASL, person deixis is indicated by pointing, for example to the self or others. One of the most interesting aspects of computer-mediated video sign language is how signers must renegotiate conventions of deictic use. In computer-mediated sign language communication among the participants in this study, rules of referential pointing are changing.

Figure 6

In two-dimensional space, interlocutors orient to a modified deictic field, redefining deictic relationships not in terms of their position vis-à-vis their interlocutor on the screen, but in terms of the position of the webcam and how their sign, relative to what they are pointing at, will be reproduced on the screen. For example, one participant raises her thumb and begins to point directly behind her (at her husband), but then turns her hand so that her thumb is pointing directly to the side, where her husband is in the two-dimensional world of the screen. In the computer-mediated image in the video window she is pointing at her husband, when actually he is at least one foot behind her. In other examples, signers point directly at the webcam when signing YOU, rather than pointing at the image of their interlocutor on the screen. Signers look at the screen but then point at the webcam, and sometimes they both look at and point to the webcam. One of the participants tested pointings in space by placing her thumbs in different positions as she repeated her message.

In Figure 6, two interactants simultaneously produce two different versions of THERE, one pointing to her right with her thumb and the other pointing straight ahead with his forefinger. In lines 05 and 06 (Example 6), both Karen and Jeff are asking the identity of someone in their interlocutor's image window. They each render the term 'there' differently (see Figure 6, where Jeff, in the foreground, points ahead, while Karen points her thumb to her right). Previously, they had each tried two different ways to ask the identity. The importance of lines 05 and 06 lies in the way Jeff and Karen show that they are experimenting with new ways of representing the question 'who's there with you.'


(6) experiments with 'there'

03   Jeff:   WHO THAT? 
04   Karen: QUESTION?
05   Karen: WHO WHO WHO THERE? 
     ((here 'who' is made with a variant sign for 'who'))
             [
06   Jeff:   WHO WHO WHO THERE?

Signers such as these are in the process of developing new ways to signify meanings through the symbolic forms of language. With the 'mirror image' or representation of their own sign production available through computer mediated communication, they can judge the effect of certain relationships in two-dimensional space.

Learning Communicative Competence: Shaping the next generation

Members of language communities learn appropriate language use across contexts from everyday interactions. Young children are socialized into the use of new technologies and can expand the creative potentials of the technology for communication across borders of space, time, and language understanding. Explicit socialization of young children in our examples includes: orienting them to the computer screen in the environment, showing them how to identify oneself and others, establishing social relationships, opening and closing interactions (hi and bye), and identifying images on the screen as people copresent and available for interaction.

In Figures 7 to 10 (video stills), Teri kneels on the floor to bring her image into camera range. Her son is in the chair in the 'coparticipant position' for computer-mediated sign language interaction as she kneels on the floor beside him. She models looking towards the screen and she points to orient her child. She supports her child's arm as he mimics her point. She models smiling. She stands and "explains" what her child is seeing: 'boy'. The moving image is a person to be greeted, and Teri waves at the screen. She observes her son's communicative production (by looking straight at him, not at the image he is producing on the screen). The grandmother looks on from the periphery. They make sense of the environment in which they live and orient themselves to each other and to objects. With new communicative technologies, frameworks for communication can increase in complexity, and interactants must develop new skills and introduce these skills to novice users.

 
Figure 7   Figure 8
Figure 9   Figure 10

Socialization also takes place through institutional means. Some recommendations have been formalized on Gallaudet University's web site: "slow down a bit on fingerspelling or unusual signs, set up your location for chatting so that there is good light on you, if possible, have a plain wall as background, avoid having anyone walk behind you while you are chatting as this will slow down the video, avoid Internet "rush hours" such as 2-4 p.m., 7-10 p.m. weekdays."

New Participation Frameworks and Organization of Communicative Activities

With new communicative technologies, participation frameworks can increase in complexity. For example, signers using computer-mediated communication can have more than one participation framework active: they can keep their signs out of camera range if they want to take part in a "side" conversation with participants in real space, excluding those in virtual space. There are thus 'front' and 'back' zones (Goffman 1974). A formerly dyadic long-distance conversation by TTY can now be much more complex, involving three people (Figure 11) and multiple generations.

Figure 11

New participation frameworks can involve images of people as well as text messages. In some cases two copresent interlocutors can converse through the onscreen image, as when a woman signs with both her husband and their friend on the computer screen, although her husband is actually behind her. The lack of reliance on the constraints of face-to-face physical space creates new opportunities. Since only two video signing spaces are currently available (one-to-one pictures), others can and do participate via text messaging at the same time. As online instructions from Gallaudet University indicate, this is a popular new form of group chat: "The video chat itself is limited to one to one [one dyad], not a group, however, you can have group text chat using "line by line" chat within NetMeeting". One-to-one really means one location to one location; as Figure 11 and many of our interactions show, multiple members of the Deaf families can participate in the one camera-frame "window" available.

The children in one of the families involved in the research project have integrated the video telephone into their lives and everyday communicative practices in transformative ways. Instead of watching television in the evening to the degree they once did, the children have apportioned among themselves a designated time to chat with friends via sign language and the computer. They switch between languages and modalities, between sign language via video and English via typed "instant messaging." The video phone has not completely replaced the TTY, text messaging, or email; the introduction of new technologies often does not mean the total abandonment of other tools but entails a process of incorporation, involving and influencing collaborative and complex forms of human achievement embedded within dynamic and changing cultural systems.

Conclusion

Human activity has always entailed the use of various artifacts (e.g. instruments, signs, procedures, machines), which are created and transformed during the development of activities and become associated with other aspects of cultural practice, in other words become a "historical residue of that development" (Kuutti 1994:26, see also Engestrom 1991). Tools mediate human activity, as for example the tying of a knot in a handkerchief can stimulate remembering. Language resources can also shape memory strategies (Levinson 1996). Participation in activities, both individually and in collaboration with others, shapes consciousness, as humans develop memories and plans around recurrent practices. Situations in which new technologies are introduced give us a chance to study these processes. Participants are actively and explicitly negotiating new language forms and sociolinguistic practices, reciprocity of perspectives, appropriate conduct, and the production and interpretation of act sequences. They use old tools (e.g. language) to configure new ones and, as in the cases examined here, in the process reconfigure the old tools, expanding relations between form and meaning and building new systems of mediation and repertoires which are then available to mediate further innovations.

This paper has discussed some aspects of new communicative technology and the creation of new technologically mediated interactions spanning time and space, as they impact signed language and communication. An important part of the study of language communities is the way new technologies alter communication practices, transcending conventional boundaries and reshaping relationships and ideas.

References

Austin, J. L. (1962). How to Do Things With Words. New York: Oxford University Press.

Baker, C., & Padden, C. (1978). Focusing on the Nonmanual Components of American Sign Language. In P. Siple (Ed.), Understanding Language through Sign Language Research (pp. 27-57). New York: Academic Press.

Bergman, B., & Wallin, L. (1990). Sign Language Research and the Deaf Community. In S. Prillwitz & T. Vollhaber (Eds.), Sign Language Research and Application. Hamburg: Signum.

Brown, P., & Levinson, S. C. (1993). 'Uphill' and 'downhill' in Tzeltal. Journal of Linguistic Anthropology, 3, 46-74.

Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121.

Duranti, A. (1994). From Grammar to Politics: Linguistic Anthropology in a Western Samoan Village. Berkeley: University of California Press.

Duranti, A., & Goodwin, C. (Eds.). (1992). Rethinking Context: Language as an Interactive Phenomenon. Cambridge: Cambridge University Press.

Edmondson, W. H., & Karlsson, F. (1990). SLR '87 Papers from the Fourth International Symposium of Sign Language Research. Hamburg: SIGNUM-Press.

Engestrom, Y. (1991). Developmental Work Research: Reconstructing Expertise through Expansive Learning. In M. I. Nurminen & G. R. S. Weir (Eds.), Human Jobs and Computer Interfaces. Amsterdam: North-Holland.

Escobar, A. (1994). Welcome to Cyberia: Notes on the Anthropology of Cyberculture. Current Anthropology, 35, 211-231.

Goffman, E. (1974). Frame Analysis: An Essay on the Organization of Experience. New York: Harper and Row.

Gumperz, J. (1982). Discourse Strategies. Cambridge: Cambridge University Press.

Gumperz, J., & Hymes, D. (1972). Directions in Sociolinguistics: The Ethnography of Communication. New York: Basil Blackwell.

Hanks, W. (1990). Referential Practice. Chicago: University of Chicago Press.

Hanks, W. (1996). Language form and communicative practices. In J. Gumperz & S. C. Levinson (Eds.), Rethinking Linguistic Relativity (pp. 232-270). Cambridge: Cambridge University Press.

Haviland, J. (1996). Projections, transpositions, and relativity. In J. Gumperz & S. C. Levinson (Eds.), Rethinking Linguistic Relativity (pp. 271-323). Cambridge: Cambridge University Press.

Keating, E. (2000). How Culture and Technology Together Shape New Communicative Practices: Investigating Interactions Between Deaf and Hearing Callers with Computer-mediated Videotelephone. Texas Linguistic Forum, Proceedings of the Seventh Annual Symposium About Language and Society - Austin, 99-116.

Kuutti, K. (1994). Activity Theory as a Potential Framework for Human-Computer Interaction Research. In B. A. Nardi (Ed.), Context and Consciousness: Activity Theory and Human-Computer Interaction. Boston: MIT Press.

Leont'ev, A. N. (1978). Activity, Consciousness, and Personality. Englewood Cliffs, NJ: Prentice-Hall.

Levinson, S. C. (1996). Relativity in spatial conception and description. In J. Gumperz & S. C. Levinson (Eds.), Rethinking linguistic relativity (pp. 177-202). Cambridge: Cambridge University Press.

Levy, P. (1997). Cyberculture. Paris: Editions Odile Jacob.

Lucas, C., & Valli, C. (1992). Language contact in the American deaf community. San Diego, CA: Academic Press.

Mather, S. (1991). The Discourse Marker OH in Typed Telephone Conversations Among Deaf Typists. Unpublished PhD, Georgetown University.

Mead, G. H. (1934). Mind, Self and Society from the Standpoint of a Social Behaviorist. Chicago: University of Chicago Press.

Mitchell, W. J. (1995). City of Bits: Space, Place and the Infobahn. Cambridge, MA: MIT Press.

Peirce, C. S. (1940). Philosophical Writings. New York: Dover Publications.

Poster, M. (1984). Foucault, Marxism and History. Cambridge: Polity Press.

Robins, K., & Webster, F. (1999). Times of the Technoculture. New York: Routledge.

Rogoff, B. (1990). Apprenticeship in Thinking: Cognitive Development in Social Context. New York: Oxford University Press.

Schutz, A. (1962). Collected Papers. The Hague: Martinus Nijhoff.

Scribner, S. (1997). Mind in Action: A Functional approach to thinking. In M. Cole, Y. Engestrom & O. Vasquez (Eds.), Mind, Culture, and Activity. New York: Cambridge University Press.

Searle, J. (1969). Speech Acts. Cambridge: Cambridge University Press.

Senft, G. (1997). Referring to Space. Oxford: Clarendon Press.

Stokoe, W. (1969). Sign language diglossia. Studies in Linguistics, 21, 27-41.

Tomasello, M. (1999). The Cultural Origins of Human Cognition. Cambridge: Harvard University Press.

Umble, D. Z. (1992). The Amish and the telephone: resistance and reconstruction. In R. Silverstone & E. Hirsch (Eds.), Consuming Technologies: Media and information in domestic spaces (pp. 183-194). New York: Routledge.

Valli, C., & Lucas, C. (1998). Linguistics of American Sign Language. Washington, DC: Gallaudet University Press.

Vygotsky, L. (1978). Mind in Society. Cambridge, MA: Harvard University Press.

Wertsch, J. (1991). Voices of the Mind: A Sociocultural Approach to Mediated Action. Cambridge: Harvard University Press.


Notes

[1] We gratefully acknowledge the invaluable assistance of Chris Moreland on this project. Mirus and Moreland are Deaf, Keating is hearing.

[2] It is customary to capitalize 'deaf' when referring to Deaf culture, whose members may include those who are hearing (e.g. hearing children of deaf parents), while lower case indicates those individuals with hearing loss.

[3] For more on sign language see Valli and Lucas 1998, Klima and Bellugi 1979.

[4] It is customary to reproduce ASL by using capitalized English glosses.

[5] The pictured system in Frame 3 is from a previous study by the authors, but shows the two image windows as they are available to interactants in the study being discussed.

[6] This is a name sign, a special sign used to uniquely refer to a person.

[7] 'wh' with a line over it indicates the facial grammar for a wh- question.

[8] PTA is an acronym for Parent-Teacher Association, part of the way U.S. elementary schools organize parent participation.

[9] English and English-like signing can be associated with formal and educational contexts by some members of the Deaf community.