
What You Don't Know About Language Can Hurt


Interpreters work with languages every day of their professional lives, yet they may never take the time to consider what language is, or how gaps in that knowledge may affect the people they interpret for. The amount of linguistics interpreters are required to know varies from ITP to ITP (Interpreter Training Program), and some interpreters never attended an ITP at all. Still, all interpreters are exposed to some linguistics simply by the nature of the work and by the requirements of certification maintenance, which oblige interpreters to continue participating in educational activities. The linguistics interpreters pick up in workshops and at conferences is evident in the words they use in everyday practice. Yet even as linguistic knowledge is disseminated and handed down within the interpreting community, misinformation, errors, and outdated information are spread and handed down along with it. Even the common label interpreters use for themselves shows a limited linguistic understanding. "I'm a Sign Language Interpreter" is what most interpreters call themselves. To interpret you need to use two languages; where in the title "Sign Language Interpreter" is the second language? Saying "I'm an interpreter" is clear, and "I'm an ASL / English interpreter" is clearer and more precise because it names both languages. Some may say, "I interpret for the Deaf." Does that mean they don't interpret what Deaf people sign, and only interpret what people speaking English say? A better statement is, "I interpret between Deaf people and people who can hear." Deaf people and interpreters are the only ones who use the term "Hearing"; the average person who can hear never labels himself or herself as Hearing. It may seem like picking nits to discuss how interpreters label themselves and hearing people, but interpreters have become very sensitive to the labels Deaf and hard of hearing people prefer and never use terms like "hearing impaired." It is similarly important to label themselves, and those who can hear, with the same care. Using clear, sensitive, and understandable labels is one more step in representing the field of ASL / English interpreting as professional.

The purpose behind this article is to share some enlightening experiences that challenged the author's traditional understanding of language. The enlightening experiences came through three professional educators: Professor Michael D. Gordin, Professor Joshua Timothy Katz, and Joseph Wheeler (creator of the "ASL That" Facebook page). Professors Gordin and Katz teach a college course called "Imagined Languages," and at the beginning of the course they discuss what at first blush seem to be two deceptively simple questions: What is language? What is a language? The discussion of those two questions leads to a very complex, nuanced look at what language really is. On "ASL That," Joseph Wheeler encourages Deaf people to discuss the state and development of ASL. One question he posed began a discussion of why the natural language of Deaf people in America is called ASL. Another asked how Deaf people can take back ASL and purge it of the pollution hearing people have forced upon it (for example, removing initializations that came from SEE).

What is Language?
Why is it important for interpreters to contemplate what language is? During routine work interpreters are asked questions about language and are called upon to give information about other people's use of language. People who can hear will ask interpreters, when no Deaf people are around, whether ASL is a real language, and interpreters need to be able to respond with more than an affirmative "yes." To adequately discuss language, and the languages interpreters use, it is important to know what language is. Are music, art, Pig Latin, gestures, mime, computer languages, Elvish, Klingon, and other invented languages true languages? If you ask a group of 100 interpreters to sort a list of ten things like the list just given into two categories, language / not language, the results will show a wide variety of beliefs about what a real language is. If you did the same experiment with a group of linguists you would still get a variety of responses. One example that demonstrates a lack of understanding of what language is appears when interpreters use phrases like "a reverse interpreting job" or "a voicing job." Not only are these phrases linguistically inaccurate, they are confusing to non-interpreters. When people interpret, they interpret from one language to another. What does a reverse interpreting job mean? It could easily be thought to mean taking an English message, interpreting it into ASL, and then reversing the same message back into English (which is a useful skill-building exercise), but that is not what it means in interpreter speak. What language is "voice"? Interpreters know when they use these phrases that they mean interpreting into English, but the general public has no awareness of that meaning. The interpreter vernacular of reversing and voicing was also created in the past, when there was limited linguistic understanding within the field. Now that the interpreting profession has matured, it is better to use accurate terms. Some may argue that interpreter vernacular like "voicing" can be used among interpreters, but caution should be taken even there, because habits have a way of spilling over into inappropriate situations, as well as initiating new interpreters into bad habits. A phrase like "I had a job primarily interpreting into English" is a more accurate way to say "I had a voicing job." Interpreters know "voice" is not a language, and when discussing ASL and English interpreters are confident both are languages. What counts as "proper" ASL or "proper" English is not always clear, and interpreters can have heated disagreements over what is and is not "proper." What ASL is becomes an even more difficult question when dealing with the infiltration of English into the ASL that Deaf and HoH people use.
One unique aspect of the life of ASL / English interpreters is how the education system for the Deaf in the USA has affected what interpreters are required to do. That education system decided to exclude the use of ASL and focus on oral communication and MCE (Manually Coded English) systems. MCEs include the Rochester Method (1886), the use of fingerspelling and lipreading as the only form of communication, developed at the Rochester School for the Deaf in New York; SEE1 (Seeing Essential English), developed in 1966 by David Anthony; SEE2 (Signing Exact English), developed in 1972 by Gerilee Gustason; LOVE, developed in 1971 by Dennis Wampler; CASE, a more recent creation blending ASL conceptual signs and English syntax (sometimes called Sign Supported English or speech / sim-com); and Cued Speech: "CUE makes all the phonemes of speech visible by using eight hand-shapes in four positions near the mouth in combination with the lip shapes and articulation movements of speech. Dr. R. Orin Cornett completed the invention of CUE speech in 1966" (according to the National Cued Speech Association). Interpreters are well aware of the different systems mentioned above; where they get into trouble is in describing them. Interpreters often speak of MCE, Signed English, PSE, etc. as the language a deaf / hard of hearing person is using. An important question to discuss is: are the MCEs invented by people who can hear true languages?
Interpreters struggle to express, in terms of language, what a Deaf or HoH person uses. In that struggle interpreters have tried to use linguistic terms like contact language, pidgin, and creole without a clear understanding of the meaning of those terms or of the reality of the way Deaf / HoH people use language. The confusion stems from a lack of understanding of how bilingual people can have aspects of one language show up in their other language. Many interpreters don't consider that Deaf people who know and use ASL and have been educated in the USA's English-only or English-priority system are actually bilingual. The education of the Deaf in the USA is primarily focused on teaching English; rarely do educational institutions for the Deaf have courses teaching ASL. With all this emphasis on the language of English and no time at all spent on the language of ASL, it is no wonder that people educated in such an environment blend the two languages. When aspects of one language show up in the other language among bilinguals, it is called code switching. Deaf people code switch from ASL to English for many reasons. Due to the oppression of ASL, Deaf people code switch into English to prove their intelligence to the person(s) they are communicating with. Deaf people who have had little exposure to ASL will code switch to English because it may be the only way they know to express a concept they want to communicate. Deaf people will also code switch to English when the person they are communicating with isn't fluent in ASL, when ASL isn't that person's first language, or when an interpreter is working for them and they want a specific English word used in the interpretation, among other reasons. The code switching phenomenon and the pervasiveness of English among Deaf people have caused interpreters to attempt to describe what is happening. When interpreters tried to use linguistic concepts like contact language, pidgin, and creole to describe MCEs, they didn't understand that MCEs are coding systems and that Deaf people are bilingual. There is a difference between a language and a coding system. All research and scholarly writing on the subject of MCEs clearly states that they are systems to represent English; they are not a separate language. Languages have a community of users who naturally cause the language to develop over time. MCEs do not have communities of users who cause them to develop over time, so it is better to describe MCEs as created languages. MCEs have creators and proponents who are people who can hear. The purveyors of MCEs have a vested interest in their specific MCE being perpetuated, and they are perpetuated only in the USA's educational system. The greatest complaints offered against MCEs are their inability to develop naturally as unique circumstances arise and their slow, cumbersome production. The inability to adapt is due to the nature of a system that has a creator and proponents who make decisions (language by committee) rather than a group of native speakers. Since these systems have been so pervasive in the education of the Deaf, it is no wonder that aspects of MCE systems have infiltrated ASL, resulting in a wide variety of ways Deaf / HoH people express themselves.
When thinking about what language is, interpreters will often conflate orthography (written form) with language. Over the history of humanity there have been thousands upon thousands of languages. Many languages became extinct and were never documented, so the exact number of languages humans have created over their existence is unknown. According to Ethnologue (2014) there are an estimated 7,105 living languages in use, and 3,570 have a written form. Most extinct languages from antiquity have no orthography. There are several examples of languages that actually have more than one orthography: "Serbian, which has two alphabets (Latin and Cyrillic, both equally authoritative), Greek has been written in its alphabet and Linear B, Hebrew has been written in both Hebrew and Arabic letters, and Turkish has been written, at different points in its history and by different communities, in Arabic, Greek, and Roman letters" (Gordin). A language does not need an orthography to be a language, just as ASL currently has no universally accepted orthography (though several have been attempted). After spending twelve or more years in the USA education system being taught to focus on reading and writing English, with brief amounts of time dedicated to speaking and to comprehension of the spoken word, it is easy to think that the orthography is the true language. Don't let the dogma of the modern educational system cause you to devalue any language because of its orthography or lack thereof. Stating that orthography is not language does not impugn its value in the modern day. For clarity it is best to keep orthography and its uses / values separate from the concept of natural language; after all, English orthography is an artificial construct. Although linguists cannot agree upon a clear, succinct, and accurate definition of what language is, some general properties are agreed upon: a language will have a community of users, it will be used to convey information, and it will develop over time. ASL and spoken English should be called natural languages, all MCEs should be called created languages, and any system to represent a language in some form of writing should be called an orthography.

What makes a language distinct from another language?

At this point some definitions might help clarify what makes a language distinct; however, definitions will not draw a clear delineation, because none exists. A natural language is largely agreed to be any language created and used by a group of people that is dynamic and is passed on to future generations. There are no clear markers delineating when a language branches off and becomes a new language. Languages have accents (different pronunciations) and dialects (different pronunciations and different lexical items), but there is no clear delineation of when a distinct accent occurs, when an accent becomes a dialect, or when a dialect becomes a different language. Consider the differences among speakers of English. A person speaking a very distinct "accent / dialect" in England may be mutually unintelligible with a person speaking a very distinct "accent / dialect" in India, yet the convention is that they are both speaking the same language. In comparison, a speaker of English and a speaker of German may be able to understand each other much better than the people from England and India, but the convention states that English and German are separate languages (albeit closely related ones). Contact language is a category of communication that includes pidgins and creoles; it arises when two individuals or groups who speak different languages are in contact for a short period, periodically, or for extended periods of time and need to communicate with each other. A pidgin usually arises when people using two languages pick parts of each language and put them together to communicate, usually for a specific purpose such as trade. A creole arises when two language communities have created a shared common language containing elements of each language; it is usually spoken by subsequent generations and continues to develop. There are no clear markers delineating when a pidgin becomes a creole. There is also the koine, a dialect or language of a region that has become the common or standard language of a larger area (Spears, Arthur K., and Donald Winford. The Structure and Status of Pidgins and Creoles: Including Selected Papers from the Meetings of the Society for Pidgin and Creole Linguistics. Amsterdam: J. Benjamins, 1997. Print.). It is easy to see why interpreters erroneously thought that Signed English was a contact language (a pidgin or creole), given the mixed nature of SE; however, Deaf people were not involved in the development of MCEs. From the definitions of language, pidgin, and creole, and from the history of MCEs, it appears that MCEs are neither natural languages nor contact languages, largely because no native ASL user or group of users was or is involved in them. Interpreters may ask why so many Deaf / HoH people use or incorporate aspects of MCE in their communication. The answer is easy to discern from the history of Deaf education: ninety (90) percent of all Deaf / HoH people are born to hearing parents who are not native users of ASL, and most educational institutions do not teach ASL and use only an MCE to educate the Deaf. It is actually more surprising that ASL still exists at all under the linguistically oppressive conditions the Deaf have faced over the past 150 years. It is the proposition of this paper to encourage interpreters to use the term "constructed language" when discussing any MCE system. In order to use the term constructed language it is first necessary to discuss what a constructed language is.
Constructed languages are languages invented and/or developed by individuals or small groups of people and used for a variety of purposes. Some constructed languages were developed for the purpose of creating a lingua franca (a universal language); others were created for literary purposes (the Elvish languages in The Lord of the Rings). Constructed languages, in comparison to natural languages, have a significant drawback in that they do not easily and readily evolve when confronted with the communication needs of their speakers. Speakers must wait for some authority to decide how to address the need. There is also the drawback of the limited linguistic experience that one person, or the small group of people who develop a language, can have. They will inevitably fail to incorporate something that is needed in everyday use of a language, or some special vocabulary needed to discuss a specialized field. Some examples of constructed languages are Solresol, Volapük, Esperanto, Ro, Occidental, Basic English, Interlingue, Lingwa de Planeta, Quenya, Elvish, Klingon, and many more. Among constructed languages Esperanto is an exception: although it was created for a purpose (a lingua franca) by an individual, it now has a community of users who expand and develop the lexicon and syntax of the language. If there were a community of Deaf people in the USA who were taught a specific MCE system in school and then continued to use it, developing its lexicon and, very importantly, its syntax, then it would have undergone the same transition that Esperanto underwent in becoming a naturalized language. If the power and authority over the language is still vested in the creator or the creator's designated authority (for example, a school system) and the language is not allowed to be developed by its users, as is the case with all MCEs, then what is being used is not a natural language but a created one. One main purpose of all the MCEs is to keep the syntax of English; therefore a community of users would not be free to naturally develop the syntax of the MCE they use. It is easy for interpreters to mistake the indoctrinated use of an MCE by Deaf adults for a language community, but the important distinction is who has control over the natural evolution and development of the language, and whether that evolution is allowed at all.

The importance of describing Deaf people’s communication

Interpreters will be asked what language the Deaf / HoH participants use, whether by hiring entities, referral agencies, or other interpreters, and they often struggle with how to describe the language the participants are using or prefer to use. Interpreters will say things like "strong ASL," "VERY ASL," "PSE," "Signed English," "MLP" (minimal language proficiency), "lazy signing," and so on. Some inherent biases creep in when an interpreter chooses one of these labels. What is strong ASL? What does VERY ASL mean? It may be the interpreter's own inexperience or discomfort with ASL that is the real meaning behind those statements. A better description may be "fluent" or "articulate" in ASL. The interpreter may be trying to express that the Deaf person does very little or no code switching into English when signing. Some interpreters conflate the amount of fingerspelling in a Deaf person's language with their proficiency in ASL. American Sign Language, as the first word states, is an American (more precisely, a United States of America) natural language. Deaf people, by being in the USA, are exposed visually to the orthography of English every day. The decision to incorporate something from the USA's visual cultural landscape, like the Nike swoosh or the Verizon check mark, into ASL is perfectly normal development for a visual natural language, and the fact that signing the Nike swoosh uses a G hand-shape while the Verizon check mark uses a V hand-shape does not make one "more ASL" than the other. Not everything English is antithetical to ASL. ASL is an American language, and English is pervasive in American culture. The Spanish language is also an important part of American culture; many Spanish words are everyday expressions for American English speakers. As Spanish becomes a greater presence in American culture, it would be normal to see elements of Spanish appear in ASL in the future, and this would not stop ASL from being ASL or turn it into Spanish. Languages commonly borrow lexical items and phrases from other languages, for example "déjà vu." There is a difference between native users choosing to borrow from another language and an authority who is not a native user dictating what is and is not allowed in the language. This is what the users of the native language of the Deaf in the USA are in the process of determining.
A pidgin is not a full language, so using "PSE" is in essence saying the Deaf / HoH person doesn't have a complete language. In the limited amount of time interpreters spend with the people they interpret for, it is impossible to make such a definitive statement about anyone's language. Signed English is a very broad term encompassing SEE1, SEE2, LOVE, CASE, etc., all of which are distinct from each other. When an interpreter tries to explain to another interpreter the kind of language a Deaf / HoH person uses, it is difficult to specify exactly which constructed language that person uses without extensive experience in all of them. Interpreters are rarely taught in an ITP all the constructed languages used to teach Deaf / HoH people.
The truth is very easy to state: some Deaf / HoH people use ASL (a natural language of the Deaf in the USA) and some Deaf / HoH people use English. Some interpreters may balk at saying a Deaf person uses English if that person does not listen and speak for him/herself but signs and uses interpreters. The reality is that the Deaf / HoH person is using the language of English, just through a visual mode. Is a person writing English not using English because s/he is expressing it through a visual representation of English (the written word)? Is a blind person not using English when reading or writing in braille because it is a tactile representation of English? The language of English is spoken and heard; however, it does have an orthography, which is a visual representation of English. No visual or tactile representation of a spoken language fully encompasses all of the nuances of the spoken language, no matter how sophisticated the system may be.
I use the phrase VRE (Visual Representation of English), or VRL (Visual Representation of Language), which can be used for any language. The reason for using VRE instead of MCE is to make a broader category, similar to contact language. The broader category of VRE includes any orthography or MCE system previously created or to be created in the future. The phrase MCE (Manually Coded English) focuses on the encoding of English into a manual / gestural form; VRE (Visual Representation of English) focuses on the fact that it is a visual format, emphasizing not the encoding or decoding of English but the sensory channel: vision in place of audition. All the MCEs are not just signed, they are seen as well. After researching all the MCEs created to teach Deaf individuals, the question arises: are they any better than written English at instructing Deaf / HoH people in English? All the proponents of each MCE cite research that supports their own MCE as the best way for the Deaf to learn English. Not until around 2000 was research attempted to evaluate deaf children raised in a true bilingual environment, exposed to ASL from birth and to written English as early as possible, while accounting for extenuating circumstances that can affect cognitive development (e.g., socioeconomic status). Recent studies of Deaf / HoH acquisition of English have shown that the more a child reads English the better his or her English skills become (Chamberlain, Charlene. Language Acquisition by Eye. Mahwah, N.J.: Lawrence Erlbaum Associates, 2000. Print.). The fact that no VRE is able to encompass the entirety of a spoken language is one reason detractors of MCEs give for encouraging children to be taught using the natural (and therefore complete) language, ASL, and to use that language to support the development of English through the VRE of English orthography and comparative linguistics. The creation of the category of visual representations of English (VRE), which includes MCEs, CUE, and written English, raises a question: if there already existed a perfectly good, well-respected, time-tested, universally taught (in the USA) VRE called written English, why was there a need to create so many more MCEs? The education of the Deaf in the USA began with a bilingual-bicultural approach under Gallaudet and Clerc, then came the oralist-only movement, then the use of MCEs, and now it is turning full circle back to a Bi-Bi approach. This change in educational approach may have had nothing to do with teaching the Deaf English but with oppressing a linguistic minority by forcing them to conform to an idealized standard. This raises the question: are all MCEs a creation of audism (Humphries) rather than a sincere attempt to help Deaf / HoH people become literate in English? Does that audism show its devilish face in the way interpreters speak about ASL, SE, English, and language in general? The last common phrase interpreters use that needs to be discussed is MLC. Many interpreters felt the phrase minimal language competency (MLC) was derogatory and decided to create a new phrase, HVL (High Visual Language). The problem with using the word "high" in combination with "language" is that in that context it tends to mean elite, and elite does not express what is happening. The phenomenon labeled MLC is actually more complicated.
The label MLC has been applied to Deaf people who are recent immigrants fluent in another signed language, immigrants who were never educated, people with cognitive or developmental delays affecting language, people who were only exposed to home signs, people who use a sub-culture sign lexicon, and people in other unique circumstances of language acquisition. Most hearing interpreters are not well trained in the variety of communication styles that fall under the MLC label. Certified Deaf Interpreters (CDIs) are best at interpreting for people who do not use standard ASL. For hearing interpreters to label a Deaf person's communication mode is problematic; it would be best to leave that function to professional Deaf interpreters. MLC and HVL are both unsatisfactory terms, and as of yet the profession has not developed a better one. For the moment it may be better to simply say the Deaf person is not using standard ASL and needs the expertise of a CDI.
As with the struggle interpreters have in describing the language of the Deaf / HoH, they also have challenges when describing hearing people's spoken language. To inform other interpreters about the hearing people in a given interpreted event, they occasionally say things like "s/he has a strong accent," "uses lots of slang," or "is all over the place." Some statements are derogatory or comment on the hearing person's fluency in language. Again, it is difficult for an interpreter to know the full extent of a hearing person's language abilities from the brief exposure of an interpreted event. When interpreting for people who speak English with a different accent than yours, it is better to identify the accent in a neutral manner. Interpreters need to practice understanding different accents because it is a normal aspect of the job, especially for interpreters working in communities with a high degree of diversity. Instead of blaming a less-than-perfect interpretation on the speaker's accent, it would be better to be honest and state, "I struggled to understand." When interpreting for a person using a different dialect than the ones an interpreter is accustomed to, s/he could research unique lexical items in the given dialect, pre-conference with the speaker and ask whether s/he is aware of any terms that have a different meaning in their dialect than the one the interpreter knows, and ask whether s/he can interrupt the speaker for clarification of the meaning of terms.

What makes a word / sign a distinct word / sign?  

Joseph Wheeler, founder of the Facebook page "ASL That," a forum for Deaf people to discuss Sign Language and its development, put forward two questions that inspired this article. The first asked why Deaf people's language is called ASL (American Sign Language) when the order A-S-L follows the syntax not of Sign Language but of English. Will the Deaf community decide that the sign for their language, in their language, is the sign TWO HANDS PALMS UP 'S' opens to '5' closes to 'S', or will another sign be chosen? Many years ago the sign SIGN LANGUAGE (LANGUAGE with an L hand-shape, not an F hand-shape) was used to express the language of the Deaf here in the USA. Over the past 40 to 50 years the "S to 5 to S" sign has become much more prevalent and the SIGN LANGUAGE sign has faded. Just as people speaking English in America call the languages of other countries by English names ("French" vs. "Français," "German" vs. "Deutsch"), ASL is the English name for the natural language of the Deaf in the USA at the present time. In the 1960s, when the first books on Sign Language were beginning to be published in the USA, there was no established name for the language of the Deaf in the USA; one book postulated "Ameslan" as a possible name for what today we call ASL. When the question of the sign to represent the language of the Deaf in the USA is decided by Deaf people, interpreters will need to remember to translate that sign into the English "ASL," and when "ASL" is said, to translate it into the sign the Deaf have decided to use for their language. The second topic discussed was the removal of initializations. Interpreters should be well versed in the history of "ASL" in America, starting with Martha's Vineyard Sign Language (which has roots in the old sign language of Kent, England), Native tribes' contact signs, and French Sign Language (LSF, via Laurent Clerc and Thomas Gallaudet). An interpreter's job is to perceive a message in one language, understand it, and then create the message in the other language. This would seem to eliminate the need for knowing what makes a word / sign distinct; however, interpreters are often asked, "What is the sign for...?" when not interpreting. Interpreters can also hear / see a concept that is created using several words / signs and interpret each word / sign instead of interpreting the concept. An educated response puts the profession in a good light. Some background knowledge is important before beginning a discussion of what a "word" or "sign" is. Consider the following chart of types of words in English.
                        Meaning                   Spelling                    Pronunciation
Homonym                 Different                 Same                        Same
Homograph               Different                 Same                        Same or different
Homophone               Different                 Same or different           Same
Heteronym               Different                 Same                        Different
Heterograph             Different                 Different                   Same
Polyseme                Different but related     Same                        Same or different
Synonym                 Same                      Different                   Different
Synophone               Different                 Different                   Similar but not identical

   (No similar chart has been created for ASL, with labels matching a visual language.)

Words pronounced the same may be considered different words or the same word. Words with different meanings may also be considered different or the same. There is no list of criteria that determines when a word is considered a distinct word. If English, which has been around for quite some time, cannot agree on exactly what differentiates one word from another, then how are interpreters supposed to respond to inquiries about the signs of ASL? It is not agreed which, how many, or to what extent the five (5) parameters of signs need to change for a sign to be considered a different sign. Sometimes interpreters conflate an interpretation with the sign itself: if two signs tend to be interpreted into the same English word, with the possible addition of an adjective or adverb to be more precise, interpreters feel the signs are the same. The signs SUMMER, UGLY, and DRY vary in only one parameter (location), and most interpreters would agree they are distinct signs; however, when a sign like WALK is varied in several parameters, such as movement speed and facial grammar, many interpreters would consider it the same sign. Compare English synonyms for walk: stroll, saunter, amble, trudge, plod, dawdle, hike, tramp, etc. Each is considered a different word, although they share the same denotative meaning with different connotative feelings. ASL WALK has many variations, each of which could be paired with a different English word; however, no study has correlated English synonyms with ASL variations to establish one-to-one pairings. It seems as though some interpreters do not view the connotative meaning of an ASL sign as a critical aspect of the lexical item. This may be because connotation comes through the facial grammar of ASL, which is the most recently identified parameter, is expressed through small movements on the face that may be difficult for hearing people to catch, and is the area of ASL in which many interpreters are weakest.
The sign HUNGRY / WISH is one sign with two different meanings depending on context.
There is also the issue of how "gestures," "non-manual markers," and "facial expressions" relate to ASL. If a hearing person speaks an English sentence and uses a vocal inflection that creates the opposite meaning from the meaning of the sentence without the inflection, does the inflection cause the word to become a different word? Initially many would respond no: inflection changes meaning but does not create a new word per se, yet it is an integral part of language. Some languages rely heavily on tones / inflection to determine meaning, for example Mandarin. ASL uses gestures, non-manual markers, and facial expressions to create nuances in expression. These nuances can cause the meaning to become the opposite of the original sentence, as with rhetorical questions, AWFUL, etc. Just as adult learners whose L1 is English struggle to master tonal languages, interpreters who are not native signers may struggle with the nuances of ASL, and without an appreciation and understanding of these nuances an interpreter could view ASL as an impoverished language. The concepts of tone and inflection in spoken languages are known and studied, but no study has examined visual languages to see whether a corresponding phenomenon happens in signed languages. When ASL was first studied it was commonly believed that signs had four (4) components: hand-shape, palm orientation, location, and movement. Later research identified a fifth component and named it non-manual markers (NMM). Part of the issue with NMM is that people communicating in any language use facial expressions, so what happened on the face was not initially considered part of the language. Now that linguistics is studying signed languages, the role and importance of facial movements in language will be studied more. It may come to pass that the nomenclature of signed languages will change to emphasize the face, given that NMM / facial expressions can supersede the meaning of the manual part of the language. Interpreters have on occasion found themselves creating an English message that follows the manually produced signs in ASL, only to realize a few sentences later that the raised eyebrows meant everything that was signed was actually the opposite of the Deaf person's point.
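For readers who find it helpful to see the parameter model laid out concretely, the following is a minimal illustrative sketch in Python (not part of the original discussion). It treats a sign as a bundle of the five parameters named above and compares signs parameter by parameter; the gloss names and parameter values are simplified, hypothetical placeholders, not a linguistic analysis of the actual signs.

from dataclasses import dataclass, asdict

@dataclass
class Sign:
    handshape: str
    orientation: str
    location: str
    movement: str
    nmm: str  # non-manual markers (facial grammar), the fifth parameter

def differing_parameters(a: Sign, b: Sign) -> list:
    """Return the names of the parameters on which two signs differ."""
    a_dict, b_dict = asdict(a), asdict(b)
    return [name for name in a_dict if a_dict[name] != b_dict[name]]

# SUMMER, UGLY, and DRY are described above as differing mainly in location;
# these parameter values are simplified stand-ins for illustration only.
summer = Sign("bent index", "palm down", "forehead", "sideways drag", "neutral")
ugly   = Sign("bent index", "palm down", "nose",     "sideways drag", "neutral")
dry    = Sign("bent index", "palm down", "chin",     "sideways drag", "neutral")

print(differing_parameters(summer, ugly))  # ['location']
print(differing_parameters(summer, dry))   # ['location']

# A change only in the NMM can still flip the meaning (compare the AWFUL
# example above), which is why ignoring the fifth parameter loses information.
awful_plain  = Sign("8 to open", "palm in", "temple", "flick", "neutral")
awful_ironic = Sign("8 to open", "palm in", "temple", "flick", "raised brows")
print(differing_parameters(awful_plain, awful_ironic))  # ['nmm']

The point of the sketch is simply that two productions glossed with the same English word can still differ in a parameter that carries meaning, and that a change in a single parameter can be enough to make a distinct lexical item.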
An interpreter's opinion on what constitutes a unique sign can lead to a perception that ASL is a lexically sparse language, which in turn influences how interpreters represent ASL to others. It also causes interpreters to underutilize the NMM aspect of ASL, thereby creating a less dynamic interpretation in comparison to the source message. Interpreters, whether intentionally or not, reflect their opinions on the richness of ASL and their beliefs about what makes a sign unique.
As part of the Deafhood movement, many Deaf people are participating in dialogues about how, and which, initializations added to ASL by MCEs should be removed. LSF, when imported into the USA, had initialized signs, many of which persist in ASL to this day. Which hand-shapes will remain in ASL signs and which will be modified during this process is unknown; however, interpreters need to keep themselves abreast of the changes. Remember that every sign using at least one hand must incorporate a hand-shape, and the language of ASL uses many hand-shapes (many more than the 26 that correspond to English letters). The Deaf community's decision to keep a hand-shape in a sign does not mean that English is being kept in the language, just as the V in SEE and the C in SEARCH do not mean French is being kept in ASL. ASL is a vibrant and evolving language, both lexically and syntactically, and interpreters need to keep up with the changes. Just as ASL developed from having only indigenous ASL signs for other countries to incorporating the signs other signed languages use for their own countries, as well as signs for new technology and corporations (iPhone, iPad, Starbucks, etc.), interpreters need to be aware of these developments and be respectful of the adoption of new signs. Interpreters can fall prey to the same instincts all language users have, falling into one of two camps, "what was wrong with the old way?" or "the new way is better!", and not respecting the Deaf person's own preference. A rare few interpreters, wanting to do what is politically correct, fall into actions like the interpreter who chastised an older Deaf person for using a sign for a country that the interpreter felt was derogatory but that the Deaf person had used without derision all his life. It is just as bad for an interpreter to refuse to use new signs because "I've always signed it this way and it was good enough before." Awareness of ASL changes, timely adoption of those changes, and respect for Deaf people's individuality of expression all need to be at the forefront of interpreters' minds.
(Padden, Carol A. "The ASL Lexicon." Sign Language & Linguistics 1.1 (1998): 39-60. Web.)
Another way interpreters miscommunicate about ASL is in how they describe fingerspelling. Many interpreters erroneously believe fingerspelling is English. You can easily test that belief by fingerspelling to an English speaker who knows no sign and seeing whether they understand you; that test is a facetious example. Interpreters may say fingerspelling is English but in a different mode. If it is English, then how is it that you can fingerspell French, Spanish, and any other language that uses Latin letters? Recent research by James W. Van Manen, associate professor at Columbia College Chicago (and a CODA), has found that Deaf people create at least five hundred (500) distinct hand-shapes when they fingerspell. If fingerspelling can represent many spoken languages and has over five hundred shapes, not the twenty-six letters of English orthography, then what is it? Fingerspelling is sign! Fingerspelling can be used to represent an orthographical representation of a spoken word. Many languages borrow lexical items from other languages; this phenomenon is called loan words, and ASL has a unique way of borrowing from other languages called fingerspelling. Even though a lexical item originated in another language, when it is incorporated into a different language it becomes part of the new language. The loan word can change pronunciation, part of speech, denotation (specific meaning), and connotation (implied idea); therefore it is no longer the same as in its original language and becomes part of the new language. Fingerspelling can be used any time a specific concept that has a word in another language (not just English) needs to be expressed in ASL. Some concepts have been borrowed so frequently they become loan signs (modified fingerspelling that incorporates a movement in addition to the letters, with some letters omitted or severely changed). ASL fingerspelling follows ASL linguistic rules and does not follow English or any other language's rules. Fingerspelling also does not follow the phonology of a spoken word. Interpreters have tried, with varying success, to represent the phonology of English with fingerspelling. The difficulty arises from the fact that the 26 letters of the Latin alphabet used by English actually represent about 44 phonemes in spoken English. The number of consonant phonemes is about 24 (or slightly more); the number of vowel phonemes is estimated to range from 19 to 21, though accents and dialects can change that number. There is a huge disconnect from the roughly 500 fingerspelled hand-shapes to the 26 written letters of English orthography to the 44 phonemes of English, and there is no correlation between the fingerspelled letters (signs) and the phonemes. Ye Wang, associate professor in the Department of Communication Sciences & Disorders at Missouri State University (a CODA of non-signing parents), created 44 hand symbols to represent English's phonemes for the purpose of teaching deaf people to speak (2005). These hand symbols are not ASL fingerspelling; they were created to teach children with cochlear implants to speak more clearly, and they are an example of the recent trend of attempting to cure deafness.
Fingerspelling is actually a rapid succession of individual signs (traditionally called letters), sometimes incorporating an overall movement, to represent a concept. Often Deaf people will fingerspell a concept that has an established sign. The reason is often emphasis, showing that a fingerspelled "word" can replace a sign (with added emphasis); therefore fingerspelling is signing ASL, not English. ASL is not a lesser language because it borrows from English and other spoken languages, for example jalapeño. As Deaf people become more financially independent and video technology becomes more pervasive, they will borrow more signs from other sign languages. The lexicon of ASL is larger than many interpreters think and is evolving, so interpreters need to continually learn about the complexity, modifications, and additions to the natural language of the Deaf here in the USA. Some may be wondering, after the discussion of bilingual Deaf code switching and this discussion of fingerspelling being ASL and not English, how Deaf people code switch. Some Deaf people will use their speaking voice, some will sign phrases or sentences in English lexical order within ASL discourse, and some may sprinkle in English idioms. The code switch can be a single fingerspelled lexical item, but it tends more toward the phrasal or sentential level and incorporates English syntax (grammar). Some Deaf people who use English are also known to code switch by adding an ASL syntactic phrase when the MCE they are using is inadequate for communicating the desired idea.

Conclusion
Interpreters continue to expand their understanding of ASL, English, and the interpreting process, along with specializing in areas of need for the Deaf community. This expansion is vital to the profession. This paper may seem to address an insignificant aspect of the interpreting profession; however, the essence of what interpreters do is work with language. A better understanding of what language is and of the unique qualities distinct languages have, along with a broader understanding of the lexicons in use, will help interpreters communicate intricate, nuanced concepts more accurately. The words interpreters use to describe language, different languages, signs, etc. can cause inexperienced hearing people either to have great respect for the profession of ASL / English interpreting or to think that interpreting is not a true profession. Many interpreters work with young Deaf people or recent immigrants to the USA, and the interpreter's knowledge, perspective, and attitude toward language can have a great impact on these impressionable people. What you know and say about language can be detrimental or inspirational to others. It is hoped that this paper will inspire more interpreters to be on the supportive side.

 

Written June 2015 by Bryon K. Rowe


Will robots / computers replace interpreters?

The Robot Apocalypse is coming, the Robot Apocalypse is coming, the Robot Apocalypse is coming!
Your beliefs about the ability of robots / computers to replace interpreters when the Robot Apocalypse happens may depend on whether you are a sci-fi fan or not. Your perspective, just like Chicken Little's perspective, can have a profound impact! The Robot Apocalypse is a phrase used for the belief that humans will be replaced by robots or computers because their abilities will far exceed human abilities. This article explores the hyperbole, hypocrisy, and hope of technology within the bimodal interpreting profession, with a focus on empowering interpreters with statements they can have at the ready to combat tech ignorance, specifically the belief that technology will be better at interpreting between Deaf and hearing people than human interpreters are.


Historically, Deaf people look forward to and embrace technology that helps them overcome barriers; one example is Apple's iPhone FaceTime. As interpreters, we should learn about and adopt the technologies that Deaf people embrace. What then should our response be to the idea that technology will replace interpreters? We should support any technology that Deaf people embrace, and we need to be articulate in explaining (when no Deaf people are present) or interpreting (for Deaf people) to non-Deaf people which technologies Deaf people find helpful and which they do not. Often hearing people attempt to impose or suggest technology for Deaf people out of naïveté, genuine concern, or zealous support of technology. As interpreters, we can get caught up in the same motivations, yet we need to keep a Deaf person's perspective. Hearing people tend to think a computer will do the job of the interpreter cheaper and better, often with an emphasis on cheaper.
 

Interpreters are all too aware of the oppressed minority status that Deaf people must endure, and of some of the oppressive acts the Deaf are routinely confronted by. However, it is not the norm for interpreters to be formally trained in an ITP on how to deal with the multitude of ways Deaf people's status impacts interpreting events and interpreters as vicarious participants in oppression. This injustice can come in many forms. One form is the simple misunderstanding that a person who has never learned about a specific minority may hold and inadvertently perpetuate. A more severe form is ingrained, systemic, active oppression, whether societal or institutional. As interpreters, even though we are not members of the minority group, we are associated with the group, and we are often confronted by statements and actions generated by the injustices perpetrated on people in the minority. Interpreters are trained not to engage with the perpetrator but to facilitate a dialogue between the perpetrator and the Deaf / HoH person. There are many occasions, though, when perpetrators start dialogues with interpreters when there are no Deaf / HoH people around. Interpreters are not usually formally trained in how to answer these types of questions or dispute the misunderstandings people approach them with; however, after years of interpreting Deaf / HoH people's responses to those who oppress them, interpreters internalize those responses and develop a repertoire of responses they can give in settings where no Deaf / HoH person can express his or her own perspective. This article suggests that interpreters need to add to the number of topics and responses they include in that must-have repertoire. This article focuses on the oppression that comes in the form of the claim that technology will replace interpreters. Interpreters need to develop statements that clarify and refute that notion, not because we want to keep our jobs but because it is a form of oppression of the Deaf community. It is a fair question to ask after reading that statement: "What does oppression of Deaf people have to do with the notion that technology can replace interpreters?"

If anyone thinks oppressors have only the characteristics of ignorance or hatred, they may miss the oppression that comes from over-optimism. Many people are eager to embrace the glorious new world that technology promises and do not realize that their cheerleading can have deleterious effects. Deaf people have been combating inspiration porn perpetrated by the cochlear implant industry for years. We have all seen the viral videos of a young attractive woman weeping when her CI is turned on and she miraculously hears for the first time. Have you ever asked why the person in those videos is always a young and attractive female (or a cute baby)? Interpreters have had to be able to speak knowledgeably, succinctly, and convincingly about the fact that assistive hearing technology (hearing aids, CIs, etc.) does not restore "natural" hearing or "cure" deafness. The unspoken oppression in the statement that technology can replace interpreters is the assumption that "ASL is a very simple language and therefore is easy to interpret." It also establishes a protocol in which Deaf people would be required to always carry around specialized computers to interpret for them while the hearing majority would not, and it assumes that ALL situations that need interpreting occur in locations where technology is easily set up and used. Deaf people have been struggling to educate the medical community that VRI (video remote interpreting) is not appropriate or effective in many medical settings and should not be the default or first choice for communication access.

There is a new threat rising from technology, and that is computer-generated interpretation. YouTube has tried to create an algorithm that converts spoken English into captions, and the results are sorely lacking, even though that is English speech to English text, not another language. Google Translate attempts to convert one language to another, and if you have ever used it you will clearly see its limitations. A Chinese phrase on menus that human translators render as "spring chicken" has been translated by computers as "young not yet having sex chicken."
Lest you think the author is a technophobe or a naysayer, the opposite is actually true. I hope computer-automated captioning continues to improve, and someday it may be as good as a human interpreter. Again, though, it is still auditory English to English orthography, which is easier than going from one language to another. As an interpreter who has worked with CART and automated video captioning in real-world settings, I was still responsible for interpreting, because inflection, intonation, and pausing can all impact the meaning of spoken English. Even though each spoken English word appeared in the text (CART / captions), the meaning was not present. Tech champions tout the success of IBM's Watson as an example of machine understanding of spoken English. That technology enables a computer to understand spoken or written English, not to translate English into another language. Tech champions would say the examples given are just the beginning of the endeavor, and that given time the technology will improve and achieve the task set before it.

The point that is missed is that while the technology may improve, without much more intense research into ALL natural languages (especially visual languages), machine interpretations will never be as successful as a human's.
 

This image shows the actual physical size of Watson.

Are all Deaf people supposed to carry around a Watson to interpret for them so that it is easier for hearing people? Of course, that is impossible. The next question is: should Deaf people carry a device that is wirelessly connected to Watson? Again, the connection speeds of typically available Wi-Fi and cellular networks are insufficient to carry on a real-time interpretation, and there are limits of physics on the speed at which information can be transmitted through the atmosphere. One misunderstanding is that Watson can understand natural language on any topic. It can't. Although it is a great improvement in technology, has many beneficial applications, and will even save lives, it is still not able to comprehend spoken English across the entire spectrum of settings in which English is used as well as humans can. The limited breadth of English and ASL knowledge and the connection-speed constraints are huge limitations that would need to be overcome. Then there is the cost of a dedicated Watson for the Deaf, devices they can carry, and the expensive fast internet connections (live interpreters are still much cheaper). Watson is being used in the medical field to quickly search a huge database of human diseases and ailments, comparing a person's symptoms to the known symptoms in its database. This is something uniquely suited to computer functions and not to human brains; therefore it is logical to use Watson to aid in patient diagnostics. One important underlying reason Watson is being used in medicine is that the medical field in America is rich and can afford a dedicated supercomputer like Watson. When was the last time any oppressed minority group had such resources given to it? Watson may, in the future, be able to do a decent job of interpreting for the Deaf; however, from my experience I doubt that anyone will fund such an expensive endeavor for an oppressed minority group like the Deaf.

One technology touted to make interpreters obsolete is the glove that senses the motions of the wearer's hands and translates the ASL motions into spoken English. It is another assistive device that may someday be helpful to hearing people who don't understand ASL (or other signed languages); however, it is insufficient to replace an interpreter. After much refinement, it may be usable by Deaf people who are comfortable limiting their robust language to just hand movements, making accommodations for the essential parts of ASL that are not on the hands, reducing a natural language to a sort of shorthand in order to carry on rudimentary conversations with hearing (non-signing) people. The greatest fault in this technology is its ignorance of the complexity of signed languages and of the fact that hand movements and positions alone are not the complete language. It can be argued that the more essential aspects of ASL are in the NMM / facial grammar and expressions. For example, the sign glossed AWFUL, although it has the same hand movements, has two meanings depending on the NMM connected to it. Will the gloves be able to identify the difference between SUMMER, UGLY, and DRY when all three signs have four of the five sign parameters in common and the only difference is a slight variation in location? There are also prosody markers that use a shift in eye gaze, slight shoulder movements, etc.; will the gloves be able to identify those? This technology could in essence replace pen and paper for simple communication between hearing and Deaf people, if such rudimentary technologies ever become completely obsolete with no digital replacement (like a smartphone or tablet). Wouldn't it be better if hearing people would embrace technology like an ASL website or app to learn a few basic signs they could use to communicate with the Deaf in everyday brief communication scenarios, like taking a food order or a simple teller-assisted bank transaction? Deaf people are quite competent at communicating brief interchanges with hearing people in a variety of ways; it is the hearing people who "freak out." Why must the technology burden be placed only on the Deaf?

Another technology is video capture / facial recognition, touted as a way to interpret ASL into English, working in tandem with speech recognition technology that would interpret English into ASL using a computer-generated avatar on a screen. The movie industry has pushed development of these technologies to the point where they are not quite realistic but close enough. As the example of Watson shows, machine understanding of language is still in its infancy and the size of the computer necessary is far from portable, so the translation aspect of the technology falls short. When the movie industry uses motion capture, there is still a person sitting at a computer for hours turning that capture into the avatar that appears on screen. In addition, the ability of technology to capture all the tiny details that permeate language is far from adequate. The ASL sign "crinkling up the side of a nostril" is one example of a sign whose movements are so subtle that video capture can't distinguish it as having meaning versus a person twitching their nose because it itches. However, just like CI inspiration porn, the way these tantalizing experiments are presented gives futurists hope that a technological solution will be found for the communication barriers between people using spoken / heard languages and people using visual / gestural languages. The Star Trek fan's hope of a universal translator is a great dream, but has anyone thought through the complexities and shortcomings involved? Even Star Trek: The Next Generation had an episode called "Darmok" devoted to the idea that UNIVERSAL translation is never going to be a reality.

What the tech enthusiasts fail to understand is the complexity of human language, and in their zeal to adopt the new technology they inadvertently INSULT Deaf people's language. They assume that ASL is just hand symbols, so a simple set of gloves or a video camera can capture the entire language and a simple algorithm can decode its relatively few gestures. Interpreters are aware that ASL is a full, complex, rich natural language that uses movements of the entire upper body (including facial movements, and sometimes movements of the lower body) to convey messages.

The reality is that a CI does not and never will replace natural hearing. Current research in CI technology is trying to find a way to provide directionality of sound, something natural hearing is excellent at but CI technology fails to achieve. The truth that many do not want to face is that the complexity of the biological system of hearing and the limits of solid-state miniaturization mean that technology will never be comparable to the abilities of the biological system. The same is true for the complexities of natural languages. Computers would need to be miniaturized a thousand-fold before a pocket translator could fully understand natural languages and translate them; that is, in essence, a room-temperature quantum computer. Cloud-based systems, in which people access a supercomputer like Watson or a quantum computer through a hand-held device, are possible, but they are prohibitively expensive and connection speeds have limits imposed by physics. One reason Watson is being used in medical diagnostic procedures is that there is enough money in the medical setting to pay for the supercomputer; interpreting is nowhere near as lucratively funded.
Everyone who has used a smartphone-based personal assistant designed to provide services when spoken to, or who has turned on computer-generated captioning for online video content, clearly understands the technology's limitations. What most people fail to understand is that it isn't just a matter of time until the technology gets good enough to give us a Star Trek-like universal translator; we have not yet done enough linguistic research on all languages to program computers to understand any language well enough to accurately translate it. The important points are not just the limitations of the technology or the lack of research in linguistics, but that people seem to think signed languages are somehow simpler and less sophisticated, and therefore ripe for the beta testing of computer-generated language translation, and that Deaf people's communication isn't as vital as hearing people's communication.

Why aren't computer-generated translations the norm at the United Nations? Why isn't the UN the first go-to location to test and implement computer translators? The answers to those questions lie not in the technology's shortcomings or the need, but in the recognition that UN interpreters are highly trained professionals and the communication is vital; therefore, we shouldn't use less-than-perfect technology in such a vital and essential setting. This demonstrates the average person's belief that ASL / English interpreters are not highly trained professionals and that the communication Deaf and hard of hearing people need to have with hearing people isn't vital or essential. Deaf people who need to communicate with an oncologist or a lawyer would vehemently disagree that their communication isn't vital or essential.

The technologies mentioned in this article could have a purpose in simple communication scenarios, but that overlooks the fact that most Deaf people are bilingual and can simply write what they need to communicate on a current smartphone, or with old-fashioned paper and pencil, for simple, brief exchanges. Interpreters are used for more complex communication, where technology cannot compete with human abilities in both quality and affordability, and without huge advances in linguistics and the overcoming of major technological barriers it will not be able to. As interpreters, we need to add to our repertoire of answers to the common questions clueless hearing people ask us. Being able to give a clear, succinct, poignant response to the ignorant statement "technology will replace interpreters" is just as important as learning how to share culturally significant information relating to deafness in a way that a clueless hearing person will understand. You can shock the tech nerd who tells you "technology will replace interpreters" by responding, "Go and re-watch 'Darmok'!"

So put down your phaser; interpreters are still indispensable.

Image from the Star Trek: The Next Generation episode "Loud as a Whisper," in which three interpreters are killed.

Written October 2017 by Bryon K. Rowe
