Lecture 8
Dialogue interpreting
As a comprehensive designation for a broad and diversified range of non-conference
interpreting practices, 'dialogue interpreting' has gained increased currency since the late 1990s,
particularly in the wake of two seminal publications with this title edited by Ian Mason (1999b,
2001). Evidence of this lately acquired popularity is the inclusion of an entry on dialogue
interpreting in the second edition of the Routledge Encyclopedia of Translation Studies (Baker
& Saldanha 2009) – in addition to the one on COMMUNITY INTERPRETING which had
already appeared in the first edition. Although the two expressions have been used as
synonyms, the prevailing approach is to view dialogue interpreting as the overarching category,
comprising interpreting activities in a broad variety of SETTINGS, particularly 'in the community'. Leaving aside geographically more circumscribed denominations, such as bilateral,
contact and cultural interpreting, the only other label with comparable potential for
inclusiveness is 'liaison interpreting' (Gentile et al. 1996; Erasmus et al. 1999). The ascendance of 'dialogue interpreting' in scholarly publications to the detriment of the latter expression is
the result of a theoretical turn in INTERPRETING STUDIES, which has brought to the fore
the dialogic nature of many interpreter-mediated encounters, and has come to be known as the
“dialogic discourse-based interaction (DI) paradigm” (Pöchhacker 2004a).
Definition
What, then, is the defining trait of dialogue interpreting that underlies such
strikingly diverse situations as medical consultations, welfare and police interviews, immigration
hearings, courtroom trials, parent–teacher meetings, business and diplomatic encounters,
broadcast interviews, and TV talk shows? In Mason's ground-breaking definition, dialogue
interpreting is described as “interpreter-mediated communication in spontaneous face-to-face
interaction” (1999a: 147). In his later review of the concept, Mason (2009a) lists four
fundamental characteristics: dialogue, entailing bi-directional translation; spontaneous speech;
face-to-face exchange; and CONSECUTIVE INTERPRETING mode. If this set of features
were applied strictly (though Mason himself presents cases of deviation), some interpreting events now widely held to fall within the domain of dialogue interpreting would not qualify. TELEPHONE INTERPRETING, for instance, would have to be excluded, as the face-to-face condition does not usually obtain (with the exception of more technologically advanced, yet still rarely used, video calls). Being normally conducted in the
simultaneous mode, SIGNED LANGUAGE INTERPRETING – the very field where the DI
paradigm was born (Roy 1996, 2000a) – would equally fall outside the boundaries of dialogue
interpreting, as would TALK SHOW INTERPRETING, where mixing consecutive and
chuchotage is a very common practice (the latter mode being used for the benefit of the foreign
guests on the show). More quintessentially than the face-to-face dimension or the mode, it is
therefore the discourse format that constitutes the unifying element among a vast array of
interpreter-mediated social activities, setting these apart from the monologue-based
communication of most conference interpreting events.

Highlighting the core notion of dialogue entails a number of significant shifts in perspective. Whereas the 'liaison interpreting'
label places emphasis on the connecting function performed by the interpreter, and
consequently on the centrality – both physical and metaphorical – of 'the person in the middle',
what is foregrounded in dialogue interpreting is interaction itself (see Merlini 2007). More
specifically, the type of interaction is one which involves all the participants in the joint
definition of meanings, mutual alignments, roles and identities. The first shift in perspective is
thus that interpreters are revealed as full-fledged social agents on a par with primary
interlocutors, with whom they co-construct the communicative event. Secondly, the
interpreter's clients, on their part, are seen to play a crucial role as co-determiners of
communicative success or failure. Thirdly, though bilingual communication remains largely
dependent on the interpreter's verbal contributions to the exchange, increased emphasis is
placed on directly accessible features, such as eye contact, facial expressions, GESTUREs,
postures and PROSODY, which may offer primary interlocutors complementary or even
alternative cues for sense-making and rapport-building.

This novel outlook on interpreting has
been consolidated by a growing number of studies resting on sociolinguistic and sociological
underpinnings, and on real-life data analysis. The ensuing body of research based on
DISCOURSE ANALYTICAL APPROACHES has given dialogue interpreting a clearly
defined identity of its own within the field of interpreting studies, and is thus to be considered
as its other distinctive and unifying trait.

Quite significantly, in the earliest scholarly publications
which deal explicitly with dialogue interpreting, the rationale for this terminological choice is
instantly clear. Though working independently of each other, Cecilia Wadensjö (1993) and
Helen Tebble (1993), two pioneers of research on spoken-language dialogue interpreting, raised the same focal points, namely: whatever is attained or unattained in communication is a collective activity requiring the efforts of all participants; interlocutors' turn-by-turn contributions to the
exchange need close scrutiny at a micro-analytical level through recording and
TRANSCRIPTION; and the interpersonal and socio-institutional dimensions also require
investigation at a macro-analytical level – all of which was taken to call for a new discourse-
based approach to the study of dialogue interpreting.

In light of these considerations, the
following account of the state of the art in dialogue interpreting is divided into two sections.
The first provides a comparative overview of a few domains of dialogue interpreting practice,
which have been selected on account of their stark contextual dissimilarities. The aim here is to
show that, underneath such diversity, the salience of interactionally negotiated interpersonal
dynamics is a constant throughout. The second section looks into the core concepts of dialogue
interpreting, and presents the main research topics.
Respeaking
In broad terms, respeaking may be defined as the production of subtitles by means of speech
recognition. A more thorough definition would present it as a technique in which a respeaker
listens to the original sound of a live programme or event and respeaks it, including
punctuation marks and some specific features for the deaf and hard-of-hearing audience, to
speech recognition software, which turns the recognized utterances into subtitles displayed on
the screen with the shortest possible delay (Romero-Fresco 2011). It is, in effect, a form of
(usually intralingual) computer-aided SIMULTANEOUS INTERPRETING with the addition
of punctuation marks and features such as the identification of the different speakers with
colours or name tags. Although respeakers are usually encouraged to repeat the original
soundtrack in order to produce verbatim subtitles, the high SPEECH RATES of some
speakers and the need to dictate punctuation marks and abide by standard viewers' reading rates mean that respeakers often end up paraphrasing rather than repeating (SHADOWING) the original soundtrack.

The origins of respeaking may be traced back to the experiments
conducted by US court reporter Horace Webb in the early 1940s. Until then, court reporters
used to take shorthand notes of the speech and then dictate their notes for transcription into
typewritten form. Webb proposed to have the reporter repeat every word of the original speech
into a microphone, using a stenomask to cancel the noise. The subsequent recording of the
reporter's words would then be used for transcription. This was called voice writing and may thus be seen as the precursor of respeaking, or real-time voice writing, as it is called in the US.
Respeaking involves the same technique but uses speech recognition software for the
production of TV subtitles and transcriptions in courtrooms, classrooms, meetings and other
settings. The very first use of respeaking or real-time voice writing dates back to 1999, when
court reporter Chris Ales transcribed a session in the Circuit Court in Lapeer, Michigan, with
the speech recognition software Dragon NaturallySpeaking. Respeaking was introduced in
Europe in 2001 by the Belgian public broadcaster VRT and by the BBC in the UK to replace
less cost-effective live subtitling methods for TV, such as the use of keyboards or stenography.
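To make the technique described above more concrete, the following minimal sketch models the post-recognition step of intralingual respeaking: dictated punctuation tokens are merged into the recognized text, the line is tagged with a speaker colour, and a display duration is derived from viewers' reading rates. Everything here (the function names, the 'comma'/'period' commands, the reading rate of 15 characters per second) is an illustrative assumption, not any broadcaster's actual system.

```python
# Minimal sketch of respeaking post-processing (illustrative assumptions only).
# A speech recognizer is assumed to have already returned the respeaker's
# dictated tokens, including spoken punctuation commands such as "comma".

PUNCTUATION = {"comma": ",", "period": ".", "question": "?"}
READING_RATE_CPS = 15  # assumed viewers' reading rate, characters per second

def tokens_to_subtitle(tokens, speaker_colour="yellow"):
    """Merge dictated punctuation into the text and time the subtitle."""
    words = []
    for token in tokens:
        mark = PUNCTUATION.get(token.lower())
        if mark and words:
            words[-1] += mark  # attach the mark to the preceding word
        else:
            words.append(token)
    text = " ".join(words)
    # Keep the line on screen long enough to be read at the assumed rate.
    duration = max(1.0, len(text) / READING_RATE_CPS)
    return {"colour": speaker_colour, "text": text, "duration_s": round(duration, 1)}

# The respeaker dictates: "he shoots comma he scores period"
print(tokens_to_subtitle(["he", "shoots", "comma", "he", "scores", "period"]))
# -> {'colour': 'yellow', 'text': 'he shoots, he scores.', 'duration_s': 1.4}
```

The speaker colour stands in for the colour or name-tag identification of speakers mentioned above; a real system would also handle line breaks, live correction and display delay.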
Standard intralingual respeaking for live TV programmes and court reporting has since
expanded to other applications and contexts. In many companies, subtitlers are using
respeaking to subtitle pre-recorded programmes in order to increase productivity. This
technique, known as scripting, allows respeakers to pause the original soundtrack if they wish,
which brings this type of respeaking closer to CONSECUTIVE INTERPRETING than
simultaneous interpreting. Respeaking is also being used for live public events (conferences,
talks, religious ceremonies, university lectures, school classes, etc.) and business meetings, with
respeakers working on site or remotely, sometimes (as in the case of telephone respeaking) with
no VISUAL ACCESS to the speakers. Interlingual respeaking is also being used in these
contexts, which highlights the resemblance between this technique and simultaneous
CONFERENCE INTERPRETING.

Training in respeaking usually focuses on elements specific to this discipline (especially those related to the use of speech recognition software), together with elements drawn from both interpreting and subtitling for the deaf and hard of hearing (general subtitling skills, awareness of the needs of deaf and hard-of-hearing viewers). As far as
interpreting is concerned, the emphasis is often placed on the skills required to listen,
comprehend and synthesize the source text and to reformulate it and deliver it live as a target
text. Multitasking is thus also involved in respeaking. As in simultaneous interpreting,
respeakers must listen to the source text, translate it (or respeak it) and monitor their own
voices. However, unlike interpreters, respeakers on TV often have to watch their output on the
screen as they are respeaking and correct it live if there are any mistakes or if their subtitles are
being obscured by on-screen text. Although more work is needed on this issue, research
findings (Romero-Fresco 2012) suggest that interpreting students perform better in respeaking
than those without prior training in this area, who seem to struggle to perform different tasks
simultaneously. However, interpreting students find it difficult to dictate punctuation marks
and must pay attention to their diction and INTONATION, which need to be more controlled
than in interpreting.
Fingerspelling
Fingerspelling as a mode of signing operates on the alphabetic level and represents individual
letters of the Roman alphabet (Stokoe et al. 1965). Fingerspelled tokens, each of which is a sign (a sequence of handshapes, locations, orientations, and movements), represent graphic characters of the alphabet and are used in sequences to create lexical items. Fingerspelling
systems differ by country and by the signed language with which they are associated, and the
use of fingerspelling also depends on setting-related needs and language user preferences.
American Sign Language (ASL) uses fingerspelling more often and more prominently than
many other signed languages (Padden 2006). Research on fingerspelling in ASL is therefore
relatively abundant, but not all of it may be generalizable to fingerspelling in other signed
languages. Fingerspelling in ASL is used to represent words with various grammatical functions
such as proper nouns, adjectives, verbs, nouns, expletives, English function words, and
transplanted English function words and phrases (Padden 1998). COMPREHENSION of
fingerspelling is difficult for many hearing interpreters and is a source of errors in
interpretations. Many interpreters report experiencing anxiety that interferes with fingerspelled
word recognition.

Depending on the signed language and the way it uses fingerspelling, there
may be at least three types of fingerspelling that interpreters must learn to recognize and
produce: careful fingerspelling typically shows one fingerspelled token for each character in the
corresponding printed word, and is characterized by a relatively slow speed and even rate of
presentation; rapid fingerspelling, used for non-initial presentations of the word, has
constituent signs of a shorter duration that tend to blend together or to be eliminated; and
lexicalized fingerspelling tends not to vary much in speed or form from one presentation to the
next. This type of fingerspelling has also been described as lexical borrowing, as it tends to have
the rhythm and characteristics of a single sign rather than those of fingerspelling (Patrie &
Johnson 2011).

Comprehending fingerspelling requires complex cognitive processes. The
fingerspelled sequence must be processed, serially connected to an existing template (a pattern
that is created and stored in the brain for that word if one exists), and converted by the receiver
to a mental image of a written word with which they are familiar. Practice with rapid serial
visual presentation (RSVP) and template building (Patrie & Johnson 2011) shows promise in
addressing the pervasive difficulty associated with fingerspelled word recognition, and in
reducing the corresponding effort related to comprehension.
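As a toy illustration of the template idea described above (a simple matching sketch, not a cognitive model), the code below treats recognition of rapid fingerspelling, in which letters are blended or dropped, as matching the observed letter sequence against stored word templates; all names and the lexicon are assumptions for illustration.

```python
# Toy sketch: match a reduced (rapid) fingerspelled sequence against stored
# word templates. A template matches if the observed letters occur in it in
# order; the shortest consistent template is returned. Illustrative only.

def is_subsequence(observed: str, template: str) -> bool:
    """True if every observed letter occurs in the template, in order."""
    it = iter(template)
    return all(letter in it for letter in observed)

def recognize(observed, templates):
    """Return the shortest stored template consistent with the observation."""
    candidates = [t for t in templates if is_subsequence(observed, t)]
    return min(candidates, key=len) if candidates else None

# Templates built from earlier "careful" presentations of each word.
lexicon = ["bank", "burbank", "baseball"]

# Rapid fingerspelling of BURBANK might surface only B-R-B-N-K.
print(recognize("brbnk", lexicon))  # -> 'burbank'
```

Careful fingerspelling would supply the full template ('burbank'); the sketch shows why prior template building makes reduced forms recoverable.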
Interpreting for deafblind persons
DeafBlindness is a condition that manifests itself differently for each person along a spectrum
of combined vision and hearing loss. Time of onset and residual vision or hearing are factors
affecting language choice (signed or spoken), use of technological supports (cochlear implants,
assistive listening devices, computer applications), receptive and expressive communication,
employment, quality of life, leisure, relationships, independence, and education (RID 2007).
This heterogeneity is also reflected in preferred terms and notations. 'DeafBlind', 'Deafblind' and 'deafblind' (without a hyphen, and with or without upper case) are preferred notations in the international community, and represent a single condition more complex than the combination of vision and hearing loss. The hyphenated term 'Deaf-Blind' is widely accepted by individuals with congenital deafness and acquired vision loss (usually after language development). Users of signed language who maintain cultural-linguistic ties to the Deaf community may also prefer the hyphenated designation. When referring to people with vision and hearing loss from birth or early childhood, 'deafblind' is commonly used.

Interpreting for
the heterogeneous DeafBlind population is a complex specialty, requiring competence in
communication methods beyond those required for SIGNED LANGUAGE INTERPRETING
with the general Deaf population. Many interpreters begin their training as a Support Service
Provider (SSP), a role characterized by independent living assistance that includes providing
human guide services, environmental orientation, and supplementation of the interpreted
message with visual information. Ethically, interpreters whose roles are expanded to include
SSP responsibilities must take particular care not to encroach upon the service user's independence. Interpreters working for DeafBlind persons need great flexibility in adjusting to
personal communication preferences and varying degrees of independence of consumers, and
they must be comfortable with close interaction and touch. As a consequence of physical
proximity and contact, special attire and grooming are required, including a plain shirt that contrasts with the interpreter's skin color and has three-quarter-length sleeves and a high neckline, manicured hands, and avoidance of colognes and shiny objects. Interpreting for people who are DeafBlind
requires use of tactile or non-tactile techniques that depend upon consumer preference, lighting
and other contextual factors. A consumer who can access visual information in certain lighting
or at a certain distance or angle might prefer the interpreter to work within a restricted field
where signs can be seen more easily. This consumer may place his or her hands on the
interpreter's forearms to follow signs in a limited area, using a method called tracking. When
lighting is too dim for tracking, or the Deaf consumer has no functional vision, tactile
interpreting allows the consumer's hands to maintain contact with the interpreter's for reading
the message by touch. The tactile interpreting scenario presents options for POSITIONING
(e.g. side-by-side, interlocking knees), sign adaptations, and use of one-handed and two-handed
methods, all of which are highly contextual and negotiated by interpreters and consumers.
Tactile interpreting can also be accomplished through methods other than a formal signed
language, such as Lorm (more prevalent outside North America), print-on-palm, tactile braille, and
FINGERSPELLING. Children or late-deafened consumers who use spoken language may
communicate with the Tadoma Method, a seldom-utilized form of tactile lipreading which
originated in the US (NCCC 2001).

Interpreters follow behavioral guidelines unique to the
DeafBlind population, such as always self-identifying upon approach and anchoring a person to
a tangible object before stepping away. They must also seek to incorporate relevant visual as
well as auditory information about the environment, such as the room configuration, other
persons present and their actions, the location of exits, and other items of interest. These
interpreter-generated utterances ensure that the consumer is better informed and facilitate
independent communication management (Metzger et al. 2004).
https://www.youtube.com/watch?v=v0yyZ72eiKc
https://www.youtube.com/watch?v=fBMhJDjkS2c