This document discusses relay interpreting and dialogue interpreting. Relay interpreting involves interpreting a source language into a target language through a third language when no direct interpreter is available between the languages. It is commonly used in international organizations to allow for interpretation between many languages simultaneously. Dialogue interpreting refers more broadly to interpreting in spontaneous face-to-face interactions beyond conference settings, such as in medical, legal, or business contexts. It emphasizes the co-constructive nature of communication between all participants and examines interactions at both micro and macro levels through discourse analysis.


Lecture 8.

Relay interpreting (Опосередкований усний переклад)

Relay interpreting is a double or two-stage interpreting process, in which a source-language
message is interpreted into a target language via a third language. This system is used when no
interpreter is available to provide direct interpretation between the languages in question. For
example, in the case of a speech in Khmer addressed to a Czech-speaking audience, a
Cambodian speaker could first be interpreted into French by an interpreter in the visiting
delegation, whose French rendering would then be interpreted into Czech by a local interpreter.
Such indirect interpretation in relay mode has presumably been practiced when necessary
throughout history, a famous example being the joint efforts of Spanish/Mayan interpreter
Jerónimo de Aguilar and MALINCHE to enable communication between Spanish and Nahuatl
in Hernán Cortés's drive to conquer the Aztec Empire for SPAIN. Relay interpreting is
encountered in a great variety of SETTINGS, in the consecutive and simultaneous MODES,
and spoken and signed languages alike. Given the additional time required for back-to-back
CONSECUTIVE INTERPRETING (as used in the League of Nations for speeches in
languages other than English or French), relay is much easier to accommodate with
SIMULTANEOUS INTERPRETING (SI), where the extra cost in time is limited to the relay-
taker's TIME LAG. This arrangement is indeed used on a large scale in multilingual meetings
of international institutions such as the United Nations and the European Union (with
interpreting from and into up to 24 languages). In these settings, the initial interpretation into
the so-called pivot language (e.g. English) – for instance, from Chinese and Finnish respectively
in the two organisations concerned – can be relayed into any number of other languages at the
same time. This system was commonly used in the former Soviet Union and Eastern Bloc
countries, with Russian serving as the pivot language. The interpreter providing the relay
(known as the pivot or relayer) may work into his or her A language, or else into a B language
(known as 'doing a retour'). The latter is especially common with source languages of limited
diffusion. In either scenario, regardless of the issue of DIRECTIONALITY, relay interpreting
is generally viewed as problematic and as something to be avoided, by employers and
interpreters alike. The rationale is that the added complexity resulting from the doubling of an
already complex process makes relay doubly prone to errors or losses of message integrity. An
additional inconvenience in SI is that the cumulative time lag deprives the relay-taker, and even
more so his or her audience, of the full benefits of visual stimuli (such as GESTUREs or slides)
synchronised with the source speech. Paradoxically, considering its widespread use as an
integral part of multilingual conferencing, there has been very little research investigating the
alleged weaknesses of this 'second-best' practice.
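The routing logic behind relay can be pictured as a shortest-path search: given the directed language pairs that available interpreters can cover, find the shortest chain from source to target language. The sketch below is purely illustrative (the language pairs and function name are invented for this example, and real pairings are negotiated, not computed), but it reproduces the Khmer–French–Czech scenario described above.

```python
from collections import deque

def find_relay_route(source, target, interpreter_pairs):
    """Breadth-first search over available interpreter language pairs.

    Each pair (a, b) means an interpreter can work from language a into
    language b. Returns the shortest chain of languages from source to
    target, or None if no chain exists.
    """
    # Build an adjacency map: each interpreter pair enables a one-step hop.
    graph = {}
    for a, b in interpreter_pairs:
        graph.setdefault(a, set()).add(b)

    queue = deque([[source]])
    seen = {source}
    while queue:
        route = queue.popleft()
        if route[-1] == target:
            return route
        for nxt in graph.get(route[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(route + [nxt])
    return None  # no interpreter chain connects the two languages

# No direct Khmer-Czech interpreter, so French serves as the pivot.
pairs = [("Khmer", "French"), ("French", "Czech"), ("French", "English")]
print(find_relay_route("Khmer", "Czech", pairs))
# -> ['Khmer', 'French', 'Czech']
```

A direct pair, when one exists, always wins because breadth-first search returns the shortest chain first; this mirrors the general preference for direct interpretation over relay.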

Dialogue interpreting
As a comprehensive designation for a broad and diversified range of non-conference
interpreting practices, 'dialogue interpreting' has gained increased currency since the late 1990s,
particularly in the wake of two seminal publications with this title edited by Ian Mason (1999b,
2001). Evidence of this lately acquired popularity is the inclusion of an entry on dialogue
interpreting in the second edition of the Routledge Encyclopedia of Translation Studies (Baker
& Saldanha 2009) – in addition to the one on COMMUNITY INTERPRETING which had
already appeared in the first edition. Although the two expressions have been used as
synonyms, the prevailing approach is to view dialogue interpreting as the overarching category,
comprising interpreting activities in a broad variety of SETTINGS, particularly 'in the
community'. Leaving aside geographically more circumscribed denominations, such as bilateral,
contact and cultural interpreting, the only other label with comparable potential for
inclusiveness is 'liaison interpreting' (Gentile et al. 1996; Erasmus et al. 1999). The ascendance
of 'dialogue interpreting' in scholarly publications to the detriment of the latter expression is
the result of a theoretical turn in INTERPRETING STUDIES, which has brought to the fore
the dialogic nature of many interpreter-mediated encounters, and has come to be known as the
“dialogic discourse-based interaction (DI) paradigm” (Pöchhacker 2004a).
Definition

What, then, is the defining trait of dialogue interpreting that underlies such
strikingly diverse situations as medical consultations, welfare and police interviews, immigration
hearings, courtroom trials, parent–teacher meetings, business and diplomatic encounters,
broadcast interviews, and TV talk shows? In Mason's ground-breaking definition, dialogue
interpreting is described as “interpreter-mediated communication in spontaneous face-to-face
interaction” (1999a: 147). In his later review of the concept, Mason (2009a) lists four
fundamental characteristics: dialogue, entailing bi-directional translation; spontaneous speech;
face-to-face exchange; and CONSECUTIVE INTERPRETING mode. If this set of features
were to be strictly complied with (though Mason himself presents cases of deviation),
interpreting events which are now widely held to fall within the domain of dialogue interpreting
would not qualify. TELEPHONE INTERPRETING, for instance, would have to be excluded
as the face-to-face condition does not usually obtain (with the exception of more
technologically advanced, yet still rarely used, video-calls). Being normally conducted in the
simultaneous mode, SIGNED LANGUAGE INTERPRETING – the very field where the DI
paradigm was born (Roy 1996, 2000a) – would equally fall outside the boundaries of dialogue
interpreting, as would TALK SHOW INTERPRETING, where mixing consecutive and
chuchotage is a very common practice (the latter mode being used for the benefit of the foreign
guests on the show). More quintessentially than the face-to-face dimension or the mode, it is
therefore the discourse format that constitutes the unifying element among a vast array of
interpreter-mediated social activities, setting these apart from the monologue-based
communication of most conference interpreting events. Highlighting the core notion of
dialogue entails a number of significant shifts in perspective. Whereas the 'liaison interpreting'
label places emphasis on the connecting function performed by the interpreter, and
consequently on the centrality – both physical and metaphorical – of 'the person in the middle',
what is foregrounded in dialogue interpreting is interaction itself (see Merlini 2007). More
specifically, the type of interaction is one which involves all the participants in the joint
definition of meanings, mutual alignments, roles and identities. The first shift in perspective is
thus that interpreters are revealed as full-fledged social agents on a par with primary
interlocutors, with whom they co-construct the communicative event. Secondly, the
interpreter's clients, on their part, are seen to play a crucial role as co-determiners of
communicative success or failure. Thirdly, though bilingual communication remains largely
dependent on the interpreter's verbal contributions to the exchange, increased emphasis is
placed on directly accessible features, such as eye contact, facial expressions, GESTUREs,
postures and PROSODY, which may offer primary interlocutors complementary or even
alternative cues for sense-making and rapport-building. This novel outlook on interpreting has
been consolidated by a growing number of studies resting on sociolinguistic and sociological
underpinnings, and on real-life data analysis. The ensuing body of research based on
DISCOURSE ANALYTICAL APPROACHES has given dialogue interpreting a clearly
defined identity of its own within the field of interpreting studies, and is thus to be considered
as its other distinctive and unifying trait. Quite significantly, in the earliest scholarly publications
which deal explicitly with dialogue interpreting, the rationale for this terminological choice is
instantly clear. Though working independently of each other, Cecilia Wadensjö (1993) and
Helen Tebble (1993), two pioneers of research on spoken-language dialogue interpreting, raised the
same focal points, namely: whatever is attained or unattained in communication is a collective
activity requiring the efforts of all participants; interlocutors' turn-by-turn contributions to the
exchange need close scrutiny at a micro-analytical level through recording and
TRANSCRIPTION; and the interpersonal and socio-institutional dimensions also require
investigation at a macro-analytical level – all of which was taken to call for a new discourse-
based approach to the study of dialogue interpreting. In light of these considerations, the
following account of the state of the art in dialogue interpreting is divided into two sections.
The first provides a comparative overview of a few domains of dialogue interpreting practice,
which have been selected on account of their stark contextual dissimilarities. The aim here is to
show that, underneath such diversity, the salience of interactionally negotiated interpersonal
dynamics is a constant throughout. The second section looks into the core concepts of dialogue
interpreting, and presents the main research topics.

Respeaking

In broad terms, respeaking may be defined as the production of subtitles by means of speech
recognition. A more thorough definition would present it as a technique in which a respeaker
listens to the original sound of a live programme or event and respeaks it, including
punctuation marks and some specific features for the deaf and hard-of-hearing audience, to
speech recognition software, which turns the recognized utterances into subtitles displayed on
the screen with the shortest possible delay (Romero-Fresco 2011). It is, in effect, a form of
(usually intralingual) computer-aided SIMULTANEOUS INTERPRETING with the addition
of punctuation marks and features such as the identification of the different speakers with
colours or name tags. Although respeakers are usually encouraged to repeat the original
soundtrack in order to produce verbatim subtitles, the high SPEECH RATES of some
speakers and the need to dictate punctuation marks and abide by standard viewers' reading
rates means that respeakers often end up paraphrasing rather than repeating (SHADOWING)
the original soundtrack. The origins of respeaking may be traced back to the experiments
conducted by US court reporter Horace Webb in the early 1940s. Until then, court reporters
used to take shorthand notes of the speech and then dictate their notes for transcription into
typewritten form. Webb proposed to have the reporter repeat every word of the original speech
into a microphone, using a stenomask to cancel the noise. The subsequent recording of the
reporter's words would then be used for transcription. This was called voice writing and may
thus be seen as the precursor of respeaking, or realtime voice writing, as it is called in the US.
Respeaking involves the same technique but uses speech recognition software for the
production of TV subtitles and transcriptions in courtrooms, classrooms, meetings and other
settings. The very first use of respeaking or realtime voice writing dates back to 1999, when
court reporter Chris Ales transcribed a session in the Circuit Court in Lapeer, Michigan, with
the speech recognition software Dragon NaturallySpeaking. Respeaking was introduced in
Europe in 2001 by the Belgian public broadcaster VRT and by the BBC in the UK to replace
less cost-effective live subtitling methods for TV, such as the use of keyboards or stenography.
Standard intralingual respeaking for live TV programmes and court reporting has since
expanded to other applications and contexts. In many companies, subtitlers are using
respeaking to subtitle pre-recorded programmes in order to increase productivity. This
technique, known as scripting, allows respeakers to pause the original soundtrack if they wish,
which brings this type of respeaking closer to CONSECUTIVE INTERPRETING than
simultaneous interpreting. Respeaking is also being used for live public events (conferences,
talks, religious ceremonies, university lectures, school classes, etc.) and business meetings, with
respeakers working on site or remotely, sometimes (as in the case of telephone respeaking) with
no VISUAL ACCESS to the speakers. Interlingual respeaking is also being used in these
contexts, which highlights the resemblance between this technique and simultaneous
CONFERENCE INTERPRETING. Training in respeaking is usually focused on elements
specific to this discipline (especially those related to the use of speech recognition software),
and elements from both interpreting and subtitling for the deaf and hard of hearing (general
subtitling skills, awareness of the needs of deaf and hard-of-hearing viewers). As far as
interpreting is concerned, the emphasis is often placed on the skills required to listen,
comprehend and synthesize the source text and to reformulate it and deliver it live as a target
text. Multitasking is thus also involved in respeaking. As in simultaneous interpreting,
respeakers must listen to the source text, translate it (or respeak it) and monitor their own
voices. However, unlike interpreters, respeakers on TV often have to watch their output on the
screen as they are respeaking and correct it live if there are any mistakes or if their subtitles are
being obscured by on-screen text. Although more work is needed on this issue, research
findings (Romero-Fresco 2012) suggest that interpreting students perform better in respeaking
than those without prior training in this area, who seem to struggle to perform different tasks
simultaneously. However, interpreting students find it difficult to dictate punctuation marks
and must pay attention to their diction and INTONATION, which need to be more controlled
than in interpreting.
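One concrete step in the pipeline described above is turning the respeaker's dictation, in which punctuation is spoken as words, into clean subtitle text. The toy sketch below illustrates only that idea; the token names and function are invented for this example and do not reflect any particular speech recognition product's actual command vocabulary.

```python
# Map spoken punctuation tokens (assumed single-word commands) to marks.
PUNCT = {"comma": ",", "period": ".", "question-mark": "?"}

def respoken_to_subtitle(tokens):
    """Convert a respoken token stream into subtitle text.

    Dictated punctuation tokens are attached to the preceding word,
    mimicking how respeaking software renders spoken punctuation.
    """
    out = []
    for tok in tokens:
        if tok in PUNCT:
            if out:
                out[-1] += PUNCT[tok]  # attach mark to the previous word
        else:
            out.append(tok)
    return " ".join(out)

print(respoken_to_subtitle(
    ["good", "evening", "comma", "here", "is", "the", "news", "period"]))
# -> good evening, here is the news.
```

Even this simplified version shows why dictating punctuation adds load: the respeaker must interleave command tokens with the paraphrased content while keeping pace with the live soundtrack.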

Sight interpreting/translation (переклад з аркуша)

Sight interpreting/translation is one of the basic MODES of interpreting. It is a hybrid
form, in that a written source text is turned into an oral – or signed – target text in another
language in real time. The interpreter is expected to render the contents of the written text,
often without time for even a cursory reading, at a consistent, fluent pace. The term 'sight
translation', generally used to identify this mode of interpreting in English as well as various
other languages (e.g. traduction à vue in French, traducción a la vista in Spanish), is intrinsically
inaccurate. Given the real-time processing demands, what is referred to as sight translation is
more aptly defined as a form of interpreting and has indeed long been regarded as a form of
SIMULTANEOUS INTERPRETING (e.g. Herbert 1952; Shiryaev 1979). Therefore, the term
'sight interpreting' is much better suited to convey the essence of this mode of interpreting (see
also Čeňková 2010). Sight interpreting is encountered in a wide range of SETTINGS and
work situations. These include meetings, often of a bilateral nature, which are usually
conducted in consecutive mode. Written documentation (e.g. annual reports, minutes) is then
delivered in sight interpreting mode, either in full or in selected fragments. Sight interpreting is
also frequently used at press conferences, where statements or press releases may be delivered
by an interpreter in a language which the audience understands. Other documents which lend
themselves to sight interpreting in conference settings include press reports which may be of
interest to a meeting, letters of apology for absence, or congratulations. Sight interpreting may
also be used for drafts prepared in one language to be submitted to a plenary or other body for
completion. In community-based institutional settings, sight interpreting may be required in
backtranslating the written record of an interpreter-mediated interview in POLICE SETTINGS and
ASYLUM SETTINGS, for rendering expert witness statements in courtroom interpreting, or
for medical reports and patient files in HEALTHCARE INTERPRETING. Sight interpreting
is similarly also used in SIGNED LANGUAGE INTERPRETING, a special example being
that of a DEAF INTERPRETER working from a teleprompter in MEDIA INTERPRETING.
In terms of cognitive processing, the interpreter working at sight rather than from auditory
input has the advantage of controlling his or her pace; on the other hand, sight interpreting
makes additional cognitive demands as the text is constantly before the interpreter's eyes,
which increases the risk of lexical and syntactic INTERFERENCE. Despite these constraints,
the interpreter's delivery is expected to be natural and without unnecessary corrections and
REPAIRS while maintaining eye contact with the listener(s) or user(s). In interpreter
EDUCATION, sight interpreting has been used in APTITUDE TESTING to determine
whether candidates are able to quickly grasp the essentials of a text and render its meaning
(see Russo 2011). Its use in interpreting PEDAGOGY, recommended and actual, varies in
different schools. Training in sight interpreting often starts only once students master the basics
of CONSECUTIVE INTERPRETING and are thus able to render the message, as opposed to
words, based on their understanding and analysis of the source language text (e.g. Seleskovitch
& Lederer 2002; Viaggio 1995; Weber 1990). Sight interpreting is generally considered a
good exercise for working up speed and thus for preparing students to undertake simultaneous
interpreting in the booth. When allowing for prior reading, sight interpreting is
believed to improve students' ability to navigate in a text applying a non-linear approach and to
identify core information. In this respect, it is essential preparation for the composite skill of
SIMULTANEOUS WITH TEXT. Sight interpreting has attracted relatively little research
interest. Among the topics studied are the type of information processing in sight interpreting
compared to other modes (Viezzi 1989, 1990; Lambert 2004), and output-related constraints,
given the high probability of interference from the source text (e.g. Agrifoglio 2004; Gile 2009).
In her PhD dissertation, Jiménez Ivars (1999) undertakes a comprehensive descriptive analysis
of sight interpreting as a working mode and analyses the task in terms of the PACTE
translation competence model. More recent work has investigated 'sight translation' in the
framework of translation process research. In a small-scale experimental study, Dragsted and
Gorm Hansen (2007) compared the performance of translators and interpreters on a sight
translation task and found differences in behaviour regarding temporal variables and
translational approach. Translators paused more and took much longer to complete the task,
and interpreters gave a 'freer' rendering, focusing less on individual words, as demonstrated by
an EYE TRACKING analysis. Eye tracking data (pupil size, fixations and regressions) were
also employed by Shreve, Lacruz and Angelone (2010) in an experimental study focusing on
cognitive effort and visual interference in a sight-interpreting task. Their results were in line
with earlier findings regarding interference from the continued visual presence of the source
text, and confirmed the cognitive complexity of the task, resulting from the need to cope with
the high lexical density and syntactic complexity of a written text under the demands of fluent
oral production.

Signed language interpreting (усний переклад жестових мов)


Signed language interpreting (SLI) prototypically means interpreting between a signed language
and a spoken or another signed language, and is sometimes referred to as visual language
interpreting, particularly in CANADA. (Since this may involve language modes other than sign
languages proper, such as TRANSLITERATION, signed language interpreting is preferred as
the broader term, whereas practitioners are commonly referred to as sign language interpreters.)
Sign(ed) languages are different in every country; they are naturally occurring languages that are
independent from, but related to, the spoken languages of the countries where they are used,
and are used by deaf people as their first or preferred language of communication. Spoken-
language interpreters work between two linear languages, whereby one word is produced after
another and the message is built up sequentially. Sign languages, however, are visual-spatial
languages that can create meaning using space, location, referents and other visually descriptive
elements. Therefore sign language interpreters are constantly transferring information between two
alternate modalities, which requires the representation of information in very different ways.
This is referred to as bimodal, as opposed to unimodal, interpreting (Nicodemus & Emmorey
2013). Signed languages inherently encode 'real-world' visual information. When hearing
certain abstract concepts or generic descriptions, it is necessary for sign language interpreters
to visualize the information, and implicitly encode it in their interpretation. Brennan and
Brown (1997) cite an example: in order to render 'X broke the window', the British Sign
Language (BSL) interpreter ideally needs to know the shape of the window, and how it was
broken, in order to give an accurate visual representation of the event. In the
reverse direction, when 'voicing' for hearing people, interpreters need to distil visual
information into idiomatic spoken-language usage. For example, a deaf person can immediately
convey visually where a person they were having a conversation with was seated, but a hearing
person would not expect to hear something like: 'I was chatting with John who was sitting on
my right', unless this were relevant in a legal context. Thus, the bimodal nature of SLI creates
additional COGNITIVE LOAD for interpreters (Padden 2000).

Fingerspelling

Fingerspelling as a mode of signing operates on the alphabetic level and represents individual
letters of the Roman alphabet (Stokoe et al. 1965). Fingerspelled tokens, each of which is a sign (a
sequence of handshapes, locations, orientations, and movements), represent a graphic
character of the alphabet and are used in sequences to create lexical items. Fingerspelling
systems differ by country and by the signed language with which they are associated, and the
use of fingerspelling also depends on setting-related needs and language user preferences.
American Sign Language (ASL) uses fingerspelling more often and more prominently than
many other signed languages (Padden 2006). Research on fingerspelling in ASL is therefore
relatively abundant, but not all of it may be generalizable to fingerspelling in other signed
languages. Fingerspelling in ASL is used to represent words with various grammatical functions
such as proper nouns, adjectives, verbs, nouns, expletives, English function words, and
transplanted English function words and phrases (Padden 1998). COMPREHENSION of
fingerspelling is difficult for many hearing interpreters and is a source of errors in
interpretations. Many interpreters report experiencing anxiety that interferes with fingerspelled
word recognition. Depending on the signed language and the way it uses fingerspelling, there
may be at least three types of fingerspelling that interpreters must learn to recognize and
produce: careful fingerspelling typically shows one fingerspelled token for each character in the
corresponding printed word, and is characterized by a relatively slow speed and even rate of
presentation; rapid fingerspelling, used for non-initial presentations of the word, has
constituent signs of a shorter duration that tend to blend together or to be eliminated; and
lexicalized fingerspelling tends not to vary much in speed or form from one presentation to the
next. This type of fingerspelling has also been described as lexical borrowing, as it tends to have
the rhythm and characteristics of a single sign rather than those of fingerspelling (Patrie &
Johnson 2011). Comprehending fingerspelling requires complex cognitive processes. The
fingerspelled sequence must be processed, serially connected to an existing template (a pattern
that is created and stored in the brain for that word if one exists), and converted by the receiver
to a mental image of a written word with which they are familiar. Practice with rapid serial
visual presentation (RSVP) and template building (Patrie & Johnson 2011) shows promise in
addressing the pervasive difficulty associated with fingerspelled word recognition, and in
reducing the corresponding effort related to comprehension.

Interpreting for deafblind persons (усний переклад для осіб з вадами слуху та зору)
DeafBlindness is a condition that manifests itself differently for each person along a spectrum
of combined vision and hearing loss. Time of onset and residual vision or hearing are factors
affecting language choice (signed or spoken), use of technological supports (cochlear implants,
assistive listening devices, computer applications), receptive and expressive communication,
employment, quality of life, leisure, relationships, independence, and education (RID 2007).
This heterogeneity is also reflected in preferred terms and notations. 'DeafBlind', 'Deafblind'
and 'deafblind' (without a hyphen, and with or without upper case) are preferred notations in
the international community, and represent a single condition more complex than the
combination of vision and hearing loss. The hyphenated term 'Deaf-Blind' is widely accepted
by individuals with congenital deafness and acquired vision loss (usually after language
development). Users of signed language who maintain cultural-linguistic ties to the Deaf
community may also prefer the hyphenated designation. When referring to people with vision
and hearing loss from birth or early childhood, 'deafblind' is commonly used. Interpreting for
the heterogeneous DeafBlind population is a complex specialty, requiring competence in
communication methods beyond those required for SIGNED LANGUAGE INTERPRETING
with the general Deaf population. Many interpreters begin their training as a Support Service
Provider (SSP), a role characterized by independent living assistance that includes providing
human guide services, environmental orientation, and supplementation of the interpreted
message with visual information. Ethically, interpreters whose roles are expanded to include
SSP responsibilities must take particular care not to encroach upon the service user's
independence. Interpreters working for DeafBlind persons need great flexibility in adjusting to
personal communication preferences and varying degrees of independence of consumers, and
they must be comfortable with close interaction and touch. As a consequence of physical
proximity and contact, special attire and grooming are required, including a skin
color-contrasting, plain shirt with three-quarter-length sleeves and a high neckline, manicured
hands, and avoidance of colognes and shiny objects. Interpreting for people who are DeafBlind
requires use of tactile or non-tactile techniques that depend upon consumer preference, lighting
and other contextual factors. A consumer who can access visual information in certain lighting
or at a certain distance or angle might prefer the interpreter to work within a restricted field
where signs can be seen more easily. This consumer may place his or her hands on the
interpreter's forearms to follow signs in a limited area, using a method called tracking. When
lighting is too dim for tracking, or the Deaf consumer has no functional vision, tactile
interpreting allows the consumer's hands to maintain contact with the interpreter's for reading
the message by touch. The tactile interpreting scenario presents options for POSITIONING
(e.g. side-by-side, interlocking knees), sign adaptations, and use of one-handed and two-handed
methods, all of which are highly contextual and negotiated by interpreters and consumers.
Tactile interpreting can also be accomplished through methods other than a formal signed
language, such as Lorm (more prevalent outside North America), print-on-palm, tactile braille, and
FINGERSPELLING. Children or late-deafened consumers who use spoken language may
communicate with the Tadoma Method, a seldom-utilized form of tactile lipreading which
originated in the US (NCCC 2001). Interpreters follow behavioral guidelines unique to the
DeafBlind population, such as always self-identifying upon approach and anchoring a person to
a tangible object before stepping away. They must also seek to incorporate relevant visual as
well as auditory information about the environment, such as the room configuration, other
persons present and their actions, the location of exits, and other items of interest. These
interpreter-generated utterances ensure that the consumer is better informed and facilitate
independent communication management (Metzger et al. 2004).

Note-taking (використання запису/скоропису)


CONSECUTIVE INTERPRETING of entire speeches presents major challenges to the
interpreter's MEMORY. Faced with the need to render speeches lasting five to ten minutes or
even longer, interpreters take notes to avoid overburdening their memory during the initial
processing phase (COMPREHENSION) and to ensure the retrieval of content stored in
memory during the second processing phase (production). However, notes do not replace
memory; they are used by interpreters to aid their memory, which is achieved by jotting down
the ideas, structure and some details of a speech, but not the source-language wording.
Consecutive interpreters thus complement their cognitive memory with a 'material memory'
(Kirchhoff 1979), that is, written cues taken down on their notepad. Kirchhoff (1979) talks of a
'parallel storage strategy', where the information to be remembered is stored simultaneously in
two different but interdependent ways. While listening and simultaneously taking notes,
interpreters are performing a continuous analysis of the speech which allows them to
understand and remember its content and 'main thread' (Matyssek 1989). Aside from such
items as unfamiliar names or NUMBERS, which are particularly difficult to remember,
interpreters ideally note down only such units of the source text as they have successfully
analysed and fully understood. The role of notes as merely an aid to memory is reflected in
quantitative analysis of how much the interpreter actually commits to paper: according to
Matyssek (1989), only 20 to 40 percent of the source text, at most, is represented by notes.
Principles and systems

Consecutive interpreting was the interpreting mode commonly used
in the 1920s at conferences of the League of Nations and the International Labour
Organization. Interpreters at that time had not gone through formal training, but developed
their interpreting skills and techniques on the job (Herbert 1952). Note-taking strategies had to
be developed individually in order to cope with long speeches; the introductory courses
reportedly offered at the German Foreign Office from 1921 onwards (Schmidt 1949) were an
exception. The note-taking systems developed individually and intuitively by interpreters in the
course of their working experience were largely based on similar principles, such as writing
down key idea units, logical links and marks of negation as well as dates, numbers and names,
using different abbreviation procedures and arranging notes vertically. In the 1950s these
practices came to be summarised by Herbert (1952) and, in particular, Rozan (1956), thus
establishing the Geneva School's classic canon of note-taking. This approach, based on using
mainly words and some 20 recommended symbols, was further developed by Ilg (1988; Ilg &
Lambert 1996), integrating more symbols and also emoticons. Unlike Rozan (1956), Minyar-
Beloruchev (1969b) in the SOVIET SCHOOL opted for more symbols in order to avoid
INTERFERENCE from source language notes during target text delivery. This 'language-
independent' approach based on symbols to represent concepts was further developed in
Heidelberg by Matyssek (1989), who proposed an elaborate system of combinatory symbols.
Because of his emphasis on symbols, Matyssek has often been seen in opposition to
Rozan, and criticized for his exhaustive collection of symbols. However, the various
approaches do not differ that much with regard to basic principles (Ahrens 2005b). On
the other hand, some disagreement exists over the choice of language in note-taking,
with some favouring the target language (e.g. Herbert 1952), others the source language
(e.g. Ilg 1988), and still others (e.g. Matyssek 1989) giving preference to the
interpreter's A language. Given the tradition of interpreter training in Europe,
specialised literature on note-taking has a distinctly European focus, with publications
mostly in French and German, and more recently Italian (e.g. Monacelli 1999); an
English version of Rozan's 1956 classic became available only in 2002. With the notable
exception of Gillies (2005), textbooks in English tend to focus on teaching consecutive
interpreting more generally (e.g. Bowen & Bowen 1984). The same applies to
publications in Asian languages, such as Chinese or Japanese (e.g. Liu 2008).

Watch these videos


https://www.youtube.com/watch?v=Cz3fjAX5Meg

https://www.youtube.com/watch?v=v0yyZ72eiKc

https://www.youtube.com/watch?v=fBMhJDjkS2c
