Improvements To NAATI Testing
Contributing authors:
Ignacio Garcia
(University of Western Sydney),
Jim Hlavac (Monash University)
Mira Kim (University of NSW)
Miranda Lai (RMIT University)
Barry Turner (RMIT University)
& Helen Slatyer (Macquarie University)
For:
30 November 2012
Project Ref: RG114318
Acknowledgements
The Improvements to NAATI Testing Project was carried out with funding from the National
Accreditation Authority for Translators and Interpreters, and in-kind support from the University
of New South Wales, University of Western Sydney, Monash University, and RMIT University.
We thank Helen Slatyer (Macquarie University), Associate Professor Catherine Elder (University
of Melbourne), Professor Claudia Angelelli (San Diego University) and Professor Gyde Hansen
(Copenhagen School of Business) who provided invaluable expert advice on assessment and
evaluation.
We are also grateful to Associate Professor Jemina Napier, a sign language expert from
Macquarie University, Dr Michael Cooke, Indigenous interpreting expert, and Marc Orlando from
Monash University, who also worked as advisors on the project and participated in the working
groups.
Project officers Silvia Martinez, Elizabeth Friedman-Rhodes and Julie Lim ensured effective
project coordination and Elizabeth Bryer, David Deck and Louise Hadfield provided research
support to the working groups. Dr Ron Brooker and Annette Mitchell assisted in the analyses of
the survey and focus group discussions.
Finally, we are grateful for the contribution and support of various stakeholders including
everyone who participated in the focus groups and interviews, questionnaire respondents, and
specialist working group members. We would also like to thank the NAATI owners, board
members, staff and the Special Committee for their valuable help and feedback.
Commercial-in-confidence 2
Table of Contents
Acknowledgements ....................................................................................................... 2
Table of Contents ........................................................................................................... 3
Table of Figures ............................................................................................................. 6
Executive Summary ....................................................................................................... 7
1. Introduction ................................................................................................................ 9
1.1 The project plan .................................................................................................................. 9
1.2 Background....................................................................................................................... 10
1.3 Policy Review ................................................................................................................... 15
2. A conceptual model for a revised accreditation system ...................................... 18
2.1 Review of accreditation/certification systems around the world ....................................... 18
2.1.1 Different terminology ................................................................................................................. 20
2.1.2 Comparisons between different countries ................................................................................. 20
2.1.2.1 Australia ............................................................................................................................ 20
2.1.2.2 Latin America .................................................................................................................... 21
2.1.2.3 North America ................................................................................................................... 22
2.1.2.4 Western Europe ................................................................................................................ 23
2.1.2.5 Northern Europe ................................................................................................................ 26
2.1.2.6 Southern and Eastern Europe ........................................................................................... 27
2.1.2.7 Asia ................................................................................................................................... 27
2.1.2.8 South Africa ....................................................................................................................... 28
2.1.3 Recent initiative for global harmonisation of national certification and accreditation systems .. 29
2.1.4 Conclusions on international accreditation systems .................................................................. 29
2.2 Results from consultations with interpreting and translation practitioners, educators,
examiners and agencies on issues relating to pre-requisites and specialisations.................. 32
2.2.1 National survey .......................................................................................................................... 32
2.2.1.1 Demographic information .................................................................................................. 32
2.2.2 Consultation with Aboriginal Interpreter Service ....................................................................... 34
2.3. Suggested conceptual model for an improved accreditation system............................... 35
3. Testing ...................................................................................................................... 43
3.1 Language testing .............................................................................................................. 43
3.1.1 IELTS ........................................................................................................................................ 43
3.1.2 TOEFL ....................................................................................................................................... 44
3.1.3 CEFR ......................................................................................................................................... 44
3.1.4 NFAELLNC ................................................................................................................................ 45
3.2 Interpreting and Translation Testing ................................................................................. 46
3.2.1 The current NAATI test components ......................................................................................... 47
3.2.1.1 Translation Tests ............................................................................................................... 47
3.2.1.2 Interpreter Tests (Spoken and Signed) ............................................................................. 47
3.2.2 Interpreter and Translator competencies, skills and related knowledge ................................... 47
3.2.2.1 Interpreting ........................................................................................................................ 48
3.2.2.2 Translation ......................................................................................................................... 51
3.3 Marking systems ............................................................................................................... 52
3.3.1 Overview of marking systems ................................................................................................... 52
Appendices ................................................................................................................... 96
Appendix 1 NAATI Project Specialist Working Group Memberships .................................................... 96
Appendix 6 CEFR B2 Descriptors for the macro-skills of Speaking and Listening (Association of
Appendix 7 The National Framework of Adult English Language, Literacy and Numeracy Competence ... 111
Appendix 13 ATA (2011b) Framework for Standardized Error Marking Explanation of Error Categories ... 120
Appendix 14 The Federal Court Interpreter Certification Examination (FCICE) (USA) ......................... 124
Appendix 15 Marker’s Guide for the CTTIC (Canadian Translators, Terminologists and Interpreters
Council) Translation Test (CTTIC, n.d., pp. 3-5) ............................................................... 125
Appendix 16 Community interpreting services of Ottawa-Carleton test (Roberts, 2000) ...................... 127
Sign Language Criteria (linguistic criteria) AVLIC Evaluation Committee (2007, as cited in
Appendix 19 Official Journal of the European Union / Amtsblatt der Europäischen Union (2006) ........ 131
Table of Figures
Table 1: Quotations from judicial officers on the reliability of current accreditation standards ................... 14
Table 3: Comments from survey respondents on the weaknesses of the current system ......................... 30
Table 10: Top Interpreter skills to be tested as expressed by survey respondents ................................... 50
Table 11: Top Translator skills to be tested as expressed by survey respondents .................................... 52
Table 15: NAATI Examiners’ comments on computer use for translation examinations............................ 72
Executive Summary
This report presents the results of the first phase of the project: “Improvements to NAATI
testing. Development of conceptual overview of a new model for NAATI standards, testing and
assessment”. The project, commissioned by the National Accreditation Authority for Translators
and Interpreters (NAATI), consisted of three stages: 1. a review of the literature and
consultations with the different stakeholders through focus groups, interviews and
questionnaires; 2. the work of five specialist working groups on issues relating to prerequisites
to testing, specialisations, testing, assessment and technology; and 3. the development of a
new conceptual model.
The authors acknowledge NAATI’s crucial role in the establishment of Interpreting and
Translation as a profession in Australia, its status as an international leader in the accreditation
of community interpreters and translators in multiple language combinations and its important
relationship with Interpreting and Translation education and training. As part of this integral role,
NAATI seeks to reflect on best practice. In response to this proactive imperative this report has
been commissioned to review all aspects of the current system that must be addressed in order
for NAATI to maintain and strengthen its position as a rigorous accreditation body. The report
highlights the need for improvement in the areas of prerequisites to accreditation, validity and
reliability of testing instruments, assessment methods and training of examiners. It is worth
noting that these shortcomings are not unique to NAATI or to Australia. However, a number of
certification bodies around the world are now beginning to address them and we believe it is
time for NAATI to do the same. The authors commend NAATI for its willingness to review its
practices and implement improvements amidst limitations of resources, logistical challenges
and lack of universal support for its role.
The report makes 17 recommendations, which must be viewed within the framework of the new
proposed model. Some recommendations will be easier to implement than others, and we
acknowledge that any major changes to the current system will require time and adequate
resources in order to be implemented.
Recommendations
1. That all candidates complete compulsory education and training in order to be eligible to
sit for the accreditation examinations, in accordance with the new suggested model
outlined in section 2.3, Table 7.
2. That NAATI produce an information package explaining the meaning of Interpreter and
Translator, prerequisites for testing and expectations of potential candidates, including
expected levels of language proficiency in English and the LOTE, as outlined in section
2.
3. That NAATI select (or devise) an on-line self-correcting English proficiency test to be
taken by potential candidates for a fee, as part of the non-compulsory preparedness
stage, as outlined in sections 2.3 and 3.1.
4. That NAATI language panels select (or devise equivalent) on-line self-correcting
proficiency tests in the various languages to be taken by potential candidates for a fee,
as part of the non-compulsory preparedness stage, as outlined in sections 2.3 and 3.1.
5. That an Advanced Diploma in any discipline (or equivalent) be the minimum pre-
requisite for the Generalist accreditation, and a Bachelor’s degree in any discipline (or
equivalent), or a NAATI-approved Advanced Diploma in Interpreting, be the minimum
pre-requisite for Specialist accreditations, as outlined in section 2.
6. That the current levels of accreditation be replaced by a Generalist level (for both
Interpreting and Translation) and Specialist accreditations for Interpreting, with a
Provisional Generalist level with a sunset clause of 2 years, particularly for new and
emerging and Aboriginal languages, as explained in section 2.
8. That NAATI move, in the first instance, to computerised translator tests. Secondly, that
test candidates undertaking computerised translator tests be allowed access to the
internet while taking the test1, taking account of security considerations. See section
3.5.2 and section 4.
9. That Interpreting tests be conducted live, as much as possible. Where this is not
possible, that candidates be provided with video recorded interactions and that their
performance be video recorded for marking. See section 3.5.
10. That Interpreting tests at the Generalist level for both spoken and signed languages
include a telephone interpreting component consisting of protocols for identification of all
interlocutors, confidentiality assurances and dialogue interpreting only. See section 3.5.1
and section 4.2.1.
11. That a validation research project be conducted to design the new testing instruments
for Interpreting and Translation. See section 3.6.
12. That new assessment methods using rubrics (see Table 8) be empirically tested as part
of the validation project.
13. That new examiners’ manuals be written to reflect the new assessment methods to be
adopted.
14. That NAATI review the current composition of examiners’ panels to include more
graduates of approved courses and fewer practitioners who hold no formal qualifications
in Interpreting and Translation. See section 3.7.
15. That examiners undertake compulsory training before being accepted on the panel, and
continuous training while on the panel2. See section 3.7.
16. That NAATI establish a new Expert Panel, with subpanels for the specialisations, to
design the curricula for the compulsory training modules and provide guidelines for the
final assessment tasks.
17. That NAATI continue to approve tertiary programs and encourage all applicants to take
the formal path to accreditation where such is available for the relevant language
combinations.
1 This is being trialled by the American Translators’ Association [ATA] and they have signalled their readiness to offer support and
technical advice to NAATI working group members in regard to the introduction of logistic protocols and recently-developed
software.
2 For Aboriginal language examiners and possibly other languages of limited diffusion, training may be unrealistic in some
languages due to literacy/numeracy considerations. In such cases we recommend that untrained examiners be partnered with a
trained examiner, as explained in the report.
1. Introduction
On 8 April 2011, the National Accreditation Authority for Translators and Interpreters (NAATI)
published the “Improvement to NAATI Testing. Expressions of Interest” document, seeking
applications to conduct the first phase of a three-phase project. Phase 1 of the project was
titled: Development of a conceptual overview of the elements of a new model for NAATI’s
standards, testing and assessment. According to the document, the project’s aim would be “…to
improve various aspects of NAATI’s testing process and related matters”, with special emphasis
on issues relating to validity, reliability and practicality. This would be the first comprehensive
review of the NAATI accreditation system since its inception in 1977.
A team of researchers submitted an expression of interest by the due date, which was accepted
and followed by a more detailed research project proposal. The proposal presented a revised
structure, with Phase 1 consisting of the exploratory phase of the project, envisaging Phase 2 to
be the validation phase and Phase 3 the implementation and trial phase. The NAATI Board
approved the research proposal for the first Phase in September 2011. The first Phase of the
project commenced in October 2011 and ended in November 2012.
The research team comprised Professor Sandra Hale (University of NSW) as Chief Investigator;
Dr Mira Kim (University of NSW), Dr Jim Hlavac (Monash University), Adjunct Professor Barry
Turner (RMIT University), Miranda Lai (RMIT University), Dr Ignacio Garcia (University of
Western Sydney) as co-investigators; Helen Slatyer (Macquarie University), Professor Claudia
Angelelli (San Diego State University, USA), Professor Gyde Hansen (Copenhagen School of
Business, Denmark) and Associate Professor Catherine Elder (Language Testing Research
Centre, University of Melbourne) as consultants, and Associate Professor Jemina Napier
(Macquarie University), Dr Michael Cooke (Aboriginal interpreting expert), and Marc Orlando
(Monash University) as advisors to the project.
Stage 2 of the project consisted of the work of five specialist working groups, covering
prerequisites to testing, specialisations, testing, assessment and technology.
Each group was led by one of the researchers who invited experts in each of the relevant areas
to participate in the work of each group. Among others, the consultants and advisors also took
part in the working groups (See Appendix 1 for group memberships).
Stage 3 of the project consisted of the analysis of the results of the previous two stages, the
development of a new conceptual model and a set of recommendations.
Three detailed progress reports were submitted to NAATI, one at the end of each Stage. The reports
were also submitted to the international consultants for their feedback. This final report
consolidates the results of each of the three stages, which were already presented in the
progress reports, and makes conclusions and recommendations. The report will be organised
around three main themes: A conceptual model for a revised accreditation system, including
pre-requisites, specialisations and paths to accreditation; Testing, including issues of standards,
validity and reliability, test content and delivery and assessment; and recommendations to
support and implement such a model. The last section will provide a summary of the
recommendations highlighted throughout the report and will suggest practical ways of
implementing them.
1.2 Background
Testing and accreditation lie at the interface of training and professional work. Australia
is at the forefront in the field of T&I accreditation, and there is growing recognition of the
importance of using accredited interpreters and translators, but in certain quarters (not
least amongst some practitioners themselves) there remain questions as to the need for
accreditation and the ability of accreditation tests to determine accurately a candidate’s
ability to work at a professional standard of quality outside the examination room. For
the National Accreditation Authority for Translators and Interpreters (NAATI) to be
perceived as a credible testing authority, there is a need for rigorous selection of
examiners, workshops for examiners, and an ongoing review of standards, marking
guidelines and individual panels. I am pleased to see that NAATI is working on all of
these areas, and it deserves commendation for its work in what is often a difficult
climate (Wakabayashi, 1996).
Wakabayashi’s observation, made sixteen years ago, is to a large extent still true today. Australia
has been praised by many for its developments in community interpreting, especially with
regards to its nationwide accreditation system, government funded service provision and formal
training in multiple language combinations. NAATI is unique in the world for a number of
reasons, two of which are paramount: it is a national accreditation body with the laudable aim to
accredit in over sixty international languages and forty-five Indigenous languages3, and it is
owned by the Federal government and all State and Territory governments. For these reasons
NAATI has been internationally recognised, as very few countries have managed to have
uniform systems that give credentials in so many languages.
As a public accrediting body, NAATI is essentially different from equivalent bodies in other
countries, such as those in the US (e.g. the American Translators’ Association, ATA) or the UK
(e.g. the Chartered Institute of Linguists), which are ‘owned’ by members of the profession, as
will be detailed in our review below. The main reason for the Australian governments’
involvement in NAATI’s establishment and ownership was the high level of post-World War II
immigration by people who spoke languages other than English (Ozolins, 1991).
One of the most important government commissioned reports in relation to Interpreting and
Translation in Australia is “The Language Barrier” report, also known as the COPQ4 report,
which recommended the establishment of a national council as an overall “standard setter” for
interpreters and translators, working especially in community settings (1977:3), which was later
to become NAATI. Thirty-five years ago, the report found that “…employers tend to underrate
the level of interpreters required, just as they have, over the past twenty-five years, underrated
the need for interpreters of any kind”. The report further comments that the provision of
3 The range of these languages is extremely varied: from major European, Middle Eastern and Asian languages to relatively low-
volume developing-country languages such as Dinka, Nepali, or Tetum. By comparison, the CIOL conducts their DPSI tests in over
forty languages. The ATA conducts translator testing in seventeen languages, in only eight of which is testing available in both
directions.
4 Committee on Overseas Professional Qualifications
incompetent practitioners “…may lead to the violation of the human and civil rights of those
involved” (1977:14). This observation from the cited report highlights two important issues with
which NAATI has had to grapple: the need to guarantee minimum standards of competence,
and the need to ensure the largest possible pool of accredited practitioners across all
community languages.
Those two requirements are necessary in order to ensure access and equity among all
members of the Australian community regardless of language and cultural background. It is
important to note, however, that the two aims stated above may work against each other: the
desire to ensure minimum standards will inevitably limit the size of the available pool, especially
in some languages. However, pressure from government to ensure the largest possible pool
remains strong5, and NAATI needs to be responsive to that pressure. On the other hand, it is
also worth highlighting that these governments’ interests in ensuring a large pool of accredited
T&Is have generally not been reflected in the amount of funding they have made available to
NAATI. NAATI thus has to do its job in an environment of constant financial limitations.
A common concern among the representatives of the Commonwealth and State government
owners during the focus group discussion and interviews conducted for this project was the
issue of increased costs associated with an improved system and of limited sources of funding.
The main concern, however, should be the relationship between accreditation and competence.
The provision of accredited practitioners who lack the expertise to perform their required
tasks adequately will only have a negative effect on those receiving the services, on those
providing the services and on NAATI’s credibility as an accrediting body; more importantly, as
the COPQ report states, it will violate the basic human rights of those
receiving the services. We do believe, nevertheless, that the current accreditation system has
provided a benchmark, which has ensured a certain level of competence, normally
distinguishing those with and without accreditation and those with Paraprofessional and
Professional accreditation. However, we also strongly believe that, as praiseworthy as the
current system is, it has shortcomings (many of which are common to many other similar
credentialing bodies around the world) that must be addressed in order to progress to the next
level of development. We therefore commend NAATI for their willingness to review the current
system and to implement changes for its improvement.
There are multiple factors that contribute to the complex task of ensuring competence and
quality. Among these are issues relating to pre-requisites to accreditation, the validity and
reliability of the testing instruments and assessment models, and post-accreditation checks. As
previously stated, NAATI’s desire to ensure that there is a concrete link between accreditation
and competence has motivated the current review, with the aim to make recommendations to
implement changes to the current system.
interpreting and translation normally require education and training at the tertiary level” (COPQ,
1977:3). This recommendation was taken up by NAATI in its course approval system, which
continues to operate successfully to date. The original intention was that most accreditations
would be through NAATI-approved degrees at universities (for the former Level 37) or diplomas
at institutions in the Vocational Education and Training (VET) sector (for the former Level 28).
Direct testing by NAATI was made available as a ‘back-up’ to this system (with a further avenue
through recognition of overseas qualifications). This position is still supported by NAATI, as its
current Chief Executive Officer stated in a presentation to ASLIA:
The NAATI ideal is a practitioner who prepares for the profession by gaining tertiary
qualifications and who holds the NAATI credential at the appropriate level to show they
are practice-ready (John Beever, 2012)9.
However, because courses in translating and particularly interpreting are staff-intensive and
therefore expensive to run, the number of courses available, especially in the Higher Education
sector, has tended to decline overall since the 1980s, when most states had a NAATI-approved
Bachelor of Arts in Interpreting and Translation. The situation may have been exacerbated by
NAATI continuing to test in the same languages for which courses were available (Ozolins,
1998). Three decades later, only one university has retained the undergraduate degree, with the
others offering post-graduate awards. The VET sector, however, in response to the decline in
undergraduate I&T degrees, began to offer Advanced Diplomas approved by NAATI at the
Professional level (former Level 3) in the 1990s, thus blurring the distinction between degrees
and TAFE diplomas that existed in the beginning. Just as importantly, the range of languages
offered at universities has also tended to be restricted to those with the highest volume of
demand, especially languages that have attracted high numbers of international students, such
as Chinese, since the late 1990s. On the other hand, training in languages of greatest domestic
need (e.g. the ‘newly arrived’ language communities as well as Indigenous communities in
northern Australia) has generally (with the exception of some languages offered at some TAFE
colleges, especially in Victoria and South Australia) been limited and ad-hoc, or not available at
all. These factors have tended to make direct NAATI testing the de facto ‘standard option’ for
the domestic market, so that the majority of local current practitioners would have gained their
accreditation by this method. Although NAATI reports that in 2010/2011 (NAATI, 2010-2011),
70% of accreditations were obtained by course completion, that figure can only reflect the
languages for which there are formal NAATI approved courses available, with Chinese10 being
the language with the highest number of graduates (mostly international students who return to
their country of origin to practise). It is also worth noting that the majority of practising
interpreters and translators (59%) who responded to our survey as part of this project had been
practising for more than 5 years, with 45% having over 10 years’ experience and that 72% of
translators and 66% of interpreters had gained their accreditation by sitting an external NAATI
test. This tends to indicate that the current workforce is mostly made up of untrained
practitioners, a situation we hope will change in the near future.
When candidates gain accreditation by testing, this is done on the basis of a single relatively
short test. This situation where a single test (or a combination of such single tests) can
potentially give access to the profession makes NAATI testing a distinctly ‘high-stakes’ issue.
Significantly, this path to accreditation means that even when candidates are successful, there
7 Current Professional Level
8 Current Paraprofessional Level
9 “Rediscovering our roots: Shaping our future”. Address given by John Beever, NAATI CEO, at the ASLIA National Conference,
25 August 2012, Adelaide.
10 While no formal figures are kept or available to the public on the ratio of Chinese students to other languages, it is a well known
fact among educators that international Chinese students predominate in I&T classrooms. To give an example, the current numbers
for the Master’s program at the University of New South Wales indicate that 82% of commencing students in Semester 2, 2012 were
Chinese.
is no guarantee that they have the level of competence required to operate in the different
fields, as many of the competencies are not tested in the current tests (see discussion in 2.1.2.1
below). Furthermore, candidates have usually had no training in issues such as the ethics and
practice of the profession, and have no theoretical knowledge to underpin their practice. This
‘gap’ may partly explain the criticism from various users about the ‘poor quality’ of
some accredited T&Is, as expressed by comments from the government NAATI owners at the
focus group discussion and by the I&T agencies who responded to the questionnaire. One of
the members of the focus group reported receiving frequent feedback from service
users that the quality of interpreters and translators can vary enormously between people with
the same levels of accreditation, a comment that was also prevalent in the responses to the
survey of judicial officers and tribunal members conducted by Hale (2011). An interesting result
from our questionnaire to I&T agencies was that government agencies did not give preference
to formally qualified practitioners and received the highest number of complaints as compared
to private agencies who claimed to give preference to trained practitioners. This practice of
ignoring formal educational I&T qualifications in the allocation of work was a common complaint
from trained interpreters in previous surveys of Australian practitioners (see Hale, 2011;
Ozolins, 2004). We must point out at this stage, however, that there are great differences
across courses in terms of duration, content, resources and standards and that not all courses
are likely to produce optimum outcomes either. Nevertheless, courses consist of a variety of
activities, practical and theoretical content, assessment tasks and practicum opportunities that
minimise the level of risk and have a higher chance of assessing individuals’ competence levels
more comprehensively. One extra layer of quality assurance is provided by NAATI’s current
monitoring system of approved courses, which we strongly support.
Our survey asked agencies to report on the feedback that they receive from clients on the
performance of the practitioners they hire. The types of negative feedback received by agencies
on Interpreting, fell into one of the following categories: breaches of the code of ethics
(punctuality, impartiality, professionalism); lack of English language competence; lack of
management skills and lack of specialist training (medical and legal). The last three are areas
that are not currently tested in the NAATI Interpreter examination. The negative feedback on translation related to an incorrect approach (too literal); linguistic issues (grammatical and spelling errors); accuracy of content, register and style; and a lack of technical skills. These are all issues
that could be minimised through language screening and education and training, yet most
agencies did not consider training to be of much significance when allocating work. This
feedback should of course be taken only as an indication of the perceived deficiencies in the
market and can be useful in deciding what measures to implement to improve performance. The
positive comments that agencies reported receiving from their clients, on the other hand, were
very general in nature (“very good”, “very helpful”, “very impressed”), which seems to indicate
that the criticism is likely to come from those who are more familiar with interpreting and
translation and with what is expected of professional practitioners.
Another limitation of the current accreditation examinations is that in neither translating nor
interpreting, even at the Professional level, is the material highly specialised (although it can be
situated in specialist areas); in other words, there is no testing of competence in specialised
areas such as legal or medical. This aspect differs from some testing conducted overseas, in
which specialist areas are specifically tested, using fully authentic texts, such as the Court
Interpreting Certification exam conducted in the USA (see Appendix 14). It also differs from a
number of formal courses in Australia that have components that specialise in medical, legal,
conference and business interpreting already. The results of our questionnaire of practitioners
confirmed the need for specialist training. The statement that received the lowest level of
agreement from the sample of practitioners who gained their accreditation by sitting a test was “I
was well prepared to interpret in complex settings such as the courtroom, after passing the
test”. Having completed a formal course appears to have provided participants with greater
confidence in all situations, including “I was well prepared to interpret in complex settings such
as the courtroom...” which received the highest level of disagreement from both groups, but was
more positively accepted by the trained group. Having generalist practitioners working in highly
specialist areas in Australia has led to dissatisfaction with the quality of services, and especially
interpreting services in the legal system, where judicial officers and tribunal members have
commented that the Professional NAATI accreditation level does not seem to guarantee the
level of competence they require of interpreters working in such a specialised field.
Table 1: Quotations from judicial officers on the reliability of current accreditation standards
“NAATI 3 is the benchmark, and we aim for that, although I know it guarantees little in terms of quality”
“The standard of interpreters varies widely – even among those with the same level of NAATI
accreditation…”
“My experience over the years is that the rules of qualification as an interpreter are not nearly stringent
enough” (in Hale, 2011, p. 14).
As the quotes in Table 1 demonstrate, the judiciary do not seem to have complete confidence in
the current accreditation levels as a reliable measure of competence for their purposes. The
judiciary, therefore, strongly support the introduction of specialist legal interpreting accreditation
and compulsory training (see Hale, 2011 for the results of a national survey of judicial officers
and tribunal members). Medical practitioners, especially specialists, have also complained
about the inadequacy of interpreters working in Sydney (Hale, 2007b).
Another common complaint from users of I&T services concerns interpreters’ lack of language
proficiency (especially in English)11. We believe, therefore, that these three pre-requisites to
accreditation (adequate bilingual competence, generalist and specialist education and training)
are crucial in making the first step to bridging the gap between accreditation and adequate
levels of competence.
Secondly, it is also essential to comprehensively research the validity and reliability of the
testing instruments (e.g. test tasks, scoring rubrics, rating procedures and the conditions in
which they are administered). Over the years, the NAATI tests have been subject to anecdotal
criticism over their perceived lack of reliability and validity. This criticism relates, in particular, to
the perceived lack of consistent processes in test setting and scoring, both within and across
languages, as well as to the tests’ inability to assess the competencies required by professional
interpreters and translators. This anecdotal evidence was borne out by the findings of the Rater Reliability Study undertaken in 2007 by NAATI to investigate claims of variability empirically (Slatyer, Elder, Hargreaves, & Luo, 2008). This research identified discrepancies in inter-rater reliability within some language panels, discrepancies in reliability between language panels, and problems of variability in some of the test tasks. Intra-rater reliability was generally acceptable. The
qualitative findings relating to the study of rater behaviour indicated that some raters were
confused in their interpretation of the rating criteria and descriptors and there was disagreement
within panels about the relative weighting of errors. The study also found a strong tendency of
raters to rate scripts holistically according to a binary pass/fail judgement, adjusting scores to
align with their overall impression of the performance, notably in the case of ‘borderline’
performances. A strong culture of practice was also observed within some language panels, which may lead raters to prioritise issues specific to the panel’s language pair rather than to attend to consistency across language pairs.
High-stakes tests such as the NAATI tests should be subjected to regular evaluation, through a
rigorous research process, which measures the performance of the tests to ensure that they are
fair. Traditionally, interpreting and translation examinations, both in Australia and the rest of the world, have not been subjected to the same rigour as, for example, language proficiency tests.
11 This was found in previous research (Hale, 2011), and was corroborated by the results of our current survey of I&T agencies and of the focus group discussion, although Turner & Ozolins (2007) did not find the same results in their national survey.
This failing is increasingly being acknowledged as more research in the field of interpreting and translation assessment is carried out. It should also be noted that
the multilingual characteristics of the tests pose a particular challenge in this regard.
Our survey results also indicated dissatisfaction with the adequacy of NAATI examiners,
especially by practitioners. Indeed the statement “There should be compulsory training for all
NAATI examiners” elicited the highest percentage of agreement from all respondents combined
(84.5%). Practitioners also offered unsolicited open comments about NAATI examiners that further elucidate this perception.
We therefore argue that tests, marking criteria and descriptors must be empirically designed
and validated and examiners adequately trained accordingly, with clear assessment guidelines
provided.
The final step towards improving the link between accreditation and competence is to establish
a post-accreditation re-validation or re-accreditation system, which NAATI has already begun to
implement and we strongly support. However, such a system is meaningless if the original
accreditation cannot be relied upon to ensure the competencies required of interpreters and
translators to function in the different areas of expertise.
The Working Party emphasises that its findings and recommendations depend for their
effectiveness on the adoption by the Australian and State governments of an
occupational classification that gives adequate recognition to the qualifications and
contribution of the interpreters and translators at the various levels of skill. There is also
an obligation on others using the services of interpreters and translators to recognise
that the quality of services provided by tertiary trained personnel calls for
commensurate remuneration (COPQ, 1977:4)
Any attempt to improve the current accreditation system must be supported by government
policy. General policy principles state a commitment to ensuring access and equity for those members of the community who do not speak English well or at all. For example, Principle 2 of
“The people of Australia” states that:
“…where government services are responsive to the needs of Australians from culturally and linguistically diverse backgrounds” (DIAC, 2011, p. 5)
The Access and Equity Framework states the need to provide information in “appropriate
languages” and “using interpreters” (DIAC, 2011, p. 14). The Community Relations Commission and Principles of Multiculturalism Act 2000 frames policies relating to the provision of services to migrants in NSW, identifying its objectives as:
s12(b) access to government and community services that is equitable and that has
regard to the linguistic, religious, racial and ethnic diversity of the people of New South
Wales
s12(c) the promotion of a cohesive and harmonious multicultural society with respect for
and understanding of cultural diversity
s13 1(i) to provide (whether within or outside New South Wales) interpreter or other
services approved by the Minister (CRC, 2009)
A review of the current language policies around Australia, however, shows a high level of
inconsistency across states and departments. Firstly, a number of entities with language or I&T
policies do not mention minimum requirements for interpreters and translators, which means
that non-accredited interpreters may be engaged. The Family Court of Western Australia is the
most extreme example, specifically stating in its policy that “For undefended divorce
proceedings the assistance of a friend to act as interpreter is encouraged and will be sufficient
for those proceedings” (Family Court of Western Australia, 2006, p. 1).
The above policy makes an interesting exception for Aboriginal languages in remote locations, recognising the particular difficulties that speakers of small language groups face in accessing interpreters.
The Federal Court specifies that it “will usually only accept interpreters who are accredited and
registered with the National Authority for the Accreditation of Translators and Interpreters
(NAATI)” and that it “will generally prefer accreditation to the level known as ‘Professional
Interpreter’” (Federal Court, n.d., p. 5). The other organisations that stipulate a preference for
Professional accreditation, rather than a requirement, are the Migration and Refugee Review
Tribunals. Two of the documents reviewed in the legal area make an incorrect assertion in
relation to Professional NAATI accreditation, which leads the reader to believe that a
Professional accreditation implies training. The Queensland Department of Justice and Attorney
General asserts that “Professional interpreters are trained to maintain confidentiality, impartiality
and accuracy as part of their code of ethics” (QLD Department of Justice and Attorney-General,
2009, p. 34) and the Northern Territory DHLGRS states that “Professional interpreters are
bound by a strict code of ethics covering confidentiality, impartiality, accuracy and reliability, and
have completed training and assessment to certify that they have the level of linguistic
competence” [italics added] (NT Department of Housing Local Government and Regional
Services, 2011). This is only true for interpreters who have gained their Professional
accreditation by training, not for those who have gained it exclusively by testing, but the
distinction is not made by the cited policies.
In Western Australia, Government policy, which is also reflected in the guidelines provided by the District Court, specifies that interpreting services must be provided by “professional
interpreters and translators or persons who have completed an accredited interpreting or
translating training course in all other situations12” (WA Office of Multicultural Interests, 2008, p.
6).
In our view, the confusion and inconsistencies present in the current policies reflect the current
inconsistencies in the accreditation system, where trained and untrained practitioners can
receive the same level of accreditation. This has also contributed to the confusion regarding the
terms ‘qualification’ and ‘accreditation’. We believe that a qualification implies the completion of
a formal course of study and accreditation implies the credential awarded by a credentialing
authority upon meeting that authority’s requirements. In our opinion, a streamlined system in which a minimum training requirement (albeit short and non-language-specific in the case of languages of small diffusion) applies to all accredited practitioners will provide a
much higher benchmark. In a system with compulsory training, where all practitioners will be
qualified and accredited, we hope that all state and Commonwealth policies will support NAATI
accreditation as the minimum standard. Similarly, we hope that government departments, and in
particular the justice system and health care departments, will demand the new NAATI-accredited specialist interpreters as their minimum requirement. We also believe that the
requirement for compulsory training will stimulate the demand for courses, which in turn will lead
to their supply, which is currently limited.
12 We note that this policy is currently being revised.
13 See http://ec.europa.eu/dgs/translation/publications/studies/translation_profession_en.pdf
There is little difference in the meaning of the two terms ‘accredit’ and ‘certify’. Both can refer to
the awarding of (an official) recognition to a person or organisation. However, a distinction
between the terms occurs in the labelling of authorities. In North America and increasingly in
Europe, the body that issues formal recognition to individuals is a ‘certifying body’ that ‘certifies’.
Hierarchically, this body is subordinate to another authority that checks that the ‘certifying body’
is following required standards in issuing ‘certification’. This higher authority is an ‘accrediting
body’ that ‘accredits’ the certifying body. Thus, authorities equivalent to NAATI in North America
are usually termed ‘certifying bodies’. One interesting point is that NAATI does not answer to a
single higher authority. NAATI answers to the nine governments of Australia, which are the
highest authorities in the land and are part owners of it. NAATI examiners, however, do not answer to any external authority: while the examinations held at approved NAATI courses and the results awarded are monitored by NAATI examiners, the examiners themselves are not monitored by any external body.
There are other terms that come close to the meaning of ‘certify’. In the UK, the term ‘chartered’
is used with ‘linguist’ to refer to a member of the professional association. ‘Registered’ is also
used in the UK, in reference to those who have passed the Diploma of Public Service
Interpreting. ‘Sworn’ is a commonly used term, particularly in countries in which the courts were
the first or only authority that provided formal recognition of status and skill level.
As we stated above, in Australia there is sometimes confusion between the
terms ‘accreditation’ and ‘qualification’. We believe that the term accreditation should continue
to be used to indicate ‘credentialing’ from the national accreditation authority. The word
‘qualification’ should only refer to the completion of a training course. In other words, a person
who is accredited by NAATI may possess a number of relevant qualifications. In our new
proposed model, all NAATI accredited practitioners will need to have minimum qualifications
before becoming accredited (see point 2.3 below).
NAATI may wish to change its nomenclature to align itself with most
other countries.
In Australia, the accreditation process for interpreters and translators is administered by NAATI.
There are four accreditation levels for translation and interpreting, the titles of which are as
follows:
• Paraprofessional Translator / Paraprofessional Interpreter;
• Translator / Interpreter;
• Advanced Translator / Conference Interpreter; and
• Advanced Translator (Senior) / Conference Interpreter (Senior).
There are five ways to gain NAATI accreditation:
1. by passing a NAATI test;
2. by successfully completing a NAATI-approved translation and/or interpreting course (TAFE diploma, advanced diploma, or university undergraduate or postgraduate degree);
3. by providing evidence of overseas qualifications recognised by NAATI;
4. through membership of a recognised international association in translating and interpreting (e.g. AIIC); and
5. by providing evidence of advanced standing in translating or interpreting (NAATI website)15.
NAATI also has a system of ‘recognition’ for languages for which there are no examination panels and testing is therefore unavailable.
In Mexico, the translation and interpreting industry is not regulated by a specific organisation,
though most practitioners hold degrees in languages or in translation/interpreting (Cuevas,
2011). Legal translations must be carried out by sworn translators (peritos traductores), certified
by the Supreme Court of Justice.
15 http://www.naati.com.au/accreditation.html
The Mexican Organisation of Translators (OMT) offers a certification exam for experienced
translators. This is not an official (governmental) accreditation and does not classify as a sworn
translator certification. The OMT recommends that candidates hold a degree in translation and
have a minimum of three years’ experience (Organización Mexicana de Traductores, 2011).
In the USA, medical interpreting ‘has progressed from an ad-hoc function performed by
untrained, dubiously bilingual individuals to a fledgling profession concerned with standards of
excellence and ethical practice’ (Beltran Avery, 2003, p. 100). The National Council for
Interpreting in Health Care was established in 1994. It published the National Code of Ethics for
Interpreters in Health Care in 2004, but it was not until 2009 that the National Board of
Certification for Medical Interpreters launched their process for National Certification. This is not
yet a mandatory certificate for medical interpreters, but aims to become the national standard,
and encourages hospitals to employ interpreters with this certification (cf. National Board of Certification for Medical Interpreters, 2011).
Nationally, there are at least ten other interpreter certification programs with a focus on
healthcare (cf. Roat, 2006, p. 13). The National Board of Certification for Medical Interpreters
may eventually replace these, but a review of their various characteristics and strengths is
useful in gaining a more complete picture of the current environment for certification in the US.
With regard to court interpreting, in 1995 the National Center for State Courts created the
National Consortium for State Court Interpreter Certification, a multi-state partnership dedicated
to developing court interpreter tests. In response to the low passing rates, some certifying
bodies have begun to include interpreter training as part of a certification program, but a
university degree for certification is not a requirement (see Kelly, 2007).
In relation to interpreting testing in signed language, the US Registry of Interpreters for the Deaf
is much more established than its spoken-language-interpreting counterparts. It has seven tests
(reduced from the original 13): the Oral Transliteration Certificate, Certified Deaf Interpreter,
Certificate of Interpretation, Certificate of Transliteration, the combined certificate (CI and CT),
the Conditional Legal Interpreting Permit-Relay, and the Specialist Certificate: Legal. Five of the
seven are general in nature; the two industry-specific ones are legal. Sign-language interpreters
must hold a generalist certificate before they can sit a specialist exam.
In Canada, university graduates are given preference by translation agencies, but there is no
legal requirement for translators/interpreters to be certified. Despite this, some professional
organisations offer certification, which can mean undertaking an exam to prove expertise, or
can involve membership in that organisation, although successful completion of an exam is
increasingly a prerequisite for membership in most provincial chapters of the Canadian
Translators, Terminologists and Interpreters Council (CTTIC). Below are some examples of the
types of certifications offered by individual associations.
The Society of Interpreters and Translators of British Columbia offers a number of certification
examinations: Certified Translator, Certified Conference Interpreter, Certified Terminologist and
Certified Court Interpreter. Certification examinations are not entry level tests; candidates must
be in good standing, must have passed the society’s ethics exam, and must comply with (a) or
(b) before being able to sit the exam: (a) provide evidence of experience of four years
(120,000–440,000 words of translation, depending on language); (b) hold a degree in the study
of translation, linguistics, interpretation or language, plus one year of full-time experience.
In general, in Canada, there is separate testing and certification of court interpreters, which is
now handled by the national body, the CTTIC. Certification in conference interpreting has been
offered in Quebec, but is now being co-ordinated by the CTTIC as well. Otherwise, there is no
distinction or grading of ‘translator certification’ and all candidates sit a ‘generalist’ examination.
The term ‘terminologist’ is also commonly used in Canada to refer to specialists, often government-employed, who maintain specialist knowledge and ensure parity and equivalence in the official terms used in the large volume of French<>English translation performed in or for government authorities.
For court interpreters, a formal test exists which is open to graduates with evidence of two
years’ professional work, or to non-graduates with evidence of five years’ work. Membership of
the Association of Certified Court Interpreters is limited to three years. Renewal is allowed
where there is evidence of continuing employment.
In Austria, while there is no formal certification process in place other than that for court interpreters, the changing ethnic composition of the country and new T&I demands have led to a ‘two-tiered’ provision
of T&I services. While there is a good supply of qualified T&I practitioners for English, French,
Italian and Spanish, most of these are employed as specialist translators or conference
interpreters. The needs of residents in Austria who speak only Albanian, Bosnian, Chinese,
Croatian, Kurdish, Russian, Serbian and Turkish are less well addressed. While Bosnian,
Croatian, Hungarian, Romanian and Serbian are offered at at least one university, few
graduates are interested in community interpreting due to low remuneration. Recent EU laws
which require public institutions to provide T&I services to clients who lack proficiency in the language of their country of residence have led to a large upsurge in the demand for T&I
services. Consequently, similar to Australia, many of the T&I services for speakers of these
‘rarer’ or ‘non-European’ languages are performed by lay, untrained interpreters.
In Belgium there was no formal certification of translators and interpreters until 2003. In the
Belgian Constitution it is stated that any citizen appearing before a court may address that court
in the language of his or her choice. A list of ‘sworn interpreters’ is often kept by courts, drawn
up in consultation with the public prosecutor’s office, with each court having its own system for
the recruitment and accreditation of its interpreters and translators. There is no national register
of translators and interpreters, and the titles ‘translator’ and ‘interpreter’ are not legally
protected. Belgium has a long history of T&I training and there are well-established university
centres in Antwerp, Brussels and Gent. As in other European countries, completion of a university degree (usually at postgraduate level) is still commonly accepted as a benchmark of
ability. Practitioners commonly state their qualifications after their names or in advertising or
correspondence to demonstrate their level of expertise. The university sector commonly focuses
on T&I training in other European languages such as English, German, Italian and Spanish in
addition to the two official languages, Dutch and French. Gent has recently increased its
repertoire of languages to cover those of recent migrants, e.g. Czech, Russian and Turkish.
The large numbers of speakers of languages other than the four above-mentioned European ones in Brussels and in Flanders, the northern region of Belgium, precipitated the development in 2004 of the Social Interpreting (‘community interpreting’) test. This test includes a preliminary language proficiency test (in both languages), followed by 102 hours of compulsory pre-training before the main test, which covers sight translation, consecutive interpreting and ethics. The development of the Social Interpreting test is of interest in the Australian context because it is intended to address recently migrated language groups while building on Belgium’s long history of T&I training, and because it adopts some features of the UK Institute of Linguists’ Diploma in Public Service Interpreting test.
In the Netherlands, candidates wishing to become sworn translators and interpreters must
‘provide ample evidence to the court that they have a good command of Dutch and the pertinent
foreign language, as well as provide a declaration of good conduct’ (Stejskal, 2002b, p. 14).
‘Ample evidence’ differs from court to court. The sworn status is valid throughout the
Netherlands and does not have a time limit, though it can be revoked if the translator behaves
inappropriately or incompetently.
In the United Kingdom, ‘the focus is on the certification of translations rather than translators’
(Stejskal, 2002f), and Ireland has a similar situation. In the UK, the Institute of Linguists
(hereafter: IoL) is the organisation that co-ordinates and administers the language assessment
and the awarding of accredited qualifications to interpreting candidates who pass a test at the
end of a long period of non-intensive training and/or preparation. In the case of the IoL Diploma
in Public Service Interpreting test (hereafter: DPSI), the final test is given five years after a
candidate has fulfilled an initial minimum training requirement, i.e. a candidate has received a
‘letter of credit’ or ‘unit certificate’ as the first part of the diploma sequence. Thus, trainees
undergo a long ‘apprenticeship’ but are still required to sit a final examination, which is a pre-
requisite for the diploma to be issued. The IoL’s DPSI has responded to new language groups
that are now resident in Britain and testing is provided in languages such as Bengali,
Cantonese, Croatian, Dari, Farsi, Gujarati, Greek, Hindi, Hungarian, Jamaican (sic), Kurdish,
Latvian, Lithuanian, Polish, Punjabi, Portuguese (Brazilian), Portuguese (European), Pushto,
Romanian, Serbian, Slovak, Somali, Swahili, Tamil, Thai, Tigrinya, Turkish, Ukrainian, Urdu,
Vietnamese, as well as the traditionally popular European languages such as French, German,
Italian, Russian and Spanish. The Diploma in Interpretation serves as a qualifying examination
for membership in the National Register of Public Service Interpreters.
Similarly, the IoL Diploma in Translation consists of tests, which are assessed according to
criteria very similar to those of the NAATI test for professional translators. The IoL does not
award certification; it co-ordinates training and testing for diplomas. The IoL also has a category
or list of practitioners termed ‘chartered linguists’ – practitioners who have completed a five-year probationary period, hold a university degree, have demonstrated expertise in T&I, and can supply three references. However, an IoL diploma is not an obligatory prerequisite for
application to become a ‘chartered linguist’.
The Institute of Linguists also serves as an examining body. It offers assessment and
accreditation to suit higher-level candidates seeking a professional qualification. Its Diploma in
Translation is a mixture of general and specialised translation, and the examinations have a
high failure rate. Its Diploma in Public Service Interpreting is available in four options: health,
local government, English law and Scottish law.
The other main organisation offering membership in the UK is the Institute of Translation &
Interpreting (ITI). This offers different levels of membership to translators and interpreters
throughout Europe and in other countries where English is commonly spoken. Levels of
membership reflect varying amounts of experience. ‘Qualified members’ of ITI are not certified
themselves, but can certify their translations.
The concept of a ‘sworn translator’ does not exist in the UK’s common law system, but
translations must be ‘sworn/certified’ for various purposes. Such translations have no bearing
on the translation quality, but through this process the translators are identified and therefore
can be held accountable for their work.
In Ireland there is the Irish Translators’ and Interpreters’ Association (ITIA), which is working
towards standards of certification. Their ‘professional members’ have to go through the following
process to be granted membership:
1. success in foreign examinations organized by the profession abroad and recognized by ITIA (within a period of five years preceding the application, plus one year of full-time professional experience in the same period); or
2. award of a translation degree by an Irish third-level institution or similar foreign institution recognized by ITIA (within a period of five years preceding the application, plus one year of full-time professional experience in the same period); or
3. if the applicant is a staff translator, two years of professional experience substantiated by the employer’s reference (within a period of five years preceding the application); or
4. if the applicant is a freelance translator, three years of professional experience substantiated by invoices, statements, or other recognized proof of work completed on a commercial basis (within a period of five years preceding the application, where it is estimated that the linguist translated at least 80,000 words in each of the above three years).
The applicant has the option of submitting references or, where discretion allows, examples of work completed (to be treated in utmost confidence by ITIA). The association also reserves the right to administer a sample translation test. Literary/cultural translators are required to submit a portfolio of work they have had published, broadcast, or produced.
In Sweden, the auktoriserad translator title is protected by law, and those who hold it are
subject to statutory rules on secrecy. Only they can become members of the professional
organisation Föreningen Auktoriserade Translatorer (Föreningen Auktoriserade Translatorer,
2011). Maintenance of a high level of ability and quality over time is a key concern of one of the
organisations for translators in Sweden, SFO. For this reason, its admission procedure is strict
and places emphasis on continuing education (Stejskal, 2002b, p. 15).
In Finland, candidates must pass a translation exam that has a general and a specialised
component. Candidates must reside in one of the member states of the European Union or in
another country included in the European Economic Area. The exams are administered by the
Translator Examination Board, appointed by the Ministry of Education in conjunction with the
Research Institute for the Languages of Finland.
Similarly, in Norway, government-authorised translators must possess a three-year university
degree before sitting their certification translation test (Stejskal, 2002d, pp. 13-14).
In contrast with the formal nature of translation testing and certification, a Norwegian Interpreter
Certification Examination was established in 1990 to address the need for community
interpreting services in Bosnian, Croatian, Russian, Serbian and Spanish. Later, other
languages, such as Albanian, Arabic, Persian, Somali, Turkish and Urdu, were added. The
certification examination has been administered and conducted by the Linguistics Department
of the University of Oslo since its establishment. This is an example of collaboration between
academics, welfare authorities and government departments that were prepared to fund, but
not organise, certification testing. The certification is intended for community interpreting only,
and the accounts of difficulties in creating training and testing materials for the languages of
newly arrived communities will be familiar to those involved in NAATI testing. Web-based
training materials are being trialled, as the low success rate and lack of preparedness of many
test candidates have alerted testers to the need for comprehensive training before testing
(University of Oslo, 2001). We see this as a
common thread across all countries: those who do not have a compulsory pre-testing
requirement inevitably find that the failure rates are too high and introduce some type of pre-
testing education to remedy the situation.
In Ukraine, the Ukraine Translators Association has an accreditation examination and stringent
membership requirements. Freelance translators are admitted as full members after passing
the exam, while interpreters must have a minimum of 100 hours of interpreting experience and
client references. The certification procedure involves an exam, and the resulting translation is
not expected to be highly refined and polished (Stejskal, 2002g, p. 13).
2.1.2.7 Asia
In China, the most authoritative translation and interpreting proficiency credential is the China
Accreditation Test for Translators and Interpreters (CATTI). The certificate awarded is called the
Translation and Interpretation Proficiency Qualification Certificate of the People’s Republic of
China. This is the official credential, very similar to NAATI accreditation, and it is incorporated
into the national system of professional qualification certificates, though those without
certificates can still legally practice translation and interpreting. The certificate is one of the
prerequisites for ‘translation and interpreting professional and technical posts’. It has four levels,
here given from lowest to highest: Level 3 Translator and Interpreter, Level 2 Translator and
Interpreter, Level 1 Translator and Interpreter, Senior Translator and Interpreter. Those at the
Senior level have to be experienced experts and have the responsibility of mentoring and
training new interpreters and translators. At the other end of the scale, Level 3 practitioners
have rudimentary skills and can only carry out generalist work. Other accreditations by other
organisations include the National Accreditation Examinations for Translators and Interpreters
(NAETI), the Shanghai Interpretation Accreditation (SIA) and the Accreditation for Interpreters
and Translators (AIT) (Chen, 2009, p. 261).
The most authoritative examinations of technical translation skills in Japan are run by the JTA.
These are intended for the fields of natural sciences, social sciences and the humanities, and
comprise both a knowledge examination and a technical skill examination. There are four
levels:
Another organisation, Babel Co., offers an entry-level test for translators, ‘English Translation
Grammar Proficiency Test’, and a test designed to evaluate the competence of professional
translators, ‘Professional Translation Proficiency Test’ (PTPT). Each test comprises
approximately 1,000 words. The categories of the PTPT include fiction (divided into romance
and mystery), non-fiction, subtitles, law- and computer-related texts, and patent specifications.
An interesting aspect of this system is that candidates take the test at home (see Stejskal,
2002c).
§ Translation
§ Sworn translation
§ Simultaneous (conference) interpreting
§ Language editing
§ Terminology
§ Corporate accreditation (for language agencies and language offices).
2.1.3 Recent initiative for global harmonisation of national certification and accreditation systems
An international consortium, which includes members of our research team, has recently held
discussions on the development of international standards to make interpreter and translator
credentials portable. Such a move would impose quality assurance measures on the
organisations that grant such credentials, such as NAATI. There is currently no international
standard governing how credentialing organisations around the world assess and award
certification. This means that the certification a candidate receives is largely restricted to, and
recognised in, the country in which it was gained. In the global T&I market it is difficult for
consumers to assess what a practitioner’s certification represents if it was gained elsewhere.
There is a need for certification to be ‘portable’, i.e. for certified practitioners to be able to
demonstrate that the certification authority from which their certification was gained conforms
to internationally set minimum requirements for certification. Such a ‘meta-standard’ for
certification authorities would not necessarily result in a uniform testing and assessment
structure with which each certification authority in each country would have to comply. Rather,
the requirements that an international standard would set out would be global, relating to the
processes that a certification authority should follow. Nevertheless, it is a further reason for
NAATI to align itself with the more stringent international practices that currently exist in some
countries.
Some countries do not have certification bodies at all, mostly because they have not needed
them: their educational institutions perform the function of providing training and assuring
standards. Among the countries that do have accrediting or certifying bodies, the bodies
themselves differ in nature.
In some countries, typically Anglophone countries of the New World and countries in East Asia,
there are governmental or semi-official bodies that administer and usually also conduct testing
for the awarding of certification (or ‘accreditation’ or ‘registration’) to T&I trainees or practitioners
who can demonstrate minimum standards of ability and practice. In other countries, professional
bodies take the responsibility of awarding the credential; and in others, such as Argentina for
legal translators, there is a very highly regulated system where translators complete a formal
degree in legal translation (of up to five years), register with a registration board and become
government certified.
Another important difference is that in most countries there are generalist and specialist tests
and training. These usually relate to court and/or medical interpreting, sometimes also
conference interpreting, terminology and/or technical translation.
There are also fundamental differences in the underlying purpose of the certification test. In
some countries, certification is granted to experienced practitioners, in other words, it is not an
entry-level credential, but a recognition of high standing in the profession. This is sometimes
ascertained via the compilation of a dossier/portfolio or evidence of long-standing practice,
although these are not the most common avenues to certification. Some countries also have
more flexible test delivery options, such as take-home exams, permission to use the internet, or
on-line tests conducted in the candidate’s own time. Pre-testing language screening is also
common in some systems. An annotated overview of accreditation/certification procedures in a
number of countries is presented above in sections 2.1.2.2 to 2.1.2.8.
In light of the comparison with other countries around the world, we can see two important
advantages of the current NAATI system: 1. its uniformity as a national system, and 2. the
availability of testing in many more languages than in other countries, including signed
language interpreting. However, we strongly believe that NAATI could improve in two important
respects: the requirement for compulsory pre-accreditation education and training, and the
availability of specialisations. This view was strongly supported by the respondents to our
survey, as illustrated by the quotes from two survey respondents in Table 3 below:
Table 3: Comments from survey respondents on the weaknesses of the current system
“The level of competence required at the interpreter/translator level cannot (and should not) be tested by
one single exam. This is a ridiculous situation. Currently, we have the ludicrous situation whereby NAATI
accreditation at the interpreter/translator level can be achieved by a 2-3 hour exam OR by successfully
completing an approved NAATI course and passing the equivalent of a NAATI test at the end” (Survey
respondent)
“In Europe it would be unthinkable to let a self-taught interpreter who sat a micky mouse test loose into
the general public, and translators undergo lengthy training” (Survey respondent)
It is clear that survey respondents consider that NAATI must incorporate the need for training
and specialisations into the accreditation system. We are conscious of the fact that people will
continue to practise outside of the accreditation system, especially if the requirements for
accreditation are made more stringent. However, we believe that in order for NAATI
accreditation to strengthen its status as a credible and reliable credential, it must only be
awarded to those who can adequately prove they have reached the desired standards.
2.2 Results from consultations with interpreting and translation practitioners, educators,
examiners and agencies on issues relating to pre-requisites and specialisations
2.2.1 National survey
As mentioned in the Introduction, as part of Phase 1, the team conducted three on-line
questionnaires using the Key Survey software, which collected data from three separate groups:
“Translation & Interpreting Agencies”, “Examiners and Educators”, and “Practitioners”. Some
participants requested paper copies of the questionnaire, which were supplied and later entered
into the program. A network (snowball) sampling technique was used for all questionnaires.
The invitation to participate in the survey, containing a description of the project, was sent to a
wide distribution list (see Appendix 2 for the full detailed list). The results generated by the Key
Survey program were downloaded as Microsoft Excel and SPSS files for further quantitative
analyses. The NVivo program was used to assist with the qualitative analyses of the open-
ended responses. The questionnaires (see Appendices 3, 4 & 5) consisted of three sections:
section 1, “Demographic information”; section 2, “Behavioural questions”; and section 3,
“Opinion questions”. The results of the different sections are incorporated into the relevant
sections of the report. This section will deal with the opinions obtained on issues of pre-
requisites to accreditation and specialisations.
To compare the results of these three groups, a combined SPSS file of the common variables,
namely, the respondents’ demographics and their responses to “Please indicate your level of
agreement with the following statements”, was established. As stated above, the numbers of
respondents to each survey were 21, 95, and 226, respectively, making a combined total of
342. While responses from NSW dominated with 169, Victoria and WA were well represented
with 62 and 49, respectively. Queensland (26), SA (17), and the ACT (15) were better
represented than NT (3) and Tasmania (1), as seen in Table 4 below.
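As a purely illustrative sketch (the records below are synthetic stand-ins arranged to match the totals reported above, not the project's actual survey data, and the group/state labels are simplified), combining the three respondent groups on their common variables and tallying by state can be expressed as:

```python
from collections import Counter

# Hypothetical (group, state) records standing in for the common variables
# shared across the three survey files; the counts are arranged to match
# the totals reported in the text, not taken from the real data.
agencies = [("Agencies", "NSW")] * 10 + [("Agencies", "VIC")] * 11
educators = ([("Examiners/Educators", "NSW")] * 50
             + [("Examiners/Educators", "WA")] * 45)
practitioners = ([("Practitioners", "NSW")] * 109
                 + [("Practitioners", "VIC")] * 51
                 + [("Practitioners", "QLD")] * 26
                 + [("Practitioners", "SA")] * 17
                 + [("Practitioners", "ACT")] * 15
                 + [("Practitioners", "WA")] * 4
                 + [("Practitioners", "NT")] * 3
                 + [("Practitioners", "TAS")] * 1)

combined = agencies + educators + practitioners   # one merged dataset
by_state = Counter(state for _, state in combined)

print(len(combined))    # 342 respondents in total
print(by_state["NSW"])  # 169
```

In the project itself this step was performed by merging the three groups' responses into a combined SPSS file; the sketch simply shows the underlying combine-and-tally logic.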
A listing of the statements that respondents were asked to rate using a five point Likert scale of
‘1. Strongly Disagree, 2. Disagree, 3. Neutral, 4. Agree, or 5. Strongly Agree’, appears in Table
5.
Overall, there was support for most statements by the majority of the respondents. Some
statements were overwhelmingly supported by over 80% of all respondents. These appear in
Table 6 below in pink and include compulsory training for NAATI examiners (84.5% agreement),
compulsory training for specialist interpreters in legal, medical and conference settings (84%
agreement), continuous professional development for all practising interpreters and translators
(81.6% agreement) and compulsory training for interpreters prior to accreditation (81.3%
agreement). The next most popular statements, with agreement levels of over 70%, were that
NAATI should continue to approve training programs (73%) and that translators should also
undergo compulsory pre-accreditation training (72%). Over 60% agreement was obtained for
two statements: different types of accreditation according to training and accreditation (66.4%
agreement) and a minimum amount of experience required of interpreters before being
accredited (65.2% agreement). The same statement for translators received slightly less
agreement at 58.2%. The statement that received the least amount of agreement was that
NAATI accreditation should not be necessary when an Interpreting and Translation formal
course has been completed (35% agreement).
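The agreement figures above count the respondents who chose 4 (Agree) or 5 (Strongly Agree) on the five-point scale. A minimal sketch of that calculation, using made-up ratings rather than the survey data:

```python
def percent_agreement(ratings):
    """Share of respondents rating 4 (Agree) or 5 (Strongly Agree) on the
    five-point Likert scale, as a percentage rounded to one decimal place."""
    agreeing = sum(1 for r in ratings if r >= 4)
    return round(100 * agreeing / len(ratings), 1)

# Hypothetical ratings for one statement, from ten respondents.
ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 1]
print(percent_agreement(ratings))  # 70.0
```

The same function applied per statement over the combined responses would reproduce the percentages reported in Table 6.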
Currently the AIS (Aboriginal Interpreter Service) conducts pre-accreditation language
screening, training and assessment, and some post-accreditation monitoring of their practising
interpreters. They have a mentoring
system between senior and junior interpreters as well as a tiered system with differential pay
scales. For this reason, the group was in favour of compulsory training and language screening
before accreditation. They were also very much in favour of the different specialisations for
interpreters, especially legal, but also medical and conference interpreting. The legal
interpreting specialisation is their main priority and they are currently in the process of devising
legal interpreting training modules, in conjunction with the TAFE diploma. Aboriginal interpreters
are also often required to interpret in conference-like settings and high-level meetings with
government, for which training in conference interpreting would also be very valuable. They
welcomed a change to the current accreditation levels to give their interpreters a higher chance
of success. They heavily criticised the current professional level examination for not testing the
skills that are required of interpreters in professional practice. They stressed, however, the need
for them to maintain some flexibility to cater for their interpreters’ needs. One example of
flexibility would be the adaptation of the requirement for Sight Translation in the examination,
which would not be applicable to their languages. When confronted with such situations in the
courtroom, for example, Aboriginal interpreters can ask the lawyer or judicial officer to explain or
read the document aloud so they can interpret it orally rather than having to read it. They were
very much in favour of improvements to the expertise of NAATI examiners, but were also aware
of the extra costs that would be required for training examiners and educators and would like
NAATI to fund such extra training if required.
As stated above, we believe that it is no longer appropriate for NAATI to continue to accredit
candidates who have not undergone any Interpreting and/or Translation training. Such a
practice is inconsistent with the existing body of research on the advantages of training (see
Berk-Seligson, 1990 / 2002; Cambridge, 1999; Chacón, 2005; Ebden, Carey, Bhatt, & Harrison,
1988) and is strongly rejected by the majority of interested parties in Australia, as evidenced by
the results of this project’s national survey and other consultations, as well as the results of
previous research and previous reviews. The review of certification/accreditation systems
across the world also showed that for some types of interpreting and translation, especially for
the legal specialisation, there are very stringent educational requirements in place (e.g.
Argentina). It also highlighted the tendency towards some type of pre-testing training in the
cases where formal education is not available (e.g. community interpreting certification in
Belgium). The 1977 COPQ report strongly advocated for compulsory tertiary training for
interpreters and translators, as cited above. We agree with the desirability of formal higher
education in Interpreting and Translation, as is the practice in many countries for the well
established European languages, but understand that such a requirement would be unrealistic
in Australia for all languages. We believe, however, that some compulsory interpreting and/or
translation training, in the form of flexible modules delivered mostly in English, is a feasible
alternative for the languages for which formal courses are unavailable. We see no valid reason
for allowing accreditation without any form of interpreting and/or translation training. Our
recommended model proposes pre-testing training as obligatory and not optional. Candidates
can of course present a case for equivalence and not be required to undertake any further
training if equivalence is established, as will be explained below.
Further, in almost all occupations in Australia, even ones that have been considered ‘unskilled’
or ‘semi-skilled’, training is now an obligatory condition of employment. For example, security
guards (including unarmed ones) cannot gain employment without a Certificate II in Unarmed
Guard and Crowd Control16, which runs for three to four weeks full-time. A personal services
assistant in a hospital or clinic (who has no physical contact with patients) requires a Certificate
III in Health Services Assistance17, a six-month full-time course at most reputable VET
providers. The minimum requirement for any childcare worker is the Certificate III in Children’s
Services18, also a six-month full-time course. The minimum requirement for an integration aide
working in a school is the Certificate III in Education Support19, a six-month part-time course. In
Victoria, in order to become a taxi driver, non-native speakers of English must pass an IELTS or
ISLPR test, and all applicants must complete the Certificate II in Driving Operations20, which
includes the “Knowledge of Melbourne” test with its failure rate of over 70%.
16 http://www.ista.com.au/vic_prs20103-crowdcontroller.asp
17 http://www.kangan.edu.au/tafe-courses-melbourne-victoria/certificate-iii-in-health-services-assistance-psa/aosc/1723/
18 http://www.kangan.edu.au/tafe-courses-melbourne-victoria/certificate-iii-in-children-s-services/aosc/1914/
19 http://www.kangan.edu.au/tafe-courses-melbourne-victoria/certificate-iii-in-education-support/aosc/1821/
20 http://www.deca.com.au/coursedetail/Car_Driver_Training/Taxi_Driver_Training_Victoria/Certificate_II_in_Driving_Operations_Taxi
These levels of training are minimums, and in many fields of employment, such as childcare
and healthcare services, applicants require a higher level of training (e.g. Cert. IV or Diploma),
as employers now demand these further qualifications as a condition of employment. The
above examples show that pre-employment training is now almost universal in the Australian
labour market. Other occupations, such as nursing and migration agents, which in the past did
not require any training, have also moved to compulsory pre-registration university training. We
believe it is time for interpreters and translators to adopt a similar stance.
We understand that there may be concerns about access and equity. Access and equity can
be seen from two viewpoints: that of the non-English speaker receiving the services, and that of
the bilingual person seeking to become accredited. As stated above, we believe that for all
non-English speakers to have equal access to all services, there must not only be an adequate
supply of interpreters in the required languages, but those interpreters must also be competent
to perform the required tasks. Providing the services of inadequately skilled interpreters will not
fulfil the requirement for access and equity. This supports the requirement for pre-testing
training, especially for interpreting. The second viewpoint concerns the means through which a
trainee can access a training course.
Concerns about access to training can perhaps be best addressed through targeted funding
and subsidising of courses for particular language communities. For example, in Victoria, state
government funding through the Office of Multicultural Affairs and Citizenship, provides
bursaries for entry-level (community) interpreters in short courses conducted by Monash
University, and bursaries for target language communities (e.g. Dari, Assyrian, Dinka) for
training conducted as part of RMIT’s Diploma of Interpreting. In Sydney, the University of
Western Sydney conducted a similar, fully funded course for languages in which there is a
shortage of interpreters (see Hale & Ozolins, forthcoming, for more details). More such
opportunities, together with non-language-specific programs such as the one offered by
Macquarie University, non-award courses offered by all universities with NAATI-approved I&T
courses, and other courses designed specifically for this new model, could be presented as a
list of different training options to meet the pre-testing requirement of our proposed model.
The issue of interpreter supply can be addressed in many different ways, the main one being
through improving efficiencies of services, which is of course beyond NAATI’s scope.
Nevertheless, it is worth noting at this point that current practices show that interpreting services
are often not being used in the most efficient ways, with interpreters waiting for hours in waiting
rooms or multiple interpreters providing interpreting for different people of the same language
combination21. The use of technology (such as simultaneous interpreting equipment where two
interpreters can interpret for many speakers, or video conferencing facilities to allow interpreters
to interpret for remote areas) can be some of the ways demand could be met. In our opinion
NAATI should not be concerned with issues of service provision, but with ensuring high
standards. We believe that a smaller but better qualified workforce of practitioners (especially
specialist interpreters), who will service all of Australia, will lead to a higher volume of better
paid work for practitioners, which in turn will justify any extra costs involved with training.
21 Preliminary results of a research project funded by the Australian Research Council Linkage grant scheme on court interpreting.
1. Accreditation via completion of a formal NAATI-approved course of study, through either
the VET or Higher Education sector, as currently instituted. The final NAATI accreditation
examinations are administered at the completion of the training and monitored by NAATI,
as is currently the case.
We propose that the current levels of accreditation be changed to only one level for Translation
and two levels for Interpreting: a generalist accreditation and specialist accreditations in the
legal, medical, conference and business settings, with priority given to the first two
specialisations. The decision to have specialisations in Interpreting only was informed by the
international practices as well as by the high level of support for interpreting specialisations but
not for translation specialisations in the results of our survey and other consultations as well as
previous research (Hale, 2011).
These changes would remove all the other current levels as they currently stand, except for
Recognition, the recipients of which will also be required to complete the compulsory training
modules. However, we propose that all these changes not be applied retrospectively. We
recommend that the current holders of Recognitions be encouraged to complete the non-
language specific training modules, even if accreditation examinations in their languages are
not yet available. Similarly, the current holders of Paraprofessional and Professional
accreditations who have not received any training, should also be encouraged to complete the
training modules, and later attempt the specialisations. Those NAATI accredited professionals
who have already undertaken specialist training in legal, medical, conference and business
interpreting (either in Australia or overseas), should be encouraged to attempt the specialist
accreditation examinations directly, without the need to undergo further training, unless they
wish to do so.
Although two parallel systems are proposed, currently approved formal interpreting and
translation programs would need to be adjusted to align with changes and improvements to the
required contents of training, and to the structure and content of testing instruments and
assessment criteria. Later sections of this report will deal with issues of standards, testing and
assessment criteria. However, as pointed out in the introduction, we restate at this point, that
22 Equivalence of an Advanced Diploma is established by indicating a combination of other short courses plus recognition of prior
learning (RPL). A case for RPL needs to be presented by applicants with supporting documentation as evidence, e.g. experience in
related fields and letters of support from community members. This is a point that will need to be refined before the implementation
of the new model.
23 A Bachelor’s degree or equivalent is a common requirement for university post-graduate degrees. Equivalence of a bachelor’s
degree is established by indicating a combination of other qualifications plus recognition of prior learning (RPL). As per the previous
footnote, a case for RPL needs to be presented by applicants with supporting documentation as evidence, e.g. experience in related
fields, letters of support from community members, completion of short courses, etc. This is a point that will need to be refined
before the implementation of the new model.
24 The research team did not reach consensus on this point. Two members of the team advocated for an Advanced Diploma in
Interpreting to be considered equivalent to a Bachelor’s degree and acceptable as a pre-requisite for the Specialisations. The others
insisted on a Bachelor’s degree.
any new testing instruments and assessment criteria that are proposed must be subjected to
proper validation via a comprehensive validation study in Phase 2. Failure to subject new
testing instruments to adequate validation could result in a flawed new system. Consistent with
the guidelines of the International Language Testing Association (ILTA), those managing
high-stakes tests have a responsibility to provide information that allows valid inferences to be
made. All tests, regardless of their purpose or use, must be reliable. This means that test
results must be consistent, generalisable and therefore comparable across time and across
settings (ILTA, 2012).
The ILTA Guidelines for Practice, which reflect the principles of the American Psychological
Association’s Standards for Educational and Psychological Testing, explicitly state that testing
bodies have a responsibility to provide comprehensive and accurate information to test
stakeholders. Some principles relevant to NAATI are outlined below:
Institutions (colleges, schools, certification bodies, etc.) developing and administering entrance,
certification or other high-stakes examinations must:
§ utilize test designers and item writers who are well versed in current language testing
theory and practice.
§ publish validity and reliability estimates and bias reports for the test along with sufficient
explanation to allow potential test takers and test users to decide if the test is suitable in
their situation.
§ publish a handbook for test takers which:
1. explains the relevant measurement concepts so that they can be understood by non-
specialists.
2. reports evidence of the reliability and validity of the test for the purpose for which it
was designed.
3. describes the scoring procedure and, if multiple forms exist, the steps taken to
ensure consistency of results across forms.
4. explains the proper interpretation of test results and any limitation on their accuracy.
These requirements pre-suppose that testing organisations undertake the relevant research to
be able to fulfil their obligations to test-stakeholders.
We also note that special courses will need to be designed and introduced by the different
educational institutions to cater for the new training needs of Pathway 2, although, as we have
already pointed out, many of the subjects currently offered by the different institutions could be
offered to candidates as ‘non-award courses’25 to fulfil their training requirements under this
model26. The new modules could be delivered by distance as well as face-to-face to cater for
candidates in all languages from across Australia.
The proposed conceptual model attempts to bridge the gap that currently exists between trained and untrained practitioners by ensuring that all accredited practitioners meet minimum standards for language proficiency, for interpreting and translation competencies, and for knowledge of the theoretical principles underlying interpreting and translation, including issues of professional ethics, in order to make informed choices to underpin their practice. We strongly believe that
[25] The new proposed Expert Panel, who will write the curricula for the compulsory modules, will also establish a list of equivalents with current university and TAFE courses.
[26] For example, most institutions have a theory subject that could be taken by candidates. Some institutions also have specialist subjects such as legal interpreting, medical interpreting or conference interpreting that could also be taken by candidates.
Commercial-in-confidence 38
Project Ref: RG114318
standards relating to T&I practice that are, at present, untested (e.g. introductory and role establishment protocols, management skills and simultaneous interpreting for interpreters; assignment preparation and the compilation and organisation of glossaries for both interpreters and translators; and the use of computer-assisted translation software and the management and security of translation text files for translators, to name just a few) will be enhanced through training at the
generalist level. We further contend that specialised training is essential for the different
interpreting areas, but especially for court interpreting. The current NAATI Interpreter
examination does not test most of the skills and knowledge required of court interpreters (e.g.
understanding of the strategic use of questions in examination-in-chief and cross-examination,
court protocols, simultaneous interpreting, specialised legal terminology and structures, etc).
The tiered model of generalist and specialised testing and accreditation levels will ensure that
future training courses conform to current ‘minimum’ standards of professional level
accreditation and the model will ensure that for specialisations, further training needs to be of a
level beyond that of the current professional level. Overall, this is a model that provides clearer and more accessible pathways to specialised testing, which will lead to an improvement in practitioner standards as these further training, testing and accreditation levels are completed. The current short accreditation examinations will be
complemented by a minimum set of hours of training and by hurdle tests throughout and at the
end of each training module. Accreditation will no longer be seen as just being the result of a
one-off accreditation examination, which as we have discussed above, cannot possibly assess
all the relevant knowledge and skills required of I&T practitioners.
We anticipate concern from some stakeholders about increasing the level of difficulty for those
who currently only hold the Paraprofessional level of accreditation. We argue that although the
level of difficulty will increase, candidates will be much better prepared to acquire the necessary
knowledge and skills in order to have a higher chance of success than is currently the case. We
also argue that stages 0 and 1 will filter out those candidates who should not be attempting
accreditation at all due to their lack of the necessary linguistic skills and other relevant
knowledge. We also propose a Provisional Generalist accreditation (with a maximum 2 year
duration) for those candidates who do not achieve the minimum pass mark for the Generalist
examination. Candidates will need to re-sit the examination before the two years are up, after
having practised in the field and undertaken further training.
The newly proposed conceptual model would consist of five stages, with a voluntary pre-stage we have called stage 0. The objective of stage 0 is to ensure that potential candidates understand the basic requirement of adequate bilingualism before they invest more time and money in
progressing any further in the process. This stage would not be compulsory but would be highly
recommended to all aspiring candidates. Stage 1 would require at least an Advanced Diploma
(in any discipline) or equivalent to ensure that candidates have a minimum level of academic
background necessary for the type of skills and competencies required of professional
interpreters and translators. For the Specialist levels we recommend that the minimum
requirement be a bachelor’s degree (in any discipline) or equivalent or an Advanced Diploma in
Interpreting[27]. Equivalence can be established in different ways, and the details can be agreed on at a later time. However, below are some examples of what may constitute equivalence to a bachelor's degree:
Example 1:
§ TAFE Advanced Diploma plus
o Related professional experience
o Other professional development courses
[27] No consensus was reached on this point, so we offer both options.
Example 2:
§ A series of short courses amounting to an equivalence of an Advanced Diploma
§ Recommendations from members of the community
§ Related professional experience
Stage 2 would comprise the compulsory education modules, which would prepare candidates to
sit for the Generalist accreditation examination at Stage 3, but more importantly would provide
them with education in the main areas of I&T expertise that will be outlined below (see section
3.2.2.1). Stage 4 would comprise training in the chosen specialisations in interpreting, followed
by specialist accreditation examinations at stage 5. Different modules and examinations would
be required for Translation and Interpreting. Similarly, different specialist training modules and
examinations would be required for the specialist interpreting accreditations. As stated above,
graduates of current courses that offer specialisations such as legal, medical or conference
interpreting, would be exempt from undertaking the training modules and would be allowed to sit
for the specialist accreditation examinations directly. We strongly recommend that, if this new
model is adopted, government language policies be amended to reflect the new accreditation
system.
Table 7: Proposed conceptual model for an improved accreditation system[28]
STAGE 0
Non-compulsory stage
[28] Although the language used in the model is definite (e.g. candidates will …), this is for ease of expression only, as we understand that the model is only a proposal and its implementation will depend on what NAATI decides.
[29] 'Non-native speaker' is an inexplicit term. We suggest this applies to those who learned English after puberty. For a full discussion of the use of the term 'native speaker' see Hale & Basides (2012/13).
STAGE 1
STAGE 2
Interpreting Translation
The bulk of Module 2 may be delivered in English. However, it is recommended that bilingual components
where candidates receive formal feedback from bilingual I&T experts be included at least twice during the
course.
Candidates can only progress to Stage 3 after passing the hurdle assessment tasks at stage 2.
RECOGNITION
Granted to those candidates for whose language there is currently no accreditation available. In order to
receive Recognition they need to have successfully completed stages 1 and 2 above.
[30] The contents of the training modules need to be flexible enough within a general framework of essential components.
STAGE 3
RE-VALIDATION
We propose that practitioners undertake professional development activities in order to maintain their
accreditation, as currently being implemented by NAATI. This was supported by the results of our survey.
STAGE 4 Pre-requisites
1. Successful completion of Stage 3
2. A Bachelor's degree (or equivalent, including an Advanced Diploma in Interpreting[31])
STAGE 4
Candidates can only progress to stage 5 after passing the hurdle assessment tasks at stage 4
STAGE 5
RE-VALIDATION
We propose that re-validation continue after the attainment of specialisations.
Although we cannot at this stage make a firm recommendation on assessment methods and
pass marks, we provide a possible option in Table 8 below:
[31] As stated before, the research team did not reach consensus on this point, with most arguing for a Bachelor's degree at this level.
[32] See explanation of pass marks below.
The system outlined above would align the results with the academic sector and would allow for a Provisional level, for Interpreting only, with a sunset clause of 2 years' duration. The bands and pass marks align with Angelelli's suggested rubric bands, as cited below in Table 12. They also align with the results of our own survey on rubrics (see section 3.3 below).
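The alignment of accreditation results with academic grade bands can be sketched in code. The following is an illustration only: the function name and the 'Below Credit' label are ours, the cut-offs are RMIT's (60–69% Credit, 70–79% Distinction, 80%+ High Distinction) as noted in a footnote to this report, and the actual bands and pass marks remain a matter for NAATI to decide.

```python
# Illustrative sketch only: maps a percentage mark to academic grade bands
# using RMIT's scheme (60-69% Credit, 70-79% Distinction, 80%+ High Distinction).
# The "Below Credit" label is ours; other institutions use slightly different cut-offs.
def grade_band(percent: float) -> str:
    if percent >= 80:
        return "High Distinction"
    if percent >= 70:
        return "Distinction"
    if percent >= 60:
        return "Credit"
    return "Below Credit"

print(grade_band(75))  # Distinction
```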
Having provided a background and proposed a new conceptual model, the next section will
discuss issues surrounding testing.
3. Testing
3.1 Language testing
This section provides a brief overview of the best-known proficiency tests for English as well as
the CEFR (Common European Framework of Reference for Languages) for European
languages. This overview aims to shed light on the types of language tests available to
candidates at stage 0 and to also provide some guidance to NAATI in the development of its
own language tests, as proposed above. Aspects of language testing can also be taken into
account when designing interpreting and translation testing instruments, although we note that
interpreting and translation tests have their own very specific characteristics and should not be
confused with language proficiency tests.
3.1.1 IELTS
IELTS is a British-based test that is now the most widely used measure of English language proficiency for academic and non-academic purposes in the world (with the exception of the USA, where TOEFL is more widely used). IELTS has a band scale from 1 to 9, with half-band scores providing 17 different gradings. IELTS owes part of its popularity to the descriptors
that the testing system provides for each of the 9 bands and for each of the four macro-skills:
listening, speaking, reading and writing. IELTS testers look for a mix of specific features, e.g.
use of tenses in narrative speech, length and complexity of clauses, use of linking words etc.,
as well as ‘global’ features such as pragmatic appropriateness, word-attack, use of speech acts
appropriate to situation. The numeric score is based on these in the first instance. Testers are
supposed to refer to the descriptors only as a supplementary guide to diagnosis after a
preliminary score has been reached. The descriptors provide an outline to testers, candidates
and institutions of the various level ratings of the macro-skills.
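The arithmetic behind the 17 gradings is simple to verify. The sketch below is our own illustration, not part of any IELTS tooling; the function name is assumed for the example.

```python
# The IELTS band scale runs from 1 to 9, reportable in half-band steps.
def ielts_bands(low=1.0, high=9.0, step=0.5):
    """Return every reportable band score, lowest to highest."""
    count = int(round((high - low) / step)) + 1
    return [low + i * step for i in range(count)]

bands = ielts_bands()
print(len(bands))  # 17 distinct gradings, from 1.0 to 9.0
```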
There are different components used for each of the four macro-skills:
[33] Some institutions use slightly different percentages, which would need to be taken into account. For example, RMIT uses 60–69% as Credit, 70–79% as Distinction and 80%+ as High Distinction.
§ Speaking: the ability to communicate opinions and information on everyday topics and common experiences and situations by answering a range of questions; the ability to speak at length on a given topic using appropriate language and organising ideas coherently; and the ability to express and justify opinions and to analyse, discuss and speculate about issues.
§ Reading: reading for gist, reading for main ideas, reading for detail; understanding inferences and implied meaning; recognising a writer's opinions, attitudes and purpose; and following the development of an argument.
The IELTS descriptors were first introduced as statements of level to guide candidates as well as examiners. In practice, the descriptors function not only as a guide to potential candidates but to all interested parties – they outline and disseminate the gradings of ability in plain English. It is difficult to establish what role the 'transparency' of the descriptors has played in the overall global success of IELTS, but it can be safely assumed that the descriptors have strengthened candidates' and others' understanding of what the band marks signify. The descriptors are also regularly reviewed and sometimes edited and adapted. Review of IELTS test procedures and their validity is undertaken by non-interested parties, sometimes with critical conclusions (e.g. Moore & Morton, 2005). IELTS has a large research and test review infrastructure (e.g. IELTS Research Reports, Studies in English Language Testing and Research Notes), all partly funded by IELTS.
3.1.2 TOEFL
The other major international English language proficiency test is TOEFL (Test of English as a Foreign Language). TOEFL is an American-based test whose results are also accepted at
most English-language tertiary institutions. Reflecting the pedagogic philosophies of late
twentieth-century America, TOEFL began as a largely error-focussed tool: number and type of
errors were calculated for speaking tests and marks deducted accordingly; reading
comprehension tests contained multiple choice questions. This approach was normative and
allowed for an efficient and speedy marking process. Today, TOEFL's iBT (internet-based test) is taken online, with a large part of the reading and writing sections constructed so that correction can be automated. This is something that could be adopted by NAATI if it decides to deliver its own auto-corrected language proficiency tests. TOEFL has adopted descriptors for speaking
and writing. Each of these has components and gradings. For speaking, these are: delivery,
language use, topic development. For writing, there is only one component: task development.
3.1.3 CEFR
The CEFR (Common European Framework of Reference for Languages) was developed as a European project, funded and administered by the Council of Europe, based in Strasbourg. It is a set
of rubrics that describes six levels of language proficiency, from A1 and A2 (‘basic user’)
through to B1 and B2 (‘independent user’) to C1 and C2 (‘proficient user’). Recent fine-tuning to
the CEFR allows for a nine-level differentiation: A1, A2, A2+; B1, B1+, B2, B2+; C1, C2. The
CEFR also has five rather than four macro-skills: listening, reading, spoken interaction
(pragmatic skills), spoken production and writing. The introduction of the CEFR was precipitated
by the need for common terms and benchmarks to apply to speakers’ language levels. The
mobility of EU citizens within the EU and the harmonisation of higher education institutions led
to a need for a common framework which applies to not only the languages of the EU, but to
most other European languages and which can be applied to all languages worldwide. The
CEFR has established itself as a measure for linguistic proficiency at European institutions of
higher education. Non-native speakers are usually required to have a C1 level for admission. This level could be adopted as a guide for candidates for accreditation in the languages covered by the CEFR.
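Because the six CEFR levels form a single ascending scale, a requirement such as 'C1 or above' can be checked mechanically at a screening stage such as our proposed stage 0. The sketch below is our own illustration (the names are ours, not an official CEFR artefact):

```python
# The six CEFR levels in ascending order, with their broad user categories.
CEFR_ORDER = ["A1", "A2", "B1", "B2", "C1", "C2"]
CEFR_CATEGORY = {
    "A1": "basic user", "A2": "basic user",
    "B1": "independent user", "B2": "independent user",
    "C1": "proficient user", "C2": "proficient user",
}

def meets_requirement(candidate_level: str, required_level: str = "C1") -> bool:
    """True if the candidate's level is at or above the required level."""
    return CEFR_ORDER.index(candidate_level) >= CEFR_ORDER.index(required_level)

print(meets_requirement("C2"))  # True
print(meets_requirement("B2"))  # False (B2 is one level below C1)
```

The same check with `required_level="B2"` would reproduce the lower threshold used by the Flemish social interpreter certification test.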
The CEFR has also provided a basis for the European Language Portfolio (ELP). The ELP is a
collection of textbooks, information sheets, teaching materials and self-study and self-diagnosis
resources so that individuals can ascertain their own CEFR level informally by answering
questions about their abilities in their languages. Although the ELP principally targets primary and secondary school students, we believe this resource may be used by aspiring interpreters and translators in the relevant languages to ascertain their readiness to attempt the new accreditation process we are recommending in the LOTE. The recently developed social
interpreter certification test in Flanders, Belgium has a language proficiency test in both
languages, using the CEFR scale. That required level (B2) is one level lower than the C1 level required of L2 students for entry into European universities. An example of the B2 descriptors
for the macro-skills of speaking and listening is provided in Appendix 6.
3.1.4 NFAELLNC
The National Framework of Adult English Language, Literacy and Numeracy Competence
(hereafter: NFAELLNC) aims to provide a guide to description, rather than diagnosis, of a student's capabilities, and takes as its starting point previous education and vocational experiences, also referred to as 'recognition of prior learning'. The NFAELLNC is a good
example of an instrument that offers descriptions for many features, not just language or
numeracy proficiency, and could be used to ascertain equivalence to a Bachelor's degree in our
new proposed model.
The holistic scope of the NFAELLNC can be seen in the break-up of six areas of assessment:
task, technology, identity, group, organisation, and community. The first area ‘task’ relates to
particular elicited activities that a test candidate is assessed on. The remaining five areas
describe surrounding areas that may be present in attempting an activity: technology (as an instrumental means of fulfilling a task); identity, group and community (relating to socio-psychological and socio-environmental features of performance); and organisation (knowledge of administrative and legal/procedural features relevant to an assessment activity). For each of the
six scales, there are three levels, from lowest to highest: ‘stage 1 – assisted competence’;
‘stage 2 – independent competence’; ‘stage 3 – collaborative competence’.
The NFAELLNC is not tied to a test. The candidates presenting at entrance tests have very
diverse backgrounds and learning experiences and a uniform test would be unworkable. The
scale leaves it up to the tester to draw up their own test and to relate testees' performance to the features found in the NFAELLNC scales in order to reach a diagnosis. The scales are not only for
entrance testing, they are also for on-going and exit assessment. A sample of the content of six
areas for the highest scale (‘stage 3 – collaborative competence’) is contained in Appendix 7.
The NFAELLNC can also be used as an example of a scale that focuses not only on task performance but mostly on other, social-interactional and group-based capabilities for diagnosis, which can be relevant to I&T testing. This means that testing scales can seek to describe functions which are less task-based. In T&I testing these functions can include:
Many of these abilities cannot be readily ascertained from a single task and require on-going
observation of testees, individually and in multi-party situations. The scale has a continuum of
descriptions and once a performance descriptor is satisfied, the testee is judged to have
‘achieved’ it. There are two grades only: achieved (pass) and partly achieved (fail), with the
scale containing only positive descriptions of ability.
Performance tests aim to be as ‘authentic’ as possible, i.e. the tasks and scoring methods aim
to replicate real-world assignments. The standard practice in performance test design for
translators is one or more translation passages, or for interpreters dialogue and/or consecutive
passages, which aim to replicate real-life translations and interpreting assignments. Inevitably,
the degree of authenticity of these test items is subject to practical constraints such as the time
needed to complete the task, the need to standardize the test instruments and conditions of
examination. In order to achieve an acceptable degree of reliability in the test, a compromise is
generally sought between the tasks’ authenticity and the reliability of the test. An example of an
approach to authentic test task design is the NICE test (Norwegian Interpreter Certification
Exam), whereby actors (professionals in the field) role-play a semi-scripted scenario in which
the candidate plays the interpreter, a practice that is also common in Australian education
institutions. The performance is judged live by a jury according to theoretically-derived criteria
relevant to professional practice (Mortensen, 2001). While this test has high face, construct and
context validity, the potential for variability in the difficulty and delivery of the task is high,
impacting on the internal reliability of the test. However, the external reliability is improved by
having a panel to judge the performance (multiplying the ratings) and the use of a criterion-
referenced system (a set of criteria by which performance is marked, which is less subjective
than holistic judgements), but the cost of running the test is high and therefore its practicability
may be low. This example demonstrates the inevitable trade-off between the three fundamental
characteristics of tests: validity, reliability and practicality (Bachman, 2000, 2002; Bachman &
Palmer, 1996). Our discussion of any amendments to the current NAATI tests is framed within
the compromise that needs to be found between these three aspects.
[34] AUSIT is the Australian Institute of Interpreters and Translators.
the skill set identified in the literature on this issue, as well as through consultation with
practitioners, educators, examiners and service providers.
3.2.2.1 Interpreting
Interpreting has been described in terms of a process comprising three main components:
comprehension, conversion and delivery. Hale (2007b) describes the different skills,
competencies and knowledge required of interpreters according to each of the three facets of
the interpreting process. At the comprehension level, interpreters require a thorough knowledge
of both languages at all levels (lexical, semantic and pragmatic), knowledge of the subject
matter and of the particular settings and accompanying discourses. At the conversion level,
interpreters require technical skills, such as mastery of note-taking and of the different modes of interpreting (e.g. consecutive and simultaneous), as well as a thorough understanding
of the underlying theories of interpreting to determine the approach to be taken according to the
requirements of the setting, the interpreting specialisation, the expected role for the particular
assignment and professional ethics. Finally, the delivery phase requires interpreters to be able
to reproduce the message processed during the previous two stages into an appropriate form.
This entails socio-pragmatic competence, mastery of public speaking skills, ability to produce
different registers and to reproduce tone and suprasegmental features of language. At this
stage interpreters also need to master management skills in order to coordinate bilingual
situations. The current NAATI interpreting examination only assesses some of the skills at the
conversion level and ignores the knowledge and skills required at the comprehension and
delivery levels.
Kalina (2004) adopts a similar approach but specifically targeted to conference interpreting. She
breaks down interpreter skills into temporally-based sets of factors: the first set refers to factors defined prior to the process, the second to those immediately before and during the interpreting process, the third to those that are "…actual in-process requirements and conditions" and the fourth to post-process factors (p. 126). Table 9 below lists the conference
interpreting skills under each of the four sets of factors.
The factors that Kalina (2004) identifies relate to conference interpreting. Court, medical,
business or community interpreting (face-to-face and remote interpreting) necessitate an
augmentation of these proposed factors. Kalina's list of factors is a formal attempt to comprehensively identify all personal attributes and activities that precede and succeed performance, in contrast to assessment or rating scales, which include only those factors that are identifiable immediately prior to and during interpreting performance. The notion of standards is more comprehensive than that of performance, as the establishment of standards seeks to identify any relevant feature or activity that conditions the candidate's performance. Such
standards can usually only be assessed in a course of study, rather than through a single test.
The skills and competencies incorporated into the three facets of the interpreting process by Hale, and in the factors identified by Kalina, have also been proposed by others, often classified in terms of linguistic, pragmatic, socio-cultural, occupational and attitudinal attributes. These attributes can be classified as:
1. pre-training characteristics or aptitude, such as language proficiency levels, ability to paraphrase, 'teachability', demonstrated motivation (Benmaman, 1997; Timarova & Ungoed-Thomas, 2008), and pragmatic and communicative competence (Hale, 2004; Lee, 2008);
2. those that are acquired through education and training, such as advanced listening and comprehension skills (Giovannini, 1993; Sandrelli, 2001), public speaking skills (Hertog & Reunbrouck, 1999; Pochhacker, 2001), advanced interpreting skills (Gentile, Ozolins, & Vasilakakos, 1996), management skills (Bontempo & Napier, 2009; Wadensjö, 1998), knowledge of the context and subject matter (Colin & Morris, 1996), understanding of the goals of the institution where the interpreter is working (Berk-Seligson, 1990/2002; Hale, 2004), understanding of the interpreter's role and professional ethics (Edwards, 1995; Mikkelson, 1996), cross-cultural awareness (Chesher, Slatyer, Doubine, Jaric, & Lazzari, 2003), theories that underpin interpreting choices (Roy, 2000; Wadensjö, 1998), and knowledge of protocols (Bontempo & Napier, 2009; Jacobson, 2009); and
3. those that can be acquired through practice, such as self-confidence, stress management, ethical workplace behaviour, knowledge of OHS, etc.
The vast majority of these characteristics require education and training for their acquisition and development and are difficult to assess in a single examination, such as the current NAATI accreditation examination. Most I&T courses assess students in all of these areas throughout their program.
Consultation with stakeholders in Australia has also produced similar lists. In preparation for the
development of national qualifications in interpreting and translation for the VET sector, Government Skills Australia (GSA), contracted by the Department of Education, Employment and Workplace Relations in 2008, conducted extensive consultation with I&T practitioners to ascertain their professional skills and competencies. Other aspects of interpreting that were
highlighted by those consulted included ability to prepare for assignments, ability to work as a
team and ability to manage multi-party interactions.
Our own survey asked respondents to state the top skills they believed an accreditation
examination should be testing. As can be seen in Table 10 below, the results match those that
have been presented before.
A number of the characteristics listed above reflect those that are currently tested in the
accreditation examination, such as note taking skills, ethics, language competence, consecutive
interpreting and sight translation. Others, however, are noticeably absent, such as the
interpreter’s ability to manage the situation when the speakers do not adhere to the expected
norms, the interpreter’s ability to coordinate turns between speakers, or the interpreter’s
understanding of his/her role according to the goals of the institution for which they are working.
Another important aspect of interpreting that is currently not tested, and that is impossible to test in a single generalist examination, is the interpreter's understanding of the theory that informs his/her choices with regard to the approach taken according to the setting and the participants involved. Such theoretical knowledge is also necessary for interpreters to justify their performance when challenged (Baker, 1992; Calzada Perez, 2005; Hale, 2007a), something
that is becoming increasingly common, especially in court interpreting. Not surprisingly, few
respondents in our survey indicated the need for theory in interpreting and translation tests,
although more seemed to indicate this was necessary for translation (see Table 11 below). This
lack of appreciation for theory is likely to be symptomatic of a profession in which the overwhelming majority of practitioners (and NAATI examiners) lack any formal training in theoretical aspects. In mature professions whose members make decisions that impact on the
public (e.g. medicine, law, engineering), skills are developed on the basis of theory. Members of
these mature professions are expected to be able to make expert autonomous decisions, to be
able to analyse, describe and report upon their professional decision-making using particular
terminology that is drawn from theoretical training. We will discuss these issues further when we
deal with test design, marking criteria and examiners’ competence.
3.2.2.2 Translation
The majority of literature on translation competence to date is theoretical and reflects the
disciplinary perspective of the theorist. Text-based linguistic models include Baker’s (1992)
theory of equivalence, which draws on systemic functional linguistics, pragmatics and cultural
studies to propose a bottom-up view of the relationship between source and target text
characteristics. Hatim and Mason's (1997) work on communicative models of translation expands
these notions of equivalence to include the interactional aspects of the translation act. In
contrast, functional models situate the translation act in relation to the purpose (or skopos) of
the translation. Nord (1991), expanding on Vermeer’s Skopos Theory, proposes a process for
analysing the source text as the starting point for translators.
Early models of translator competence (i.e. models of process as opposed to product) include that of Wilss (1982), who described the act of translating (rather than the characteristics of the translation). Wilss's model comprised three components: source language receptive
competence (or the ability to understand the source text), target language reproductive
competence (or the ability to express concepts in the target language), and a super-
competence, which describes strategic translation competence (or the ability to translate).
While these theoretical models have provided the basis for our understanding of the relationship
between source and target text and the role of the translator, they lack empirical evidence. The
only comprehensive, empirically supported model of translator competence is the PACTE model
(PACTE Group, 2009). The model is the result of over ten years of research which has
investigated the construct through a robust, triangulated research design incorporating both
product and process components of translator performance. The model describes translator
competence according to five interconnected sub-competences and a psycho-physiological component. The psycho-physiological component, which is common to professional practice in many fields, includes attitudinal (e.g. critical thinking, creativity, etc.), cognitive (e.g. memory, attention, emotion) and psycho-motor components. The five sub-competences are:
§ Bilingual sub-competence
§ Extra-linguistic sub-competence
§ Knowledge about translation sub-competence
§ Instrumental sub-competence
§ Strategic sub-competence
Of these five sub-competences, the first two (Bilingual sub-competence and Extra-linguistic sub-
competence) and the psycho-physiological component may reside independently of the
translation context. They could be considered to be prerequisites to professional translator
education and are often included in screening tests for entry to educational programs to assess
the suitability of candidates for the profession. This would be addressed in our proposed stage
0. The last three (Knowledge about translation, Instrumental sub-competence and Strategic
sub-competence) comprise the knowledge and skills that are essential for a professional
translator. These three sub-competences constitute the professional knowledge and skills that
are acquired during the process of education. This would be addressed in our proposed stages
1 and 2, and should also be included as key components of the test construct in translator
testing programs whether this be the summative assessment at the end of a course of
instruction or a gate-keeping test such as the NAATI test evaluating professional readiness.
As with the interpreting test, a number of the skills proposed by the survey respondents are
currently tested, such as comprehension, accuracy, terminology and writing skills; however,
others are absent from the current NAATI examinations (e.g. the use of technology, checking,
editing and formatting skills, and an understanding of the underlying theories of translation). A
detailed section on the use of technology in testing is provided later in this report. However, the
other issues relating to theory and checking and editing are competencies that are more
adequately taught and assessed through education, rather than through an accreditation test.
The section below will present an overview of the I&T assessment research and practice around
the world.
methods (Eyckmans, Anckaert, & Segers, 2009; Lee, 2008; Turner, Lai, & Huang, 2010).
Marking systems that are more ‘impressionistic’ and which evaluate a candidate’s performance
in a more ‘global’, ‘intuitive’ way, whether examining a test overall or breaking it down into
particular areas, have been termed ‘holistic’ marking methods (Bontempo & Hutchinson, 2011;
Lee, 2009). Descriptors are the usual means of holistic test evaluation. In the evaluation of
interpreting and translation testing, the trial and adoption of descriptors have been advocated by
some as a means of providing alternative or supplementary feedback (e.g. Turner, et al., 2010),
as a means of verifying and testing the validity of analytic testing (e.g. Turner, et al., 2010;
Waddington, 2004), and as a method preferable to analytic testing (e.g. Lee, 2009).
Analytic and holistic testing systems can conform to the psychometric requirement of validity:
they can readily test for activities that a test candidate would undertake in the T&I profession,
i.e. the test contains what is required in everyday T&I professional life. However, both analytic
and holistic testing systems are vulnerable to problems of inter-rater reliability, i.e. different
testers using the same method and awarding very different marks due to different ‘subjective’
applications of the marking system. They can also be vulnerable to intra-rater variation, i.e. the
same tester applying the same marking method to the same test but arriving at different scores
due to previous tests marked, time of day, level of fatigue, etc.
Traditionally, translator and interpreter performance tests have been scored using an error
deduction (‘points-off’ or ‘penalty’) system, which is the system currently adopted by NAATI for
translation tests. The ‘points-off’ system assesses the product of translation rather than the
process or the ability of the translator and takes points off for identified error types. Other
marking systems are also being used by different accreditation/certification bodies. Turner et al.
(2010) surveyed 24 different accreditation/certification systems for translators and interpreters
currently employed around the world and found that scoring systems are based on one of three
designs:
A promising recent shift in the field is the development of theoretically derived rubrics, such as
those developed by Angelelli (2007, 2009), Lee (2005) and Jacobson (2009). Rubrics-based
systems use sets of ‘descriptors’ (word-pictures) of performance at various levels (typically
identified by numbers) to help markers determine the result. Usually, there will not be only one
set of such descriptors, but several in order to reflect the various sub-components of the skill
being assessed. A marker using a rubrics-based system will still need to identify, at an early
stage in the marking process, the various errors that have been made. However, once they
have been identified and noted, awarding a level is done on the basis of comparing the
observed performance with the various descriptors. The level awarded in each area is
determined by selecting the descriptor that most closely matches the observed performance.
These sub-components (or dimensions/assessment areas) will usually be determined by
carefully analysing the test construct (Angelelli, 2009, p. 38). The descriptors at each level can
be determined by looking at various samples of candidate performance that are agreed to be of
a given standard, and identifying the observed characteristics. The number of levels available
needs to be considered carefully, as too few levels will give insufficient discrimination between
very good and very poor performances, while too many levels may simply confuse markers
(Angelelli, 2009, p. 44). Pass/fail is usually determined in terms of achieving a specified level in
each assessment area (although it is not unusual for a candidate’s performance to be uneven
across the assessment areas). Pass/fail could be determined at the same level for all
assessment areas, but another possibility is to allow some flexibility, so that being below the
specified level in (for instance) one area can still allow a candidate to pass. This is what we are
suggesting for the Provisional level.
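The pass/fail logic just described – a required rubric level in each assessment area, with optional flexibility in one area – can be sketched in code. The area names, the five-level scale, the passing level of 4, the hurdle level of 3 and the one-area allowance below are illustrative assumptions for the sketch, not a NAATI specification:

```python
# Illustrative sketch of rubric-based pass/fail with a hurdle level and
# optional flexibility in a limited number of assessment areas.
PASS_LEVEL = 4    # assumed passing level on a 5-level rubric
HURDLE_LEVEL = 3  # assumed floor; any area below this precludes a pass

def decide(scores, flexible_areas=1):
    """scores maps each assessment area to the rubric level awarded."""
    # An area below the hurdle level always fails the candidate.
    if any(level < HURDLE_LEVEL for level in scores.values()):
        return "fail"
    # Otherwise, tolerate at most `flexible_areas` areas below the pass level.
    below_pass = sum(1 for level in scores.values() if level < PASS_LEVEL)
    return "pass" if below_pass <= flexible_areas else "fail"

print(decide({"meaning": 4, "style": 4, "pragmatics": 3, "grammar": 4}))  # pass
print(decide({"meaning": 4, "style": 3, "pragmatics": 3, "grammar": 4}))  # fail
```

Setting `flexible_areas=0` would model a strict scheme requiring the passing level in every area.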
Another difference between the two major systems presented above (point deduction vs
criterion-referenced) is the level of directness of the assessment. Performance tests, such as
the NAATI test, use the direct assessment method, where the candidate is assessed on the
interpreting or translation task directly. Indirect testing, on the other hand, targets the sub-
components of the target construct or traits. Examples of this are a vocabulary test for
interpreters (Skaaden, 1999) which has good reliability (objective scoring of correct responses)
and predictive validity for outcomes of an educational program, but low face validity; or the test
designed by Stansfield et al. (1992) for translators, which combines direct and indirect
components, thereby balancing validity and reliability; the indirect components (such as multiple
choice language items) have higher reliability and the performance items have higher validity.
Interpreting assessment has been the subject of a number of studies. In Australia, Lee (2009)
compares analytic scales and holistic scales (rubrics) based on the CISOC (Community
Interpreting Services of Ottawa-Carleton) test. Three holistic bands – accuracy (40%), target
language quality (40%) and delivery (20%) – were given to nine experienced interpreting
examiners to trial, with six scale points along each band by which examiners could record their
assessment. Examiners were also given conventional analytic scales, i.e. scales that
examine different components of performance separately with a punitive, point-deduction
system for each component. Examiners were asked to rate the same interpreters’ performance,
firstly using a holistic scale and secondly using an analytic scale. Lee (2009) reports that the
examiners initially assumed that their analytic ratings would be more ‘accurate’ (i.e. more able
to closely describe and quantify performance) than the holistic ratings, as this was the
convention that they were most used to. However, there was no general dissatisfaction with the
holistic scales: “the majority of the raters approved of the [holistic] rating scales proposed by the
researcher, and the rating results also pointed to high inter-rater reliability” (Lee, 2009, p. 183).
In the end, Lee (2009, p. 193) still cautions that the results are mixed and perhaps inconclusive
and recommends that further research be undertaken to further test the validity and reliability of
holistic scales. Jacobson (2009, pp. 61-65) argues that rubrics or descriptors constitute a better
way to quantify performance for non-linguistic features such as contextualisation cues (e.g.
paralinguistic features that signal meaning such as intonation contour, eye gaze, body position)
and the professional establishment of the interpreter’s role relationship to others (e.g. pre-
interaction establishment of the interpreter’s role, management of other interlocutors’ turn-taking
opportunities, management of over-lapping speech and interruptions etc). A sample of
Jacobson’s rubrics is contained in Appendix 8.
Other practitioner-researchers in Australia have also devised evaluation rubrics for interpreter
performance. Bontempo has devised and uses rubrics for the evaluation of (Auslan/English)
interpreter performance in general (2009b) and also for specialised situations such as
conference interpreting (Bontempo, 2009a) and educational interpreting (Bontempo &
Hutchinson, 2011). Bontempo’s rubrics are intended as guides not only for testers to employ for
on-going or exit testing, but also as a professional development tool for practitioners to evaluate
others’ performance and for them to reflect on their own. The rubrics contain value-neutral
statements or interrogatives and testers are left to award marks as they see the relevant
features to be present or absent, e.g. ‘Equivalence of message (appropriate for context?
Contains textual integrity and fidelity? Is information exchange successful overall?)’. The rubrics
contain four key elements – interpreting aspect, language aspect, interaction/role aspects,
professional conduct – and each element provides a mark of up to 5. This breakdown of marks
limits cross-linguistic transfer and target language performance to 25% each and awards the
remaining 50% of the marks to pragmatic and professional aspects of performance, which
differs from more traditional breakdowns of marks that award the majority of marks to the first two elements.
Tiselius (2010) adapts scales from Carroll (1966) that had originally been developed for
assessments of the quality of machine translation and applies them to the following criteria in
the assessment of conference interpreting: intelligibility and informativeness. The main point of
Tiselius’s (2010) research is that specialist interpreter examiners and lay people award very
similar ratings. Both groups of examiners received transcribed renditions of conference
interpreters’ interpretations in their first language and, following Tiselius’s adapted six-point
scale for both intelligibility and informativeness, rated them in similar ways when reading
through the verbatim transcriptions of conference interpreters.
For translation assessment, Angelelli (2009) surveyed the recent literature in sociolinguistics,
discourse analysis and second language acquisition in order to apply the directions that these
disciplines have taken in recent years to the performance of translation testees. In particular,
teaching and testing methodologies in the field of second language acquisition have, over the
last 25 years in most Western countries, advocated a communicative approach to language use
in which a speaker’s ability to communicate functionally with others overrides the importance
of grammatical or lexical accuracy. Angelelli (2009, p. 29), describing the current American ATA
translation exam, states that “the ATA seems to primarily emphasize the reading
comprehension, translation ability (not operationalized) and the micro-linguistic elements of
translation competence present in writing (e.g. lexicon, grammar and punctuation rather than
discourse, cohesion etc.)”. Without wanting to disregard the importance of grammatical and
lexical accuracy, Angelelli seeks to systematically list non-linguistic criteria in translation
performance as important and worthy of consideration in assessment.
Angelelli (2009) seeks to measure and to quantify performance through the following descriptive
rubrics: source text meaning; style and cohesion (addressing textual sub-component);
situational appropriateness (addressing pragmatic sub-component); grammar and mechanics
(addressing micro-linguistic sub-component); and translation skill (strategic sub-component).
Angelelli (2009, pp. 40-41) proposes descriptive, 5-point rubrics, which are being considered by
the ATA for adoption in its marking system. An example of the rubric for ‘source text meaning’ is
given in Table 12 below.
Table 12: Angelelli’s rubric for ‘source text meaning’
4 – T contains elements that reflect a complete understanding of the major and minor themes of
the ST and the manner in which they are presented in the ST. The meaning of the ST is
proficiently communicated in the T.
3 – T contains elements that reflect a general understanding of the major and most minor themes
of the ST and the manner in which they are presented in the ST. There may be evidence of
occasional errors in interpretation but the overall meaning of the ST is appropriately
communicated in the T.
2 – T contains elements that reflect a flawed understanding of major and/or several minor themes
of the ST and/or the manner in which they are presented in the ST. There is evidence of errors
in interpretation that lead to the meaning of the ST not being fully communicated in the T.
Angelelli (2009, p. 43) proposes that a mark of 4 or above satisfies a typically required standard:
“number 3 is seen as the point at which the candidate shows evidence of skill but falls slightly
short of the proficiency level desired for certification”. A full description of Angelelli’s remaining
four rubrics is given in Appendix 10.
Turner, Lai, and Huang (2010) conducted a study which compared marking outcomes using the
current NAATI system and a rubrics-based system (the DPSI from the UK). In this study, a
number of translating test papers from accreditation students at RMIT University were marked
by experienced NAATI examiners using both systems (in the case of DPSI, using both blind and
non-blind marking), and the results compared. There was a
strong correlation between ‘NAATI’ and ‘non-blind DPSI’ marking across all language groups,
and a weaker but still significant correlation between ‘NAATI’ and ‘blind DPSI’ marking. In a
focus group discussion held with these markers, they stated that they felt they could have
benefited from more extensive training in the use of a rubrics-based system before taking part in
the study, and, possibly due to their inexperience in the use of rubrics, they tended to prefer
the current NAATI system. The results of the 2010 study, however, contrast with the results of a
small study on the use of rubrics conducted by our research team for the purpose of this current
research project. Two groups were recruited as participants: a group of ‘practitioners’ or
representatives of T&I ‘agencies’ (N=7); and a group of NAATI ‘examiners’ and/or ‘educators’
(N=11). The participants were invited to a short presentation on the background to rubrics-
based marking systems and provided with a copy of the set of rubrics proposed by Angelelli
(Angelelli, 2009, pp. 40-41), a closely-translated English ‘sample translation’ of a LOTE text
from a previous NAATI Translator test, responses from three translation candidates, with
various types of deficiencies, and a rubrics-based grid to score the translations. The participants
were then asked to score one of the candidates using the set of rubrics provided, after which
they were asked to record their impressions and comments of the process on a questionnaire.
The first three questions of the questionnaire were different for the two groups: practitioners &
agencies were asked about the potential usefulness of a rubrics-based marking system for
those employing or working with T&Is, and examiners & educators were asked about how a
rubrics-based system might result in better marking. The remaining questions were the same for
both groups, and sought feedback on issues such as preferred ‘pass/fail’ and ‘hurdle’ levels,
and suggestions for adding or deleting assessment areas. A copy of the questionnaires is found
in Appendix 11. In general (and within the limits of the small sample size), there was definite –
but not unanimous – agreement among both groups of participants in favour of using rubrics in
marking. Among examiners/educators, there was good agreement that rubrics can provide
clearer guidance and be easier to use, and strong agreement that rubrics can encourage
markers to take a wider range of factors into account. Among practitioners/agencies, there was
strong agreement that a rubrics-based system could be a good basis for determining
accreditation, but ambivalence about the usefulness of rubrics-based levels for employment
decisions or for reporting results. Both groups expressed a preference for level 4 in a 5-level
system as a ‘passing’ level, and level 3 or lower as being a ‘hurdle’ level that would preclude a
pass.
Kim (2009) advocates a different framework of assessment criteria from the current NAATI
translation examination criteria, which include the following: too free a translation, too literal a
translation, spelling, grammar, syntax, punctuation, failure to finish a passage, unjustifiable
omissions, mistranslations, non-idiomatic usage and insufficient understanding of the ethics of
the profession. Based on a systemic functional linguistics (SFL) framework, Kim proposes
assessment criteria where points are deducted for different features:
Table 13: Kim’s alternative marking criteria
Major errors:
§ Experiential meaning – accuracy: lexis 1-2 points; clause 2-3 points
§ Experiential meaning – naturalness: lexis 1-2 points; clause 2-3 points
§ Logical meaning – accuracy: clause 1-3 points
§ Logical meaning – naturalness: clause 1-3 points
§ Interpersonal meaning – accuracy: lexis 1-2 points; text 3-5 points
§ Interpersonal meaning – naturalness: lexis 1-2 points; text 3-5 points
§ Textual meaning – accuracy: lexis 1-2 points; text 3-5 points
§ Textual meaning – naturalness: lexis 1-2 points; text 3-5 points
Minor errors:
§ Graphological mistakes such as spelling: 0.5 points
§ Minor grammar mistakes that do not impact on meaning: 0.5 points
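As a rough illustration of how a deduction grid of this kind might be applied, the sketch below totals marker-assigned deductions, checking each against the range allowed for its meaning type and rank. The range-to-rank mapping, the starting score of 100 and the example errors are our assumptions for the sketch, not Kim’s specification:

```python
# Illustrative sketch of scoring against a deduction grid: each major error is
# deducted within the range allowed for its meaning type and rank, and each
# minor error attracts a flat 0.5-point deduction.
MAJOR_RANGES = {
    ("experiential", "lexis"): (1, 2), ("experiential", "clause"): (2, 3),
    ("logical", "clause"): (1, 3),
    ("interpersonal", "lexis"): (1, 2), ("interpersonal", "text"): (3, 5),
    ("textual", "lexis"): (1, 2), ("textual", "text"): (3, 5),
}
MINOR_DEDUCTION = 0.5

def score(major_errors, n_minor, start=100.0):
    """major_errors: list of (meaning, rank, points) chosen by the marker."""
    total = start
    for meaning, rank, points in major_errors:
        lo, hi = MAJOR_RANGES[(meaning, rank)]
        if not lo <= points <= hi:
            raise ValueError(f"{points} outside {lo}-{hi} for {meaning}/{rank}")
        total -= points
    return total - n_minor * MINOR_DEDUCTION

# One experiential clause-level error (3 pts), one textual text-level error
# (4 pts) and two minor mistakes (0.5 pts each).
print(score([("experiential", "clause", 3), ("textual", "text", 4)], n_minor=2))  # 92.0
```

The point of the grid, as Kim argues, is that the deduction is tied to the aspect of meaning affected rather than to a predefined error form.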
Kim (2009, p. 135) argues that such an approach which looks at the meaning of the ST and TT
is preferable to one that focuses on errors per se: “…the present criteria do not specify possible
forms of errors, such as additions, omissions, and inadequate equivalence, because what is
important is to judge whether a mistake has something to do with accurate and natural delivery
of different aspects of meaning”. Kim (2009, p. 150) points to increased student satisfaction in
class and also to increased pass rates (from 10% in 2004 to over 60% in 2007) for the NAATI
accreditation test (English into Korean) amongst students whose work in class had been
assessed against the above grid rather than against criteria closer to the existing NAATI ones.
Another example is Gile’s (2004) notion of feedback from trainees or testees about how they
approach and perform translation tasks, i.e. an explicit account (without following any required
content format) of why and how they translated texts in the way that they did. This kind of
feedback, which Gile requires of all his translation students, is termed Integrated Problem and
Decision Reporting. Gile (2004, p. 34) requires of the trainees that they “include full references of
sources consulted, and preferably the context in which target-language terms or expressions
which they chose were found (generally a sentence, sometimes a whole paragraph).” This focus
on students’ reflection on the process of translation is common in higher education I&T courses,
but absent in performance tests such as NAATI, despite the recommendation for a reflexive
component in the accreditation tests from the 2000/1 Review.
Below we provide more detailed descriptions of the I&T tests and marking systems used by a
selection of countries with publicly available information.
Similarly, the IoL Diploma in Translation consists of tests assessed according to criteria very
similar to those of the NAATI tests: candidates receive a percentage score with a ‘pass’ mark of
60%. A points breakdown is made according to the following three components:
comprehension, accuracy and register (50%); grammar (morphology, syntax, etc), cohesion,
coherence and organisation of work (35%); technical points relating to spelling, accentuation,
punctuation and the transfer of dates, names, figures, etc. (15%) (IoL, 2011). Space is provided
on each candidate’s mark sheet for examiners to provide comments for each of the three
components. The marking systems for the tests in the IoL diplomas are comparable to those
used for the NAATI tests (see Appendix 12).
strong, acceptable, deficient, minimal. The rubric’s descriptors are short and generalised and list
the absence or presence of inadequate features of translation.
Thus, the ATA examination adopts an approach in which performance is measured by both
descriptive rubrics and identification of errors made with a refined but complicated system of
quantification of error type (ATA, n.d.). At the same time there are reminders to the examiner of
‘global’ features: e.g. ‘Is it intelligible to the target reader?’
For the certification of court interpreters, pre-test training is typically offered by private providers
and the certification exam is offered individually by a nominated authority in each state –
sometimes a university institution, sometimes a court authority, sometimes a private enterprise
or agency. There is great variation in the training, security screening and formal testing of
candidates. For example, for the year 2009 the National Center for State Courts (2009) lists four
steps to certification: orientation workshop, security record check, written test and oral test. In
some states, all of these four steps are required, in others only some of them (typically the
written and oral tests), while in others no testing is planned or available.
In order to gain an insight into a standardised and formally administered test used in the USA,
this report takes as an example the Federal Court Interpreter Certification Examination (FCICE)
for Spanish/English. The FCICE is offered in three languages only: Spanish, Navajo and
Haitian Creole. It consists of a two-phase examination of language proficiency and
interpretation performance; a description of the test is provided in Appendix 14, and the two
phases are administered in alternate years. The first phase, referred to as the Written
Examination, is a three-hour-and-fifteen-minute multiple-choice test of proficiency in grammar
and expression in both English and Spanish, with a pass mark of 75%; it is offered in
even-numbered years. The second phase, with a pass mark of 80%, is a 45-minute Oral
Examination that simulates the work that interpreters do in court, and is offered in odd-
numbered years. Candidates must pass the Phase One Written Examination in order to qualify
to take the Phase Two Oral Examination (National Center for State Courts, 2011, p. 7). The
offering of examinations in alternate years is something NAATI could adopt for languages with
a high supply of practitioners, such as Spanish or Chinese. Candidates in those languages
should be encouraged to enrol in formal tertiary courses.
3.3.2.3 Canada
The national organisation, the Canadian Translators, Terminologists and Interpreters Council
(hereafter CTTIC) conducts a translation test for certification which contains two texts, each of
175-185 words in length. One text is general in nature and compulsory, the other is a choice of
Errors are classified as either:
§ translation errors (comprehension, i.e. failure to render the meaning of the original text), or
§ language errors (expression, i.e. violation of grammatical and other rules of usage in the
target language).
For major mistakes (e.g. serious misinterpretation denoting a definite lack of comprehension of
the source language) 10 marks are deducted. For minor mistakes (e.g. unacceptable loan
translation) 5 marks are deducted. The pass mark is 70% for each exam. (Further details on the
CTTIC guide are in Appendix 15).
Elsewhere in Ontario, the ATIO (Association of Translators and Interpreters of Ontario) has a
pathway of certification to becoming a translator, conference interpreter, court interpreter or
terminologist through compilation of a dossier which can provide evidence of five years’ full-time
experience (or two years if applicants hold a bachelor’s honours degree, or equivalent, in their
occupational category) (ATIO, 2011). In British Columbia, a court interpreting test exists which
requires examination of legal knowledge (elicited in written form) and an oral examination. A
description of the format and the marking guide appears as Appendix 17.
The Association of Visual Language Interpreters of Canada (AVLIC) revised its certification
processes in 2002 and 2004 and is contemplating a new testing procedure (Russell & Malcolm,
2009), which includes a prerequisite of training with a minimum of two years’ full-time study
and detailed feedback in simulated performance before the certification test. The redesign of the
testing procedure included preliminary language testing, firstly in English and secondly in ASL
(American Sign Language) before message equivalency was tested. This is similar to what we
are proposing for our new model. For the revised testing procedures, the professional
association considered on-going, cumulative assessment through portfolio development rather
than a test as a means of ascertaining trainees’ levels but decided against this due to workload
demands (for trainees and testers) and to concerns about the validity and reliability of such a
system. Psychometric analysis preceded development of the revised testing procedure to
ensure that these features – content and construct validity and inter-rater reliability – could be
addressed. This is what we propose must happen in Australia before a final decision is made on
the new test design and content.
The marking system used in the AVLIC test is not based on error calculation or on a descriptive
checklist. Instead, the AVLIC test is based on examiners identifying criteria and “making
evidence-based decisions about the consistent representation of those features across all […]
test segments” (Russell & Malcolm, 2009). Russell and Malcolm (2009) describe this testing
procedure as a qualitative one. The marking system for the test has two sections: sign language
criteria (linguistic criteria) and message equivalence criteria (cross-linguistic transfer). The two
sections are presented in Appendix 18.
3.3.2.4 Europe
In some European countries with established T&I training centres, usually at university-level,
marking practices reflect those of the academic or vocational institution. As explained above,
the notion of a ‘single testing procedure’ for accreditation, recognition or certification generally
does not apply in most European countries. Instead, accreditation, recognition or certification,
whether tacit (i.e. the permission to list a T&I qualification after one’s name) or formal (i.e. being
admitted to or seeking permission to practise professionally through formal means other than
testing), is gained through course completion and attainment of a specialist T&I academic
degree. Further to formal training and qualifications, many European countries have a formal
process of ‘registration’ of practitioners, very often court interpreters and translators.
The European Union has one of the largest and most extensive T&I infrastructures in the world.
The focus of T&I performance within the EU is conference and speech interpreting and document,
legal and speech translation, but candidates for employment at EU institutions must, in
addition to a post-graduate qualification in T&I, pass a test which includes not only assessed
performance of translation or interpreting skills but also knowledge of EU institutions and areas
of responsibility. A description of the test for employment to work as an employed interpreter is
provided in Appendix 19. A pass mark of 50% is required, but no details are provided of the
marking criteria.
spoken performance for these features at the same time, both during the test and when viewing
the video recording of it afterwards.
In the marking of the translation test, it is clear that examiners are instructed to identify errors
and assess their severity. The following information is provided on the SASL website in regard
to the examination of translation tests:
Examinations are assessed on the basis of a system of major and minor errors
originally drawn up by the American Translators Association. Major and minor errors are
defined as follows:
Major errors: Gross mistranslation, in which the meaning of the original word or phrase
is lost altogether; omission of vital words or other information; insertion of information
not contained in the original; inclusion of alternate translations, where the translator
should have made a choice; and any important failure in target-language grammar.
Minor errors: Mistranslation that distorts somewhat, but does not wholly falsify, the
intent of the original; omission of words that contribute only slightly to meaning;
presentation of alternate translations where the terms offered are synonymous or nearly
so; and ‘inelegance’ in target-language grammar.
For spoken interpreting, the assessment criteria emphasise target language performance (i.e.
TL vocabulary and register, grammar, idiom and purity form two of the four criteria groups) and
the pass mark, both individually for each criteria group and collectively, is 80%. For signed
language interpreting, pragmatic and professional attributes are also marked in assessment.
The marking criteria for South African sign language (SASL) are language skills (vocabulary,
grammar, idiom, purity), content/message (faithfulness to message, accuracy, clarity),
interpreting technique (fluency of delivery, hesitation, backtracking, lag time, irritating habits,
eye contact), professional conduct (preparation, knowledge of the topic, behaviour/dress code).
3.3.2.7 Australia
The NAATI marking system is specified in the NAATI Examiners’ Manual (EM), which was
extensively rewritten in 2005, with minor revisions in 2008. The EM includes a section outlining
general principles for marking, with guidelines for marking each specific type of test included in
the sections relating to translating and interpreting at each level. Like many of the marking
systems reviewed above, the current marking system is based on error detection, but the basic
principle of the subsequent scoring process (including the determination of pass/fail) is what is
generally referred to as a ‘subtractive’ or ‘punitive’ numerical deduction system, where the
candidate starts with 100%, and marks are then deducted according to the number and
seriousness of the errors previously detected. Determination of pass/fail is then based simply on
achieving an overall score of 70% or better, although for interpreting tests there is also a
requirement to achieve a minimum of 70% in each component (dialogue interpreting, consecutive
interpreting, sight translation, and questions on ethics and cross-cultural issues). This
requirement, in essence, makes each component worth 100%, thus invalidating the current
weightings allocated to each. This is something that we propose changing, so that skills that are
found to be more important or more common for interpreters, for example, will receive higher
weightings. Such a true weighting system will prevent the current situation where a candidate
can obtain, for example, 90% for the dialogue interpreting component and 60% for the
consecutive interpreting component and fail overall.
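To illustrate the arithmetic of this difference, the sketch below contrasts the current rule (a 70% minimum both overall and in every component) with a true weighting scheme in which only the weighted overall score determines the result. The component weights and candidate scores are invented for illustration and are not NAATI's actual figures.

```python
# Hypothetical component weights (assumptions, not NAATI's actual weightings).
WEIGHTS = {"dialogue": 0.45, "consecutive": 0.30, "sight": 0.15, "ethics": 0.10}

def passes_current(scores, minimum=70):
    """Current rule: 70% overall AND a 70% minimum in each component."""
    overall = sum(WEIGHTS[c] * s for c, s in scores.items())
    return overall >= minimum and all(s >= minimum for s in scores.values())

def passes_weighted(scores, minimum=70):
    """True weighting: only the weighted overall score must reach 70%."""
    overall = sum(WEIGHTS[c] * s for c, s in scores.items())
    return overall >= minimum

# The example from the text: 90% for dialogue, 60% for consecutive.
candidate = {"dialogue": 90, "consecutive": 60, "sight": 75, "ethics": 80}
print(passes_current(candidate))   # False: consecutive is below the 70% floor
print(passes_weighted(candidate))  # True: weighted overall is 77.75
```

Under the current rule this candidate fails despite a strong weighted overall score, which is exactly the situation the proposed weighting system would prevent.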
The EM identifies three broad areas of assessment:
§ accuracy (of conveying the message, both overall and for any given part)
§ quality of language (viewed particularly in terms of its contribution to accuracy, not
merely for its own sake)
§ technique (application of good practices).
According to the EM, accuracy should be given the greatest weighting and technique the least;
however, the exact proportions are not specified, although the guidelines for marking translating
tests are much more detailed than those for interpreting. In addition to the general guidelines
just described, the specific guidelines for marking translating tests include the following:
§ Markers need to differentiate between ‘general’ and ‘isolated’ errors (the former
affecting whole clauses or more, the latter affecting only the immediate word).
§ Markers need to penalise errors more severely when they affect accuracy than when
they simply offend against lexical or grammatical usage, but the meaning is still clear.
§ When a candidate produces a number of ‘systemic’ errors (usually, ones that indicate
a fundamental ignorance of TL lexical or grammatical usage), these can be penalised
as many times as they occur. By contrast, a mistranslation of a particular word that
might occur multiple times throughout the text can only be penalised a maximum of
three times.
In contrast to the relatively specific guidelines for marking translation tests, the guidelines for
marking interpreting tests are much less specific. The suggested approach, as currently
described in the EM, is as follows:
§ Allocate to each segment of the dialogue, in accordance with its length, a proportion
of the total available marks.
§ After noting the errors made by the candidate, determine what proportion of the
‘message’ in each segment has been successfully conveyed, and award a mark
proportionally for each segment.
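This suggested approach can be sketched as follows; the segment lengths (in words) and the marker's 'proportion of message conveyed' judgements are invented for illustration.

```python
# Sketch of the EM's suggested approach for marking dialogue interpreting:
# marks are allocated to each segment in proportion to its length, and each
# segment is awarded the proportion of its 'message' judged to be conveyed.

def segment_marks(segment_lengths, total_marks=100):
    """Allocate the available marks to segments in proportion to length."""
    total_len = sum(segment_lengths)
    return [total_marks * length / total_len for length in segment_lengths]

def score_dialogue(segment_lengths, conveyed_proportions, total_marks=100):
    """Sum, over segments, the allocated mark times the proportion conveyed."""
    marks = segment_marks(segment_lengths, total_marks)
    return sum(m * p for m, p in zip(marks, conveyed_proportions))

lengths = [40, 80, 60, 20]          # words per segment (invented)
conveyed = [1.0, 0.75, 0.9, 0.5]    # marker's judgement per segment (invented)
print(score_dialogue(lengths, conveyed))  # 82.0
```

The sketch makes visible how much of the final score rests on the marker's holistic judgement of each segment's 'message', which is precisely where the vagueness noted below arises.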
At the Professional level, the current guidelines in the EM for marking sight translation and
monologue interpreting suggest a somewhat indefinite assessment of how much of the whole
‘message’ of the text has been adequately conveyed, and awarding a proportional mark
accordingly (an admittedly vague and difficult task when looking at the ‘message’ of a 200-word
or 300-word text). If a new system is adopted, a new, improved examiners’ manual will need to
be produced, although many aspects of the current manual could remain.
Currently, markers report results to NAATI using a proforma, from which selected information is
then communicated to the candidate. This proforma provides for:
§ recording a numerical score for each part of the test, and overall
§ if the candidate has failed, circling one or more letter codes indicating the types of
errors that contributed significantly to that result (e.g. A = significant omissions).
Particularly if the candidate has failed, markers are expected to attach a sheet with ‘narrative’
feedback on the candidate’s performance, highlighting areas of particular weakness and
perhaps giving some brief examples, as well as suggestions for improvement.
After reviewing all the available marking systems, we believe that the benefits of a rubrics-
based system outweigh its potential flaws, and we recommend that NAATI embark on a
validation study to construct theoretically derived and empirically tested rubrics for the
Australian context. Below we outline the major advantages and potential disadvantages of the
rubrics-based marking system:
§ They oblige markers to consider a wider range of factors in deciding on the eventual
result. For instance, good sets of rubrics direct markers’ attention to factors such as
register, pragmatics, and the like, which can easily be overlooked in a subtractive
marking system.
§ Because the descriptors are phrased in terms of the candidate’s performance
throughout the text, rather than at discrete points, they again oblige the marker to
view the candidate’s performance holistically, rather than focus on particular errors.
§ They encourage markers to identify positive as well as negative aspects of
performance.
§ Because the descriptors, if well-designed, are generally expressed in non-technical
language, they could, if used as part of reporting of results to candidates, give the
candidates a more meaningful and more standardised picture of their performance.
§ If a candidate disputes a result, it can be easier (albeit not automatically so) to
justify the result by pointing to the descriptor selected and demonstrating how that
matches the actual performance.
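As a purely illustrative sketch (the criteria, band descriptors, weights and selected bands below are invented examples, not proposed NAATI rubrics), a rubrics-based result might be represented and combined like this:

```python
# A hypothetical two-criterion rubric: each criterion has banded descriptors
# phrased in terms of performance across the whole text, and the marker
# selects one band per criterion rather than deducting marks per error.
RUBRIC = {
    "accuracy": {
        4: "Message fully conveyed throughout; no significant distortions.",
        2: "Noticeable distortions or omissions in parts of the message.",
    },
    "register": {
        4: "Register consistently appropriate to setting and participants.",
        2: "Register frequently inappropriate, affecting communication.",
    },
}

def overall_band(selected, weights):
    """Weighted average of the bands selected for each criterion."""
    total_weight = sum(weights.values())
    return sum(weights[c] * band for c, band in selected.items()) / total_weight

selected = {"accuracy": 4, "register": 2}   # marker's band choices (invented)
weights = {"accuracy": 2, "register": 1}    # accuracy weighted most heavily
print(round(overall_band(selected, weights), 2))  # 3.33
```

Because the descriptor text is what justifies the band, the selected descriptors can also be reported back to candidates directly, which is the reporting advantage noted above.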
However, some points need to be noted about the use of rubrics-based systems (although
some of these limitations can also be identified in the other marking systems):
§ For the rubrics to be valid, the test construct needs to be carefully defined and
carefully analysed and the criteria empirically devised (Angelelli, 2009, p. 22).
§ Unless the descriptors are carefully worded, there can still be room for varying
interpretations of what each level means.
§ At least initially, markers used to the current NAATI system may have difficulty dealing
with some of the assessment areas that they may not have had to think much about
until now (e.g. pragmatics or register). This may particularly be the case given the
discussion above on the backgrounds of many NAATI examiners, who have not
received any formal education in interpreting and translation studies and are very
likely unaware of the relevant theories. Clearly, this draws further attention to the need for
extensive training of markers, including trial marking of sample tests followed by
comparison of results amongst panel members.
We must, however, highlight that the benefits to be gained from any move to a rubrics-based
marking system may be effectively negated (or rendered far less worthwhile than they might
otherwise have been) if the overall testing system is not also significantly overhauled. In other
words, while the potential benefits of using rubrics are undoubtedly significant, without wider
changes to the overall accreditation system NAATI might be at risk of simply ‘tinkering around
the edges’.
Based on our review and on the responses from the national survey, below we provide some
suggestions for possible test design, content and weightings, which may need to be changed or
adapted as a result of the proposed research project.
§ Dialogue/bilateral interpreting
§ Remote/telephone interpreting
§ Sight translation (where applicable)35
§ Consecutive interpreting of oral language likely to appear in community settings, such as
information sessions, with repetitions and clarifications permitted36
35 For example, in many Indigenous contexts sight translation would only occur in one direction, from English to LOTE, so the
examination will need to be adapted to cater for their particular needs.
36 Two members of the research team were in favour of a level below the Generalist, equal to the current Paraprofessional
examination, which only assesses dialogue interpreting for the new and emerging languages and Aboriginal languages. The rest of
§ Simultaneous/whispering interpreting
§ Management skills
The lack of authenticity of the interpreting test is a central issue in improving interpreting testing.
In addition to the components to be tested that are currently absent from the NAATI test (as
outlined above), the current tests lack authenticity in the way they are delivered (via a
disembodied tape recording), in the length and structure of the dialogues (which usually lack
many of the features of spoken discourse and can read more like written texts than natural oral
dialogues), and in the penalisation of repetitions or requests for clarification (which are common
practice among competent interpreters in real situations). We therefore strongly recommend that
interpreting examinations be delivered live, that scripts reflect features of spoken language and
that candidates be allowed to seek clarification and be assessed on how they manage and
coordinate the interpreted situation. Recorded tests cannot assess such crucial aspects of
interpreting. Furthermore, the current tests have had, in some instances, the negative effect
among some training courses of shifting the focus of the training (training to pass the NAATI
test rather than training candidates to become competent interpreters).
We understand that holding live examinations can be logistically difficult and may be impossible
in all instances. If live testing is not always possible, we propose that tests be video recorded,
so that the candidate can see the participants in the interaction and can stop them when
needed. The candidate’s performance should also be video recorded for marking, so that the
candidate’s demeanour and management skills can also be assessed.
NB: All other knowledge and competencies will be assessed in the training.
the team argued for the discontinuation of the current Paraprofessional level, with the Provisional Generalist examination as a
compromise. As consecutive interpreting is not common in Interpreting practice, the consecutive interpreting component is likely to
carry a small weight in the new proposed examinations, thus no longer making this component the key cause of failure.
Nevertheless, we stress that a validation study will be used to determine the final contents of the accreditation examinations.
As with the interpreting tests, the translation test will be complemented by the training and the
different hurdle assessment tasks throughout the duration of the modules and upon their
completion.
The validity of the test, in general terms, “refers to the appropriateness of a given test or any of
its component parts as a measure of what it is purported to measure. A test is said to be valid to
the extent that it measures what it is supposed to measure. It follows that the term valid when
used to describe a test should usually be accompanied by the preposition for. Any test then may
be valid for some purposes, but not for others” (Henning, 1987, p. 89). This is an important point
to remember when testing the validity of the different testing instruments we are proposing (i.e.
generalist and specialist tests).
External and internal validity relate to the methods for assessing validity. Internal validity relates
to studies of the test content and its perceived impact, while external validity (or criterion validity)
relates to the relationship between a candidate’s test scores and measures of their ability beyond
the test. One very important type of validity is ‘context validity’: the extent to which test tasks
compare to real-world tasks undertaken in (in this case) translation and interpreting professional
practice (Weir, 2005, p. 19). The current NAATI exams, and in particular the Interpreting exam,
seem to be low in context validity, for the reasons outlined above.
It needs to be acknowledged that test developers face additional challenges when designing
interpreter and translator tests, as compared to test designers in other more established fields
due to a lack of empirically defined and supported models of translator and interpreter
competence, and a lack of research into existing tests. The lack of research into accepted
models of competence leaves test developers with little more than untested theoretical
frameworks or practitioner experience as the basis for test design, including the design of test
passages and scoring rubrics. The lack of a body of research into existing tests means that
there are no accepted standards for the validity and reliability of translator and interpreter tests
and no tried and tested methods for undertaking this research. Below we review the few
research studies into issues of validity and reliability in the field of interpreting and translation.
Clifford draws on Berger and Simon’s (1995, cited in Clifford, 2001) list of principles of
psychometric evaluation (reliability, equity and utility) and adds a further principle, comparability, as
one which is also important in interpreter assessment. Clifford’s (2001, p. 374) descriptions of
the principles appear in Table 14 below:
For interpreting, content validity applies where a test measures interpreting abilities across a
range of scenarios that are determined to be typical and common interpreting situations found
across most if not all areas, as we stressed above. For the development of English-sign
language tests in Canada, Russell & Malcolm (2009, p. 356) sought to ensure content validity
by selecting “test segments created based on community consultation with interpreters and
consumers of interpreting services, along with interpreter referral agencies, in order to plan test
scenarios that are realistic and reflect the broad range of settings where ASL-English
interpreters typically work”. The choice and selection process for sourced materials was similar
to that employed by Angelelli (2007) in a test designed for medical interpreters. Russell and
Malcolm’s segments were sent for review and perusal by a number of parties.
Construct validity is apparent where inferences can be legitimately made from testing
performance to the theoretical constructs on which the criteria are based. Russell and Malcolm
(2009) also sought to ensure construct validity in their testing apparatus. An initial test
developed in Canada surveyed a number of theoretical approaches and based its perspective
of performance measurement on discourse analysis. One of the most comprehensive
investigations into a psychometric feature of assessment, that of (construct) validity in
interpreting testing, was undertaken by Clifford (2003), who approached interpreting performance
from a discourse perspective and applied three constructs (intelligibility, informativeness, style)
to a revised test, given to 15 trainee or practising French-English conference interpreters in
Canada. Clifford (2005, pp. 120-122) argues that previously-used tests had been unable to
distinguish between these constructs and that test-takers tended to attain either high or low
scores for all of the constructs together. His revised test provides a consistent relationship between
test scores and statements about the related constructs (Clifford, 2005, p. 127).
Vermeiren et al. (2009, p. 303) add a further sub-category of construct validity, which is the
“measure of agreement between the test and the concept or domain it is derived from. In an
educational concept, this would be the relevant curriculum (knowledge and skills), in a
professional context the professional standard (related to specific knowledge and skills)”.
Vermeiren et al. (2009) also posit ‘predictive validity’ as a term referring to a test taker’s future
performance being predicted from their prior performance. Elsewhere, others term this
“maintenance” (Lysaght & Altschuld, 2000, p. 95). NAATI’s introduction of a finite time-length for
new accreditations, renewed through a process of revalidation, is a move that addresses this
psychometric feature of testing, ‘predictive validity’ or ‘maintenance’.
The other psychometric principle that this discussion focuses on is that of reliability. Reliability is
usually sub-divided into different types: inter-rater reliability (typically examined in relation to the
assessors who use a test, and the extent to which a test design can ensure comparable marking
outcomes amongst different assessors); ‘intra-reliability’ (the same assessor marking the same
test for different test-takers); and test-retest reliability (the same assessor marks the same test
according to the same criteria after a sizeable time interval). Russell and Malcolm’s (2009) revised
sign-language-English testing regime for certification in Canada claims that inter-rater
reliability is addressed through the training of raters and the use of a high number of raters (six)
for each test assessment: three meet collectively to mark the assessment, while three receive
recordings of the test performance and mark them individually. Vermeiren et al. (2009, p. 304) also
advocate rater training to increase rater reliability for translation marking. Inter-grader
triangulation of criteria is also advocated by Vermeiren et al. (2009), who further support
repeated testing for every criterion, the application of consistent indexes in assessors’ marking
and systematic test-retest exercises in training.
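As an illustration of how inter-rater reliability might be quantified in such a study, the sketch below computes simple percent agreement and Cohen's kappa (agreement corrected for chance) for two hypothetical raters assigning pass/fail outcomes; the ratings are invented.

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of cases on which two raters give the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented pass/fail ratings from two raters over eight candidates.
rater1 = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "fail"]
rater2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail"]
print(percent_agreement(rater1, rater2))          # 0.75
print(round(cohens_kappa(rater1, rater2), 2))     # 0.5
```

Statistics of this kind are what a validation study would report when claiming, as Russell and Malcolm do, that rater training and multiple raters per test yield acceptable inter-rater reliability.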
Tests therefore need to be both valid and reliable: reliable as tools that can be administered to
all candidates and administered and marked in a uniform way, and valid so that they can
measure what the test designer wishes to measure. A third important characteristic is that of
authenticity, i.e. “the degree to which tasks on a test are similar to, and reflective of a real world
situation towards which the test is targeted” (Angelelli, 2009, p. 20). In the case of NAATI tests, as
discussed above, handwritten translation tests or interpreting tests using pre-recorded
dialogues are examples of artificial situations that do not reflect authenticity.
NAATI examiner panels are typically drawn from:
§ practitioners
§ T&I educators
§ non-T&I language academics (less often now than in the past).
In some cases (especially when that language community is very small and/or newly arrived),
panel members may not belong to any of the above groups, but simply be L1 speakers of the
relevant LOTE.
While NAATI prefers language panels to include both L1 LOTE speakers and L1 English
speakers, in the case of many languages (not only newly arrived ones) there are hardly any L1
English speakers who have the level needed to be an examiner, and the entire panel is
therefore made up of L1 LOTE speakers. For translating LOTE > English, this lack is
compensated for by a panel of ‘English markers’, but there is no such provision for interpreting
tests. Many examiners, especially those who are practitioners, may have a good instinctive
sense of what is satisfactory translating or interpreting, but do not have any background in the
theory of translating and interpreting (as evidenced by our survey), which may result in a
tendency to take an over-literal approach to marking, or focus too much on quality-of-language
issues. In addition, many examiners may have little background in the theory of assessment.
This can be a particular problem with examiners for some Aboriginal languages. One suggestion
would be to have such examiners partnered with someone who can assist them to “translate”
their comments and assessment into standard marks.
The statement “There should be compulsory training for all NAATI examiners” in our survey
received the highest percentage of agreement from all respondents combined (84.5%). This
indicates that there is also a clear public perception that the current examiners may not be
adequately qualified, thus jeopardising the credibility of NAATI testing. One possible way to
improve this, in addition to compulsory training, is through a more rigorous and more
transparent recruitment process. Open calls for applications should be widely advertised on a
regular basis, with three-year contracts, renewable upon successful review by the panel chair.
We believe that the criteria for applicants must also be amended to include formal higher
qualifications in Interpreting and Translation, in addition to NAATI accreditation, where applicable. We
believe more examiners should be recruited from the graduates of the existing NAATI-approved
courses. Such highly qualified examiners can also assist NAATI in training other examiners with
no formal I&T education background. Many I&T graduates are practitioners and part-time
educators; a further benefit to NAATI is that, as educators, they are already familiar with the NAATI
examiners’ manual as well as with other assessment practices relevant to the institutions
for which they work.
At present, there is no consistent formalised training program for examiners. For many years,
NAATI was fairly assiduous in organising training workshops at least yearly, and sometimes
twice yearly, and continued membership of a panel was (at least officially) conditional on
attendance at these workshops. However, because these were often held only in major capital
cities (because numbers in the smaller capital cities did not make it viable), and had to be held
on fixed dates, not all examiners were able to attend, and some chose not to attend for
considerable periods. NAATI was often limited in its ability to enforce attendance because it
could not afford to lose panel members in some smaller panels, or to pay them to attend. On the
other hand, NAATI has been trialling, and now hopes to implement more widely, a system
where members of each panel are brought together in one location to engage in intensive
workshopping of test setting and test marking, in response to one of the recommendations in
the Cook Report. Some logistical difficulties can be overcome with other methods of delivery
such as online training.
The conclusion of all of this is that, if any new system is to be introduced, examiners must have
adequate training.
“… paper and dictionaries that they might not even use in daily life. I know many great translators who have
failed the NAATI exam because of this” (Survey practitioner respondent)
New Technology: With the advent of much new ICT the Board hopes that this will be
taken into account in considering the practicality of proposed changes to testing. The
Board believes that there are potential benefits in administration, logistics, access,
assessment and reduced postage. While the Board is not advocating technological
determinism in proposed models as against an evidence-based conceptual framework,
it is hoped that the benefits of available and emerging technology will be captured as
much as possible. In holding that view the Board notes that it does not want NAATI
‘captured’ by unique technology that is difficult to maintain and may pose difficulties in
access by the typical community of NAATI clients (National Accreditation Authority for
Translators and Interpreters, 2010, p. 6; see Current Views of the Board)
The Board warns that the apparent attractions of using new technology in testing cannot ignore
the potential drawbacks, including issues of access and equity for candidates and possible
technological dependence of NAATI itself. In short, technology must be the servant, and not the
master. Caveats and potential pitfalls aside, the call for using computers in examinations seems
so strong that the main questions appear to be ones of ‘when’ and ‘how’ rather than simply ‘if’.
The 2001/2 NAATI review already made recommendations for the use of computers in
translation testing with the main argument that the current pen and paper tests do not reflect the
current practice of translation practitioners, as expressed by the quotation above from one of
the respondents to our survey. The 2001/2 Review identified two types of resources that can be
available to translators: (1) dictionaries, glossaries, parallel texts (texts from the same genre or
on a similar topic), terminology databases, online and offline electronic resources, computers,
software (spell check, grammar check) and the Internet; and (2) email, mobile phones and
translation memory. The recommendation from the Review was that only resources under
category 1 should be allowed in the NAATI translation test (NAATI Test Review Translators
Group, 2001, pp. 16-17).
The Cook Report also strongly advocates the use of computers in NAATI Professional level
testing. The authors of the Report offer a number of reasons for introducing computerised
translation tests.
Cook and Dixon subsequently explore the logistics required to implement computer testing,
such as the number of computers that NAATI would need to own or hire, and the technical
assistance required to enable access to LOTE scripts while ensuring functionality limits
elsewhere, such as on Internet resources. On balance, the authors ultimately consider that the
advantages gained by doing tests on computers would outweigh the additional upfront
expenses, and proceed to suggest ways of implementing it.
The computerisation options suggested in the Cook Report are four in number, wherein NAATI
respectively 1) purchases and maintains the computers and associated software; 2) hires
computers and testing venues from schools and universities; 3) allows candidates to use own
computers (as already implemented in Advanced Level testing), and 4) negotiates with a
computer manufacturer to provide candidates with a laptop (including appropriate software) on
which to undertake the testing, at a favourable cost. Two recommendations are finally made.
Table 15: NAATI Examiners’ comments on computer use for translation examinations
NAATI Marker Responses (number of markers, n=11)
Tests should be:
Conducted on a computer 10
Handwritten 1
Candidates have the option to use the keyboard or to hand write it 1
In testing ‘on a computer’, the hardware should be:
NAATI owned or leased 6
Candidate allowed to bring their own computer 3
Aids allowed:
No aids 2
Hardcopy only 3
Dictionaries only (hardcopy and softcopy) 4
Digital aids only (no hardcopy) 3
If digital aids allowed, which ones?
Spelling & grammar checkers and other aids offered in Word 4
Off-line dictionaries only 5
Dictionaries only, online and off-line 3
Unrestricted (but not TM or MT) 7
Unrestricted (including TM and MT) 2
On choosing a platform to manage the testing process priority should be given to:
Web-based, with candidate working on the browser, not the hard drive 7
Security, candidates not allowed to keep a copy of test 7
User friendliness 5
Allowing for remote testing 1
The prompts admittedly allowed respondents to support more than one option, but the numbers
nonetheless give an insight into relative levels of importance or acceptance. Significantly, the
vast majority (ten) seemed in favour of using computers only (since an either/or response was
also available). Of what might be termed the two dissenters, the one who supported retaining
the current handwritten form only felt that “some candidates may not be sufficiently familiarised
with computers”37. The other favoured implementing computer examinations but also retaining
the handwriting option. With regard to resources, four favoured dictionaries only (both hard and
soft copy), but if digital aids were allowed, then seven favoured unlimited aids such as spelling
and grammar checking and online and offline dictionaries. These responses are consistent with
the recommendations of the 2001/2 Review and can be summarised as what would
seem to be an acceptable format for Translator testing.
Security was regarded overall as more important than user-friendliness, and although web-
based exams would make remote delivery possible, that scenario was only accepted by one
respondent. We also received a contribution from a translator from Japanese into English who
explained that translators working from different scripts, particularly with English as the L1, face
additional difficulties which could in part be solved by testing in a digital environment.
37 No reason given, though we might speculate a concern for emerging languages and/or demographically and socio-economically
differentiated groups (e.g. age/gender/ethnicity/educational opportunities/country of origin).
The above assumes that computers must be used in accreditation tests to reflect current
professional practice. However, there is a fundamental question that needs to be assessed
before deciding on this step: what is the main aim of the NAATI accreditation examination? In
other words, does it aim to assess a candidate’s basic, core translation skills as a novice
translator or does it aim to assess an experienced translator’s ability to produce a professional
translation product? It appears to us that the first aim could be applicable to the current NAATI
accreditation system which does not require any pre-testing training, and that the second aim
could more adequately apply to our new suggested accreditation system which will involve
compulsory hours of training, including training on the use of translation technologies, as is the
case in current formal translation courses. We will therefore continue the discussion in support
of the use of computers in accreditation tests and discuss issues relating to logistics and
security.
Word processors in particular must be classed as an aid and not simply a medium, because
they permit not only text recording but also text manipulation (spelling and grammar
checking, copying and pasting, storage, etc.).
With Proxy Mediation the examining entity outsources supervisory control (full or partial) over
exam implementation and/or marking to an external agent. For example, Microsoft Certification
Programmes employ the services of the Thomson Prometric company (now a subsidiary of
Educational Testing Service – ETS); Australian high school examinations are delivered,
supervised and recorded in-house but marked externally. There is clearly no bar to a Proxy
being engaged to replicate precisely the same strictures and controls as the Host entity would
apply itself, but the process is now removed from the Host, with a corresponding
decentralisation of control and accountability.
In Client Mediation the candidate (‘client’38) plays a self-supervisory role that may be more or
less significant depending upon the circumstances. Thus, if an institution (or Proxy thereof)
examines candidates on-site, but allows them to use their own computers (or simply, say,
personal reference materials or aids), this inevitably relinquishes a degree of control to another
(highly interested) party. In the case of certain voluntary or non-prescriptive qualifications,
candidate autonomy may be extreme (e.g. undertaking exams at home using own equipment,
as with SDL Trados Certification), and strict time limits may not always apply (untimed exams).
Such tests are also encountered in self-directed studies, correspondence courses,
open learning, adult education, and online distance-mode studies. In the global translation
marketplace, this is also a common type of test procedure applied to translators seeking to join
agency panels, as identified in our review of accreditation systems around the world (see point
2 above). Client mediation thus supposes inherently lower guarantees, posing particular
challenges with the copying of material, vetting of candidate identity and restriction of exam aids
– although if time limits can be imposed (e.g. online examination with timed logins), these can at
least help curtail opportunities to seek unfair advantage.
ATA’s proposed new format combines old-fashioned elements (on-site Invigilators) with ultra-
modern ones such as centralised server control, with fully digital exam script delivery and
recording. There is apparently mediation through a proxy – Amazon – and it is also not clear
whether the exam sitting room will be located at ATA, or some other premises.

38 The term is not used ill-advisedly, since the modern approach to education tends to view students as paying customers, contrasted against a traditional teacher-disciple paradigm in which students (and examinees) are passive and subordinate.

Candidate autonomy is minimal, unless they can somehow bypass server control. At face value, ATA’s
proposed system appears safe and workable. But as we have repeatedly had occasion to
observe, whenever a computerised exam environment is contemplated, security management
in all its aspects (candidate identity, exam theft, fraud or manipulation) becomes vulnerable, and
the solutions complex – and by extension, expensive. The apparatus, planning and execution
for ATA’s proposed process are all necessarily more elaborate. For example, allowing a
candidate’s computer to act as a terminal to a host must entail installing some form of client
software, and presumably firewalls and malware protection must be in place to prevent malicious
circumvention and re-routing. ATA will also provide for a lengthy transition period in which
keyboarded and handwritten exams will co-exist, which seems to be a necessary measure for
any introduction of technology in translation tests.
One argument for imposing limits on exam computerisation is that technological advancement
in the digital world is rapid, and in the space of only a few years some new technologies may
seem objectively unfair when contrasted against what candidates in earlier years had available.
Potential solutions to this difficulty might entail placing a ceiling on permissible exam technology
– e.g. word processor only. Whichever course is taken, each approach yields ‘frozen’ test
situations that will become antiquated as the real world moves on, and each would be unwieldy
to implement.
On the other hand, if we accept that resourcefulness is a key attribute of the translator (which
was supported by the results of our survey), then a strong case can be made for simply
replicating the modern working environment and allowing candidates unfettered access to
whatever computerised resources they wish – even Machine Translation, which is now a
common adjunct to commercial Translation Memory suites and a tool taught in current
formal translation courses. Replicating professional working conditions in turn suggests
that candidates might sit exams remotely at home, precisely so they can demonstrate and use
whatever resources they have had the perspicacity to acquire. Removal of direct supervision
then raises the concern of identity and exam fraud, but this can be lessened if the candidate has
already been ‘captured’ by and is known to the system through having completed the pre-test
compulsory training and accompanying tests39. We therefore recommend that this be the course
taken for translation examinations.
39 Or consider the case of the Institute of Linguists, which allows candidates five years in which to complete the exam cycle. Apart from simply accommodating schedules of candidates, it adds an important diachronic dimension to their contact with the Institute (one obvious flaw of ‘one-shot’ accreditation being that the examination constitutes the first and only contact between examinee and examining entity).
lingual transfer of items and texts. Laptops, notebooks, handheld personal digital assistants,
together with voice recognition technology now offer ‘instantly’ translated text to interpreters,
sometimes even in spoken form. However, as Donovan (2006), Veisbergs (2007) and
Winteringham (2010) conclude, the immediate nature of interpreting makes recourse to textual
sources very impractical, if not impossible.
Telephone interpreting, trialled for the first time in Australia in 1973 (Kelly, 2008, p. 5), now
occupies a standard place in the provision of interpreting services not only in Australia, but in
most other Anglophone countries of the New World, in western Europe and increasingly in other
areas of the world. The market of large telephone interpreting companies such as Language
Line is now global, and these companies market their services to customers worldwide. Most
telephone interpreting providers are private, although the world’s second-largest, Australia’s
TIS, is still publicly-funded. Telephone interpreting has been widely used in medical/healthcare
situations since the 1980s (Hornberger, 1998; Kuo & Fagan, 1999; Lee, Batal, Maselli, &
Kutner, 2002; Leman, 1997). In some contexts, telephone interpreting can be the default means
of providing interpreting services: one major health provider in Melbourne has adopted, from the
start of 2012, a policy of telephone interpreting as the preferred choice for consultations of sixty
minutes or less. In a study of his own and others’ data, Rosenberg (2007) found that two-thirds
of telephone interpreting assignments were healthcare-related and one-third commercial.
Elsewhere, Chesher et al. (2003, p. 283) report that amongst community interpreters in several
countries, the proportion of telephone interpreting is comparable to that of face-to-face
interpreting. Kelly’s (2008) comprehensive description of the logistic, ethical and personal
management issues that pertain to telephone interpreting relates not only to all community
interpreting settings but also to a variety of others such as business and tourism. Ozolins (2011)
reports on the world providers of telephone interpreting services, such as US-based Language
Line and Cyracom and Manpower Business Solutions (The Netherlands). Many of these
telephone interpreting agencies offer testing. The NAATI accreditation Interpreter tests, on the
other hand, have never included any aspects of telephone interpreting, and although this skill is
covered by some current formal interpreting courses, it does not feature prominently in any of them.
4.2.1.1 Telephone Interpreting Test – the Language Line interpreter skills test
The largest single global provider of telephone interpreting services is Language Line (Ozolins,
2011), which primarily offers telephone interpreting services, but also on-site interpreting,
translation, as well as training and testing. Language Line offers two training courses:
fundamentals of interpreter training and advanced medical training for interpreters. The courses
are offered over the phone, with advice provided by instructors about modules and role-plays
enacted, while the trainee is left to work with training manuals that focus strongly on
terminology. Language Line offers a larger number of tests, all delivered by telephone:
language proficiency test (English or LOTE), interpreter skills test, medical certification test and
court certification test. The only way to have access to the test is by sitting for it. For this
purpose, Dr Hlavac contacted Language Line and booked in for an interpreter skills test for 24
February 2012 (The interpreter skills test is presented as the “entry level assessment for
working interpreters” on the Language Line website). Information supplied to test candidates
prior to the test set out the test duration, the role of the examiner (as both the English-speaker
and LOTE-speaker), the five assessment criteria (accuracy, listening and retention skills,
grammar, knowledge of terminology and interpreting style), the paper-copy reference sources
allowed, and the allowances for requests for clarification or repetition. Test candidates are also supplied with a
transcript of a model dialogue between an English-speaking healthcare employee and a LOTE-
speaking patient. No special telephone equipment is required to attempt the test other than a
keyphone. The candidate was contacted on the day of the test by the examiner who
explained the format of the test and repeated the protocols expected of interpreter performance
– use of first person, recommendation to take notes and allowance for requests for clarification.
During the explanation given to the author before the test, the examiner emphasised that
requests for clarification are not penalised and that the test candidate should ask for repetition
or clarification if medically-related information heard was unclear or not retained. Further, the
examiner explained that test candidates were welcome to refer to medical dictionaries to clarify
the use and translation of medical terms. These instructions differ from those provided to NAATI
test candidates who are penalised for two or more requests for repetition and who are not
permitted to refer to dictionaries during tests. The protocols for the Language Line test are
therefore, in some ways, adapted to the situation that successful candidates are likely to find
themselves in, should they commence work as healthcare telephone interpreters. In such
situations, where an interpreter requires clarification of medical terms or diagnosis, this
requirement overrides the general need in interpreting interactions to maintain a normal flow of
information exchange. The test itself contains only one dialogue, which the test candidate
interprets bi-directionally. There are no sight translation tasks, no speech interpreting, no ethics
questions, and no questions on the social or cultural features of either language community. No
information is elicited on a test candidate’s prior education, details of language acquisition,
occupational experience, general aptitude or motivation. No screening procedure is required
before admission to the test. The Language Line testing system appears to work from the
premise that test candidates will self-select for test admission, or be nominated by their
employer or another organisation, on the basis of linguistic, occupational or other personal
attributes that recommend them for testing and certification.
The interpreted dialogue in the test lasted eighteen minutes and generally followed the format
contained in the sample. The examiner explained that there would be salutations exchanged at
the start and at the end of the dialogue and the conclusion of the dialogue would be clearly
indicated by the examiner. The examiner also explained that the test would be recorded and
that there may be slightly longer pauses between exchanges to allow a clear distinction in the
recording between source speech and interpreted turns.
During the test, Dr Hlavac made notes, requested clarification once, and otherwise adopted the
role of a professional interpreter who has extensive experience in on-site interpreting and some
experience in telephone interpreting. Upon completion of the dialogue, the examiner informed
the test candidate that the test was over and that an assessment of performance would be made on the
basis of the recording. Information was not provided on the number of assessors who would
evaluate the recording. The assessment would provide the basis for a results report that would
be sent to the test candidate within ten working days.
With regard to the macro-level (or psychometric) features of the testing procedure, some
features could not be evaluated. The author had no access to other examples of the Language
Line interpreter skills test, and therefore cross-test consistency and overall reliability cannot be
evaluated. The actual test was congruent with the sample test provided in terms of content,
number of words per turn, terminological specialisation, grammatical complexity of utterances
within turns, and the variety of turns containing different numbers of key messages. In terms of the test’s
relationship to the content and activities required of a healthcare interpreter, these appeared
congruent, and the test meets the criterion of authenticity.
The Language Line test has a focus which is specific to the means of communication that test-
takers are potentially going to use as practitioners (as telephone interpreters) and a focus which
is specific to the field of telephone interpreting that its services typically relate to (i.e., healthcare
interpreting). The specific focus of the test design accounts for the nature of the test in
comparison to the on-site NAATI test. The main features that sufficiently distinguish telephone
interpreting from on-site interpreting for the former to justify a separate testing structure are the
following:
Video-link interpreting is now also regularly used in prison and remand situations. In one of the
few studies to address not only interpreters’ but also others’ (e.g. court clerk, defence advocate,
prisoner) experiences, Fowler (2007) reports serious problems in the acoustic and visual access
to source speakers, leading to constant requests for repetition and instances of
miscommunication. Despite these findings, remote interpreting via video link is increasingly
being used, both in conference and community settings, and it is a mode of interpreting that
cannot be ignored. The expansion of video-link and remote interpreting in Europe and in
European courtrooms over the last decade precipitated interest in this medium and led to EU
funding for the AVIDICUS project, led by Sabine Braun. The aims of the AVIDICUS project were
to evaluate the quality of video-mediated interpreting in criminal proceedings and its viability
from an interpreter’s point of view. The final reports of the AVIDICUS project were published in
2011, containing a list of recommendations (Braun, 2011) and training modules for interpreting
students, practising legal interpreters and legal practitioners (Braun et al., 2011). Braun et al.
(2011) report that clear majorities of both trainee and practising interpreters strongly support
training specific to these means of interpreting. The implication of this finding is that such
training should be strongly considered as a component of our suggested pre-test training, and
that these means should possibly feature in the interpreter test itself.
The questionnaire was made available in hard-copy form to potential informants who attended
the forum of invited practitioners, agencies and examiners held on 21 February 2012 at RMIT,
Melbourne, organised jointly by RMIT and Monash University, the two institutions with working
parties for this joint project. Attendees at this forum completed eleven practitioner
questionnaires and seven examiner questionnaires. The remaining surveys were completed by
practitioners and examiners known to the
researchers who responded to a global email invitation to participate and who provided their
anonymous responses via an electronic Survey Monkey address.
40 On the topic of digital technology and “Consec-Simul” (aka SimConsec), see also Hamidi & Pöchhacker (2007).
belief relates largely to the use of video-link technology, with which a majority of informants had
not yet had contact. In other words, informants see the means through which communication
between participants in an interpreting interaction occurs as amenable to technological change,
but not the activity of inter-lingual transfer as such. There is scepticism that technological
innovations will be able to do much more than help practitioners prepare for assignments and
serve as an aid for some forms of telephone interpreting where recourse to online sources is
considered logistically possible. Overall, while a majority of informants do not report experience
with telephone, video-link or remote interpreting, there is widespread consensus amongst
informants that these means of interpreting will become more common.
4.2.4.2 Survey results from practitioners and examiners on the use of technology for interpreting tests
Practitioners were generally in favour of audio and video recording of candidates’ performance.
Many, however, preferred audio recording only to protect candidates’ anonymity where there is
a possibility in smaller LOTE groups that examiners may know the candidates. Others see the
importance of video-recording to show candidates’ inter-personal skills, demonstration of role
relationship, coordination skills and use of paralinguistic markers. For signed interpreting
testing, video-recording remains essential.
With regard to remote or distance testing, only 20% of practitioners supported this as a
good idea. Advantages of remote testing nominated by informants include: lower travel costs
and greater access to candidates in remote areas. Some disadvantages included: doubts about
transmission quality, requirement to train examiners and conduct pre-test training for
candidates. Others also mentioned the difficulty that candidates could have in connecting to the
(test) discourse environment, and that most interpreting interactions are on-site, where physical
presence is required.
The examiners were evenly spread in their attitudes towards the benefits of advanced
technology for testing. There were neutral and negative responses which cautioned against
video-link technology as a communication means for testing, highlighting the following as
potential problems: unfamiliar technology as a possible distraction for test candidates, stress
and lower performance in the event of technical problems, and doubts that variable bandwidth
could ensure good video reception for both candidate and examiner.
In the process of examining candidates’ performance and interacting with other examiners,
almost all examiners expressed an interest in video link-ups and/or online exchanges with other
examiners, and even restricted-access web pages with a repository that stores examiners’
reports and allows other examiners access to them.
In general, there are very mixed responses to the idea of video-link testing – while many
informants can see merit in it through a widening of access to testing for previously
disenfranchised groups, others have concerns about the quality and feasibility of video-
link/remote testing as a fair and reliable means of testing. These responses also indicate that if
video-link/remote testing is adopted, it would require pre-test training to familiarise the test
candidate not only with the technical equipment to be used in the test, but also with the altered
discourse and personal protocols of remote communication compared with face-to-face testing.
of dialogue interpreting, sight translation, consecutive interpreting and ethical questions. The
technical specifications of the testing circumstances were specific to the training that preceded
testing. A web-based Collaborative Cyber Community (3C) was used for testing purposes
(Chen & Ko, 2010, p. 155). Overall, the technology used in Chen & Ko’s (2010) test was able to accomplish
the requirements of the NAATI test in regard to test delivery and recording of candidate
performance for all components of the test. Chen & Ko’s 2010 and 2011 studies advocate
further development and trialling of online testing, and the authors seem likely to continue in this
direction themselves. The implication of their study is that remote testing through computers is a
realistic possibility, and this development should be considered very strongly should NAATI
move to allow the use of computers for interpreting and translation testing.
The video-link was arranged between two campuses of Monash University, through its Video
Conference Services. The system used was Tandberg Edge 95MXP with a transmission speed
of up to 4 Mbps. Video input was provided by a video camera and video output was provided by
two televisions. Audio input was provided by a microphone and audio output through speakers
located in one of the televisions. A system computer tied together these components, initiating
and maintaining the data linkage via the network. The entire simulated test was recorded.
In the simulated test, one screen contained a screen shot of the examiner’s computer showing
the test items (i.e. tracks from the sample test CD and a Word document for the sight translation
task). The other screen followed a voice-activated switch (VAS). This means that the multipoint
control unit (usually set up or controlled by technical support staff) adopts a setting that switches
which endpoint is shown to the other endpoint according to the level of each speaker’s voice.
This setting is important for the record function, because the recorded version of the interaction
followed the voice-activated switch protocol – the recording showed, in one of the screens, the
participant speaking at any given time, switching as one participant finished speaking and the
other began. During the actual interaction, the VAS was not operating: on one of the
monitors, the participant could continuously see the other participant and could see him/herself
in a small box in the top right hand corner of the same screen. The other television screen, as
stated, showed the desktop of the computer that the examiner was using to access the
dialogues and sight translation document. Both participants were able to establish optimum
input and reception of audio and visual features through trialling different microphone positions
and seating arrangements. These features are important to establish in pre-test contact. The
dialogue interpreting exercise was led by the test administrator who pressed the play and pause
button to regulate audio output. The test candidate was able to receive audio output from the
test administrator clearly. The test candidate also needed to signal to the test administrator
when each interpreted turn had been completed. This simulated dialogue interpreting exercise
was accomplished in a similar fashion to conventional on-site dialogue interpreting testing.
The sight translation exercise was provided to the test candidate by the test administrator as a
Word document, on the screen. Because the document was delivered electronically, the
candidate could not make notes on the text to be sight translated. This means that a test
candidate would be required to make notes on a separate document (e.g. their own notepad),
which is less convenient. However, it is conceivable that in the future, electronically delivered
texts, such as on hand-held devices, e-readers, tablets, etc., could become more common in
everyday interpreting practice, making an electronic source text an authentic sight translation
task. The sight translation task was recorded in the
same way as the dialogue interpreting task.
The consecutive interpreting component and ethical questions were not trialled as the recording
of these is no different from that of the previous two components. The test was concluded after
about forty-five minutes and a request was made to technical support staff to supply both
participants with a recording of the interaction.
Other synchronous audio-video communication systems such as Skype allow for audio-video
recording of transmissions through programs such as EVAER (Excellent Video and Audio
Recorder). However, the quality of Skype video and audio output is too variable to provide a
reliable platform for testing. At present, Chen & Ko’s (2010) study, which was based on the use
of a synchronous audio-video learning platform with a finite number of participants and with
technical specifications that allow synchronous recording without transferral to a recording
source file, appears to be the best available model in which minimum technical specifications
can be guaranteed.
settings and modes that interpreter training can now offer to trainees (cf. Mouzourakis, 2008).
These developments greatly assist trainers and trainees alike in simulating the types of
interactions that practising interpreters find themselves in. Technology-assisted distance
education can be considered for our suggested pre-test compulsory modules, especially for
emerging languages for which there are no current courses.
In Australia, Ko (2008) reports teaching interpreting through distance education in a study that
compared off-campus teaching to on-campus teaching. The four modes of interaction with off-
campus students were (sound-only, multi-group) teleconferencing (for dialogue interpreting,
consecutive interpreting and some sight translation), telephone (for consecutive interpreting and
sight translation in pairs), bulletin board (study materials and texts) and email (general and
specific correspondence with students). Ko’s (2008) comparison of pre-test, final and
independent test results for control groups of on-campus and off-campus trainees showed no
significant differences in performance, suggesting that trainees can learn and become skilled in
interpreting through distance education with no disadvantage compared to on-campus trainees.
Testing was not conducted through distance education in Ko’s (2006) study, but in a
subsequent one (Chen & Ko, 2010).
In Norway, Skaaden and Wattne (2009) report on teaching interpreting through remote (i.e.,
distance education) means to 116 students for a twelve-month course on community
interpreting. ‘Remote’ here refers to web-based delivery of teaching materials to trainees, with
all interaction between instructors and trainees, and amongst trainees themselves, conducted
online. The students had already gone through a screening procedure consisting of a bilingual
lexical knowledge test and an oral test in which simulated consecutive interpretation of fifteen to
twenty short sequences was recorded. The remote course was designed for Norway, which,
like Australia, has a thinly spread population in which long distances are a disincentive to
student participation. Time constraints were also considered a motivation.
In the United States, technological advances in remote communication have given rise to a
large number of providers of educational training who now use remote means to deliver course
content and even to conduct testing. The US National Centre for Interpretation, based at the
University of Arizona, has a combined approach with some remote training courses for
interpretation, together with on-campus classes. Testing, however, is on-campus only (email
correspondence, February 11, 2012).
Other major remote education providers of interpreting training are the providers of telephone
interpreting services themselves. The main reason for this is these providers’ desire to offer
training and testing to trainees who are interested in becoming employees or contracted staff of
such agencies. One of the largest telephone interpreting agencies that specialises in
healthcare telephone interpreting, US-based Cyracom, offers both training and testing to
trainees through remote means. Interpreter Education Online (IEO) offers training programs in
general, legal and medical interpreting. It also offers testing in simultaneous and consecutive
interpreting and sight translation, with all three offerings available as general tests, tests with
legal terminology or tests with medical terminology.
The Medical Interpreting and Translating Institute Online offers three training programs:
beginner, intermediate and advanced interpreting, each of approximately forty hours’ length, in
Spanish-English interpreting only. The courses consist of lecture notes sent in hard-copy,
videos and a prescribed textbook. There appears to be no test that this educational provider
requires other than successful completion of the training, after which trainees are recommended
to US-based health providers that require the services of Spanish-English interpreters. The
Berkeley Language Institute offers some online training but no testing. Pacific Interpreters offers
mostly telephone interpreting services but also on-site interpreting and document translation.
The largest global telephone interpreting agency is US-based Language Line, which services
not only North America but all parts of the world where English is one of the languages for
which interpreting services are required. Language Line also provides on-site interpreting,
document translation, training and testing.
For the languages for which formal NAATI approved courses are available, candidates should
be advised to enrol in such courses as the preferred method of obtaining accreditation. Where
candidates’ languages are not offered as part of NAATI approved courses, candidates will be
directed to follow the staged approach as outlined in the new proposed model. The new model
recommends training modules in theory and practice that can be delivered mostly in English
and through flexible modes. We acknowledge that there are currently limited opportunities for
such training and that should this recommendation be accepted, such training modules will
need to be made available before compulsory training is implemented. We therefore
recommend that, once such training becomes available, no accreditation be granted to
candidates who have not undertaken Interpreting and/or Translation training.
Implementation suggestions: We propose that NAATI commence the process that leads up to
compulsory training by first establishing the Expert Panel (see recommendation 16) to set up
the training requirements and establish what constitutes equivalence. We acknowledge that full
implementation of this recommendation can take a number of years, but the process can
commence within the next year.
It is envisaged that, having read such an information package, those who have misconceptions
about Interpreting & Translation will decide not to pursue accreditation. This will ensure that
candidates who do not have any chance of success will not waste money and time attempting
accreditation. It will also minimise potential complaints about a low pass rate.
3. That NAATI select (or devise) an on-line self-rating English proficiency test to
be taken by potential candidates for a small fee, as part of the non-compulsory
preparedness stage, as outlined in sections 2.3 and 3.1.
There is some controversy over the need to screen for language competence prior to
accreditation. For example, Turner & Ozolins (2007) in their survey found no significant
concerns over language levels. However, the results of our current study showed that language
proficiency continues to be an issue – both for those sitting for the examinations and for those
who practise in the field. Some certification bodies overseas also screen for language
proficiency before allowing candidates to sit for the certification examination. On balance, we
recommend that language screening be voluntary rather than compulsory during the
preparedness stage. Candidates should be advised against attempting accreditation if they
achieve a result lower than a set score (to be decided).
4. That NAATI language panels select (or devise equivalent) on-line self-rating
proficiency tests in the various languages, to be taken by potential candidates for a small fee.
Implementation suggestions: This recommendation will require more time than recommendation
3. We believe NAATI could task its language panels with devising their own sets of LOTE
proficiency tests and recover all costs through a fee paid by the candidates. The language
proficiency test could also be used by candidates, again for a fee, as a means of proving their
proficiency for admission to formal Interpreting and Translation courses.
Recommendations on accreditation
Recommendations on testing
In order to improve the authenticity and validity of the NAATI examinations, we recommend the
following:
8. That NAATI move, in the first instance, to computerised translator tests.
Secondly, that candidates undertaking computerised translator tests be allowed
access to the internet while taking the test41, taking account of security
considerations.
See section 3.5.2 and section 4.
Implementation suggestions: NAATI could first pilot computerised translator tests with no
internet access while exploring security considerations for the use of the internet. The pilot
phase could be implemented without much delay.
Implementation suggestions: Some NAATI approved courses already conduct their final
examinations live. NAATI could commence live testing for the major languages in the main
capital cities, where there is a sufficient supply of examiners, and organise video-recorded
examinations for the other languages. This recommendation is connected to recommendations
12 & 13, as examiners will need a revised assessment instrument to rate candidates'
live performances.
10. That Interpreting tests at the Generalist level for both spoken and signed
languages include a telephone interpreting component consisting of protocols for
identification of all interlocutors, confidentiality assurances and dialogue
interpreting only. See section 3.5.1 and section 4.2.1.
11. That a validation research project be conducted to design the new testing
instruments for Interpreting and Translation. See section 3.6.
The validation study will provide empirically based construct definitions to design the
components of the test, levels of difficulty of each component, standards, marking criteria and
test delivery. Descriptors will need to be empirically defined so that assessment tools can be
aligned with them.
Implementation suggestions: The validation study is likely to take between one and three years
(depending on its scope), and recommendations 6-10 and 12-13 therefore cannot be
implemented until it is complete. We suggest the project be funded through an Australian
Research Council Linkage grant, with NAATI as the linkage partner.
Recommendations on assessment
12. That new assessment methods using rubrics (see Table 8) be empirically
tested as part of the validation project.
41 This is being trialled by the American Translators Association [ATA], which has signalled its readiness to offer NAATI
working group members support and technical advice on the introduction of logistical protocols and recently developed
software.
13. That new examiners’ manuals be written to reflect the new assessment
methods to be adopted.
Recommendations on examiners
14. That NAATI review the current composition of examiners’ panels to include
more graduates of approved courses and fewer practitioners who hold no formal
qualifications in Interpreting and Translation. See section 3.7.
Implementation suggestions: We recommend that an open call for applications for examiners be
issued without delay. We further recommend that examiners serve for a term of three years,
renewable for another three. For languages of limited diffusion, the term of office may need to
be considerably longer. This recommendation can be implemented in time for the next call for
applications.
15. That examiners undertake compulsory training before being accepted on the
panel, and continuous training while on the panel42. See section 3.7 above.
16. That NAATI establish a new Expert Panel, with subpanels for the
specialisations, to design the curricula for the compulsory training modules and
provide guidelines for the final assessment tasks.
The Expert Panel should comprise educators from the different NAATI approved courses, with
membership rotating every five years. This recommendation is consistent with a number of
recommendations in the Cook Report. Separate expert sub-panels should be organised for each
specialisation, with representatives from the relevant industry or profession as well as from
Interpreting and Translation (for example, lawyers for the legal specialisation, health care
workers for the medical specialisation, etc.).
17. That NAATI continue to approve tertiary programs and encourage all
applicants to take the formal path to accreditation where it is available for the
relevant language combinations.
42 For Aboriginal language examiners, and possibly examiners in other languages of limited diffusion, training may be
unrealistic due to literacy/numeracy considerations. In such cases we recommend that untrained examiners be partnered with a
trained examiner, as explained in the report.
References
Amtsblatt der Europäischen Union [Official Journal of the European Union]. (2006). Europäisches Amt für
Personalauswahl (EPSO) [European Personnel Selection Office] (2006/C 233 A/01). Retrieved 3 Nov, 2011, from http://eur-
lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2006:233A:0003:0017:DE:PDF
Angelelli, C. V. (2007). Assessing Medical Interpreters: The Language and Interpreting Testing Project.
The Translator, 13(1), 63-82.
Angelelli, C. V. (2009). Using a rubric to assess translation ability. Testing and Assessment in Translation
and Interpreting Studies: A Call For Dialogue Between Research and Practice, 14, 13.
Angelelli, C. V., & Jacobson, H. E. (2009). Testing and Assessment in Translation and Interpreting
Studies: A Call for Dialogue between Research and Practice. Amsterdam/Philadelphia: John Benjamins Publishing Co.
Association of Translators Terminologists and Interpreters of Manitoba. (2011). Becoming a Member:
Certification by Portfolio. from http://atim.mb.ca/en/becomingamember/certificationPortfolio.htm
ATA, [American Translators Association]. (2011). ATA Certification Program, Certification Exam.
Retrieved 4 Nov, 2011, from http://www.atanet.org/certification/aboutexams_overview.php
ATA, [American Translators Association]. (n.d.). Flow chart for error point decisions. Retrieved 4 Nov,
2011, from http://www.atanet.org/certification/aboutexams_flowchart.pdf
ATIO, [Association of Translators and Interpreters of Ontario]. (2011). On Dossier Certification.
Retrieved 1 Dec, 2011, from http://www.atio.on.ca/services/certification/php
Australian Department of Health and Ageing. (n.d.). A Manual of Mental Health Care in General Practice.
Retrieved 16/12/2011, from
http://www.health.gov.au/internet/publications/publishing.nsf/Content/mental-pubs-m-mangp-
toc~mental-pubs-m-mangp-4~mental-pubs-m-mangp-4-int
Bachman, L. F. (2000). Modern language testing at the turn of the century: Assuring that what we count
counts. Language testing, 17(1), 1-42.
Bachman, L. F. (2002). Some reflections on task-based language performance assessment. Language
testing, 19(4), 453-476.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful
language tests (Vol. 1): Oxford University Press, USA.
Baker, M. (1992). In other words: A coursebook on translation. London: Routledge.
Beltran Avery, M.-P. (2003). Creating a High-Standard, Inclusive and Authentic Certification Process. In
L. Brunette, G. Bastin, I. Hemlin & H. Clarke (Eds.), The Critical Link 3: Interpreters in the
Community (pp. 99-112). Amsterdam/Philadelphia: Benjamins.
Benmaman, V. (1997). Legal Interpreting by any other name is still Legal Interpreting. In S. E. Carr, R.
Roberts, A. Dufour & D. Steyn (Eds.), The Critical Link: Interpreters in the Community (pp. 179-
190). Amsterdam and Philadelphia: John Benjamins.
Berger, M.-J., & Simon, M. (1995). Programmation et évaluation en milieu scolaire: notes de cours.
Unpublished manuscript. Université d’Ottawa.
Berk-Seligson, S. (1990 / 2002). The Bilingual Courtroom. Court Interpreters in the Judicial Process.
Chicago: The University of Chicago Press.
Bontempo, K. (2009a). Conference interpreter performance evaluation rubric [private teaching and
assessment resource].
Bontempo, K. (2009b). Interpreter performance evaluation rubric [private teaching and assessment
resource].
Bontempo, K., & Hutchinson, B. (2011). Striving for an “A” Grade: A Case Study of Performance
Management of Interpreters. International Journal of Interpreter Education, 3, 56-71.
Bontempo, K., & Napier, J. (2009). Getting it right from the start. Testing and Assessment in Translation
and Interpreting Studies: A Call For Dialogue Between Research and Practice, 14, 247.
Braun, S. (2011). Recommendations for the use of video-mediated interpreting in criminal proceedings. In
S. Braun & J. Taylor (Eds.), Videoconference and remote interpreting in criminal proceedings (pp.
265-287). Guildford: University of Surrey.
Braun, S., & Taylor, J. (2011). Video-mediated interpreting: An overview of current practice and research.
In S. Braun & J. Taylor (Eds.), Videoconference and remote interpreting in criminal proceedings
(pp. 22-57). Guildford: University of Surrey.
Braun, S., Taylor, J. L., Miler-Cassino, J., Rybińska, Z., Balogh, T. K., Hertog, E., et al. (2011). Training in
video-mediated interpreting in legal proceedings: Modules for interpreting students, legal
interpreters and legal practitioners. In S. Braun & J. Taylor (Eds.), Videoconference and remote
interpreting in criminal proceedings (pp. 205-254). Guildford: University of Surrey.
Calzada Perez, M. (2005). Applying Translation Theory in Teaching. New Voices in Translation Studies,
1, 1-11.
Cambridge, J. (1999). Information Loss in Bilingual Medical Interviews Through an Untrained Interpreter.
The Translator, 5(2), 201-219.
Carroll, J. B. (1966). An Experiment in Evaluating the Quality of Translations. Mechanical Translation and
Computational Linguistics, 9(3 & 4), 55-66.
Chacón, M. J. (2005). Estudio Comparativo de la Actuación de Intérpretes Profesionales y no
Profesionales en Interpretación Social: Trabajo de Campo. Puentes, 5, 83-98.
Chen, J. (2009). Authenticity in accreditation tests for interpreters in China. The Interpreter and Translator
Trainer, 3(2), 257-273.
Chen, N. S., & Ko, L. (2010). An online synchronous test for professional interpreters. Educational
Technology & Society, 13(2), 153-165.
Chesher, T., Slatyer, H., Doubine, V., Jaric, L., & Lazzari, R. (2003). Community-based interpreting. The
interpreters’ perspective. In L. Brunette, G. Bastin, I. Hemlin & H. Clarke (Eds.), The Critical Link
3 (pp. 273-291). Amsterdam and Philadelphia: John Benjamins.
Clifford, A. (2001). Discourse theory and performance-based assessment: Two tools for professional
interpreting. Meta: Journal des traducteurs, 46(2), 365-378.
Clifford, A. (2003). A preliminary investigation into discursive models of interpreting as a means of
enhancing construct validity in interpreter certification. Unpublished doctoral dissertation.
University of Ottawa.
Clifford, A. (2005). Putting the exam to the test: Psychometric validation and interpreter certification.
Interpreting, 7(1), 97-131.
Colin, J., & Morris, R. (1996). Interpreters and the legal process. Winchester: Waterside Press.
Cook, J., & Dixon, H. (2005). A review of NAATI administrative processes related to testing including
quality control processes. Sydney: Macquarie University.
Cope, B., Kalantzis, M., Luke, A., McCormack, R., Morgan, B., Slade, D., et al. (1995). Communication,
Collaboration and Culture: The National Framework of Adult English Language, Literacy and
Numeracy Competence. Sydney: Centre for Workplace Communication and Culture.
Council of Australasian Tribunals. (n.d.). Practice manual for tribunals: 5.4.2. Interpreters, Chapter 5:
Hearings.
CRC, [Community Relations Commission]. (2009). Multicultural Services and Programs. Retrieved from
http://www.crc.nsw.gov.au/multicultural_policies_and_services_program_formally_eaps.
CTTIC, [Canadian Translators, Terminologists and Interpreters Council]. (n.d.). CTTIC Marker's Guide.
Cuevas, A. (2011). T&I Labour Market in Mexico. Sydney: Centre for Translation and Interpreting
Research, Macquarie University.
Delisle, J., Lee-Jahnke, H., Cormier, M. C., & Albrecht, J. (Eds.). (1999). Terminologie de la traduction /
Translation terminology / Terminología de la traducción / Terminologie der Übersetzung.
Amsterdam and Philadelphia: John Benjamins.
DIAC, [Department of Immigration and Citizenship]. (2011). The People of Australia: Australia's
Multicultural Policy. 1-16. Retrieved from
http://www.immi.gov.au/media/publications/multicultural/pdf_doc/people-of-australia-multicultural-
policy-booklet.pdf
Donovan, C. (2006, 30 June – 1 July). Trends – Where is interpreting heading and how can training
courses keep up? Paper presented at the EMCI conference "The Future of Conference
Interpreting: Training, Technology and Research", University of Westminster, London.
Ebden, P., Carey, O. J., Bhatt, A., & Harrison, B. (1988). The bilingual consultation. Lancet, 1(8581), 347.
Edwards, A. (1995). The practice of court interpreting. Amsterdam: John Benjamins.
Eyckmans, J., Anckaert, P., & Segers, W. (2009). The perks of norm-referenced translation evaluation.
Testing and Assessment in Translation and Interpreting Studies: A Call For Dialogue Between
Research and Practice, 14, 73.
Family Court of Western Australia. (2006). Court funded interpreter services policy. July 2006. from
http://www.familycourt.wa.gov.au/i/interpreter_services.aspx on 14/01/2010
Federal Association of Interpreters and Translators. (2011). BDU. from http://www.bdue.de/indexen.php
Federal Court. (n.d.). Federal Court Benchbook: 9.3 Interpreters (received from E. Connolly in email
correspondence on 04/02/2010).
Federal Magistrates Court. (n.d.). Interpreter and Translator Policy. from
http://www.fmc.gov.au/services/html/interpreters.html on 16/12/2009
Föreningen Auktoriserade Translatorer. (2011). Advice to Customers. from
http://www.aukttranslator.se/eng/advice.asp
Fowler, Y. (2007). Interpreting into the ether: interpreting for prison/court video link hearings. Paper
presented at the Proceedings of the Critical Link 5 conference, Sydney, 11-15/04/2007.
Gentile, A., Ozolins, U., & Vasilakakos, M. (1996). Liaison Interpreting. Melbourne: Melbourne University
Press.
Gile, D. (2004). Integrated problem and decision reporting as a translator training tool. The Journal of
Specialised Translation, 2, 2-20.
Giovannini, M. (1993). Report on the Development of English Language Assessment Tools for Use in
Ministry Supported Cultural Interpreting Services for the Settlement and Integration Section.
Toronto: Ministry of Citizenship.
Gipps, C. (1994). Beyond Testing: Towards a Theory of Educational Assessment. London: Falmer Press.
Hale, A., & Basides, H. (2012/13). The keys to academic English. South Yarra: Palgrave Macmillan.
Hale, S. (2004). The discourse of court interpreting. Discourse practices of the law, the witness and the
interpreter. Amsterdam and Philadelphia: John Benjamins.
Hale, S. (2007a). The challenges of court interpreting: intricacies, responsibilities and ramifications.
Alternative Law Journal, 32(4), 198-202
Hale, S. (2007b). Community Interpreting. Hampshire: Palgrave Macmillan.
Hale, S. (2011). Interpreter policies, practices and protocols in Australian Courts and Tribunals. A national
survey. Melbourne: Australian Institute of Judicial Administration.
Hale, S., & Ozolins, U. (forthcoming in 2014). Monolingual short courses for language-specific
accreditation: can they work? A Sydney experience. The Interpreter and Translator Trainer, 8(2),
TBA.
Hamidi, M., & Pöchhacker, F. (2007). Simultaneous Consecutive Interpreting: A New Technique Put to
the Test. Meta: Journal des traducteurs, 52(2), 276-289.
Hatim, B., & Mason, I. (1997). The translator as communicator. London & New York: Routledge.
Henning, G. (1987). A guide to language testing: Development, evaluation, research. Cambridge, MA:
Newbury House.
Hertog, E., & Reunbrouck, D. (1999). Building Bridges between Conference Interpreters and Liaison
interpreters. In M. Erasmus (Ed.), Liaison Interpreting in the Community (pp. 263-277). Pretoria:
Van Schaik.
Hornberger, J. (1998). Evaluating the costs of bridging language barriers in health care. Journal of Health
Care for the Poor and Underserved, 9(5), S26-S39.
ILTA, [International Language Testing Association]. (2012). ILTA Guidelines for Practice. from
http://iltaonline.com/images/pdfs/ILTA_Guidelines.pdf
IoL, [Institute of Linguists]. (2011). Diploma in Translation. Handbook for Candidates. Retrieved 4 Nov,
2011, from http://www.iol.org.uk/qualifications/DipTrans/DipTransHandbook.pdf
IoLET, [Institute of Linguists Educational Trust]. (2004). Diploma in Public Service Interpreting. Retrieved
8 Nov, 2011, from http://www/iol.org.uk/qualifications/IoL-DPSI-Handbook-Apr04.pdf
Jacobson, H. (2009). Moving beyond words in assessing mediated interaction: measuring
interactional competence in healthcare settings. In C. V. Angelelli & H. Jacobson (Eds.), Testing
and Assessment in Translation and Interpreting. Amsterdam: John Benjamins Publishing Co.
Kalina, S. (2004). Quality in interpreting and its prerequisites. In G. Hansen, K. Malmkjær & D. Gile
(Eds.), Claims, changes and challenges in translation studies: selected contributions from the
EST Congress, Copenhagen 2001 (Vol. 50, pp. 121-130). Amsterdam/Philadelphia: John
Benjamins.
Kelly, N. (2007). Interpreter Certification Programs in the US: Where Are We Headed? The ATA
Chronicle, January, 31-39.
Kelly, N. (2008). Telephone Interpreting: A comprehensive guide to the profession. Clevedon: Multilingual
Matters.
Kim, M. (2009). Meaning-oriented assessment of translations: SFL and its application for formative
assessment. In C. V. Angelelli & H. Jacobson (Eds.), Testing and assessment in translation and
interpreting studies. Amsterdam/Philadelphia: John Benjamins
Ko, L. (2006). The need for long-term empirical studies in remote interpreting research: a case study of
telephone interpreting. Linguistica Antverpiensia New Series, 5(2006), 325-338.
Ko, L. (2008). Teaching interpreting by distance mode: An empirical study. Meta, 53(4), 814-840.
Ko, L., & Chen, N. S. (2011). Online-interpreting in synchronous cyber classrooms. Babel, 57(2), 123-
143.
Kuo, D., & Fagan, M. (1999). Satisfaction with methods of Spanish interpretation in an ambulatory care
clinic. Journal of general and internal medicine, 14, 547-550.
Lee, J. (2005). Rating interpreter performance. Unpublished MA Dissertation. Macquarie University.
Lee, J. (2008). Rating scales for interpreting performance assessment. The Interpreter and Translator
Trainer, 2(2), 165-184.
Lee, J. (2009). Toward more reliable assessment of interpreting performance. The Critical Link, 5(2009),
171-185.
Lee, L. J., Batal, H. A., Maselli, J. H., & Kutner, J. S. (2002). Effect of Spanish interpretation method on
patient satisfaction in an urban walk-in clinic. Journal of General Internal Medicine, 17(8), 640-
645.
Leman, P. (1997). Interpreter use in an inner city accident and emergency department. Journal of
accident & emergency medicine, 14(2), 98-100.
Linn, R. L., Baker, E. L., & Dunbar, S. B. (1991). Complex, performance-based assessment: Expectations
and validation criteria. Educational Researcher, 20(8), 15-21.
Lysaght, R. M., & Altschuld, J. W. (2000). Beyond initial certification: the assessment and maintenance of
competency in professions. Evaluation and Program Planning, 23(1), 95-104.
Messick, S. (1994). The interplay of evidence and consequences in the validation of performance
assessments. Educational Researcher, 23(2), 13-23.
Mikkelson, H. (1996). Community Interpreting. An emerging profession. Interpreting, 1(1), 125-129.
Ministerio de Asuntos Exteriores y de Cooperación. (2011). Traductores-Intérpretes Jurados. from
http://www.maec.es/es/menuppal/ministerio/tablondeanuncios/interpretesjurados/Paginas/Intrpret
es%20Jurados.aspx
Moore, T., & Morton, J. (2005). Dimensions of difference: a comparison of university writing and IELTS
writing. Journal of English for Academic Purposes, 4(1), 43-66.
Mortensen, D. (2001). Measuring Quality in Interpreting: A report on the Norwegian Interpreter
Certification Examination (NICE): University of Oslo.
Moser-Mercer, B. (2003). Remote interpreting: assessment of human factors and performance
parameters. Communicate! Retrieved from http://www.aiic.net/ViewPage.cfm/article879.htm
Moser-Mercer, B. (2005). Remote interpreting: Issues of multi-sensory integration in a multilingual task.
Meta, 50(2), 727-738.
Mouzourakis, P. (2008). Remote Interpreter Training – Training for Remote Interpreting? Retrieved 2
March, 2012, from http://multimedialinguas.wordpress.com/edicoes/ano-i-2010/0001-
janeiro/panayotis-mouzourakis-%c2%abremote-interpreter-training-training-for-remote-
interpreting%c2%bb/
NAATI. (2010-2011). NAATI Annual Report 2010-2011: National Accreditation Authority for Translators
and Interpreters Ltd.
NAATI Test Review Translators Group. (2001). Final Summary.
Napier, J. (2004). Sign language interpreter training, testing, and accreditation: an international
comparison. American Annals of the Deaf, 149(4), 350-359.
National Accreditation Authority for Translators and Interpreters. (2002). NAATI Test Review
(Accreditation, Test Formats and Test Methodology). Executive Summary of Responses to the
Discussion Paper by Stakeholders.
National Accreditation Authority for Translators and Interpreters. (2010). Improvements to NAATI Testing:
Expressions of Interest Available from
http://www.naati.com.au/pdf/misc/Improvements%20to%20NAATI%20Testing.pdf
National Board of Certification for Medical Interpreters. (2011). Written Exam. from
http://www.certifiedmedicalinterpreters.org/written-exam
National Center for State Courts. (2011). Federal Court Interpreter Certification Examination For
Spanish/English Examinee Handbook. Retrieved 5 Nov, 2011, from
http://www.ncsonline.org/d_research/fcice_exam/2011approvedbyAO-Online.pdf
National Centre for State Courts. (2009). Consortium for State Interpreter Certification. Survey:
Certification Requirements 2009. Retrieved 5 Nov, 2011, from
http://www.ncsonline.org/D_Research/CourtInterp/Res_CtInte_ConsortCertRqmntssurvey2009.p
df
Nord, C. (1991). Scopos, loyalty, and translational conventions. Target, 3(1), 91-109.
NSW Community Justice Centre. (2009). 2009 Policy: Use of interpreters. from
http://www.lawlink.nsw.gov.au/lawlink/Community_Justice_Centres/ll_cjc.nsf/pages/CJC_publicat
ions on 11/03/2010
NSW Workers Compensation Commission. (n.d.). Internal Policy: Provision of interpreter services
(received by email from S. Leatham on 08/12/2009).
NT Department of Housing Local Government and Regional Services. (2011). Working with interpreters.
Retrieved 16/12/2011, from
http://www.dlgh.nt.gov.au/interpreting/aboriginal_interpreter_service/working_with_interpreters
Organización Mexicana de Traductores, A. C. (2011). Certificaciones. from
http://www.omt.org.mx/certificaciones.htm
Orlando, M. (2010). Digital pen technology and consecutive interpreting: another dimension in notetaking
training and assessment. The Interpreter's Newsletter(15), 71-86.
Ozolins, U. (1991). Interpreting translating and language policy. Melbourne: National Languages Institute
of Australia.
Ozolins, U. (2004). Survey of interpreting practitioners. Report. Melbourne: VITS.
Ozolins, U. (2011). Telephone interpreting: understanding practice and identifying research needs. The
international journal of translation and interpreting research, 3(2), 33-47.
PACTE Group. (2009). Results of the validation of the PACTE translation competence model. Retrieved
9 November, 2012, from http://grupsderecerca.uab.cat/pacte/en/content/publications
Pöchhacker, F. (2001). Quality assessment in conference and community interpreting. Meta, 46(2), 410-
425.
QLD Department of Justice and Attorney-General. (2009). Language Services Policy. from
http://www.justice.qld.gov.au/__data/assets/pdf_file/0005/33683/DJAG-Lang-Serv-Policy.pdf on
07/01/2010
QLD Health Interpreter Service. (2007). Working with Interpreters: Guidelines. Retrieved 16/12/2011,
from http://www.health.qld.gov.au/multicultural/interpreters/guidelines_int.pdf
Roat, C. (2006). Certification of Health Care Professionals in the United States. A Primer, a Status
Report, and Considerations for National Certification: The California Endowment.
Roberts, R. P. (2000). Interpreter assessment tools for different settings. Benjamins Translation
Library, 31, 103-120.
Rosenberg, B. A. (2007). A data driven analysis of telephone interpreting. In C. Wadensjö, B. Englund
Dimitrova & A. L. Nilsson (Eds.), The critical link 4. Professionalisation of interpreting in the
community. (pp. 65-76). Amsterdam: Benjamins.
Roy, C. (2000). Interpreting as a discourse process. New York and Oxford: Oxford University Press.
Russell, D., & Malcolm, K. (2009). Assessing ASL-English interpreters. The Canadian model of national
certification. In C. V. Angelelli & H. Jacobson (Eds.), Testing and Assessment in Translation and
Interpreting Studies (pp. 371-376): John Benjamins.
SA Health. (2006). Language Services Provision: Operational Guidelines For Health Units. Retrieved
16/11/2011, from
http://www.sahealth.sa.gov.au/wps/wcm/connect/e41bb280455fe974a8b8fa8a21f01153/2011+La
nguage+Services+Provision+-
+Operational+Guidelines+for+Health+Unitsand+coversheet.pdf?MOD=AJPERES&CACHEID=e4
1bb280455fe974a8b8fa8a21f01153&CACHE=NONE
Sandrelli, A. (2001). Teaching Liaison Interpreting. Combining tradition and innovation. In I. Mason (Ed.),
Triadic Exchanges. Studies in Dialogue Interpreting (pp. 173-196). Manchester: St. Jerome.
Schjoldager, A. (1995). An exploratory study of translational norms in simultaneous interpreting:
methodological reflections. Hermes, 14, 65-87.
Skaaden, H. (1999). Lexical knowledge and interpreter aptitude. International Journal of Applied
Linguistics, 9(1), 77-97.
Skaaden, H., & Wattne, M. (2009). Teaching interpreting in cyberspace. The answer to all our prayers? In
R. de Pedro Ricoy, I. Prerez & C. Wilson (Eds.), Interpreting and translating in public service
settings (pp. 74-88). Manchester: St. Jerome.
Slatyer, H., Elder, C., Hargreaves, M., & Luo, K. (2008). An investigation into rater reliability, rater
behaviour and comparability of test tasks. Sydney: Access Macquarie, Macquarie University.
South African Translators' Institute [SATI]. (2007). Translation Accreditation. Retrieved 9 Dec, 2011, from
http://translators.org.za/sati_cms/index.php?frontend_action=display_text_content&content_ind=
1761
Stansfield, C. W., Scott, M. L., & Kenyon, D. (1992). The measurement of translation ability. The Modern
Language Journal, 76(4), 455-467.
Stejskal, J. (2001). International Certification Study: Accreditation Program in Brazil. The ATA Chronicle,
July.
Stejskal, J. (2002a). International Certification Study: Argentina. The ATA Chronicle, June, 13-17.
Stejskal, J. (2002b). International Certification Study: Finland and Sweden. The ATA Chronicle, February,
14-15.
Stejskal, J. (2002c). International Certification Study: Japan. The ATA Chronicle, September, 17-19.
Stejskal, J. (2002d). International Certification Study: Norway. The ATA Chronicle, July, 13-22.
Stejskal, J. (2002e). International Certification Study: Spain and Portugal. The ATA Chronicle, October,
20-30.
Stejskal, J. (2002f). International Certification Study: U.K. and Ireland. The ATA Chronicle, May, 12-23.
Stejskal, J. (2002g). International Certification Study: Ukraine. The ATA Chronicle, November/December,
12-18.
Stejskal, J. (2005). Survey of the FIT Committee for Information on the Status of the Translation &
Interpretation Profession. Retrieved 28 November 2011, from FIT [Online]: http://www.fit-
europe.org/vault/admission/FITsurvey2005.pdf
STIBC [Society of Translators and Interpreters, British Columbia]. (2008). Court Interpreting - CTTIC
Certification Examination Information for Candidates. Retrieved 8 Dec, 2011, from
http://www.stibc.org/page/court%20interpreter%20-
%20criteria%20and%20procedures%20for%20certification%20by%20exam.aspx
Supreme Court of Queensland. (2005). Equal Treatment Benchbook. from
http://www.courts.qld.gov.au/The_Equal_Treatment_Bench_Book/S-ETBB.pdf on 18/12/2009
Timarova, S., & Ungoed-Thomas, H. (2008). Admission testing for interpreting courses. The Interpreter
and Translator Trainer, 2(1), 29-46.
Tiselius, E. (2010). Revisiting Carroll's scales. In C. V. Angelelli & H. Jacobson (Eds.), Testing and
Assessment in Translation and Interpreting Studies: A Call for Dialogue between Research and
Practice (Vol. 14, pp. 95-121). Amsterdam/Philadelphia: John Benjamins Publishing Co.
Turner, B., Lai, M., & Huang, N. (2010). Error deduction and descriptors–A comparison of two methods of
translation test assessment. Translation & Interpreting, 2(1), 11-23.
Turner, B., & Ozolins, U. (2007). The standards of linguistic competence in English and LOTE among
NAATI accredited interpreters and translators. Melbourne: RMIT University. (Available at
www.mit.edu/gsssp/ti.)
University of Oslo. (2001). Measuring Quality in Interpreting: A report on the Norwegian Interpreter
Certification Examination (NICE). Retrieved 12 Dec, 2011, from
http://folk.uio.no/dianem/IntQuality-Internet.pdf
Veisbergs, P. (2007). Terminology issues in interpreter training. Proceedings of the Baltic Sea Region
University Network: Quality and Qualifications in Translation and Interpreting II Retrieved 1 Mar,
2012, from http://www.tlu.ee/files/arts/645/Quali698bb7e395e0b88d73d603e33f5b153f.pdf
Vermeiren, H., Van Gucht, J., & De Bontridder, L. (2009). Standards as critical success factors in
assessment. Testing and Assessment in Translation and Interpreting Studies: A Call For
Dialogue Between Research and Practice, 14, 297-330.
WA Office of Multicultural Interests. (2008). The Western Australian Language Services Policy 2008. from
http://multicultural.wa.gov.au/OMI_language.cfm on 14/01/2010
Waddington, C. (2001). Different methods of evaluating student translations: The question of validity.
Meta: Journal des traducteurs, 46(2).
Waddington, C. (2004). Should student translations be assessed holistically or through error analysis?
Lebende Sprachen, 49(1), 28-35.
Wadensjö, C. (1998). Interpreting as interaction. London & New York: Longman.
Wakabayashi, J. (1996). The Translator's voice. Paper presented at the Jill Blewitt Memorial Lecture.
Retrieved from http://ausit.org/national/?page_id=253
Weir, C. J. (2005). Language testing and validation. Houndmills, Basingstoke: Palgrave Macmillan.
Wilss, W. (1982). The Science of Translation: Problems and Methods. Tubingen: Gunter Narr.
Winteringham, S. (2010). The usefulness of ICTs in interpreting practice. The Interpreter's Newsletter, 15,
87-99.
Commercial-in-confidence 95
Project Ref: RG114318
Appendices
Appendix 1 NAATI Project Specialist Working Group Memberships
1. Working group on rubrics, descriptors and competency-based assessment
Name Role Country
Barry Turner Co-chair Australia
Miranda Lai Co-chair Australia
Gyde Hansen Consultant (T&I Educator) Denmark
David Deck Research Assistant Australia
Claudia Angelelli Consultant (T&I Educator, Researcher) USA
Dave Gilbert Participant (Translator) Australia
Practitioners
AUSIT e-bulletin
The following agencies, which were asked to distribute the questionnaire to their panels of interpreters
and translators:
4. Does your agency record how NAATI accreditation was obtained by each practitioner (by
training/testing)?
Please pick one of the answers below.
Yes
No
5. Do you give preference to practitioners with formal training in addition to NAATI accreditation?
Please mark the corresponding circle - only one per line.
Never Rarely Sometimes Usually Always
6. Do you pay higher fees to practitioners with a higher NAATI accreditation level?
Please pick one of the answers below or add your own.
Yes
No
Comment
7. Do you pay higher fees to practitioners with Interpreting/Translating tertiary qualifications plus NAATI
accreditation?
Please pick one of the answers below or add your own.
Yes
No
Comment
8. How often do you receive feedback from users about interpreter performance?
Please pick one of the answers below and add your comments.
very often
not at all
Comments
9. List the top five comments you receive as feedback from clients:
Please use the blank space to write your answers.
1. ..................................................
2. ..................................................
3. ..................................................
4. ..................................................
5. ..................................................
10. Please indicate your level of agreement with the following statements:
Please mark the corresponding circle - only one per line.
1. Strongly Disagree 2. Disagree 3. Neutral 4. Agree 5. Strongly Agree
11. Do you have any other suggestions for the review of NAATI testing and related issues?
Please write your answer in the space below.
..................................................................................
..................................................................................
..................................................................................
..................................................................................
8. If you obtained your TRANSLATOR accreditation by course completion, how long was the course?
Please pick one of the answers below and add your comments.
Six months equivalent full time
Twelve months equivalent full time
Eighteen months equivalent full time
Three years equivalent full time
Other
What was the name of the course?
11. If you obtained your INTERPRETER accreditation by course completion, how long was the course?
Please pick one of the answers below.
Six months equivalent full time
Twelve months equivalent full time
Eighteen months equivalent full time
Three years equivalent full time
Other
12. What other formal qualifications do you have? Please specify qualifications and country where they
were awarded:
Please write your answer in the space below.
..................................................................................
..................................................................................
..................................................................................
..................................................................................
13. Do you have any formal qualifications in assessment and evaluation? If YES, please specify which
ones:
Please pick one of the answers below and add your comments.
Yes
No
14. Are you a practising interpreter and/or translator? If YES, go to questions 15 and/or 16; if NO, go to
question 17.
Please pick one of the answers below.
Yes
No
16. As a translator:
Please mark the corresponding circle - only one per line.
more than 1000 words a week
more than 500 words a week
less than 1000 words a month
less than 500 words a month
18. Please indicate your level of agreement with the following statements:
Please mark the corresponding circle - only one per line.
1. Strongly Disagree 2. Disagree 3. Neutral 4. Agree 5. Strongly Agree
19. List the top five skills you think a translator test should assess
Please write your answer in the space below.
..................................................................................
..................................................................................
..................................................................................
..................................................................................
20. List the top five skills you think an Interpreter test should assess
Please write your answer in the space below.
..................................................................................
..................................................................................
..................................................................................
..................................................................................
21. Do you have any other suggestions for the review of NAATI testing and related issues?
Please write your answer in the space below.
..................................................................................
..................................................................................
..................................................................................
..................................................................................
5. If you obtained your TRANSLATOR accreditation by course completion, how long was the course?
Please pick one of the answers below and add your comments.
Six months equivalent full time
Twelve months equivalent full time
Eighteen months equivalent full time
Three years equivalent full time
Other
What was the name of the course?
8. If you obtained your INTERPRETER accreditation by course completion, how long was the course?
Please pick one of the answers below and add your comments.
Six months equivalent full time
Twelve months equivalent full time
Eighteen months equivalent full time
Three years equivalent full time
What was the name of the course?
14. If you gained your accreditation by sitting a test, please indicate your level of agreement with the
following statements:
Please mark the corresponding circle - only one per line.
1. Strongly Disagree 2. Disagree 3. Neutral 4. Agree 5. Strongly Agree
15. If you gained your accreditation by completing a formal course of study, indicate your level of
agreement with the following statements:
Please mark the corresponding circle - only one per line.
1. Strongly Disagree 2. Disagree 3. Neutral 4. Agree 5. Strongly Agree
16. Please indicate your level of agreement with the following statements:
Please mark the corresponding circle - only one per line.
1. Strongly Disagree 2. Disagree 3. Neutral 4. Agree 5. Strongly Agree
18. If you are new to the profession, would you be willing to be mentored by experienced interpreters
and/or translators?
Please pick one of the answers below and add your comments.
Yes
No
Comments
19. List the top five skills you think a translator test should assess
Please write your answer in the space below.
..................................................................................
..................................................................................
..................................................................................
..................................................................................
20. List the top five skills you think an Interpreter test should assess
Please write your answer in the space below.
..................................................................................
..................................................................................
..................................................................................
..................................................................................
21. Do you have any other suggestions for the review of NAATI testing and related issues?
Please write your answer in the space below.
..................................................................................
..................................................................................
..................................................................................
..................................................................................
Level Three may be referred to as an intermediate stage of proficiency. Users at this level are expected to
be able to handle the main structures of the language with some confidence, demonstrate knowledge of a
wide range of vocabulary and use appropriate communicative strategies in a variety of social situations.
Their understanding of spoken language and written texts should go beyond being able to pick out items
of factual information, and they should be able to distinguish between main and subsidiary points and
between the general topic of a text and specific detail. They should be able to produce written texts of
various types, showing the ability to develop an argument as well as describe or recount events. This
level of ability allows the user a certain degree of independence when called upon to use the language in
a variety of contexts. At this level the user has developed a greater flexibility and an ability to deal with the
unexpected and to rely less on fixed patterns of language and short utterances. There is also a
developing awareness of register and the conventions of politeness and degrees of formality as they are
expressed through language.
Examinations at Level B2 are frequently used as proof that the learner can do office work or take a non-
academic course of study in the language being learned, e.g. in the country where the language is
spoken. Learners at this level can be assumed to have sufficient expertise in the language for it to be of
use in clerical, secretarial and managerial posts, and in some industries, in particular tourism.
Productive Skills
Speaking
In social and travel contexts, users at this level can deal with most situations that may arise in shops,
restaurants, and hotels; for example, they can ask for a refund or for faulty goods to be replaced, and
express pleasure or displeasure at the service given. Similarly, routine situations at the doctor's, in a bank
or post office, or at an airport or station can all be handled. In social conversation they can talk about a
range of topics and express opinions to a limited extent. As tourists they can ask for further explanations
about information given on a guided tour. They themselves can show visitors around, describe a place
and answer questions about it.
In the workplace, users at this level can give detailed information and state detailed requirements within a
familiar topic area, and can take some limited part in a meeting. They can take and pass on messages,
although there may be difficulties if these are complex, and can carry out simple negotiations, for example
on prices and conditions of delivery.
If studying, users at this level can ask questions during a lecture or presentation on a familiar or
predictable topic, although this may be done with some difficulty. They can also give a short, simple
presentation on a familiar topic. They can take part in a seminar or tutorial, again with some difficulty.
Receptive Skills
Listening
In social and travel contexts, users at this level can cope with casual conversation on a fairly wide range
of familiar, predictable topics, such as personal experiences, work and current events. They can
understand routine medical advice. They can understand most of a TV programme because of the visual
support provided, and grasp the main points of a radio programme. On a guided tour they have the
understanding required in order to ask and answer questions.
In the workplace, they can follow presentations or demonstrations of a factual nature if they relate to a
visible, physical object such as a product.
If studying, they can understand the general meaning of a lecture, as long as the topic is predictable.
Rubrics for competence in the use and transfer of discourse management (Jacobson, 2009, p. 65)

Discourse management

Superior: Provides a clear, concise pre-session to primary interlocutors on interpreter's role when
possible; consistently uses the first person while interpreting, switching to third person for clarifications;
encourages interaction, including eye contact, between interlocutors, both verbally and through other
paralinguistic cues; allows interlocutors to complete turns due to strong memory and note-taking skills;
demonstrates strategies for dealing with overlap.

Advanced: Provides a clear, concise pre-session to primary interlocutors on interpreter's role when
possible; consistently uses the first person while interpreting, switching to third person for clarifications;
usually encourages interaction between interlocutors, both verbally and through other paralinguistic
cues; usually demonstrates skill in allowing interlocutors to complete turns without interrupting for
clarifications, with some difficulty due to need to further develop memory and note-taking skills;
generally deals calmly and effectively with overlaps, with demonstrated need for further practice.

Fair: In most cases, provides a clear, concise pre-session to primary interlocutors on interpreter's role,
although at least one or two of the principal points are usually left out; is inconsistent in using the first
person while interpreting, and exhibits excessive use of the third person, leading to awkward renditions;
does not often encourage interaction between interlocutors, either verbally or through other
paralinguistic cues; often interrupts interlocutors mid-turn for clarifications due to need to develop
memory and note-taking skills and to build vocabulary; becomes nervous when challenged by overlaps,
demonstrating clear need for further practice.

Poor: Does not always provide a clear, concise pre-session to primary interlocutors on interpreter's role,
leaving out principal points; is inconsistent in using the first person while interpreting and almost always
uses the third person, leading to awkward renditions; does not encourage interaction between
interlocutors, either verbally or through other paralinguistic cues; does not allow interlocutors to
complete turns, and interrupts frequently to request clarification, resulting in choppy discourse; note-
taking and memory skills are poor; does not deal effectively with overlaps, leading to interruptions in the
dialogue and excessive omissions.
5: T is very well organized into sections and/or paragraphs in a manner consistent with similar TL texts.
The T has a masterful style. It flows together flawlessly and forms a natural whole.

4: T is well organized into sections and/or paragraphs in a manner consistent with similar TL texts. The T
has style. It flows together well and forms a coherent whole.

3: T is organized into sections and/or paragraphs in a manner generally consistent with similar TL texts.
The T style may be inconsistent. There are occasional awkward or oddly placed elements.

1: T is disorganized and lacks divisions into coherent sections and/or paragraphs in a manner consistent
with similar TL texts. T lacks style. T does not flow together. It is awkward. Sentences and ideas seem
unrelated.
5: T shows a masterful ability to address the intended TL audience and achieve the translation's
intended purpose in the TL. Word choice is skilful and apt. Cultural references, discourse, and register
are completely appropriate for the TL domain, text type and readership.

4: T shows a proficient ability in addressing the intended TL audience and achieving the translation's
intended purpose in the TL. Word choice is consistently good. Cultural references, discourse, and
register are consistently appropriate for the TL domain, text type and readership.

3: T shows a good ability to address the intended TL audience and achieve the translation's intended
purpose in the TL. Cultural references, discourse, and register are mostly appropriate for the TL domain
but some phrasing or word choices are either too formal or too colloquial for the TL domain, text type
and readership.

2: T shows a weak ability to address the intended TL audience and/or achieve the translation's intended
purpose in the TL. Cultural references, discourse and register are at times inappropriate for the TL
domain. Numerous phrasings and/or word choices are either too formal or too colloquial for the TL
domain, text type and readership.

1: T shows an inability to appropriately address the intended TL audience and/or achieve the
translation's intended purpose in the TL. Cultural references, discourse and register are consistently
inappropriate for the TL domain. Most phrasing and/or word choices are either too formal or too
colloquial for the TL domain, text type and readership.
5: T shows a masterful control of TL grammar, spelling and punctuation. Very few or no errors.

3: T shows a weak control of TL grammar, spelling and punctuation. T has frequent minor errors.

1: T shows lack of control of TL grammar, spelling and punctuation. Serious and frequent errors exist.

5: T demonstrates able and creative solutions to translation problems. Skilful use of resource materials is
evident.

1: T reflects an inability to identify and overcome common translation problems. Numerous major and
minor translation errors lead to a seriously flawed translation. Reference materials and resources are
consistently used improperly.
1. Examiners / Educators
a. Compared with the present marking system(s), do you feel that the use of the rubrics-based system
provided you with clearer guidance on assessing? Yes / Unsure / No
Comments:
b. Compared with the present marking system(s), do you feel that a rubrics-based system would be
easier to use / apply? Yes / Unsure / No
Comments:
c. Compared with the present marking system(s), do you feel that the rubrics-based system
encouraged you to take a wider range of factors into account when marking?
Yes / Unsure / No
Comments:
d. The example set of rubrics describes five levels for each assessment area. On the basis of this trial
of using these rubrics, what level do you think should be the ‘adequate’ / ‘passing’ level?
5    4    3    2    1
Comments:
e. Some rubrics-based assessment systems also determine ‘hurdle’ levels; that is, in any or all
assessment areas, if candidates are awarded a certain level or lower, no matter how well they
have performed in other assessment areas, a pass is automatically precluded. On the basis of this
trial of using these rubrics, if such a ‘hurdle’ level were to be applied, what level do you feel should
automatically preclude a pass?
5    4    3    2    1
Comments:
f. The example set of rubrics makes assessments in five areas. Are there any areas of assessment
that you would suggest adding or removing?
Comments:
g. Do you have any other general comments about the idea of NAATI looking into using a system of
assessment that was partly or wholly rubrics-based?
Appendix 13 ATA (2011b) Framework for Standardized Error Marking Explanation of Error
Categories
ATA Certification Exam – Type and Frequency of Errors
Addition: (A): An addition error occurs when the translator introduces superfluous information or stylistic
effects. Candidates should generally resist the tendency to insert “clarifying” material.
Explicitation is permissible. Explicitation is defined as “A translation procedure where the translator
introduces precise semantic details into the target text for clarification or due to constraints imposed by
the target language that were not expressed in the source text, but which are available from contextual
knowledge or the situation described in the source text.” (Translation Terminology, p. 139)
Ambiguity: (AMB): An ambiguity error occurs when either the source or target text segment allows for
more than one semantic interpretation, where its counterpart in the other language does not.
Capitalization: (C): A capitalization error occurs when the conventions of the target language
concerning upper and lower case usage are not followed.
Cohesion: (COH): A cohesion error occurs when a text is hard to follow because of inconsistent use of
terminology, misuse of pronouns, inappropriate conjunctions, or other structural errors. Cohesion is the
network of lexical, grammatical, and other relations which provide formal links between various parts of a
text. These links assist the reader in navigating within the text. Although cohesion is a feature of the text
as a whole, graders will mark an error for the individual element that disrupts the cohesion.
Diacritical marks / Accents: (D): A diacritical marks error occurs when the target-language
conventions of accents and diacritical marks are not followed. If incorrect or missing diacritical marks
obscure meaning (sense), the error is more serious.
Faithfulness: (F): A faithfulness error occurs when the target text does not respect the meaning of the
source text as much as possible. Candidates are asked to translate the meaning and intent of the source
text, not to rewrite it or improve upon it. The grader will carefully compare the translation to the source
text. If a “creative” rendition changes the meaning, an error will be marked. If recasting a sentence or
paragraph—i.e., altering the order of its major elements—destroys the flow, changes the emphasis, or
obscures the author’s intent, an error may be marked.
Faux ami: (FA): A faux ami error occurs when words of similar form but dissimilar meaning across the
language pair are confused. Faux amis, also known as false friends, are words in two or more languages
that probably are derived from similar roots and that have very similar or identical forms, but that have
different meanings, at least in some contexts.
Grammar: (G): A grammar error occurs when a sentence in the translation violates the grammatical
rules of the target language. Grammar errors include lack of agreement between subject and verb,
incorrect verb tenses or verb forms, and incorrect declension of nouns, pronouns, or adjectives.
Illegibility: (ILL): An illegibility error occurs when graders cannot read what the candidate has written. It
is the candidate’s responsibility to ensure that the graders can clearly discern what is written. Candidates
are instructed to use pen or dark pencil and to write firmly enough to produce legible photocopies.
Deletions, insertions, and revisions are acceptable if they do not make the intent unclear.
Indecision: (IND): An indecision error occurs when the candidate gives more than one option for a
given translation unit. Graders will not choose the right word for the candidate. Even if both options are
correct, an error will be marked. More points will be deducted if one or both options are incorrect.
Literalness: (L): A literalness error occurs when a translation that follows the source text word for word
results in awkward, unidiomatic, or incorrect renditions.
Mistranslation: (MT): A mistranslation error occurs when the meaning of a segment of the original text
is not conveyed properly in the target language. “Mistranslation” includes the more specific error
categories described in separate entries. Mistranslations can also involve choice of prepositions, use of
definite and indefinite articles, and choice of verb tense and mood.
Misunderstanding: (MU): A misunderstanding error occurs when the grader can see that the error
arises from misreading a word, for example, or misinterpreting the syntax of a sentence.
Omission: (O): An omission error occurs when an element of information in the source text is left out of
the target text. This covers not only textual information but also the author's intention (irony, outrage).
Missing titles, headings, or sentences within a passage may be marked as one or more errors of
omission, depending on how much is omitted.
Implicitation is permissible. Implicitation is defined as “A translation procedure intended to increase the
economy of the target text and achieved by not explicitly rendering elements of information from the
source text in the target text when they are evident from the context or the described situation and can be
readily inferred by the speakers of the target language.” (Translation Terminology, p. 145)
Punctuation: (P): A punctuation error occurs when the conventions of the target language regarding
punctuation are not followed, including those governing the use of quotation marks, commas, semicolons,
and colons. Incorrect or unclear paragraphing is also counted as a punctuation error.
Register: (R): A register error occurs when the language level or degree of formality produced in the
target text is not appropriate for the target audience or medium specified in the Translation Instructions.
Examples of register errors include using everyday words instead of medical terms in a text intended for a
medical journal, translating a text intended to run as a newspaper editorial in legalese, using the familiar
rather than the polite form of address, and using anachronistic or culturally inappropriate expressions.
Register is defined as “A property of discourse that takes into account the nature of relationships among
speakers, their socio-cultural level, the subjects treated and the degree of formality and familiarity
selected for a given utterance or text.” (Translation Terminology, p. 172)
Spelling: (SP) (Character (CH) for non-alphabetic languages): A spelling/character error occurs when
a word or character in the translation is spelled/used incorrectly according to target-language
conventions. A spelling/character error that causes confusion about the intended meaning is more serious
and may be classified as a different type of error using the Flowchart and Framework. If a word has
alternate acceptable spellings, the candidate should be consistent throughout the passage.
Style: (ST): A style error occurs when the style of the translation is inappropriate for publication or
professional use as specified by the Translation Instructions. For example, the style of an instructional
text should correspond to the style typical of instructions in the target culture and language, or the temper
of a persuasive essay may need to be toned down or amplified in order to achieve the desired effect in
the target language.
Syntax: (SYN): A syntax error occurs when the arrangement of words or other elements of a sentence
does not conform to the syntactic rules of the target language. Errors in this category include improper
modification, lack of parallelism, and unnatural word order. If incorrect syntax changes or obscures the
meaning, the error is more serious and may be classified as a different type of error using the Flowchart
and Framework.
Terminology: (T): A terminology error occurs when a term specific to a special subject field is not used
when the corresponding term is used in the source text. This type of error often involves terms used in
various technical contexts. This also applies to legal and financial contexts where words often have very
specific meanings. In more general texts, a terminology error can occur when the candidate has not
selected the most appropriate word among several that have similar (but not identical) meanings.
Unfinished: (UNF): A substantially unfinished passage is not graded. Missing titles, headings, or
sentences within a passage may be marked as one or more errors of omission, depending on how much
is omitted.
Usage: (U): A usage error occurs when conventions of wording in the target language are not followed.
Correct and idiomatic usage of the target language is expected.
Word form / Part of speech: (WF / PS): A word form error occurs when the root of the word is correct,
but the form of the word is incorrect or nonexistent in the target language (e.g. “conspiration” instead of
“conspiracy”). A part of speech error occurs when the grammatical form (adjective, adverb, verb, etc.) is
incorrect (e.g. “conspire” instead of “conspiracy”).
Exam number:
Version 2011 Exam passage: Evaluation by Dimensions
Instructions: In each column, the grader marks the box that best reflects performance in that dimension,
measured against the ideal performance defined for that dimension in the “Standard” row. The grader
may also insert, circle, and/or cross out words in a description to make the evaluation more specific.
Note: A passage may show uneven performance across the dimensions. For example, a candidate with
excellent command of the target language but limited knowledge of the source language might show
Strong performance for Target mechanics but Minimal performance for Usefulness / transfer.
See also the Explanation on the reverse.
Commercial-in-confidence 121
Project Ref: RG114318
The FCICE is a two-phase process, involving a Spanish-English Written Examination (Phase One) and
an Oral Examination (Phase Two) administered on a biennial basis with Phase One and Phase Two
occurring in alternating years. Interpreters must pass the Written Examination with a score of 75 percent
or higher in order to be eligible to sit for the Oral Examination.
Written Examination
The Phase One Written Examination serves primarily as a screening test for linguistic competence in
English and Spanish and is a prerequisite for the Phase Two Oral Examination. The Written Examination
is a four-option, multiple choice examination of job-relevant language ability in English and Spanish. In
2008, and possibly in the future, there will be 100 items in the English section and 100 items in the
Spanish section of the test. When that happens, additional time will be provided for the candidates to
take this longer, 200-item test. Each section consists of five parts, and each part involves a task that is
considered to be relevant for a court interpreter. The Written Examination tests comprehension of written
text, knowledge of vocabulary, idioms, and grammatically correct expression, and the ability to select an
appropriate target language rendering of source language text.
The English and Spanish sections of the examination are scored separately and the criterion to pass is 75
percent correct answers on each section of the test.
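The written pass rule described above is a simple per-section cutoff: each section is scored separately and each must reach 75 percent. As a minimal sketch (the function name is illustrative, and the 100-item sections follow the description above, not any official FCICE scoring software):

```python
# Hedged sketch of the FCICE written-exam pass rule: English and Spanish
# sections are scored separately, and each must reach 75 percent.
def passes_written(english_correct: int, spanish_correct: int,
                   items_per_section: int = 100) -> bool:
    cutoff = 0.75 * items_per_section
    return english_correct >= cutoff and spanish_correct >= cutoff

print(passes_written(80, 76))  # True: both sections at or above 75
print(passes_written(80, 74))  # False: Spanish section below the cutoff
```

Note that a strong performance in one language cannot compensate for a weak performance in the other; both cutoffs apply independently.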
Oral Examination
The Phase Two Oral Examination directly measures interpreting skills. Because it fulfils the legal
mandate for a “criterion-referenced performance examination”, the Oral Examination is the basis for
certification to interpret in the federal courts. The Oral Exam assesses the ability of the interpreter to
adequately perform the kinds of interpretation of discourse that reflect both form and content pertinent to
authentic interpreter functions encountered in the federal courts. It consists of five parts: interpreting in
the consecutive mode; interpreting a monologue in the simultaneous mode; interpreting a witness
examination in the simultaneous mode; sight translation of a document from English into Spanish; and
sight translation of a document from Spanish into English. All five parts are simulations of what
interpreters do in court.
The language used in the examinations varies widely across speech registers and vocabulary range. Test
items include both formal and informal/colloquial language, technical and legal terminology, and other
specialized language that is part of the active vocabulary of a highly articulate speaker, both in English
and in Spanish. Overall, there are 220 scored items in the test and the examinee must render 80 percent
of them correctly to pass the test. In addition, the examinee’s performance is scored holistically on three
skills: interpreting skills, English skills, and Spanish skills.
Practice Oral Examination
Overview and instructions for the Oral Examination Practice Test
This section contains all of the Oral Practice Examination material relating to the Oral Examination for the
Federal Court Interpreter Certification Examination (FCICE). These instructions are for the web-based
Practice Examination. You will need to follow the online instructions and use your computer to play .mp3
files. If you ordered a hard copy of the Handbook, these materials are included as Part 8 of the book and
a CD is included, containing all of the audio files.
These materials contain everything you need to self-administer the Practice Test in a way that closely
simulates the actual test experience. When you are finished administering the practice examination, you
can then score your examination. You can also listen to and score an example of a strong passing
performance.
The practice test materials are presented in the same sequence that they are given during the
examination itself, as follows:
§ Part 1 – English to Spanish Sight Translation
§ Part 1 – Spanish to English Sight Translation
§ Part 2 – Simultaneous Monologue
§ Part 3 – Consecutive
§ Part 4 – Simultaneous Witness Examination
Appendix 15 Marker’s Guide for the CTTIC (Canadian Translators, Terminologists and
Interpreters Council) Translation Test (CTTIC, n.d., pp. 3-5)
Marking Scale
Translation (Comprehension)
Language (Expression)
Application
Errors must be indicated in the margin of the paper using the appropriate letter.
When a paper has been corrected, the various types of errors must be entered at
the end of each text, together with the total points deducted, e.g.:
(T) 1 x 10 = 10
T 1 x 5 = 5
(L) 1 x 10 = 10
L 1 x 5 = 5
l 1 x 3 = 3
Total: -33
N.B. Please consider this example for a moment. As can be seen, the candidate is only three points short
of the pass mark. If we were to take the combined mark given by the two markers for the two texts, we
would then be faced with the worst-case scenario. So as to avoid CTTIC having to deal with complaints
from unhappy candidates (which can be a costly and time-consuming process), we must try to distance
ourselves as far as possible from the 70% pass mark.
In a case such as this one, the two markers must try to confirm the failure or success of the candidate,
leaning in so far as possible towards success. Would it not be possible here to overlook the three points
taken off for misuse of punctuation?
Let’s now presume that the two translations, despite a major error of transfer and one of language, are
generally well done, that the style used makes for pleasant reading. In such a case, we would try to
slightly offset the two or three major mistakes by giving a positive overall mark (maximum of 10 points) in
order to recognize the quality of each of the translations.
Experience shows us, however, that such a case seldom occurs and that the style of borderline
candidates usually leaves something to be desired. In such a situation, read the translations again and
see if you have been too generous towards the candidate. Might you not have failed to note one or two
spelling or punctuation mistakes, which would in fact push the mark further down, into the 60% range?
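The deduction tally and pass-mark arithmetic in the example above can be sketched as a sum of weighted error counts subtracted from 100, with 70% as the pass mark. This is an illustrative reconstruction of the worked example only, not the full CTTIC marking scale; the category descriptions in the comments are inferred from the notation and should be treated as assumptions:

```python
# Hypothetical sketch of the CTTIC deduction tally shown above.
# Each entry: (error code as marked in the margin, count, points per error).
errors = [
    ("(T)", 1, 10),  # bracketed T: assumed major translation error
    ("T",   1, 5),   # unbracketed T: assumed minor translation error
    ("(L)", 1, 10),  # bracketed L: assumed major language error
    ("L",   1, 5),   # unbracketed L: assumed minor language error
    ("l",   1, 3),   # lower-case l: assumed lesser language error
]

total_deducted = sum(count * points for _, count, points in errors)
mark = 100 - total_deducted

print(total_deducted)  # 33
print(mark)            # 67, i.e. three points short of the 70% pass mark
```

The result, 67, matches the discussion above: the candidate falls three points short of the 70% pass mark, which is what triggers the careful review of borderline papers.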
General vocabulary: 2
Technical terms: 2
Grammar: 2
Appropriate register, level of language and tone: 2
Pronunciation and audibility: 2
Gradings:
− very good (native fluency) = 2;
− good for a non-native = 1.5;
− understandable but numerous errors = 1;
− definitely too poor to be an interpreter = 0
The testee is given a mark out of 10 for each respective language. These are added together to give a
mark out of 20. A marking key is provided for examiners. For the consecutive interpreting exercise,
‘global’ criteria are presented with recommended mark allocations:
1. How well is the source text understood? Total of 6 points. For each mistranslation – deduct 1
point.
2. How accurately does the candidate present the ideas in the target language (excluding names
and numbers)? Total of 6 points. For each omission, addition or distortion of ideas, deduct 1
point.
3. How well does the candidate handle names and numbers? Total of 2 points.
4. Does the candidate use appropriate target language grammar and syntax? Total of 6 points
based on overall impression (3 points for English, 3 points for the foreign language).
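The ‘global’ marking scheme above amounts to starting each criterion at its maximum (6 + 6 + 2 + 6 = 20 points) and deducting one point per error, with no criterion falling below zero. A minimal sketch, assuming that deductions are capped at each criterion’s maximum (the criterion names and function name are illustrative, not taken from the marking key):

```python
# Hypothetical sketch of the global marking scheme for the consecutive
# interpreting exercise described above.
MAXIMA = {
    "comprehension": 6,    # criterion 1: understanding of the source text
    "accuracy": 6,         # criterion 2: ideas, excluding names and numbers
    "names_numbers": 2,    # criterion 3: handling of names and numbers
    "grammar_syntax": 6,   # criterion 4: 3 points English, 3 points foreign language
}

def consecutive_mark(deductions: dict) -> int:
    # Each criterion starts at its maximum; deductions cannot take it below zero.
    return sum(max(0, maximum - deductions.get(criterion, 0))
               for criterion, maximum in MAXIMA.items())

print(consecutive_mark({}))                                   # 20 (flawless)
print(consecutive_mark({"comprehension": 2, "accuracy": 3}))  # 15
```

Capping each criterion at zero is an assumption; the source does not say whether a long run of errors in one criterion can drag the total below the sum of the remaining criteria.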
Examination format
The examination is a two-step process consisting of a written and an oral component. Candidates who
pass the written will be eligible to sit the oral at a later date. Candidates register and pay for the two
examinations separately.
a) Sight translation from English (or French) to language of specialization and from
language of specialization into English (or French).
b) Simultaneous interpretation from English (or French) into language of
specialization.
c) Consecutive interpretation from English (or French) into language of
specialization and from language of specialization into English (or French).
Marking
Examinations will be marked independently by two markers. Candidates must pass all three parts of the
written component with a minimum of 70% on each. Similarly, all three parts of the oral component must
be passed with a minimum of 70% on each.
How are the exams marked, and what do the comments mean?
(Answers provided by Creighton Douglas, Chair, CTTIC Board of Certification; Oct. 1998)
Let me assure candidates that every paper is carefully read and corrected by two markers, who must
agree on the final mark. If they do not agree, the paper is referred to a third marker, whose decision is
final.
The pass mark is 70% and any paper that falls between 65% and 70% is reviewed very carefully to
ensure that a pass or failure is clearly justified.
Re "General Comments": they will usually seem repetitive, since the pattern of errors from candidate to
candidate and from year to year is very similar. These comments in no way disqualify a candidate, but are
written after the paper is marked to indicate the nature of the problems in a general way.
The term "transfer error" refers to a shift in meaning, sometimes quite subtle, between the original
meaning in the source text and the meaning as translated into the target language. Such errors can be
very important, but at the same time difficult for the candidate to recognize – if the candidate had
perceived the error, they likely would not have made it!
I DISCOURSE STRATEGIES
II FORM
c. Pausing is appropriate
e. Accurate use of non-manual sign modifications (e.g. mouth movement, eyebrows, sign
movement/intensity, etc.)
b. Sentence structures are appropriately marked (e.g. eyebrows, eye gaze, mouth
movements, used to indicate negation, questions, etc.)
I MESSAGE PROCESSING
II INTERPRETING SUB-TASKS
IV ADDITIONAL OBSERVATIONS
Appendix 19 Official Journal of the European Union / Amtsblatt der Europäischen Union
(2006)
Interpreting examinations
1. Consecutive interpreting – maximum length of speech: 6 minutes
2. Simultaneous interpreting – maximum length of speech: 12 minutes
A pass mark of 10/20 is required in all interpreting tests.
Part I: (A + B + C)
Consecutive B > A
Simultaneous B > A
Consecutive C > A
Simultaneous C > A
Part II
Option 1: (A + CCC)
Consecutive C1 > A
Simultaneous C1 > A
Consecutive C2 > A
Simultaneous C2 > A
Consecutive C3 > A
Simultaneous C3 > A
Or
Option 2: (AA + C)
Consecutive A2 > A1
Simultaneous A2 > A1
Consecutive A1 > A2
Simultaneous A1 > A2
Consecutive C > A1
Simultaneous C > A1
Formally assessed interview covering the candidate’s knowledge of the EU, how it works and its areas of
responsibility. The candidate’s ability to work in the multi-cultural and multi-lingual environment of an EU
department is also assessed. Length of interview: 30 minutes, in the candidate’s A language.
Improvements to NAATI testing
For:
30 November 2012
Authorised Contact:
Louise Milazzo
Consulting and Contracts Officer
Grants Management Office
Level 3 South Wing
Rupert Myers Building
The University of New South Wales
UNSW SYDNEY | NSW | 2052
T: +61 2 9385 4465
F: +61 2 9385 7238
E: [email protected]
W: www.unsw.edu.au
Commercial-in-Confidence
Any use of this Report, in part or in its entirety, or use of the names The University of New South Wales, UNSW, the name of any unit of
The University or the name of the Consultant, in direct or indirect advertising or publicity, is forbidden without the prior approval of
UNSW.