Lecture 7. Developing a Test. Key Terms and Concepts in Test Development. The Important Role of Test Specifications. An Overview of a Test Development Process
1. Introduction
2. History of language teaching and testing
3. An Overview of a Test Development Process

1. Introduction

Testing in general, and language testing in particular, is the process by which teachers construct tests in order to collect information about their teaching and learning practices. Better testing is needed to determine how well teaching is accomplished in the context of EFL instruction, so that teachers can accurately assess the results of their instruction. Teacher-designed tests must therefore be developed in accordance with language testing guidelines. This supports the idea that teachers should also be responsible for constructing appropriate classroom tests, and that tests should be carefully designed to support the value of instruction. Since teachers’ perspectives on the techniques used in test preparation are complex, constructing quality classroom examinations is not an easy undertaking. The way teachers go about developing tests is an important factor in determining the outcome of learning. In view of this, Wood (2013) revealed that practice is a process by which stakeholders grasp the concepts in question in order to put them into practice, rather than a mindless act of perceiving external reality. Instructional strategies and practices are therefore crucial to test construction. In other words, teachers cannot design better exams if test development practices are applied incorrectly or at random: when the procedures are followed haphazardly, that is, without following the test development steps, students do not get the required knowledge. In connection with this, Brown (2004) confirmed that the less teachers follow language testing principles and stages, the more anxiety students are exposed to. Learners feel more stressed and anxious about their performance and the repercussions of receiving low marks if they believe that tests do not fairly represent their knowledge or ability.
Pedagogically, it is recommended that educators build tests using Bloom’s
taxonomy as a framework to assess six cognitive domains: recall, comprehension,
application, analysis, evaluation and creation. It is also advised that teachers
familiarize themselves with constructivism theory before developing tests
involving problem-solving tasks (Bredemeier, 2001).
Indeed, both global and local studies have been conducted in this area. Coniam’s (2009) research concentrated on the caliber of Chinese teacher-made examinations; the findings indicated that the test was valid. Research on the content validity of teacher-made English language achievement assessments and the Ethiopian General Secondary Education Certificate of English Examinations (EGSEC) was also conducted by Mekonnen (2017) and Netsanet (2017); their results demonstrated that the examinations lacked content validity.

2. History of language teaching and testing


In order for language specialists to understand how language was tested, why it was tested, and which language abilities were tested, it is crucial to look back at the history of language teaching and testing. Knowing these issues helps experts identify the major challenges in language teaching. Language tests therefore deserve great attention in language pedagogy, and teachers ought to adhere to the principles of testing so that they can construct better tests. Advocates of standardization and comparability in test preparation, which rest on isolated language testing, state that teachers can assess students’ proficiency levels and make thorough comparisons from various angles by testing grammar, vocabulary and other syntactic structures (Alderson & Banerjee, 2002). However, this trend leads teachers towards a shallow approach to assessment, and learners do not become proficient at conveying the required information through the language skills. Opponents of standardization and comparability, such as Bachman and Palmer (1996) and Savignon (2002), argued that isolated language testing is inadequate for real-world activities because it lacks authenticity and is typified by behaviorist theory. Isolated language testing is the practice of developing tests without context; such tests are usually built around isolated language patterns, which exposes learners to rote learning, i.e. the memorization of language rules without any context. This influence stems from the impact of standardized tests (Simachew, 2013). Teachers who stick to the procedures used to construct standardized exams for their own students’ evaluations may find themselves narrowly focused on testing particular, frequently lower-order skills, such as recognition and recall. This approach may limit the evaluation of students’ creative thinking, critical thinking and practical language use, because standardized examinations usually give more weight to comparability and uniformity than to tailored instruction. Exams administered in the classroom may therefore not adequately reflect the entire spectrum of students’ skills and may not be in line with broader educational objectives, which could discourage students’ participation and limit their prospects for relevant, context-driven learning.
Accordingly, Alderson and Banerjee (2002) and Savignon (2002) claimed that teachers ought to employ Bloom’s taxonomy as a guideline for developing better tests; indeed, it was used as the framework for this study. The development of tests should be supported by alignment with the learning objectives and with a range of language proficiencies, and following testing protocols permits stakeholders to incorporate various tasks in test preparation (Hattingh & Kilfoil, 2015). In a similar vein, Anderson and Krathwohl (2001) explained that employing Bloom’s taxonomy as a framework in test construction offers two main advantages: a) it enables teachers to align the tests with the learning objectives of the lesson, so that the constructed tests clearly reflect the stated objectives; and b) it allows teachers to balance the cognitive domains of students’ skills. That is, Bloom’s taxonomy guides teachers to develop fair tests which measure each of the cognitive levels. When those cognitive levels are assessed well, all four major language skills and their sub-skills are properly tested, so that the desired learning objectives of the lesson are achieved. When tests are prepared, there are steps that need to be followed by teachers. According to McNamara (2000), constructing and implementing a new test is comparable to registering a brand-new vehicle for use on the road: before the test is fully operational, several stages are involved, namely planning, design and try-out. According to Lynch and Davidson (1994), designing tests involves the following steps: first, deciding which skills to evaluate; second, preparing the test specification; and third, drafting the test standards as well as the test items and tasks.
After writing the items and tasks, interactive feedback for test revision is advised in order to identify the weaknesses of the test. Teachers should therefore focus on the learning objectives of the lesson while constructing tests, since these objectives determine the types of questions incorporated. In such circumstances, Bloom’s taxonomy is used as a framework for constructing classroom tests.
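One practical way to make this alignment and balance visible before any items are written is to draft a small test blueprint that links each planned group of items to a lesson objective and to a level of Bloom’s taxonomy. The sketch below is an invented illustration of such a blueprint, not part of the lecture; the objectives, levels and item counts are hypothetical placeholders.

# Minimal sketch of a classroom test blueprint. Each entry links a planned
# group of items to a lesson objective and a Bloom level, so the spread of
# cognitive levels can be checked before item writing. All rows are invented.
from collections import Counter

BLOOM_LEVELS = ["recall", "comprehension", "application",
                "analysis", "evaluation", "creation"]

blueprint = [
    {"objective": "identify key vocabulary from the unit",    "level": "recall",        "items": 5},
    {"objective": "summarise the main idea of a short text",  "level": "comprehension", "items": 4},
    {"objective": "use the past simple in new sentences",     "level": "application",   "items": 4},
    {"objective": "compare two characters' viewpoints",       "level": "analysis",      "items": 3},
    {"objective": "judge the strength of a short argument",   "level": "evaluation",    "items": 2},
    {"objective": "write an alternative ending to the story", "level": "creation",      "items": 1},
]

# Count planned items per cognitive level and flag levels with no coverage.
counts = Counter()
for row in blueprint:
    counts[row["level"]] += row["items"]
for level in BLOOM_LEVELS:
    note = "" if counts[level] else "  <- no items planned at this level"
    print(f"{level:13}: {counts[level]:2d} item(s){note}")

Tallied in this way, a draft paper shows at a glance whether it leans too heavily on recall items at the expense of the higher-order levels.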

3. An Overview of a Test Development Process


It is important and useful to think of the process of test development as cyclical
and iterative. This involves feeding back the knowledge and experience gained at
different stages of the process into a continuous re-assessment of a given test and
each administration of it.
Figure 1 shows an attempt to capture this process in diagrammatic form. The
diagram offers a comprehensive blueprint for the stages that may be gone through,
beginning from the initial perception that a new test is necessary.
• perceived need for a new test
• planning phase
• design phase
• development phase
• operational phase
• monitoring phase
Not all of these stages are always necessary; whether or not they are all included is
a rational decision based on the particular requirements of the test development
context.
Once the need for a new test has been established, the model involves a planning
phase during which data on the exact requirements of candidates is collected. In
the classroom context, this process may be based on direct personal knowledge of
the students and experience of the teaching program. In wider contexts,
information may be gathered by means of questionnaires, formal consultation and
so on. Whatever the context, the aim will be to establish a clear picture of who the
potential candidates are likely to be and who the users of the test results will be.
The planning phase is followed by a design phase, during which an attempt is
made to produce the initial specifications of a test which will be suitable for the
test takers. The specifications describe and discuss the appearance of the test and
all aspects of its content, together with the considerations and constraints which
affect this. Initial decisions can be made on such matters as the length of each part
of the test, which particular item types are chosen, and what range of topics are
available for use. At this stage, sample materials should also be written and
reactions to these should be sought from interested parties. Even at the level of
classroom tests it is always worth showing sample materials to a colleague since
another person’s reactions can be invaluable in informing the development process.
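For illustration only, a first-draft specification of the kind described above might be recorded in a simple structured form, so that decisions about length, item types and topics are explicit and easy to circulate for comment. The values in the sketch below are hypothetical placeholders, not recommendations from the lecture.

# Minimal sketch of an initial test specification; every value here is an
# illustrative placeholder to be revised after feedback and trialling.
test_specification = {
    "purpose": "end-of-term achievement test for an intermediate EFL class",
    "candidates": "teenage learners at roughly intermediate level",
    "total_time_minutes": 60,
    "parts": [
        {"skill": "reading",   "item_type": "multiple choice",  "items": 10, "minutes": 20},
        {"skill": "listening", "item_type": "gap fill",         "items": 10, "minutes": 20},
        {"skill": "writing",   "item_type": "guided paragraph", "items": 1,  "minutes": 20},
    ],
    "topics": ["travel", "daily routines", "the environment"],
    "marking": "answer key for objective items; analytic scale for writing",
}

# Quick consistency check on the draft: part timings should add up to the
# planned overall test length before sample materials are written.
planned = sum(part["minutes"] for part in test_specification["parts"])
assert planned == test_specification["total_time_minutes"], "part timings do not match test length"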
During the development phase the sample materials need to be trialled and/or
pretested. This means that students who are at the appropriate level to take the test
and who are similar to projected candidates (in terms of age, background, etc.) are
given test materials under simulated examination conditions. This phase may
involve analysing and interpreting the data provided by candidate scores; useful
information can also be gathered by means of questionnaires and feedback reports
from candidates and their teachers, as well as video/audio recordings and
observations. Decisions can then be made on whether the materials are at the right
level of difficulty and whether they are suitable in other ways for use in live tests.
Information from trialling also allows fairly comprehensive mark schemes and
rating scales to be devised. Even small-scale trialling of classroom or school tests,
using just a handful of candidates, can provide valuable information on issues such
as the timing allowance needed for individual tasks, the clarity of task instructions,
appropriate layout for the response, etc. At this stage it is still possible to make
radical changes to the specifications, to the item types used, or to any other aspects
of the test which cause concern.
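As a hedged illustration of the kind of score analysis such trialling makes possible, the sketch below computes two common classical item statistics from a small, invented set of trial responses: the facility value (the proportion of trial candidates answering an item correctly) and a simple discrimination index (the correlation between scores on an item and totals on the remaining items). The data, and the cut-off values used for flagging items, are assumptions for the example rather than figures from the lecture.

# Minimal sketch of classical item analysis on trial (pretest) data.
# 'responses' is a hypothetical score matrix: one row per trial candidate,
# one 0/1 entry per item (1 = correct).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]

def facility(item_scores):
    """Proportion of trial candidates who answered the item correctly."""
    return sum(item_scores) / len(item_scores)

def discrimination(item_scores, rest_totals):
    """Pearson correlation between an item and the total of the other items."""
    n = len(item_scores)
    mean_i = sum(item_scores) / n
    mean_r = sum(rest_totals) / n
    cov = sum((i - mean_i) * (r - mean_r) for i, r in zip(item_scores, rest_totals))
    var_i = sum((i - mean_i) ** 2 for i in item_scores)
    var_r = sum((r - mean_r) ** 2 for r in rest_totals)
    if var_i == 0 or var_r == 0:  # no variation, correlation undefined
        return 0.0
    return cov / (var_i * var_r) ** 0.5

for item in range(len(responses[0])):
    item_scores = [row[item] for row in responses]
    rest_totals = [sum(row) - row[item] for row in responses]
    fac = facility(item_scores)
    disc = discrimination(item_scores, rest_totals)
    # The 0.2-0.9 facility band and the 0.2 discrimination floor are
    # illustrative rules of thumb, not thresholds given in the lecture.
    flag = "review" if not 0.2 <= fac <= 0.9 or disc < 0.2 else "ok"
    print(f"Item {item + 1}: facility={fac:.2f}  discrimination={disc:.2f}  ({flag})")

Even a rough tally of this kind, combined with candidate and teacher feedback, helps decide whether items are at the right level of difficulty before the mark schemes and rating scales are finalised.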
Once the initial phases of planning, design and development have been
completed, the test specifications reach their final form, test materials are written,
and test papers are constructed. A regular process of administering and marking the
test is then set up. This is the operational phase (or ‘live’ phase) during which the
test is made available to candidates. (The various stages of this phase are shown in
detail in Figure 3 on page 13; the process described here is most applicable to end-
of-year school tests, to end-of-course tests in other settings, and to those
administered on a wider scale.)
Once a test is fully operational, the test development process enters the
monitoring phase during which results of live test administrations need to be
carefully monitored. This includes obtaining regular feedback from candidates and
teachers at schools where the test is used as well as carrying out analyses of
candidates' performance on the test; such data is used to evaluate the test’s
performance and to assess any need for revision. Research may be done into
various aspects of candidate and examiner performance in order to see what
improvements need to be made to the test or the administrative processes which
surround it. Revision of the test is likely to be necessary at some point in the future
and any major revision of a test means going back to the planning phase at the
beginning of the cycle.
